[ { "msg_contents": "Hi,\n\n\nParam isSlice was once used to identify targetTypeId for transformAssignmentIndirection.\n\nIn commit c7aba7c14e, the evaluation was pushed down to transformContainerSubscripts.\n\nNo need to keep isSlice around transformAssignmentSubscripts.\n\nAttached is a patch to remove it.\n\nRegards,\nZhang Mingli", "msg_date": "Tue, 13 Sep 2022 11:35:37 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Removed unused param isSlice of function\n transformAssignmentSubscripts" }, { "msg_contents": "On Tue, Sep 13, 2022 at 11:35 AM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n\n> Param isSlice was once used to identify targetTypeId for\n> transformAssignmentIndirection.\n>\n> In commit c7aba7c14e, the evaluation was pushed down to\n> transformContainerSubscripts.\n>\n> No need to keep isSlice around transformAssignmentSubscripts.\n>\n> Attached is a patch to remove it.\n>\n\n+1. Good catch.\n\nThanks\nRichard", "msg_date": "Tue, 13 Sep 2022 15:20:01 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removed unused param isSlice of function\n transformAssignmentSubscripts" }, { "msg_contents": "On Tue, Sep 13, 2022 at 03:20:01PM +0800, Richard Guo wrote:\n> +1. Good catch.\n\nYes, you are right that this comes from c7aba7c that has changed the\ntransform logic and the check on slicing support, and this makes the\ncode easier to follow. So, applied.\n--\nMichael", "msg_date": "Sun, 18 Sep 2022 15:42:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Removed unused param isSlice of function\n transformAssignmentSubscripts" } ]
[ { "msg_contents": "AFAICS, the Vars forced nonnullable by given clause are only used to\ncheck if we can reduce JOIN_LEFT to JOIN_ANTI, and it is checking the\njoin's own quals there. It seems to me we do not need to pass down\nnonnullable_vars by upper quals to the children of a join.\n\nAttached is a patch to remove the pass-down of nonnullable_vars.\n\nThanks\nRichard", "msg_date": "Tue, 13 Sep 2022 15:00:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Do we need to pass down nonnullable_vars when reducing outer joins?" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> AFAICS, the Vars forced nonnullable by given clause are only used to\n> check if we can reduce JOIN_LEFT to JOIN_ANTI, and it is checking the\n> join's own quals there. It seems to me we do not need to pass down\n> nonnullable_vars by upper quals to the children of a join.\n\nHmm, you are right, we are not doing anything useful with that data.\nI can't remember if I had a concrete plan for doing something with it\nor not, but we sure aren't using it now. So pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 05 Nov 2022 16:00:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we need to pass down nonnullable_vars when reducing outer\n joins?" }, { "msg_contents": "On Sun, Nov 6, 2022 at 4:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > AFAICS, the Vars forced nonnullable by given clause are only used to\n> > check if we can reduce JOIN_LEFT to JOIN_ANTI, and it is checking the\n> > join's own quals there. It seems to me we do not need to pass down\n> > nonnullable_vars by upper quals to the children of a join.\n>\n> Hmm, you are right, we are not doing anything useful with that data.\n> I can't remember if I had a concrete plan for doing something with it\n> or not, but we sure aren't using it now. So pushed.\n\n\nThanks for pushing it!\n\nThanks\nRichard", "msg_date": "Mon, 7 Nov 2022 10:54:32 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Do we need to pass down nonnullable_vars when reducing outer\n joins?" } ]
[ { "msg_contents": "Shouldn't such tuples be considered dead right away, even if the inserting\ntransaction is still active? That would allow cleaning them up even before\nthe transaction is done.\n\nThere is this code in HeapTupleSatisfiesVacuumHorizon:\n\n        else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple)))\n        {\n            [...]\n            /* inserted and then deleted by same xact */\n            if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple)))\n                return HEAPTUPLE_DELETE_IN_PROGRESS;\n\nWhy HEAPTUPLE_DELETE_IN_PROGRESS and not HEAPTUPLE_DEAD?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 13 Sep 2022 10:06:03 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Tuples inserted and deleted by the same transaction" }, { "msg_contents": "Hi!\n\nPlease correct me if I'm wrong, despite tuples being inserted and deleted\nby the same\ntransaction - they are visible inside the transaction and usable by it, so\nconsidering them\ndead and cleaning up during execution is a bad idea until the\ntransaction is ended.\n\nOn Tue, Sep 13, 2022 at 11:06 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> Shouldn't such tuples be considered dead right away, even if the inserting\n> transaction is still active? That would allow cleaning them up even before\n> the transaction is done.\n>\n> There is this code in HeapTupleSatisfiesVacuumHorizon:\n>\n> else if\n> (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple)))\n> {\n> [...]\n> /* inserted and then deleted by same xact */\n> if\n> (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple)))\n> return HEAPTUPLE_DELETE_IN_PROGRESS;\n>\n> Why HEAPTUPLE_DELETE_IN_PROGRESS and not HEAPTUPLE_DEAD?\n>\n> Yours,\n> Laurenz Albe\n>\n>\n> --\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Tue, 13 Sep 2022 11:47:16 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, 2022-09-13 at 11:47 +0300, Nikita Malakhov wrote:\n> On Tue, Sep 13, 2022 at 11:06 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > Shouldn't such tuples be considered dead right away, even if the inserting\n> > transaction is still active? That would allow cleaning them up even before\n> > the transaction is done.\n> > \n> > There is this code in HeapTupleSatisfiesVacuumHorizon:\n> > \n> >         else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple)))\n> >         {\n> >             [...]\n> >             /* inserted and then deleted by same xact */\n> >             if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple)))\n> >                 return HEAPTUPLE_DELETE_IN_PROGRESS;\n> > \n> > Why HEAPTUPLE_DELETE_IN_PROGRESS and not HEAPTUPLE_DEAD?\n>\n> Please correct me if I'm wrong, despite tuples being inserted and deleted by the same \n> transaction - they are visible inside the transaction and usable by it, so considering them\n> dead and cleaning up during execution is a bad idea until the transaction is ended.\n\nBut once they are deleted or updated, even the transaction that created them cannot\nsee them any more, right?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 13 Sep 2022 12:04:04 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, Sep 13, 2022 at 11:04 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2022-09-13 at 11:47 +0300, Nikita Malakhov wrote:\n> > On Tue, Sep 13, 2022 at 11:06 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > Shouldn't such tuples be considered dead right away, even if the inserting\n> > > transaction is still active? That would allow cleaning them up even before\n> > > the transaction is done.\n> > >\n> > > There is this code in HeapTupleSatisfiesVacuumHorizon:\n> > >\n> > > else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetRawXmin(tuple)))\n> > > {\n> > > [...]\n> > > /* inserted and then deleted by same xact */\n> > > if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetUpdateXid(tuple)))\n> > > return HEAPTUPLE_DELETE_IN_PROGRESS;\n> > >\n> > > Why HEAPTUPLE_DELETE_IN_PROGRESS and not HEAPTUPLE_DEAD?\n> >\n> > Please correct me if I'm wrong, despite tuples being inserted and deleted by the same\n> > transaction - they are visible inside the transaction and usable by it, so considering them\n> > dead and cleaning up during execution is a bad idea until the transaction is ended.\n>\n> But once they are deleted or updated, even the transaction that created them cannot\n> see them any more, right?\n\nForgive me if this is not related but if there is a savepoint between\nthe insertion and deletion, wouldn't it be possible for the\ntransaction to recover the deleted tuples?\n\nBest regards\nPantelis Theodosiou\n\n\n", "msg_date": "Tue, 13 Sep 2022 11:40:14 +0100", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, 13 Sep 2022, 12:04 Laurenz Albe, <laurenz.albe@cybertec.at> wrote:\n>\n> On Tue, 2022-09-13 at 11:47 +0300, Nikita Malakhov wrote:\n>> Please correct me if I'm wrong, despite tuples being inserted and deleted by the same\n>> transaction - they are visible inside the transaction and usable by it, so considering them\n>> dead and cleaning up during execution is a bad idea until the transaction is ended.\n>\n> But once they are deleted or updated, even the transaction that created them cannot\n> see them any more, right?\n\n\nNot quite. The command that is deleting the tuple might still be\nrunning, and because deletions are only \"visible\" to statements at the\nend of the delete operation, that command may still need to see the\ndeleted tuple (example: DELETE FROM tab t WHERE t.randnum > (select\ncount(*) from tab)); that count(*) will not change during the delete\noperation.\n\nSo in order to mark that tuple as all_dead, you need proof that the\ndeleting statement finished executing. I can think of two ways to do\nthat: either the commit/abort of that transaction (this would be\nsimilarly expensive as the normal commit lookup), or (e.g.) the\nexistence of another tuple with the same XID but with a newer CID.\nThat last one would not be impossible, but probably not worth the\nextra cost of command id tracking.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 13 Sep 2022 13:13:51 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, 13 Sept 2022 at 12:40, Pantelis Theodosiou <ypercube@gmail.com> wrote:\n>\n> Forgive me if this is not related but if there is a savepoint between\n> the insertion and deletion, wouldn't it be possible for the\n> transaction to recover the deleted tuples?\n\nSavepoints result in changed TransactionIds (well, subtransactions\nwith their own ids), so if a tuple was created before a savepoint and\ndeleted after, the values in xmin and xmax would be different, as you\ncan see in the following:\n\nmatthias=> CREATE TABLE tst(i int);\nmatthias=> BEGIN; INSERT INTO tst VALUES (1); SAVEPOINT s1; DELETE\nFROM tst; ROLLBACK TO SAVEPOINT s1;\nCREATE TABLE\nBEGIN\nINSERT 0 1\nSAVEPOINT\nDELETE 1\nROLLBACK\nmatthias=*> SELECT xmin, xmax FROM tst;\n xmin  | xmax\n-------+-------\n 62468 | 62469\n(1 row)\n\nNote that this row has different xmin/xmax from being created and\ndeleted in different subtransactions. This means that this needs no\nspecific handling in the HTSVH code that Laurenz asked about.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 13 Sep 2022 14:40:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> But once they are deleted or updated, even the transaction that created them cannot\n> see them any more, right?\n\nI would not trust that claim very far. The transaction might have active\nsnapshots with a command ID between the times of insertion and deletion.\n(Consider a query that is firing triggers as it goes, and the triggers\nare performing new actions that cause the command counter to advance.\nThe outer query should not see the results of those actions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 09:45:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, 13 Sept 2022 at 15:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > But once they are deleted or updated, even the transaction that created them cannot\n> > see them any more, right?\n>\n> I would not trust that claim very far. The transaction might have active\n> snapshots with a command ID between the times of insertion and deletion.\n> (Consider a query that is firing triggers as it goes, and the triggers\n> are performing new actions that cause the command counter to advance.\n> The outer query should not see the results of those actions.)\n\nI hadn't realized that triggers indeed consume command ids but might\nnot be visible to the outer query (that might still be running). That\ninvalidates the \"or (e.g.) the existence of another tuple with the\nsame XID but with a newer CID\" claim I made earlier, so thanks for\nclarifying.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:13:44 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" }, { "msg_contents": "On Tue, 2022-09-13 at 16:13 +0200, Matthias van de Meent wrote:\n> On Tue, 13 Sept 2022 at 15:45, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > > But once they are deleted or updated, even the transaction that created them cannot\n> > > see them any more, right?\n> > \n> > I would not trust that claim very far.  The transaction might have active\n> > snapshots with a command ID between the times of insertion and deletion.\n> > (Consider a query that is firing triggers as it goes, and the triggers\n> > are performing new actions that cause the command counter to advance.\n> > The outer query should not see the results of those actions.)\n> \n> I hadn't realized that triggers indeed consume command ids but might\n> not be visible to the outer query (that might still be running). That\n> invalidates the \"or (e.g.) the existence of another tuple with the\n> same XID but with a newer CID\" claim I made earlier, so thanks for\n> clarifying.\n\nYes, that makes sense. Thanks.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 13 Sep 2022 19:06:02 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Tuples inserted and deleted by the same transaction" } ]
[ { "msg_contents": "Hi,\n\nThe commit: Revert SQL/JSON features\nhttps://github.com/postgres/postgres/commit/2f2b18bd3f554e96a8cc885b177211be12288e4a\n\nLeft a little oversight.\nI believe it will be properly corrected when it is applied again.\nHowever, for Postgres 15 this may cause a small memory leak.\n\nAttached is a fix patch.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 13 Sep 2022 10:21:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid redudant initialization and possible memory leak\n (src/backend/parser/parse_relation.c)" }, { "msg_contents": "On Tue, Sep 13, 2022 at 10:21:22AM -0300, Ranier Vilela wrote:\n> Hi,\n>\n> The commit: Revert SQL/JSON features\n> https://github.com/postgres/postgres/commit/2f2b18bd3f554e96a8cc885b177211be12288e4a\n>\n> Left a little oversight.\n> I believe it will be properly corrected when it is applied again.\n> However, for Postgres 15 this may cause a small memory leak.\n\nIt's not a memory leak, the chunk will be freed eventually when the owning\nmemory context is reset, but I agree that one of the two identical\ninitializations should be removed.\n\n\n", "msg_date": "Tue, 13 Sep 2022 22:05:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid redudant initialization and possible memory leak\n (src/backend/parser/parse_relation.c)" }, { "msg_contents": "On 2022-Sep-13, Ranier Vilela wrote:\n\n> However, for Postgres 15 this may cause a small memory leak.\n\nWhat memory leak? There's no leak here.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n", "msg_date": "Tue, 13 Sep 2022 16:09:27 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid redudant initialization and possible memory leak\n (src/backend/parser/parse_relation.c)" }, { "msg_contents": "On Tue, Sep 13, 2022 at 11:09 AM, Alvaro Herrera <\nalvherre@alvh.no-ip.org> wrote:\n\n> On 2022-Sep-13, Ranier Vilela wrote:\n>\n> > However, for Postgres 15 this may cause a small memory leak.\n>\n> What memory leak? There's no leak here.\n>\nYeah, as per Julien's answer, there is really no memory leak, but just\nunnecessary double execution of pstrdup.\nBut for Postgres 15, I believe it's worth avoiding this, because it's\nwasted cycles.\nFor Postgres 16, I believe this will be fixed as well, but for robustness,\nbetter fix soon, IMO.\n\nregards,\nRanier Vilela", "msg_date": "Tue, 13 Sep 2022 12:44:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid redudant initialization and possible memory leak\n (src/backend/parser/parse_relation.c)" }, { "msg_contents": "On 2022-Sep-13, Ranier Vilela wrote:\n\n> Yeah, as per Julien's answer, there is really no memory leak, but just\n> unnecessary double execution of pstrdup.\n> But for Postgres 15, I believe it's worth avoiding this, because it's\n> wasted cycles.\n\nYeah, this is a merge mistake. Fix applied, thanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 14 Sep 2022 15:56:43 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid redudant initialization and possible memory leak\n (src/backend/parser/parse_relation.c)" } ]
[ { "msg_contents": "While messing with the new guc.h stuff I happened to run headerscheck and\nwanted to abort it right away, and in doing so I realized that its\n'trap' line is incorrect: it only removes its temp dir, but it doesn't\nexit the program; so after you C-c it, it will spew a ton of complaints\nabout its temp dir not existing.\n\nAFAICT almost all of our shell scripts contain the same mistake. I\npropose to fix them all as in the attached demo patch, which makes\nheaderscheck exit properly (no silly noise) when interrupted.\n\n(I confess to not fully understanding why every other trap does\n\"rm && exit $ret\" rather than \"rm ; exit\", but I guess rm -fr should not\nfail anyway thus this should be OK.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/", "msg_date": "Tue, 13 Sep 2022 20:10:02 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "wrong shell trap" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> While messing with the new guc.h stuff I happened to run headerscheck and\n> wanted to abort it right away, and in doing so I realized that its\n> 'trap' line is incorrect: it only removes its temp dir, but it doesn't\n> exit the program; so after you C-c it, it will spew a ton of complaints\n> about its temp dir not existing.\n\nUgh.\n\n> AFAICT almost all of our shell scripts contain the same mistake. I\n> propose to fix them all as in the attached demo patch, which makes\n> headerscheck exit properly (no silly noise) when interrupted.\n\nSounds like a good idea.\n\n> (I confess to not fully understanding why every other trap does\n> \"rm && exit $ret\" rather than \"rm ; exit\", but I guess rm -fr should not\n> fail anyway thus this should be OK.)\n\nI didn't write these, but I think the idea might be \"if rm -rf\nfails, report its error status rather than whatever we had before\".\nHowever, if that is the idea then it's wrong, because as you've\ndiscovered we won't exit at all unless the trap string does so.\nSo ISTM that 'ret=$?; rm -rf $tmp; exit $ret' is the correct coding,\nor at least less incorrect than either of these alternatives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 17:01:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: wrong shell trap" }, { "msg_contents": "On Tue, Sep 13, 2022 at 2:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > AFAICT almost all of our shell scripts contain the same mistake. I\n> > propose to fix them all as in the attached demo patch, which makes\n> > headerscheck exit properly (no silly noise) when interrupted.\n>\n> Sounds like a good idea.\n\nMight not be a bad idea to run shellcheck against the scripts, to see\nif that highlights anything.\n\nI've found that shellcheck makes working with shell scripts less\nterrible, especially when portability is a concern. It can be used to\nenforce consistent coding standards that seem pretty well thought out.\nIt will sometimes produce dubious warnings, of course, but it tends to\nmostly have the right idea, most of the time.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 13 Sep 2022 14:22:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: wrong shell trap" } ]
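The coding Tom Lane recommends in this thread ('ret=$?; rm -rf $tmp; exit $ret' in the trap string) can be demonstrated with a small runnable sketch. The inner script below is a made-up stand-in for headerscheck, not the real file; the point is that the trap both removes the temp dir and preserves the exit status instead of letting the script run on:

```shell
#!/bin/sh
# Write a stand-in script that uses the corrected trap, run it, and
# verify that (a) its temp dir was cleaned up and (b) its exit status
# survived the trap instead of being swallowed.

script=$(mktemp)
cat > "$script" <<'EOF'
tmp=`mktemp -d`
echo "$tmp" > "$1"
trap 'ret=$?; rm -rf "$tmp"; exit $ret' 0 2 3 15
exit 7    # stands in for a failure or a caught interrupt
EOF

state=$(mktemp)
sh "$script" "$state"
status=$?
tmpdir=$(cat "$state")

[ "$status" -eq 7 ] || { echo "exit status was lost"; exit 1; }
[ ! -d "$tmpdir" ]  || { echo "temp dir was not removed"; exit 1; }
echo "trap cleaned up and preserved exit status $status"
rm -f "$script" "$state"
```

With the broken form (trap 'rm -rf "$tmp"' 0 2 3 15 on the signal path), a C-c would run the cleanup but not terminate the script, which is exactly the flood of "temp dir not existing" complaints Alvaro describes.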
[ { "msg_contents": "I saw the following message recently modified.\n\n> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n\nMaybe the \"we\" means \"PostgreSQL program and you\" but I see it\nsomewhat out of place.\n\nI found three other uses of \"we\" in the backend.\n\n> client sent proto_version=%d but we only support protocol %d or lower\n> client sent proto_version=%d but we only support protocol %d or higher\n> System allows %d, we need at least %d.\n\nThis is a little different from the first one. In the three above,\n\"we\" suggests \"The developers and maybe the PostgreSQL program\".\n\nIs it the right word choice as error messages? I'm not confident on\nthe precise wording, but I think something like the following are\nappropriate here.\n\n> This controls the maximum distance to read ahead in the WAL to prefetch referenced data blocks.\n> client sent proto_version=%d but only protocols %d or lower are supported\n> client sent proto_version=%d but only protocols %d or higher are supported\n> System allows %d, at least %d needed.\n\nThoughts?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Sep 2022 11:15:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "A question about wording in messages" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I saw the following message recently modified.\n>> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> somewhat out of place.\n\n+1, I saw that today and thought it was outside our usual style.\nThe whole thing is awfully verbose for a GUC description, too.\nMaybe\n\n\"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 22:38:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "At Tue, 13 Sep 2022 22:38:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > I saw the following message recently modified.\n> >> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> > Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> > somewhat out of place.\n> \n> +1, I saw that today and thought it was outside our usual style.\n> The whole thing is awfully verbose for a GUC description, too.\n> Maybe\n> \n> \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n\nIt seems to sufficiently work for average users and rather easy to\nread, but it looks a short description.\n\nwal_decode_buffer_size has the following descriptions.\n\nShort: Buffer size for reading ahead in the WAL during recovery.\nExtra: This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n\nSo, taking the middle of them, how about the following?\n\nShort: Buffer size for reading ahead in the WAL during recovery.\nExtra: This controls the maximum distance to read ahead in WAL to prefetch data blocks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Sep 2022 13:00:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On Wed, Sep 14, 2022 at 7:45 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I saw the following message recently modified.\n>\n> > This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n>\n> Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> somewhat out of place.\n>\n> I found three other uses of \"we\" in the backend.\n>\n> > client sent proto_version=%d but we only support protocol %d or lower\n> > client sent proto_version=%d but we only support protocol %d or higher\n\nHow about just replacing 'we' with 'server'?\n\n> > System allows %d, we need at least %d.\n>\n\nAnother possibility could be: \"System allows %d, but at least %d are required.\"\n\n> This is a little different from the first one. In the three above,\n> \"we\" suggests \"The developers and maybe the PostgreSQL program\".\n>\n> Is it the right word choice as error messages? I'm not confident on\n> the precise wording, but I think something like the following are\n> appropriate here.\n>\n> > This controls the maximum distance to read ahead in the WAL to prefetch referenced data blocks.\n> > client sent proto_version=%d but only protocols %d or lower are supported\n> > client sent proto_version=%d but only protocols %d or higher are supported\n> > System allows %d, at least %d needed.\n>\n\nThis could be another way to rewrite. Let us see if others have an\nopinion on this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Sep 2022 12:09:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On Wed, Sep 14, 2022 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > I saw the following message recently modified.\n> >> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> > Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> > somewhat out of place.\n>\n> +1, I saw that today and thought it was outside our usual style.\n> The whole thing is awfully verbose for a GUC description, too.\n> Maybe\n>\n> \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n\n+1\n\nFor \"we\", I must have been distracted by code comment style. For the\nextra useless verbiage, it's common for GUC descriptions to begin \"This\ncontrols/affects/blah\" like that, but I agree it's useless noise.\n\nFor the other cases, Amit's suggestion of 'server' seems sensible to me.\n\n\n", "msg_date": "Fri, 16 Sep 2022 12:10:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "At Fri, 16 Sep 2022 12:10:05 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Wed, Sep 14, 2022 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > I saw the following message recently modified.\n> > >> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> > > Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> > > somewhat out of place.\n> >\n> > +1, I saw that today and thought it was outside our usual style.\n> > The whole thing is awfully verbose for a GUC description, too.\n> > Maybe\n> >\n> > \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n> \n> +1\n> \n> For \"we\", I must have been distracted by code comment style. For the\n> extra useless verbiage, it's common for GUC descriptions to begin \"This\n> controls/affects/blah\" like that, but I agree it's useless noise.\n> \n> For the other cases, Amit's suggestion of 'server' seems sensible to me.\n\nThanks for the opinion. I'm fine with that, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 16 Sep 2022 15:16:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On Fri, Sep 16, 2022 at 11:46 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 16 Sep 2022 12:10:05 +1200, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > On Wed, Sep 14, 2022 at 2:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > > I saw the following message recently modified.\n> > > >> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> > > > Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> > > > somewhat out of place.\n> > >\n> > > +1, I saw that today and thought it was outside our usual style.\n> > > The whole thing is awfully verbose for a GUC description, too.\n> > > Maybe\n> > >\n> > > \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n> >\n> > +1\n> >\n> > For \"we\", I must have been distracted by code comment style. For the\n> > extra useless verbiage, it's common for GUC descriptions to begin \"This\n> > controls/affects/blah\" like that, but I agree it's useless noise.\n> >\n> > For the other cases, Amit's suggestion of 'server' seems sensible to me.\n>\n> Thanks for the opinion. I'm fine with that, too.\n>\n\nSo, the change related to wal_decode_buffer_size needs to be\nbackpatched to 15 whereas other message changes will be HEAD only, am\nI correct?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Sep 2022 12:29:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "At Fri, 16 Sep 2022 12:29:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> So, the change related to wal_decode_buffer_size needs to be\n> backpatched to 15 whereas other message changes will be HEAD only, am\n> I correct?\n\nHas 15 closed the entry? IMHO I supposed that all changes are applied\nback(?) to 15.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Sep 2022 17:37:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On Fri, Sep 16, 2022 at 2:07 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 16 Sep 2022 12:29:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > So, the change related to wal_decode_buffer_size needs to be\n> > backpatched to 15 whereas other message changes will be HEAD only, am\n> > I correct?\n>\n> Has 15 closed the entry? IMHO I supposed that all changes are applied\n> back(?) to 15.\n>\n\nWe only want to commit the changes to 15 (a) if those fixes a problem\nintroduced in 15, or (b) those are for a bug fix.
I think the error\nmessage improvements fall into none of those categories, we can map it\nto (b) but I feel those are an improvement in the current messages and\ndon't seem critical to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Sep 2022 14:28:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On Fri, Sep 16, 2022 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > >\n> > > > +1, I saw that today and thought it was outside our usual style.\n> > > > The whole thing is awfully verbose for a GUC description, too.\n> > > > Maybe\n> > > >\n> > > > \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n> > >\n> > > +1\n> > >\n> > > For \"we\", I must have been distracted by code comment style. For the\n> > > extra useless verbiage, it's common for GUC description to begin \"This\n> > > control/affects/blah\" like that, but I agree it's useless noise.\n> > >\n> > > For the other cases, Amit's suggestion of 'server' seems sensible to me.\n> >\n> > Thaks for the opinion. 
I'm fine with that, too.\n> >\n>\n> So, the change related to wal_decode_buffer_size needs to be\n> backpatched to 15 whereas other message changes will be HEAD only, am\n> I correct?\n>\n\nI would like to pursue as per above unless there is more feedback on this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Sep 2022 10:00:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On 2022-Sep-14, Kyotaro Horiguchi wrote:\n\n> At Tue, 13 Sep 2022 22:38:46 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > I saw the following message recently modified.\n> > >> This controls the maximum distance we can read ahead in the WAL to prefetch referenced data blocks.\n> > > Maybe the \"we\" means \"PostgreSQL program and you\" but I see it\n> > > somewhat out of place.\n> > \n> > +1, I saw that today and thought it was outside our usual style.\n> > The whole thing is awfully verbose for a GUC description, too.\n> > Maybe\n> > \n> > \"Maximum distance to read ahead in WAL to prefetch data blocks.\"\n\nI failed to notice this issue. I agree it's unusual and +1 for changing it.\n\n> It seems to sufficiently work for average users and rather easy to\n> read, but it looks a short description.\n\n> So, taking the middle of them, how about the following?\n> \n> Short: Buffer size for reading ahead in the WAL during recovery.\n> Extra: This controls the maximum distance to read ahead in WAL to prefetch data blocks.\"\n\nBut why do we care that it's short? We don't need it to be long .. we\nonly need it to explain what it needs to explain.\n\nAfter spending way too much time editing this line, I ended up with\nexactly what Tom proposed, so +1 for his version. 
I think \"This\ncontrols\" adds nothing very useful, and we don't have it anywhere else,\nexcept tcp_keepalives_count from where I also propose to remove it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. Evans)", "msg_date": "Tue, 20 Sep 2022 18:21:25 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "On 2022-Sep-16, Amit Kapila wrote:\n\n> We only want to commit the changes to 15 (a) if those fixes a problem\n> introduced in 15, or (b) those are for a bug fix. I think the error\n> message improvements fall into none of those categories, we can map it\n> to (b) but I feel those are an improvement in the current messages and\n> don't seem critical to me.\n\nIMO at least the GUC one does fix a problem related to the wording of a\nuser-visible message, which also flows into the translations. I prefer\nto have that one fixed it in 15 also. The other messages (errors) don't\nseem very interesting because they're not as visible, so I don't care if\nthose are not backpatched.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"This is a foot just waiting to be shot\" (Andrew Dunstan)\n\n\n", "msg_date": "Tue, 20 Sep 2022 19:28:35 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> After spending way too much time editing this line, I ended up with\n> exactly what Tom proposed, so +1 for his version. 
I think \"This\n> controls\" adds nothing very useful, and we don't have it anywhere else,\n> except tcp_keepalives_count from where I also propose to remove it.\n\nLGTM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Sep 2022 14:56:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A question about wording in messages" } ]
[ { "msg_contents": "I found a small typo in a comment in pgbench.c of 15/master.\n\n- * Return the number fo failed transactions.\n+ * Return the number of failed transactions.\n\nWhile at it, I found \"* lot fo unnecessary work.\" in pg13's\nprocsignal.c. It has been fixed by 2a093355aa in PG14 but PG13 was\nleft alone at the time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 14 Sep 2022 11:46:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "A small typo" }, { "msg_contents": "On Wed, Sep 14, 2022 at 10:46 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> I found a small typo in a comment in pgbench.c of 15/master.\n>\n> - * Return the number fo failed transactions.\n> + * Return the number of failed transactions.\n>\n> While at it, I found \"* lot fo unnecessary work.\" in pg13's\n> procsignal.c. It has been fixed by 2a093355aa in PG14 but PG13 was\n> left alone at the time.\n\n\n+1. And grep shows no more this kind of typo in source codes in master.\n\n$ find . -name \"*.[ch]\" | xargs grep ' fo '\n./src/bin/pgbench/pgbench.c: * Return the number fo failed transactions.\n\nThanks\nRichard\n\nOn Wed, Sep 14, 2022 at 10:46 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:I found a small typo in a comment in pgbench.c of 15/master.\n\n- * Return the number fo failed transactions.\n+ * Return the number of failed transactions.\n\nWhile at it, I found \"* lot fo unnecessary work.\" in pg13's\nprocsignal.c. It has been fixed by 2a093355aa in PG14 but PG13 was\nleft alone at the time. +1. And grep shows no more this kind of typo in source codes in master.$ find . 
-name \"*.[ch]\" | xargs grep ' fo './src/bin/pgbench/pgbench.c: * Return the number fo failed transactions.ThanksRichard", "msg_date": "Wed, 14 Sep 2022 10:55:40 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A small typo" }, { "msg_contents": "On Wed, Sep 14, 2022 at 8:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I found a small typo in a comment in pgbench.c of 15/master.\n>\n> - * Return the number fo failed transactions.\n> + * Return the number of failed transactions.\n>\n\nLGTM.\n\n> While at it, I found \"* lot fo unnecessary work.\" in pg13's\n> procsignal.c. It has been fixed by 2a093355aa in PG14 but PG13 was\n> left alone at the time.\n>\n\nI think sometimes we fix typos only in HEAD. I am not sure if we have\na clear policy to backpatch such things.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Sep 2022 08:57:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A small typo" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Sep 14, 2022 at 8:16 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> I found a small typo in a comment in pgbench.c of 15/master.\n>> - * Return the number fo failed transactions.\n>> + * Return the number of failed transactions.\n\n> LGTM.\n\n+1\n\n>> While at it, I found \"* lot fo unnecessary work.\" in pg13's\n>> procsignal.c. It has been fixed by 2a093355aa in PG14 but PG13 was\n>> left alone at the time.\n\n> I think sometimes we fix typos only in HEAD. I am not sure if we have\n> a clear policy to backpatch such things.\n\nI would not go back and change v13 at this point. 
You're right\nthat this is fuzzy, but overriding the contemporaneous decision\nnot to backpatch seems well outside our usual habits.\n\nThere are basically two good reasons to back-patch comment changes:\n\n* fear that the comment is wrong enough to mislead people looking\nat the older branch;\n\n* fear that leaving it alone will create a merge hazard for future\nback-patches.\n\nIt doesn't seem to me that either of those is a strong concern\nin this case. In the absence of these concerns, back-patching\nseems like make-work (and useless expenditure of buildfarm\ncycles).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 13 Sep 2022 23:40:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A small typo" }, { "msg_contents": "On Wed, Sep 14, 2022 at 9:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>\n> There are basically two good reasons to back-patch comment changes:\n>\n> * fear that the comment is wrong enough to mislead people looking\n> at the older branch;\n>\n> * fear that leaving it alone will create a merge hazard for future\n> back-patches.\n>\n> It doesn't seem to me that either of those is a strong concern\n> in this case. In the absence of these concerns, back-patching\n> seems like make-work (and useless expenditure of buildfarm\n> cycles).\n>\n\nAgreed. 
I'll push this to HEAD after some time.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Sep 2022 09:19:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A small typo" }, { "msg_contents": "At Wed, 14 Sep 2022 09:19:22 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Sep 14, 2022 at 9:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> >\n> > There are basically two good reasons to back-patch comment changes:\n> >\n> > * fear that the comment is wrong enough to mislead people looking\n> > at the older branch;\n> >\n> > * fear that leaving it alone will create a merge hazard for future\n> > back-patches.\n> >\n> > It doesn't seem to me that either of those is a strong concern\n> > in this case. In the absence of these concerns, back-patching\n> > seems like make-work (and useless expenditure of buildfarm\n> > cycles).\n> >\n> \n> Agreed. I'll push this to HEAD after some time.\n\nThanks for committing, and for the clarification about back-patching\npolicy!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 14 Sep 2022 13:04:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A small typo" } ]
[ { "msg_contents": "\nHi hackers,\n\nI found the StartSubTransaction has the following code:\n\n static void\n StartSubTransaction(void)\n {\n [...]\n\n s->state = TRANS_START;\n\n /*\n * Initialize subsystems for new subtransaction\n *\n * must initialize resource-management stuff first\n */\n AtSubStart_Memory();\n AtSubStart_ResourceOwner();\n AfterTriggerBeginSubXact();\n\n s->state = TRANS_INPROGRESS;\n\n [...]\n }\n\nIIRC, AtSubStart_Memory, AtSubStart_ResourceOwner and AfterTriggerBeginSubXact don't\nuse s->state. Why should we set s->state to TRANS_START and then TRANS_INPROGRESS?\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 14 Sep 2022 13:41:24 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "A question about StartSubTransaction" }, { "msg_contents": "Japin Li <japinli@hotmail.com> writes:\n> IIRC, AtSubStart_Memory, AtSubStart_ResourceOwner and AfterTriggerBeginSubXact don't\n> use s->state.\n\nNo, they don't.\n\n> Why should we set s->state to TRANS_START and then TRANS_INPROGRESS?\n\nI believe it's so that if an error gets thrown somewhere in that\narea, we'll recover properly. I'd be the first to say that this\nstuff isn't terribly well-tested, since it's hard to force an\nerror there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Sep 2022 01:52:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A question about StartSubTransaction" }, { "msg_contents": "\nOn Wed, 14 Sep 2022 at 13:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Japin Li <japinli@hotmail.com> writes:\n>> Why should we set s->state to TRANS_START and then TRANS_INPROGRESS?\n>\n> I believe it's so that if an error gets thrown somewhere in that\n> area, we'll recover properly. I'd be the first to say that this\n> stuff isn't terribly well-tested, since it's hard to force an\n> error there.\n>\n\nThanks for the explanation! 
Maybe more comments here would be better.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Wed, 14 Sep 2022 14:00:57 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: A question about StartSubTransaction" } ]
[ { "msg_contents": "Hi!\n\nI found prepare statement are not jumbled.\nFro example PREPARE 't1'; and PREPARE 't2' are counted separately in \npg_stat_statements.\nI think it needs to be fixed.\nWhat do you think?\n\nRegards,\n\nKotaro Kawamoto\n\n\n", "msg_date": "Wed, 14 Sep 2022 17:14:06 +0900", "msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Query jumbling for prepare statement" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 14, 2022 at 05:14:06PM +0900, bt22kawamotok wrote:\n>\n> I found prepare statement are not jumbled.\n> Fro example PREPARE 't1'; and PREPARE 't2' are counted separately in\n> pg_stat_statements.\n\nAre you talking about PREPARE TRANSACTION? If yes I already suggested that a\nfew days ago (1), and Bertrand implemented it in the v5 version of the patch on\nthat thread.\n\n[1] https://www.postgresql.org/message-id/20220908112919.2ytxpkitiw6lt2u6@jrouhaud\n\n\n", "msg_date": "Wed, 14 Sep 2022 16:18:33 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query jumbling for prepare statement" }, { "msg_contents": "2022-09-14 17:18 に Julien Rouhaud さんは書きました:\n> Hi,\n> \n> On Wed, Sep 14, 2022 at 05:14:06PM +0900, bt22kawamotok wrote:\n>> \n>> I found prepare statement are not jumbled.\n>> Fro example PREPARE 't1'; and PREPARE 't2' are counted separately in\n>> pg_stat_statements.\n> \n> Are you talking about PREPARE TRANSACTION? If yes I already suggested \n> that a\n> few days ago (1), and Bertrand implemented it in the v5 version of the \n> patch on\n> that thread.\n> \n> [1] \n> https://www.postgresql.org/message-id/20220908112919.2ytxpkitiw6lt2u6@jrouhaud\n\nOh, I had missed it.\nThanks for telling me about it.\n\nKotaro Kawmaoto\n\n\n", "msg_date": "Wed, 14 Sep 2022 17:28:03 +0900", "msg_from": "bt22kawamotok <bt22kawamotok@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Query jumbling for prepare statement" } ]
[ { "msg_contents": "I miss UUID, which indexes very strangely, is more and more popular and \npeople want to search for it.\n\nSee: https://www.postgresql.org/docs/current/textsearch-parsers.html\n\nUUID is fairly easy to parse:\nThe hexadecimal digits are grouped as 32 hexadecimal characters with \nfour hyphens: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.\nThe number of characters per hyphen is 8-4-4-4-12. The last section of \nfour, or the N position, indicates the format and encoding in either one \nto three bits.\n\nNow, UUIDs parse each other differently, depending on whether the \nindividual parts begin with numbers or letters:\n00633f1d-1fff-409e-8294-40a21f565904    '-40':6 '00633f1d':2 \n'00633f1d-1fff-409e':1 '1fff':3 '409e':4 '8294':5 'a21f565904':7\n00856c28-2251-4aaf-82d3-e4962f5b732d    '-2251':2 '-4':3 '00856c28':1 \n'82d3':6 'aaf':5 'aaf-82d3-e4962f5b732d':4 'e4962f5b732d':7\n00a1cc84-816a-490a-a99c-8a4c637380b0    '00a1cc84':2 \n'00a1cc84-816a-490a-a99c-8a4c637380b0':1 '490a':4 '816a':3 \n'8a4c637380b0':6 'a99c':5\n\nAs a result, such identifiers cannot be found in the database later.\n\nWhat is your opinion on missing tokens for FTS?\n\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66\n\n\n\nI miss UUID, which indexes very strangely, is more and more popular and \npeople want to search for it.\n\nSee: https://www.postgresql.org/docs/current/textsearch-parsers.html\n\nUUID is fairly easy to parse:\nThe hexadecimal digits are grouped as 32 hexadecimal characters with \nfour hyphens: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX.\nThe number of \ncharacters per hyphen is 8-4-4-4-12. 
The last section of four, or the N \nposition, indicates the format and encoding in either one to three bits.\n\nNow, UUIDs parse each other differently, depending on whether the \nindividual parts begin with numbers or letters:\n00633f1d-1fff-409e-8294-40a21f565904    '-40':6 '00633f1d':2 \n'00633f1d-1fff-409e':1 '1fff':3 '409e':4 '8294':5 'a21f565904':7\n00856c28-2251-4aaf-82d3-e4962f5b732d    '-2251':2 '-4':3 '00856c28':1 \n'82d3':6 'aaf':5 'aaf-82d3-e4962f5b732d':4 'e4962f5b732d':7\n00a1cc84-816a-490a-a99c-8a4c637380b0    '00a1cc84':2 \n'00a1cc84-816a-490a-a99c-8a4c637380b0':1 '490a':4 '816a':3 \n'8a4c637380b0':6 'a99c':5\n\nAs a result, such identifiers cannot be found in the database later.\n\nWhat is your opinion on missing tokens for FTS?\n\n-- \nPrzemysław\n Sztoch | Mobile +48 509 99 00 66", "msg_date": "Wed, 14 Sep 2022 11:26:41 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": true, "msg_subject": "FTS parser - missing UUID token type" }, { "msg_contents": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl> writes:\n> I miss UUID, which indexes very strangely, is more and more popular and \n> people want to search for it.\n\nReally? UUIDs in running text seem like an extremely uncommon\nuse-case to me. URLs in running text are common nowadays, which is\nwhy the text search parser has special code for that, but UUIDs?\n\nAdding such a thing isn't cost-free either. Aside from the\nprobably-substantial development effort, we know from experience\nwith the URL support that it sometimes misfires and identifies\nsomething as a URL or URL fragment when it really isn't one.\nThat leads to poorer indexing of the affected text. 
It seems\nlikely that adding a UUID token type would be a net negative\nfor most people, since they'd be subject to that hazard even if\ntheir text contains no true UUIDs.\n\nIt's a shame that the text search parser isn't more extensible.\nIf it were you could imagine having such a feature while making\nit optional. I'm not volunteering to fix that though :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Sep 2022 10:10:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FTS parser - missing UUID token type" } ]
[ { "msg_contents": "Hi hackers,\n\nRecently in one discussion a user complained [1] about\ncounterintuitive behavior of toast_tuple_target. Here is a quote:\n\n\"\"\"\nTable size 177.74 GB\nToast table size 12 GB\nIndexes size 33.49 GB\n\nThis table is composed of small columns \"id\", \"hash\", \"size\", and a\nmid~big (2~512kb) jsonb.\n\nI don't want to be forced to read the big column when doing seq scans,\nso I tried to set toast_tuple_target = 128, to exclude the big column,\nbut even after a VACUUM FULL i couldn't get pg to toast the big\ncolumn. Am I doing something wrong?\n\"\"\"\n\nArguably in this case the user may actually want to store the JSONB\nfields by the foreign key.\n\nHowever the user may have a good point that setting toast_tuple_target\n< TOAST_TUPLE_THRESHOLD effectively does nothing. This happens because\n[2]:\n\n\"\"\"\nThe TOAST management code is triggered only when a row value to be\nstored in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally\n2 kB). The TOAST code will compress and/or move field values\nout-of-line until the row value is shorter than toast_tuple_target\nbytes (also normally 2 kB, adjustable) or no more gains can be had.\n\"\"\"\n\n... TOAST is _triggered_ by TOAST_TUPLE_THRESHOLD but tries to\ncompress the tuple until toast_tuple_target bytes. This is indeed\nsomewhat confusing.\n\nI see several ways of solving this.\n\n1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n2. Consider using something like RelationGetToastTupleTarget(rel,\nTOAST_TUPLE_THRESHOLD) in heapam.c:2250, heapam.c:3625 and\nrewriteheap.c:636 and modify the documentation accordingly.\n3. 
Add a separate user-defined table setting toast_tuple_threshold\nsimilar to toast_tuple_target.\n\nThoughts?\n\n[1]: https://t.me/pg_sql/62265\n[2]: https://www.postgresql.org/docs/current/storage-toast.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 14 Sep 2022 19:03:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" }, { "msg_contents": "Hi hackers,\n\n> 1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n\nReading my own email I realized that this of course was stupid. For\nsure this is not an option. It's getting late in my timezone, sorry :)\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 14 Sep 2022 19:12:24 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" }, { "msg_contents": "Hi!\n\nI've noticed this behavior half a year ago during experiments with TOAST,\nand\nTOAST_TUPLE_THRESHOLD really works NOT the way it is thought to.\nI propose something like FORCE_TOAST flag/option as column option (stored\nin attoptions), because we already encountered multiple cases where data\nshould be stored externally despite its size.\nCurrently I'm working on passing Toaster options in attoptions.\n\nThoughts?\n\nOn Wed, Sep 14, 2022 at 7:12 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> > 1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n>\n> Reading my own email I realized that this of course was stupid. For\n> sure this is not an option. 
It's getting late in my timezone, sorry :)\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Wed, 14 Sep 2022 22:33:01 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" }, { "msg_contents": "On Thu, 15 Sept 2022 at 04:04, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> 1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n> 2. Consider using something like RelationGetToastTupleTarget(rel,\n> TOAST_TUPLE_THRESHOLD) in heapam.c:2250, heapam.c:3625 and\n> rewriteheap.c:636 and modify the documentation accordingly.\n> 3. Add a separate user-defined table setting toast_tuple_threshold\n> similar to toast_tuple_target.\n>\n> Thoughts?\n\nThere was some discussion on this problem in [1].\n\nThe problem with #2 is that if you look at\nheapam_relation_needs_toast_table(), it only decides if the toast\ntable should be created based on (tuple_length >\nTOAST_TUPLE_THRESHOLD).
So if you were to change the logic as you\ndescribe for #2 then there might not be a toast table during an\nINSERT/UPDATE.\n\nThe only way to fix that would be to ensure that we reconsider if we\nshould create a toast table or not when someone changes the\ntoast_tuple_target reloption. That can't be done under\nShareUpdateExclusiveLock, so we'd need to obtain an\nAccessExclusiveLock instead when changing the toast_tuple_target\nreloption. That might upset some people.\n\nThe general direction of [1] was to just increase the minimum setting\nto TOAST_TUPLE_THRESHOLD, but there were some concerns about breaking\npg_dump as we'd have to error if someone does ALTER TABLE to set the\ntoast_tuple_target reloption lower than the newly defined minimum\nvalue.\n\nI don't quite follow you on #3. If there's no toast table we can't toast.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20190403063759.GF3298@paquier.xyz\n\n\n", "msg_date": "Thu, 15 Sep 2022 12:05:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" }, { "msg_contents": "Hi!\n\nAs it is seen from the code (toasting.c and further) Toast tables are\ncreated immediately\nwhen a new relation with the TOASTable column is created. 
Practically,\nthere could occur\nthe case when the Toast table does not exist, and we should of course check for\nthat.\n\nTOAST_TUPLE_THRESHOLD is not the only thing which decides whether a value should be stored\nexternally; this is slightly more complex and less obvious logic:\n(see heapam.c, heap_prepare_insert())\nelse if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD)\n\nas you can see here is another condition - HeapTupleHasExternal, which is\nset in\nheap_fill_tuple and lower in fill_val, where the infomask bit\nHEAP_HASEXTERNAL is set.\n\nSo when I experimented with the TOAST I had to add a new flag which forced\nthe value to be\nTOASTed regardless of its size.\n\nAlso, TOAST_TUPLE_THRESHOLD sets the overall tuple size over which it would be\nconsidered\nto be toasted, and has a minimum value that cannot be decreased\nfurther.\n\nIn [1] (the Pluggable TOAST) we suggest making this an option for Toaster.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a@sigaev.ru\n\n\nOn Thu, Sep 15, 2022 at 3:05 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 15 Sept 2022 at 04:04, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > 1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n> > 2. Consider using something like RelationGetToastTupleTarget(rel,\n> > TOAST_TUPLE_THRESHOLD) in heapam.c:2250, heapam.c:3625 and\n> > rewriteheap.c:636 and modify the documentation accordingly.\n> > 3. Add a separate user-defined table setting toast_tuple_threshold\n> > similar to toast_tuple_target.\n> >\n> > Thoughts?\n>\n> There was some discussion on this problem in [1].\n>\n> The problem with #2 is that if you look at\n> heapam_relation_needs_toast_table(), it only decides if the toast\n> table should be created based on (tuple_length >\n> TOAST_TUPLE_THRESHOLD).
So if you were to change the logic as you\n> describe for #2 then there might not be a toast table during an\n> INSERT/UPDATE.\n>\n> The only way to fix that would be to ensure that we reconsider if we\n> should create a toast table or not when someone changes the\n> toast_tuple_target reloption. That can't be done under\n> ShareUpdateExclusiveLock, so we'd need to obtain an\n> AccessExclusiveLock instead when changing the toast_tuple_target\n> reloption. That might upset some people.\n>\n> The general direction of [1] was to just increase the minimum setting\n> to TOAST_TUPLE_THRESHOLD, but there were some concerns about breaking\n> pg_dump as we'd have to error if someone does ALTER TABLE to set the\n> toast_tuple_target reloption lower than the newly defined minimum\n> value.\n>\n> I don't quite follow you on #3. If there's no toast table we can't toast.\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/20190403063759.GF3298@paquier.xyz\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!As it is seen from the code (toasting.c and further) Toast tables are created immediatelywhen a new relation with the TOASTable column is created. 
Practically, there could occurthe case when Toast table does not exist and we should of course check for that.TOAST_TUPLE_THRESHOLD is not only one which decides should be value storedexternally, this is slightly more complex and less obvious logic:(see heapam.c, heap_prepare_insert())else if (HeapTupleHasExternal(tup) || tup->t_len > TOAST_TUPLE_THRESHOLD)as you can see here is another condition - HeapTupleHasExternal, which is set inheap_fill_tuple and lower in fill_val, where the infomask bit HEAP_HASEXTERNAL is set.So when I experimented with the TOAST I had to add a new flag which forced the value to beTOASTed regardless of its size.Also, TOAST_TUPLE_THRESHOLD sets overall tuple size over which it would be consideredto be toasted, and has its minimum value that could not be decreased further.In [1] (the Pluggable TOAST) we suggest making this an option for Toaster.[1] https://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a@sigaev.ruOn Thu, Sep 15, 2022 at 3:05 AM David Rowley <dgrowleyml@gmail.com> wrote:On Thu, 15 Sept 2022 at 04:04, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> 1. Forbid setting toast_tuple_target < TOAST_TUPLE_THRESHOLD\n> 2. Consider using something like RelationGetToastTupleTarget(rel,\n> TOAST_TUPLE_THRESHOLD) in heapam.c:2250, heapam.c:3625 and\n> rewriteheap.c:636 and modify the documentation accordingly.\n> 3. Add a separate user-defined table setting toast_tuple_threshold\n> similar to toast_tuple_target.\n>\n> Thoughts?\n\nThere was some discussion on this problem in [1].\n\nThe problem with #2 is that if you look at\nheapam_relation_needs_toast_table(), it only decides if the toast\ntable should be created based on (tuple_length >\nTOAST_TUPLE_THRESHOLD).
So if you were to change the logic as you\ndescribe for #2 then there might not be a toast table during an\nINSERT/UPDATE.\n\nThe only way to fix that would be to ensure that we reconsider if we\nshould create a toast table or not when someone changes the\ntoast_tuple_target reloption.  That can't be done under\nShareUpdateExclusiveLock, so we'd need to obtain an\nAccessExclusiveLock instead when changing the toast_tuple_target\nreloption. That might upset some people.\n\nThe general direction of [1] was to just increase the minimum setting\nto TOAST_TUPLE_THRESHOLD, but there were some concerns about breaking\npg_dump as we'd have to error if someone does ALTER TABLE to set the\ntoast_tuple_target reloption lower than the newly defined minimum\nvalue.\n\nI don't quite follow you on #3. If there's no toast table we can't toast.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/20190403063759.GF3298@paquier.xyz\n\n\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Thu, 15 Sep 2022 11:17:42 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" }, { "msg_contents": "Hi David,\n\n> There was some discussion on this problem in [1].\n> [1] https://www.postgresql.org/message-id/20190403063759.GF3298@paquier.xyz\n\nThanks for sharing this discussion. I missed it.\n\n> The problem with #2 is that if you look at\n> heapam_relation_needs_toast_table(), it only decides if the toast\n> table should be created based on (tuple_length >\n> TOAST_TUPLE_THRESHOLD). So if you were to change the logic as you\n> describe for #2 then there might not be a toast table during an\n> INSERT/UPDATE.\n>\n> The only way to fix that would be to ensure that we reconsider if we\n> should create a toast table or not when someone changes the\n> toast_tuple_target reloption. 
That can't be done under\n> ShareUpdateExclusiveLock, so we'd need to obtain an\n> AccessExclusiveLock instead when changing the toast_tuple_target\n> reloption. That might upset some people.\n\nPersonally, if I would choose between (A) a feature that is\npotentially expensive but useful to many and (B) a feature that in\npractice is pretty much useless to most of the users, I would choose\n(A). Maybe we will be able to make it a little less expensive if we\noptimistically take a shared lock first and then, if necessary, take\nan exclusive lock.\n\n> The general direction of [1] was to just increase the minimum setting\n> to TOAST_TUPLE_THRESHOLD, but there were some concerns about breaking\n> pg_dump as we'd have to error if someone does ALTER TABLE to set the\n> toast_tuple_target reloption lower than the newly defined minimum\n> value.\n\nYep, this doesn't seem to be an option.\n\n> I don't quite follow you on #3. If there's no toast table we can't toast.\n\nThe case I had in mind was the one when we already have a TOAST table\nand then change toast_tuple_target.\n\nIn this scenario TOAST_TUPLE_THRESHOLD is used to decide whether TOAST\nshould be triggered for a given tuple. For how long TOAST will keep\ncompressing the tuple is controlled by toast_tuple_target, not by\nTOAST_TUPLE_THRESHOLD. So the user has control of \"target\" but there\nis no control of \"threshold\". If the user could set both \"threshold\"\nand \"target\" low this would solve the problem the user originally had\n(the one described in the first email).\n\nAdding a separate \"threshold\" option doesn't solve the problem when we\nchange it and there is no TOAST table yet.\n\nI wonder though if we really need two entities - a \"threshold\" and a\n\"target\". 
It seems to me that it should actually be one value, no?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 15 Sep 2022 11:55:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Counterintuitive behavior when toast_tuple_target <\n TOAST_TUPLE_THRESHOLD" } ]
[ { "msg_contents": "In 29f45e29 we added support for executing NOT IN(values) with a\nhashtable, however this comment still claims that we only do so for\ncases where the ScalarArrayOpExpr's useOr flag is true.\n\nSee attached for fix.\n\nThanks,\nJames Coleman", "msg_date": "Wed, 14 Sep 2022 12:07:50 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Fix comment in convert_saop_to_hashed_saop" }, { "msg_contents": "On Thu, 15 Sept 2022 at 04:08, James Coleman <jtc331@gmail.com> wrote:\n> In 29f45e29 we added support for executing NOT IN(values) with a\n> hashtable, however this comment still claims that we only do so for\n> cases where the ScalarArrayOpExpr's useOr flag is true.\n>\n> See attached for fix.\n\nThank you. Pushed.\n\nDavid\n\n\n", "msg_date": "Thu, 15 Sep 2022 09:44:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comment in convert_saop_to_hashed_saop" } ]
[ { "msg_contents": "Hi.\n\nAccording to:\nhttps://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-localalloc\n\"Note The local functions have greater overhead and provide fewer features\nthan other memory management functions. New applications should use the\nheap functions unless documentation states that a local function should be\nused. For more information, see Global and Local Functions.\"\n\nLocalAlloc is deprecated.\nSo use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\nHeapAlloc.\n\nAttached a patch.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 14 Sep 2022 20:19:12 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid use deprecated Windows Memory API" }, { "msg_contents": "> On 15 Sep 2022, at 01:19, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> LocalAlloc is deprecated.\n> So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to HeapAlloc.\n> \n> Attached a patch.\n\nDon't forget that patches which aim to reduce overhead are best when\naccompanied with benchmarks which show the effect of the reduction.\n\n-\tpacl = (PACL) LocalAlloc(LPTR, dwNewAclSize);\n+\tpacl = (PACL) HeapAlloc(hDefaultProcessHeap, 0, dwNewAclSize);\n\nThese calls are not equal, the LocalAlloc calls zeroes out the allocated memory\nbut the HeapAlloc does not unless the HEAP_ZERO_MEMORY flag is passed. I\nhaven't read the code enough to know if that matters, but it seems relevant to\nat least discuss.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 15 Sep 2022 10:35:34 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qui., 15 de set. 
de 2022 às 05:35, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 15 Sep 2022, at 01:19, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> > LocalAlloc is deprecated.\n> > So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\n> HeapAlloc.\n> >\n> > Attached a patch.\n>\n> Don't forget that patches which aim to reduce overhead are best when\n> accompanied with benchmarks which show the effect of the reduction.\n>\nI'm trusting the API producer.\n\n\n>\n> - pacl = (PACL) LocalAlloc(LPTR, dwNewAclSize);\n> + pacl = (PACL) HeapAlloc(hDefaultProcessHeap, 0, dwNewAclSize);\n>\n> These calls are not equal, the LocalAlloc calls zeroes out the allocated\n> memory\n> but the HeapAlloc does not unless the HEAP_ZERO_MEMORY flag is passed. I\n> haven't read the code enough to know if that matters, but it seems\n> relevant to\n> at least discuss.\n>\nYeah, I missed that.\nBut works fine and passes all tests.\nIf really ok, yet another improvement by avoiding useless padding.\n\nCF entry created.\nhttps://commitfest.postgresql.org/40/3893/\n\nregards,\nRanier Vilela\n\nEm qui., 15 de set. de 2022 às 05:35, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 15 Sep 2022, at 01:19, Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> LocalAlloc is deprecated.\n> So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to HeapAlloc.\n> \n> Attached a patch.\n\nDon't forget that patches which aim to reduce overhead are best when\naccompanied with benchmarks which show the effect of the reduction.I'm trusting the API producer. \n\n-       pacl = (PACL) LocalAlloc(LPTR, dwNewAclSize);\n+       pacl = (PACL) HeapAlloc(hDefaultProcessHeap, 0, dwNewAclSize);\n\nThese calls are not equal, the LocalAlloc calls zeroes out the allocated memory\nbut the HeapAlloc does not unless the HEAP_ZERO_MEMORY flag is passed.  
I\nhaven't read the code enough to know if that matters, but it seems relevant to\nat least discuss.Yeah, I missed that.But works fine and passes all tests.If really ok, yet another improvement by avoiding useless padding.CF entry created.https://commitfest.postgresql.org/40/3893/regards,Ranier Vilela", "msg_date": "Thu, 15 Sep 2022 08:26:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "On 2022-Sep-14, Ranier Vilela wrote:\n\n> According to:\n> https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-localalloc\n\n> LocalAlloc is deprecated.\n> So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\n> HeapAlloc.\n\nThese functions you are patching are not in performance-sensitive code,\nso I doubt this makes any difference performance wise. I doubt\nMicrosoft will ever remove these deprecated functions, given its history\nof backwards compatibility, so from that perspective this change does\nnot achieve anything either.\n\nIf you were proposing to change how palloc() allocates memory, that\nwould be quite different and perhaps useful, as long as a benchmark\naccompanies the patch.\n.\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/", "msg_date": "Thu, 15 Sep 2022 14:50:29 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Hi Ranier,\n\n> use HeapAlloc instead, once LocalAlloc is an overhead wrapper to HeapAlloc.\n\nThanks for the patch.\n\nAlthough MSDN doesn't explicitly say that LocalAlloc is _deprecated_\n+1 for replacing it. I agree with Alvaro that it is unlikely to be\never removed, but this is a trivial change, so let's keep the code a\nbit cleaner.
Also I agree that no benchmarking is required for this\npatch since the code is not performance-sensitive.\n\n> These calls are not equal, the LocalAlloc calls zeroes out the allocated memory\n\nI took a look. The memory is initialized below by InitializeAcl() /\nGetTokenInformation() [1][2] so it seems we are fine here.\n\nThe patch didn't have a proper commit message and had some issues with\nthe formatting. I fixed this. The new code checks the return value of\nGetProcessHeap() which is unlikely to fail, so I figured unlikely() is\nappropriate:\n\n+       hDefaultProcessHeap = GetProcessHeap();\n+       if (unlikely(hDefaultProcessHeap == NULL))\n\nThe corrected patch is attached.\n\n[1]: https://docs.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-initializeacl\n[2]: https://docs.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-gettokeninformation\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 15 Sep 2022 15:58:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qui., 15 de set. de 2022 às 09:58, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Hi Ranier,\n>\n> > use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\n> HeapAlloc.\n>\n> Thanks for the patch.\n>\n> Although MSDN doesn't explicitly say that LocalAlloc is _deprecated_\n> +1 for replacing it.\n\nThe MSDN says:\n\" New applications should use the heap functions\n<https://docs.microsoft.com/en-us/windows/desktop/Memory/heap-functions>\".\nIMO Postgres 16, fits here.\n\nI agree with Alvaro that it is unlikely to be\n> ever removed, but this is a trivial change, so let's keep the code a\n> bit cleaner.
Also I agree that no benchmarking is required for this\n> patch since the code is not performance-sensitive.\n>\nIn no time, the patch claims performance.\nSo much so that the open CF is in refactoring.\n\n\n> > These calls are not equal, the LocalAlloc calls zeroes out the allocated\n> memory\n>\n> I took a look. The memory is initialized below by InitializeAcl() /\n> GetTokenInformation() [1][2] so it seems we are fine here.\n>\n> The patch didn't have a proper commit message and had some issues with\n> the formating. I fixed this. The new code checks the return value of\n> GetProcessHeap() which is unlikely to fail, so I figured unlikely() is\n> appropriate:\n>\n> + hDefaultProcessHeap = GetProcessHeap();\n> + if (unlikely(hDefaultProcessHeap == NULL))\n>\n> The corrected patch is attached.\n>\nThanks for the review and fixes.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de set. de 2022 às 09:58, Aleksander Alekseev <aleksander@timescale.com> escreveu:Hi Ranier,\n\n> use HeapAlloc instead, once LocalAlloc is an overhead wrapper to HeapAlloc.\n\nThanks for the patch.\n\nAlthough MSDN doesn't explicitly say that LocalAlloc is _depricated_\n+1 for replacing it.The MSDN says:\"\nNew applications should use the \nheap functions\".IMO Postgres 16, fits here. I agree with Alvaro that it is unlikely to be\never removed, but this is a trivial change, so let's keep the code a\nbit cleaner. Also I agree that no benchmarking is required for this\npatch since the code is not performance-sensitive.In no time, the patch claims performance.So much so that the open CF is in refactoring. \n\n> These calls are not equal, the LocalAlloc calls zeroes out the allocated memory\n\nI took a look. The memory is initialized below by InitializeAcl() /\nGetTokenInformation() [1][2] so it seems we are fine here.\n\nThe patch didn't have a proper commit message and had some issues with\nthe formating. I fixed this. 
The new code checks the return value of\nGetProcessHeap() which is unlikely to fail, so I figured unlikely() is\nappropriate:\n\n+       hDefaultProcessHeap = GetProcessHeap();\n+       if (unlikely(hDefaultProcessHeap == NULL))\n\nThe corrected patch is attached.Thanks for the review and fixes.regards,Ranier Vilela", "msg_date": "Thu, 15 Sep 2022 10:05:19 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2022-Sep-14, Ranier Vilela wrote:\n>\n> > According to:\n> >\n> https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-localalloc\n>\n> > LocalAlloc is deprecated.\n> > So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\n> > HeapAlloc.\n>\n> These functions you are patching are not in performance-sensitive code,\n> so I doubt this makes any difference performance wise. I doubt\n> Microsoft will ever remove these deprecated functions, given its history\n> of backwards compatibility, so from that perspective this change does\n> not achieve anything either.\n>\nIf users don't adapt to the new API, the old one will never really expire.\n\n\n> If you were proposing to change how palloc() allocates memory, that\n> would be quite different and perhaps useful, as long as a benchmark\n> accompanies the patch.\n>\nThis is irrelevant to the discussion.\nNeither the patch nor the thread deals with palloc.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de set. 
de 2022 às 09:50, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:On 2022-Sep-14, Ranier Vilela wrote:\n\n> According to:\n> https://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-localalloc\n\n> LocalAlloc is deprecated.\n> So use HeapAlloc instead, once LocalAlloc is an overhead wrapper to\n> HeapAlloc.\n\nThese functions you are patching are not in performance-sensitive code,\nso I doubt this makes any difference performance wise.  I doubt\nMicrosoft will ever remove these deprecated functions, given its history\nof backwards compatibility, so from that perspective this change does\nnot achieve anything either.If users don't adapt to the new API, the old one will never really expire. \n\nIf you were proposing to change how palloc() allocates memory, that\nwould be quite different and perhaps useful, as long as a benchmark\naccompanies the patch.This is irrelevant to the discussion. Neither the patch nor the thread deals with palloc.regards,Ranier Vilela", "msg_date": "Thu, 15 Sep 2022 10:11:10 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "On 2022-Sep-15, Aleksander Alekseev wrote:\n\n> I agree with Alvaro that it is unlikely to be ever removed, but this\n> is a trivial change, so let's keep the code a bit cleaner.\n\nIn what way is this code cleaner? I argue it is the opposite.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)\n\n\n", "msg_date": "Thu, 15 Sep 2022 15:11:24 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "On 2022-Sep-15, Ranier Vilela wrote:\n\n> Em qui., 15 de set. 
de 2022 às 09:50, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> escreveu:\n\n> > These functions you are patching are not in performance-sensitive code,\n> > so I doubt this makes any difference performance wise. I doubt\n> > Microsoft will ever remove these deprecated functions, given its history\n> > of backwards compatibility, so from that perspective this change does\n> > not achieve anything either.\n>\n> If users don't adapt to the new API, the old one will never really expire.\n\nIf you are claiming that Microsoft will remove the old API because\nPostgres stopped using it, ... sorry, no.\n\n> Neither the patch nor the thread deals with palloc.\n\nI know. Which is why I think the patch is useless.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 15 Sep 2022 15:13:47 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Hi Alvaro,\n\n> In what way is this code cleaner? I argue it is the opposite.\n\nWell, I guess it depends on the perspective. There are a bit more\nlines of code for sure. So if \"less code is better\" is the criteria,\nthen no, the new code is not cleaner. If the criteria is to use an API\naccording to the spec provided by the vendor, to me personally the new\ncode looks cleaner.\n\nThis being said I don't have a strong opinion on whether this patch\nshould be accepted or not. I completely agree that MS will actually\nkeep LocalAlloc() indefinitely for backward compatibility with\nexisting applications, as the company always did.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 15 Sep 2022 16:19:26 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qui., 15 de set. 
de 2022 às 10:13, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2022-Sep-15, Ranier Vilela wrote:\n>\n> > Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\n> > alvherre@alvh.no-ip.org> escreveu:\n>\n> > > These functions you are patching are not in performance-sensitive code,\n> > > so I doubt this makes any difference performance wise. I doubt\n> > > Microsoft will ever remove these deprecated functions, given its\n> history\n> > > of backwards compatibility, so from that perspective this change does\n> > > not achieve anything either.\n> >\n> > If users don't adapt to the new API, the old one will never really\n> expire.\n>\n> If you are claiming that Microsoft will remove the old API because\n> Postgres stopped using it, ... sorry, no.\n>\nCertainly not.\nBut Postgres should be an example.\nBy removing the old API, Postgres encourages others to do so.\nSo who knows, one day, the OS might finally get rid of this useless burden.\n\n\n> > Neither the patch nor the thread deals with palloc.\n>\n> I know. Which is why I think the patch is useless.\n>\nOther hackers think differently, thankfully.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de set. de 2022 às 10:13, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:On 2022-Sep-15, Ranier Vilela wrote:\n\n> Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\n> alvherre@alvh.no-ip.org> escreveu:\n\n> > These functions you are patching are not in performance-sensitive code,\n> > so I doubt this makes any difference performance wise.  I doubt\n> > Microsoft will ever remove these deprecated functions, given its history\n> > of backwards compatibility, so from that perspective this change does\n> > not achieve anything either.\n>\n> If users don't adapt to the new API, the old one will never really expire.\n\nIf you are claiming that Microsoft will remove the old API because\nPostgres stopped using it, ... 
sorry, no.Certainly not.But Postgres should be an example.By removing the old API, Postgres encourages others to do so.So who knows, one day, the OS might finally get rid of this useless burden. \n\n> Neither the patch nor the thread deals with palloc.\n\nI know.  Which is why I think the patch is useless.Other hackers think differently, thankfully.regards,Ranier Vilela", "msg_date": "Thu, 15 Sep 2022 10:29:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "> On 15 Sep 2022, at 15:13, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-Sep-15, Ranier Vilela wrote:\n> \n>> Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\n>> alvherre@alvh.no-ip.org> escreveu:\n> \n>>> These functions you are patching are not in performance-sensitive code,\n>>> so I doubt this makes any difference performance wise. I doubt\n>>> Microsoft will ever remove these deprecated functions, given its history\n>>> of backwards compatibility, so from that perspective this change does\n>>> not achieve anything either.\n>> \n>> If users don't adapt to the new API, the old one will never really expire.\n> \n> If you are claiming that Microsoft will remove the old API because\n> Postgres stopped using it, ... sorry, no.\n\nAlso, worth noting is that these functions aren't actually deprecated. The\nnote in the docs state:\n\n\tThe local functions have greater overhead and provide fewer features\n\tthan other memory management functions. 
New applications should use\n\tthe heap functions unless documentation states that a local function\n\tshould be used.\n\nAnd following the bouncing ball into the documentation they refer to [0] I read\nthis:\n\n\tAs a result, the global and local families of functions are equivalent\n\tand choosing between them is a matter of personal preference.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0]: https://docs.microsoft.com/en-us/windows/win32/memory/global-and-local-functions\n\n", "msg_date": "Thu, 15 Sep 2022 15:30:13 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qui., 15 de set. de 2022 às 10:30, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 15 Sep 2022, at 15:13, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2022-Sep-15, Ranier Vilela wrote:\n> >\n> >> Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\n> >> alvherre@alvh.no-ip.org> escreveu:\n> >\n> >>> These functions you are patching are not in performance-sensitive code,\n> >>> so I doubt this makes any difference performance wise. I doubt\n> >>> Microsoft will ever remove these deprecated functions, given its\n> history\n> >>> of backwards compatibility, so from that perspective this change does\n> >>> not achieve anything either.\n> >>\n> >> If users don't adapt to the new API, the old one will never really\n> expire.\n> >\n> > If you are claiming that Microsoft will remove the old API because\n> > Postgres stopped using it, ... sorry, no.\n>\n> Also, worth noting is that these functions aren't actually deprecated. The\n> note in the docs state:\n>\nDaniel, I agree that MSDN could be better written.\nSee:\n\"Remarks\n\nWindows memory management does not provide a separate local heap and global\nheap. 
Therefore, the *LocalAlloc* and GlobalAlloc\n<https://docs.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-globalalloc>\nfunctions are essentially the same.\n\nThe movable-memory flags *LHND*, *LMEM_MOVABLE*, and *NONZEROLHND* add\nunnecessary overhead and require locking to be used safely. They should be\navoided unless documentation specifically states that they should be used.\n\nNew applications should use the heap functions\n<https://docs.microsoft.com/en-us/windows/desktop/Memory/heap-functions>\"\n\nAgain, MSDN claims to use a new API.\n\nAnd following the bouncing ball into the documentation they refer to [0] I\n> read\n> this:\n>\n> As a result, the global and local families of functions are\n> equivalent\n> and choosing between them is a matter of personal preference.\n>\nIMO, This part has nothing to do with it.\n\nregards,\nRanier Vilela\n\nEm qui., 15 de set. de 2022 às 10:30, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 15 Sep 2022, at 15:13, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-Sep-15, Ranier Vilela wrote:\n> \n>> Em qui., 15 de set. de 2022 às 09:50, Alvaro Herrera <\n>> alvherre@alvh.no-ip.org> escreveu:\n> \n>>> These functions you are patching are not in performance-sensitive code,\n>>> so I doubt this makes any difference performance wise.  I doubt\n>>> Microsoft will ever remove these deprecated functions, given its history\n>>> of backwards compatibility, so from that perspective this change does\n>>> not achieve anything either.\n>> \n>> If users don't adapt to the new API, the old one will never really expire.\n> \n> If you are claiming that Microsoft will remove the old API because\n> Postgres stopped using it, ... sorry, no.\n\nAlso, worth noting is that these functions aren't actually deprecated.  The\nnote in the docs state:Daniel, I agree that MSDN could be better written.See:\n\"Remarks\nWindows memory management does not provide a separate local heap and global heap. 
Therefore, the LocalAlloc and GlobalAlloc functions are essentially the same.\nThe movable-memory flags LHND, LMEM_MOVABLE, and NONZEROLHND\n add unnecessary overhead and require locking to be used safely. They \nshould be avoided unless documentation specifically states that they \nshould be used.\nNew applications should use the\nheap functions\"\n Again, MSDN claims to use a new API.\nAnd following the bouncing ball into the documentation they refer to [0] I read\nthis:\n\n        As a result, the global and local families of functions are equivalent\n        and choosing between them is a matter of personal preference.IMO,  This part has nothing to do with it.regards,Ranier Vilela", "msg_date": "Thu, 15 Sep 2022 10:44:05 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Hi,\n\n> Again, MSDN claims to use a new API.\n\nTWIMC the patch rotted a bit and now needs to be updated [1].\nI changed its status to \"Waiting on Author\" [2].\n\n[1]: http://cfbot.cputube.org/\n[2]: https://commitfest.postgresql.org/42/3893/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 17 Mar 2023 16:57:56 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em sex., 17 de mar. 
de 2023 às 10:58, Aleksander Alekseev <\naleksander@timescale.com> escreveu:\n\n> Hi,\n>\n> > Again, MSDN claims to use a new API.\n>\n> TWIMC the patch rotted a bit and now needs to be updated [1].\n> I changed its status to \"Waiting on Author\" [2].\n>\nRebased to latest Head.\n\nbest regards,\nRanier Vilela", "msg_date": "Fri, 17 Mar 2023 12:19:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "On Fri, Mar 17, 2023 at 12:19:56PM -0300, Ranier Vilela wrote:\n> Rebased to latest Head.\n\nI was looking at this thread, and echoing Daniel's and Alvaro's\narguments, I don't quite see why this patch is something we need. I\nwould recommend to mark it as rejected and move on.\n--\nMichael", "msg_date": "Mon, 20 Mar 2023 07:41:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "> On 19 Mar 2023, at 23:41, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Mar 17, 2023 at 12:19:56PM -0300, Ranier Vilela wrote:\n>> Rebased to latest Head.\n> \n> I was looking at this thread, and echoing Daniel's and Alvaro's\n> arguments, I don't quite see why this patch is something we need. I\n> would recommend to mark it as rejected and move on.\n\nUnless the claimed performance improvement is measured and provides a speedup,\nand the loss of zeroing memory is guaranteed safe, there doesn't seem to be\nmuch value provided.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 22 Mar 2023 11:01:39 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Avoid use deprecated Windows Memory API" }, { "msg_contents": "Em qua., 22 de mar. 
de 2023 às 07:01, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 19 Mar 2023, at 23:41, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Mar 17, 2023 at 12:19:56PM -0300, Ranier Vilela wrote:\n> >> Rebased to latest Head.\n> >\n> > I was looking at this thread, and echoing Daniel's and Alvaro's\n> > arguments, I don't quite see why this patch is something we need. I\n> > would recommend to mark it as rejected and move on.\n>\n> Unless the claimed performance improvement is measured and provides a\n> speedup,\n> and the loss of zeroing memory is guaranteed safe, there doesn't seem to be\n> much value provided.\n>\nAt no time was it suggested that there would be performance gains.\nThe patch proposes to adjust the API that the manufacturer asks you to use.\nHowever, I see no point in discussing a defunct patch.\n\nregards,\nRanier Vilela\n\nEm qua., 22 de mar. de 2023 às 07:01, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 19 Mar 2023, at 23:41, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Mar 17, 2023 at 12:19:56PM -0300, Ranier Vilela wrote:\n>> Rebased to latest Head.\n> \n> I was looking at this thread, and echoing Daniel's and Alvaro's\n> arguments, I don't quite see why this patch is something we need.  I\n> would recommend to mark it as rejected and move on.\n\nUnless the claimed performance improvement is measured and provides a speedup,\nand the loss of zeroing memory is guaranteed safe, there doesn't seem to be\nmuch value provided.At no time was it suggested that there would be performance gains.The patch proposes to adjust the API that the manufacturer asks you to use.However, I see no point in discussing a defunct patch.regards,Ranier Vilela", "msg_date": "Wed, 22 Mar 2023 09:18:31 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid use deprecated Windows Memory API" } ]
[ { "msg_contents": "Hi hacker,\n\nRecently, I find there might be a typo in xact.c comments. The comments\nsay \"PG_PROC\", however, it actually means \"PGPROC\" structure. Since we\nhave pg_proc catalog, and use PG_PROC to reference the catalog [1], so,\nwe should use PGPROC to reference the structure. Any thoughts?\n\n[1] src/include/nodes/primnodes.h\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 15 Sep 2022 22:38:01 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Typo in xact.c" }, { "msg_contents": "At Thu, 15 Sep 2022 22:38:01 +0800, Japin Li <japinli@hotmail.com> wrote in \n> \n> Hi hacker,\n> \n> Recently, I find there might be a typo in xact.c comments. The comments\n> say \"PG_PROC\", however, it actually means \"PGPROC\" structure. Since we\n> have pg_proc catalog, and use PG_PROC to reference the catalog [1], so,\n> we should use PGPROC to reference the structure. Any thoughts?\n> \n> [1] src/include/nodes/primnodes.h\n\nThe patch seems to me covering all occurrences of PG_PROC as PGPROC.\n\nI found several uses of PG_PROC as (pg_catalog.)pg_proc, which is\nquite confusing, too..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Sep 2022 12:11:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo in xact.c" }, { "msg_contents": "On Fri, Sep 16, 2022 at 10:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Sep 2022 22:38:01 +0800, Japin Li <japinli@hotmail.com> wrote in\n> >\n> > Hi hacker,\n> >\n> > Recently, I find there might be a typo in xact.c comments. The comments\n> > say \"PG_PROC\", however, it actually means \"PGPROC\" structure. Since we\n> > have pg_proc catalog, and use PG_PROC to reference the catalog [1], so,\n> > we should use PGPROC to reference the structure. 
Any thoughts?\n> >\n> > [1] src/include/nodes/primnodes.h\n>\n> The patch seems to me covering all occurrences of PG_PROC as PGPROC.\n\n+1 since this hinders grep-ability.\n\n> I found several uses of PG_PROC as (pg_catalog.)pg_proc, which is\n> quite confusing, too..\n\nIt's pretty obvious to me what that refers to in primnodes.h, although\nthe capitalization of (some, but not all) catalog names in comments in\nthat file is a bit strange. Maybe not worth changing there.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 16 Sep 2022 10:51:32 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Typo in xact.c" }, { "msg_contents": "On Fri, 16 Sep 2022 at 11:51, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Fri, Sep 16, 2022 at 10:11 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>>\n>> At Thu, 15 Sep 2022 22:38:01 +0800, Japin Li <japinli@hotmail.com> wrote in\n>> >\n>> > Hi hacker,\n>> >\n>> > Recently, I find there might be a typo in xact.c comments. The comments\n>> > say \"PG_PROC\", however, it actually means \"PGPROC\" structure. Since we\n>> > have pg_proc catalog, and use PG_PROC to reference the catalog [1], so,\n>> > we should use PGPROC to reference the structure. Any thoughts?\n>> >\n>> > [1] src/include/nodes/primnodes.h\n>>\n>> The patch seems to me covering all occurrences of PG_PROC as PGPROC.\n>\n> +1 since this hinders grep-ability.\n>\n>> I found several uses of PG_PROC as (pg_catalog.)pg_proc, which is\n>> quite confusing, too..\n>\n> It's pretty obvious to me what that refers to in primnodes.h, although\n> the capitalization of (some, but not all) catalog names in comments in\n> that file is a bit strange. Maybe not worth changing there.\n\nThanks for the review. 
I found for system catalog, we often use lower case.\nHere attached a new patch to fix it.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Fri, 16 Sep 2022 11:57:34 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in xact.c" }, { "msg_contents": "\nOn Fri, 16 Sep 2022 at 11:11, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Thu, 15 Sep 2022 22:38:01 +0800, Japin Li <japinli@hotmail.com> wrote in \n>> \n>> Hi hacker,\n>> \n>> Recently, I find there might be a typo in xact.c comments. The comments\n>> say \"PG_PROC\", however, it actually means \"PGPROC\" structure. Since we\n>> have pg_proc catalog, and use PG_PROC to reference the catalog [1], so,\n>> we should use PGPROC to reference the structure. Any thoughts?\n>> \n>> [1] src/include/nodes/primnodes.h\n>\n> The patch seems to me covering all occurrences of PG_PROC as PGPROC.\n>\n> I found several uses of PG_PROC as (pg_catalog.)pg_proc, which is\n> quite confusing, too..\n>\n> regards.\n\nThanks for the review. 
Attached a new patch to replace upper case system\ncatalog to lower case [1].\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 16 Sep 2022 12:01:46 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Typo in xact.c" }, { "msg_contents": "On Fri, Sep 16, 2022 at 10:51 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Fri, Sep 16, 2022 at 10:11 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > The patch seems to me covering all occurrences of PG_PROC as PGPROC.\n>\n> +1 since this hinders grep-ability.\n\nPushed this.\n\n> > I found several uses of PG_PROC as (pg_catalog.)pg_proc, which is\n> > quite confusing, too..\n>\n> It's pretty obvious to me what that refers to in primnodes.h, although\n> the capitalization of (some, but not all) catalog names in comments in\n> that file is a bit strange. Maybe not worth changing there.\n\nI left this alone. It's not wrong, and I don't think it's confusing in context.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 19 Sep 2022 11:42:39 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Typo in xact.c" } ]
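The grep-ability point made in the thread above is easy to demonstrate with a small shell sketch. The two comment lines below are hypothetical samples, not actual xact.c text: a search for the struct's canonical spelling cannot find a comment that writes it as "PG_PROC", which is exactly why one consistent spelling matters.

```shell
# Two sample comment lines: one uses the canonical struct name PGPROC,
# the other the stray "PG_PROC" spelling that the patch above fixes.
printf '%s\n' \
    '/* each backend owns one PGPROC entry in shared memory */' \
    '/* cleanup is deferred until the PG_PROC entry is cleared */' \
    > comments.txt

# Searching for the canonical spelling misses the second comment entirely,
# and vice versa -- neither pattern is a substring of the other.
grep -c 'PGPROC' comments.txt    # -> 1
grep -c 'PG_PROC' comments.txt   # -> 1
```

With a single canonical spelling, `grep PGPROC` would have found both comments; with the mixed spellings, each search sees only half the picture.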
[ { "msg_contents": "Detect format-string mistakes in the libpq_pipeline test module.\n\nI happened to notice that libpq_pipeline's private implementation\nof pg_fatal lacked any pg_attribute_printf decoration. Indeed,\nadding that turned up a mistake! We'd likely never have noticed\nbecause the error exits in this code are unlikely to get hit,\nbut still, it's a bug.\n\nWe're so used to having the compiler check this stuff for us that\na printf-like function without pg_attribute_printf is a land mine.\nI wonder if there is a way to detect such omissions.\n\nBack-patch to v14 where this code came in.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/cf2c7a736e4939ff0d6cf2acd29b17eea3bca7c2\n\nModified Files\n--------------\nsrc/test/modules/libpq_pipeline/libpq_pipeline.c | 4 +++-\n1 file changed, 3 insertions(+), 1 deletion(-)", "msg_date": "Thu, 15 Sep 2022 21:18:03 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Detect format-string mistakes in the libpq_pipeline test\n module." }, { "msg_contents": "Hi,\n\nOn 2022-09-15 21:18:03 +0000, Tom Lane wrote:\n> We're so used to having the compiler check this stuff for us that\n> a printf-like function without pg_attribute_printf is a land mine.\n> I wonder if there is a way to detect such omissions.\n\ngcc has -Wsuggest-attribute=format - but unfortunately its heuristics appear\nto be too simplistic to catch many omissions. It doesn't catch this one, for\nexample. But it may still be worth trying out in a few more cases.\n\n -Wsuggest-attribute=format\n -Wmissing-format-attribute\n Warn about function pointers that might be candidates for \"format\" attributes. 
Note these are only possible candidates, not absolute ones.\n GCC guesses that function pointers with \"format\" attributes that are used in assignment, initialization, parameter passing or return\n statements should have a corresponding \"format\" attribute in the resulting type. I.e. the left-hand side of the assignment or\n initialization, the type of the parameter variable, or the return type of the containing function respectively should also have a \"format\"\n attribute to avoid the warning.\n\n GCC also warns about function definitions that might be candidates for \"format\" attributes. Again, these are only possible candidates. GCC\n guesses that \"format\" attributes might be appropriate for any function that calls a function like \"vprintf\" or \"vscanf\", but this might not\n always be the case, and some functions for which \"format\" attributes are appropriate may not be detected.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 15 Sep 2022 18:54:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "warning about missing format string annotations" } ]
[ { "msg_contents": "Following my discovery of missed pg_attribute_printf coverage\nin libpq_pipeline (cf2c7a736), I went looking to see if we'd\nforgotten that anywhere else. The coverage seems to be solid\n... except at the very root, where we have no such markers for\npg_vsnprintf, pg_vsprintf, pg_vfprintf, pg_vprintf.\nI wonder if the option to write \"0\" for the second argument of\npg_attribute_printf didn't exist then, or we didn't know about it?\n\nI'm not sure that adding those markers does all that much for\nus, since (a) it's hard to see how the compiler could check\nanything if it doesn't have the actual args list at hand,\nand (b) these functions are likely never invoked with constant\nformat strings anyway. Nonetheless, it seems pretty silly that\nwe've faithfully labeled all the derivative printf-alikes but\nnot these fundamental ones.\n\nI also noted that win32security.c thinks it can stick\npg_attribute_printf into a function definition, whereas all\nthe rest of our code thinks you have to append that marker\nto a separate function declaration. Does that actually work\nwith Microsoft-platform compilers? Or more likely, is the\nmacro expanding to empty on every compiler we've used with\nthat code? Anyway, seems like we ought to make this fall\nin line with the convention elsewhere.\n\nSo I end with the attached. Any objections?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 15 Sep 2022 19:09:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Oddities in our pg_attribute_printf usage" } ]
[ { "msg_contents": "We recently saw many backends (close to max_connection limit) get stalled\nin 'startup' in one of the production environments for Greenplum (fork of\nPostgreSQL). Tracing the reason, it was found all the tuples created by\nbootstrap (xmin=1) in pg_attribute were at super high block numbers (for\nexample beyond 30,000). Tracing the reason for the backend startup stall\nexactly matched Tom's reasoning in [1]. Stalls became much longer in\npresence of sub-transaction overflow or presence of long running\ntransactions as tuple visibility took longer. The thread ruled out the\npossibility of system catalog rows to be present in higher block numbers\ninstead of in front for pg_attribute.\n\nThis thread provides simple reproduction on the latest version of\nPostgreSQL and RCA for how bootstrap catalog entries can move to higher\nblocks and as a result cause stalls for backend starts. Simple fix to avoid\nthe issue provided at the end.\n\nThe cause is syncscan triggering during VACUUM FULL. VACUUM FULL rewrites\nthe table by performing the seqscan as well. And\nheapam_relation_copy_for_cluster() conveys feel free to use syncscan. 
Hence\nlogic to not start from block 0 instead some other block already in cache\nis possible and opens the possibility to move the bootstrap tuples to\nanywhere else in the table.\n\n------------------------------------------------------------------\nRepro\n------------------------------------------------------------------\n-- create database to play\ndrop database if exists test;\ncreate database test;\n\\c test\n\n-- function just to create many tables to increase pg_attribute size\n-- (ideally many column table might do the job more easily)\nCREATE OR REPLACE FUNCTION public.f(id integer)\n RETURNS void\n LANGUAGE plpgsql\n STRICT\nAS $function$\ndeclare\n sql text;\n i int;\nbegin\n for i in id..id+9999 loop\n sql='create table if not exists tbl'||i||' (id int)';\n execute sql;\n end loop;\nend;\n$function$;\n\nselect f(10000);\nselect f(20000);\nselect f(30000);\nselect f(40000);\n\n-- validate pg_attribute size is greater than 1/4 of shared_buffers\n-- for syncscan to trigger\nshow shared_buffers;\nselect pg_size_pretty(pg_relation_size('pg_attribute'));\nselect ctid, xmin, attrelid, attname from pg_attribute where xmin = 1 limit\n5;\n\n-- perform seq scan of pg_attribute to page past bootstrapped tuples\ncopy (select * from pg_attribute limit 2000) to '/tmp/p';\n\n-- this will internally use syncscan starting with block after bootstrap\ntuples\n-- and hence move bootstrap tuples last to higher block numbers\nvacuum full pg_attribute;\n\n------------------------------------------------------------------\nSample run\n------------------------------------------------------------------\nshow shared_buffers;\n shared_buffers\n----------------\n 128MB\n(1 row)\n\nselect pg_size_pretty(pg_relation_size('pg_attribute'));\n pg_size_pretty\n----------------\n 40 MB\n(1 row)\n\nselect ctid, xmin, attrelid, attname from pg_attribute where xmin = 1 limit\n5;\n ctid | xmin | attrelid | attname\n-------+------+----------+--------------\n (0,1) | 1 | 1255 | oid\n (0,2) | 1 
| 1255 | proname\n (0,3) | 1 | 1255 | pronamespace\n (0,4) | 1 | 1255 | proowner\n (0,5) | 1 | 1255 | prolang\n(5 rows)\n\ncopy (select * from pg_attribute limit 2000) to '/tmp/p';\nCOPY 2000\nvacuum full pg_attribute;\nVACUUM\nselect ctid, xmin, attrelid, attname from pg_attribute where xmin = 1 limit\n5;\n ctid | xmin | attrelid | attname\n-----------+------+----------+--------------\n (5115,14) | 1 | 1255 | oid\n (5115,15) | 1 | 1255 | proname\n (5115,16) | 1 | 1255 | pronamespace\n (5115,17) | 1 | 1255 | proowner\n (5115,18) | 1 | 1255 | prolang\n(5 rows)\n\n\nNote:\n-- used logic causing the problem to fix it as well on the system :-)\n-- scan till block where bootstrap tuples are located\nselect ctid, xmin, attrelid, attname from pg_attribute where xmin = 1 limit\n5;\n-- now due to syncscan triggering it will pick the blocks with bootstrap\ntuples first and help to bring them back to front\nvacuum full pg_attribute;\n\n------------------------------------------------------------------\nPatch to avoid the problem:\n------------------------------------------------------------------\ndiff --git a/src/backend/access/heap/heapam_handler.c\nb/src/backend/access/heap/heapam_handler.c\nindex a3414a76e8..4c031914a3 100644\n--- a/src/backend/access/heap/heapam_handler.c\n+++ b/src/backend/access/heap/heapam_handler.c\n@@ -757,7 +757,17 @@ heapam_relation_copy_for_cluster(Relation OldHeap,\nRelation NewHeap,\n pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,\n\n PROGRESS_CLUSTER_PHASE_SEQ_SCAN_HEAP);\n\n- tableScan = table_beginscan(OldHeap, SnapshotAny, 0,\n(ScanKey) NULL);\n+ /*\n+ * For system catalog tables avoid syncscan, so that scan\nalways\n+ * starts from block 0 during rewrite and helps retain\nbootstrap\n+ * tuples in initial pages only. 
If using syncscan, then\nbootstrap\n+ * tuples may move to higher blocks, which will lead to\ndegraded\n+ * performance for relcache initialization during\nconnection starts.\n+ */\n+ if (is_system_catalog)\n+ tableScan = table_beginscan_strat(OldHeap,\nSnapshotAny, 0, (ScanKey) NULL, true, false);\n+ else\n+ tableScan = table_beginscan(OldHeap, SnapshotAny,\n0, (ScanKey) NULL);\n heapScan = (HeapScanDesc) tableScan;\n indexScan = NULL;\n------------------------------------------------------------------\n\n\n1] https://www.postgresql.org/message-id/27844.1338148415%40sss.pgh.pa.us\n\n-- \n*Ashwin Agrawal (VMware)*", "msg_date": "Thu, 15 Sep 2022 16:42:18 -0700", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": true, "msg_subject": "Backends stalled in 'startup' state" }, { "msg_contents": "On Thu, Sep 15, 2022 at 4:42 PM Ashwin Agrawal <ashwinstar@gmail.com> wrote:\n\n>\n> We recently saw many backends (close to max_connection limit) get stalled\n> in 'startup' in one of the production environments for Greenplum (fork of\n> PostgreSQL). Tracing the reason, it was found all the tuples created by\n> bootstrap (xmin=1) in pg_attribute were at super high block numbers (for\n> example beyond 30,000). Tracing the reason for the backend startup stall\n> exactly matched Tom's reasoning in [1]. Stalls became much longer in\n> presence of sub-transaction overflow or presence of long running\n> transactions as tuple visibility took longer. The thread ruled out the\n> possibility of system catalog rows to be present in higher block numbers\n> instead of in front for pg_attribute.\n>\n> This thread provides simple reproduction on the latest version of\n> PostgreSQL and RCA for how bootstrap catalog entries can move to higher\n> blocks and as a result cause stalls for backend starts. 
Simple fix to avoid\n> the issue provided at the end.\n>\n> The cause is syncscan triggering during VACUUM FULL. VACUUM FULL rewrites\n> the table by performing the seqscan as well. And\n> heapam_relation_copy_for_cluster() conveys feel free to use syncscan. Hence\n> logic to not start from block 0 instead some other block already in cache\n> is possible and opens the possibility to move the bootstrap tuples to\n> anywhere else in the table.\n>\n> ------------------------------------------------------------------\n> Repro\n> ------------------------------------------------------------------\n> -- create database to play\n> drop database if exists test;\n> create database test;\n> \\c test\n>\n> -- function just to create many tables to increase pg_attribute size\n> -- (ideally many column table might do the job more easily)\n> CREATE OR REPLACE FUNCTION public.f(id integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> STRICT\n> AS $function$\n> declare\n> sql text;\n> i int;\n> begin\n> for i in id..id+9999 loop\n> sql='create table if not exists tbl'||i||' (id int)';\n> execute sql;\n> end loop;\n> end;\n> $function$;\n>\n> select f(10000);\n> select f(20000);\n> select f(30000);\n> select f(40000);\n>\n> -- validate pg_attribute size is greater than 1/4 of shared_buffers\n> -- for syncscan to trigger\n> show shared_buffers;\n> select pg_size_pretty(pg_relation_size('pg_attribute'));\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n>\n> -- perform seq scan of pg_attribute to page past bootstrapped tuples\n> copy (select * from pg_attribute limit 2000) to '/tmp/p';\n>\n> -- this will internally use syncscan starting with block after bootstrap\n> tuples\n> -- and hence move bootstrap tuples last to higher block numbers\n> vacuum full pg_attribute;\n>\n> ------------------------------------------------------------------\n> Sample run\n> ------------------------------------------------------------------\n> show shared_buffers;\n> 
shared_buffers\n> ----------------\n> 128MB\n> (1 row)\n>\n> select pg_size_pretty(pg_relation_size('pg_attribute'));\n> pg_size_pretty\n> ----------------\n> 40 MB\n> (1 row)\n>\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> ctid | xmin | attrelid | attname\n> -------+------+----------+--------------\n> (0,1) | 1 | 1255 | oid\n> (0,2) | 1 | 1255 | proname\n> (0,3) | 1 | 1255 | pronamespace\n> (0,4) | 1 | 1255 | proowner\n> (0,5) | 1 | 1255 | prolang\n> (5 rows)\n>\n> copy (select * from pg_attribute limit 2000) to '/tmp/p';\n> COPY 2000\n> vacuum full pg_attribute;\n> VACUUM\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> ctid | xmin | attrelid | attname\n> -----------+------+----------+--------------\n> (5115,14) | 1 | 1255 | oid\n> (5115,15) | 1 | 1255 | proname\n> (5115,16) | 1 | 1255 | pronamespace\n> (5115,17) | 1 | 1255 | proowner\n> (5115,18) | 1 | 1255 | prolang\n> (5 rows)\n>\n>\n> Note:\n> -- used logic causing the problem to fix it as well on the system :-)\n> -- scan till block where bootstrap tuples are located\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> -- now due to syncscan triggering it will pick the blocks with bootstrap\n> tuples first and help to bring them back to front\n> vacuum full pg_attribute;\n>\n> ------------------------------------------------------------------\n> Patch to avoid the problem:\n> ------------------------------------------------------------------\n> diff --git a/src/backend/access/heap/heapam_handler.c\n> b/src/backend/access/heap/heapam_handler.c\n> index a3414a76e8..4c031914a3 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -757,7 +757,17 @@ heapam_relation_copy_for_cluster(Relation OldHeap,\n> Relation NewHeap,\n> pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,\n>\n> PROGRESS_CLUSTER_PHASE_SEQ_SCAN_HEAP);\n>\n> - tableScan = 
table_beginscan(OldHeap, SnapshotAny, 0,\n> (ScanKey) NULL);\n> + /*\n> + * For system catalog tables avoid syncscan, so that scan\n> always\n> + * starts from block 0 during rewrite and helps retain\n> bootstrap\n> + * tuples in initial pages only. If using syncscan, then\n> bootstrap\n> + * tuples may move to higher blocks, which will lead to\n> degraded\n> + * performance for relcache initialization during\n> connection starts.\n> + */\n> + if (is_system_catalog)\n> + tableScan = table_beginscan_strat(OldHeap,\n> SnapshotAny, 0, (ScanKey) NULL, true, false);\n> + else\n> + tableScan = table_beginscan(OldHeap, SnapshotAny,\n> 0, (ScanKey) NULL);\n> heapScan = (HeapScanDesc) tableScan;\n> indexScan = NULL;\n> ------------------------------------------------------------------\n>\n>\n> 1] https://www.postgresql.org/message-id/27844.1338148415%40sss.pgh.pa.us\n>\n\nTom, would be helpful to have your thoughts/comments on this.\n\n>\n", "msg_date": "Tue, 27 Sep 2022 09:31:24 -0700", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backends stalled in 'startup' state" }, { "msg_contents": "On Tue, Jan 17, 2023 at 4:52 PM Ashwin Agrawal <ashwinstar@gmail.com> wrote:\n\n>\n> We recently saw many backends (close to max_connection limit) get stalled\n> in 'startup' in one of the production environments for Greenplum (fork of\n> PostgreSQL). Tracing the reason, it was found all the tuples created by\n> bootstrap (xmin=1) in pg_attribute were at super high block numbers (for\n> example beyond 30,000). Tracing the reason for the backend startup stall\n> exactly matched Tom's reasoning in [1]. Stalls became much longer in\n> presence of sub-transaction overflow or presence of long running\n> transactions as tuple visibility took longer. The thread ruled out the\n> possibility of system catalog rows to be present in higher block numbers\n> instead of in front for pg_attribute.\n>\n> This thread provides simple reproduction on the latest version of\n> PostgreSQL and RCA for how bootstrap catalog entries can move to higher\n> blocks and as a result cause stalls for backend starts. 
Simple fix to avoid\n> the issue provided at the end.\n>\n> The cause is syncscan triggering during VACUUM FULL. VACUUM FULL rewrites\n> the table by performing the seqscan as well. And\n> heapam_relation_copy_for_cluster() conveys feel free to use syncscan. Hence\n> logic to not start from block 0 instead some other block already in cache\n> is possible and opens the possibility to move the bootstrap tuples to\n> anywhere else in the table.\n>\n> ------------------------------------------------------------------\n> Repro\n> ------------------------------------------------------------------\n> -- create database to play\n> drop database if exists test;\n> create database test;\n> \\c test\n>\n> -- function just to create many tables to increase pg_attribute size\n> -- (ideally many column table might do the job more easily)\n> CREATE OR REPLACE FUNCTION public.f(id integer)\n> RETURNS void\n> LANGUAGE plpgsql\n> STRICT\n> AS $function$\n> declare\n> sql text;\n> i int;\n> begin\n> for i in id..id+9999 loop\n> sql='create table if not exists tbl'||i||' (id int)';\n> execute sql;\n> end loop;\n> end;\n> $function$;\n>\n> select f(10000);\n> select f(20000);\n> select f(30000);\n> select f(40000);\n>\n> -- validate pg_attribute size is greater than 1/4 of shared_buffers\n> -- for syncscan to triggger\n> show shared_buffers;\n> select pg_size_pretty(pg_relation_size('pg_attribute'));\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n>\n> -- perform seq scan of pg_attribute to page past bootstrapped tuples\n> copy (select * from pg_attribute limit 2000) to '/tmp/p';\n>\n> -- this will internally use syncscan starting with block after bootstrap\n> tuples\n> -- and hence move bootstrap tuples last to higher block numbers\n> vacuum full pg_attribute;\n>\n> ------------------------------------------------------------------\n> Sample run\n> ------------------------------------------------------------------\n> show shared_buffers;\n> 
shared_buffers\n> ----------------\n> 128MB\n> (1 row)\n>\n> select pg_size_pretty(pg_relation_size('pg_attribute'));\n> pg_size_pretty\n> ----------------\n> 40 MB\n> (1 row)\n>\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> ctid | xmin | attrelid | attname\n> -------+------+----------+--------------\n> (0,1) | 1 | 1255 | oid\n> (0,2) | 1 | 1255 | proname\n> (0,3) | 1 | 1255 | pronamespace\n> (0,4) | 1 | 1255 | proowner\n> (0,5) | 1 | 1255 | prolang\n> (5 rows)\n>\n> copy (select * from pg_attribute limit 2000) to '/tmp/p';\n> COPY 2000\n> vacuum full pg_attribute;\n> VACUUM\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> ctid | xmin | attrelid | attname\n> -----------+------+----------+--------------\n> (5115,14) | 1 | 1255 | oid\n> (5115,15) | 1 | 1255 | proname\n> (5115,16) | 1 | 1255 | pronamespace\n> (5115,17) | 1 | 1255 | proowner\n> (5115,18) | 1 | 1255 | prolang\n> (5 rows)\n>\n>\n> Note:\n> -- used logic causing the problem to fix it as well on the system :-)\n> -- scan till block where bootstrap tuples are located\n> select ctid, xmin, attrelid, attname from pg_attribute where xmin = 1\n> limit 5;\n> -- now due to syncscan triggering it will pick the blocks with bootstrap\n> tuples first and help to bring them back to front\n> vacuum full pg_attribute;\n>\n> ------------------------------------------------------------------\n> Patch to avoid the problem:\n> ------------------------------------------------------------------\n> diff --git a/src/backend/access/heap/heapam_handler.c\n> b/src/backend/access/heap/heapam_handler.c\n> index a3414a76e8..4c031914a3 100644\n> --- a/src/backend/access/heap/heapam_handler.c\n> +++ b/src/backend/access/heap/heapam_handler.c\n> @@ -757,7 +757,17 @@ heapam_relation_copy_for_cluster(Relation OldHeap,\n> Relation NewHeap,\n> pgstat_progress_update_param(PROGRESS_CLUSTER_PHASE,\n>\n> PROGRESS_CLUSTER_PHASE_SEQ_SCAN_HEAP);\n>\n> - tableScan = 
table_beginscan(OldHeap, SnapshotAny, 0,\n> (ScanKey) NULL);\n> + /*\n> + * For system catalog tables avoid syncscan, so that scan\n> always\n> + * starts from block 0 during rewrite and helps retain\n> bootstrap\n> + * tuples in initial pages only. If using syncscan, then\n> bootstrap\n> + * tuples may move to higher blocks, which will lead to\n> degraded\n> + * performance for relcache initialization during\n> connection starts.\n> + */\n> + if (is_system_catalog)\n> + tableScan = table_beginscan_strat(OldHeap,\n> SnapshotAny, 0, (ScanKey) NULL, true, false);\n> + else\n> + tableScan = table_beginscan(OldHeap, SnapshotAny,\n> 0, (ScanKey) NULL);\n> heapScan = (HeapScanDesc) tableScan;\n> indexScan = NULL;\n> ------------------------------------------------------------------\n>\n>\n> 1] https://www.postgresql.org/message-id/27844.1338148415%40sss.pgh.pa.us\n>\n\nMissed to receive comment/reply to earlier email on\npgsql-hackers@lists.postgresql.org hence trying via\npgsql-hackers@postgresql.org this time (as not sure was missed or no\ninterest).\n\nAlso, I wish to add more scenarios where the problem manifests.\nDuring RelationCacheInitializePhase3() -> load_critical_index() performs\nsequential search for tuples in pg_class\nfor ClassOidIndexId, AttributeRelidNumIndexId, IndexRelidIndexId,\nOpclassOidIndexId, AccessMethodProcedureIndexId,\nRewriteRelRulenameIndexId\nand TriggerRelidNameIndexId. We found on systems that tuples corresponding\nto these indexes are not always present in starting blocks of pg_class.\nSpecially\nfor pg_opclass_oid_index, pg_rewrite_rel_rulename_index,\npg_amproc_fam_proc_index, pg_trigger_tgrelid_tgname_index,\npg_index_indexrelid_index\nto be present many times in block numbers over 2000 and such. Not fully\nsure on reasoning for this - maybe REINDEX (moves them to higher block\nnumbers). 
Under any situation where tuple visibility slows down (let's say\ndue to sub-transaction overflow) and relcache is invalidated, a lot of\nbackends were seen stalled in the \"startup\" phase.\n\n\n-- \n*Ashwin Agrawal (VMware)*", "msg_date": "Tue, 17 Jan 2023 17:21:27 -0800", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Backends stalled in 'startup' state" } ]
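The quarter-of-shared_buffers rule the thread above relies on ("validate pg_attribute size is greater than 1/4 of shared_buffers for syncscan to trigger") can be sketched as follows. This is a simplified, hypothetical model of the eligibility check PostgreSQL performs when starting a heap scan (it ignores the `synchronize_seqscans` GUC and any other conditions); the function name `syncscan_eligible` and its byte-based interface are illustrative, not PostgreSQL API.

```python
# Minimal model of the synchronized-scan size heuristic discussed in the
# thread: a sequential scan may start at a block reported by other
# concurrent scans (instead of block 0) once the relation is larger than
# a quarter of shared_buffers.  Sizes are converted to 8 kB pages,
# matching PostgreSQL's default block size.

BLCKSZ = 8192  # default PostgreSQL block size in bytes


def syncscan_eligible(rel_size_bytes: int, shared_buffers_bytes: int) -> bool:
    """True if a seqscan of this relation may use a synchronized scan."""
    rel_pages = rel_size_bytes // BLCKSZ
    nbuffers = shared_buffers_bytes // BLCKSZ
    return rel_pages > nbuffers // 4


# The repro numbers from the thread: pg_attribute grown to 40 MB with
# 128 MB of shared_buffers crosses the 32 MB threshold, which is why the
# VACUUM FULL seqscan could start mid-table and relocate the bootstrap
# (xmin = 1) tuples to high block numbers.
mb = 1024 * 1024
print(syncscan_eligible(40 * mb, 128 * mb))  # above the threshold
print(syncscan_eligible(20 * mb, 128 * mb))  # below the threshold
```

The patch proposed upthread sidesteps this path entirely for catalog rewrites by passing `allow_sync = false` to `table_beginscan_strat()` when `is_system_catalog` is true, so the rewrite scan always starts at block 0.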
[ { "msg_contents": "Hi,\n\n\"\"\"\\set ON_ERROR_STOP on\"\"\" stops any subsequent incoming query that \ncomes after an error of an SQL, but does not stop after a shell script \nran by \"\"\"\\! <some command>\"\"\" returning values other than 0, -1, or \n127, which suggests a failure in the result of the shell script.\n\nFor example, suppose that below is an SQL file.\n\\set ON_ERROR_STOP on\nSELECT 1;\n\\! false\nSELECT 2;\n\nThe current design allows SELECT 2 even though the shell script returns \na value indicating a failure.\n\nI thought that this action is rather unexpected since, based on the word \n\"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of shell scripts \nshould halt the incoming instructions as well.\nOne clear solution is to let failures of shell script stop incoming \nqueries just like how errors of SQLs do currently. Thoughts?\n\nTatsu", "msg_date": "Fri, 16 Sep 2022 15:55:33 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "At Fri, 16 Sep 2022 15:55:33 +0900, bt22nakamorit <bt22nakamorit@oss.nttdata.com> wrote in \n> Hi,\n> \n> \"\"\"\\set ON_ERROR_STOP on\"\"\" stops any subsequent incoming query that\n> comes after an error of an SQL, but does not stop after a shell script\n> ran by \"\"\"\\! <some command>\"\"\" returning values other than 0, -1, or\n> 127, which suggests a failure in the result of the shell script.\n> \n> For example, suppose that below is an SQL file.\n> \\set ON_ERROR_STOP on\n> SELECT 1;\n> \\! 
false\n> SELECT 2;\n> \n> The current design allows SELECT 2 even though the shell script\n> returns a value indicating a failure.\n\nSince the \"false\" command did not \"error out\"?\n\n> I thought that this action is rather unexpected since, based on the\n> word \"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of shell\n> scripts should halt the incoming instructions as well.\n> One clear solution is to let failures of shell script stop incoming\n> queries just like how errors of SQLs do currently. Thoughts?\n\nI'm not sure we want to regard any exit status from a successful run as\na failure.\n\nOn the other hand, the proposed behavior seems useful to me.\n\nSo +1 from me to the proposal, assuming the corresponding edit of the\ndocumentation happens.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Sep 2022 17:30:35 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "2022-09-16 17:30 に Kyotaro Horiguchi さんは書きました:\n> At Fri, 16 Sep 2022 15:55:33 +0900, bt22nakamorit\n> <bt22nakamorit@oss.nttdata.com> wrote in\n>> Hi,\n>> \n>> \"\"\"\\set ON_ERROR_STOP on\"\"\" stops any subsequent incoming query that\n>> comes after an error of an SQL, but does not stop after a shell script\n>> ran by \"\"\"\\! 
false\n>> SELECT 2;\n>> \n>> The current design allows SELECT 2 even though the shell script\n>> returns a value indicating a failure.\n> \n> Since the \"false\" command did not \"error out\"?\n> \n>> I thought that this action is rather unexpected since, based on the\n>> word \"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of shell\n>> scripts should halt the incoming instructions as well.\n>> One clear solution is to let failures of shell script stop incoming\n>> queries just like how errors of SQLs do currently. Thoughts?\n> \n> I'm not sure we want to regard any exit status from a succssful run as\n> a failure.\n> \n> On the other hand, the proposed behavior seems useful to me.\n> \n> So +1 from me to the proposal, assuming the corresponding edit of the\n> documentation happens.\n> \n> regards.\n\n> Since the \"false\" command did not \"error out\"?\n\"false\" command returns 1 which is an exit status code that indicates \nfailure, but not error.\nI think it does not \"error out\" if that is what you mean.\n\n> So +1 from me to the proposal, assuming the corresponding edit of the\n> documentation happens.\nI will work on editing the document and share further updates.\n\nThank you!\nTatsu\n\n\n", "msg_date": "Sat, 17 Sep 2022 09:44:33 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "2022-09-17 09:44 に bt22nakamorit さんは書きました:\n> 2022-09-16 17:30 に Kyotaro Horiguchi さんは書きました:\n>> At Fri, 16 Sep 2022 15:55:33 +0900, bt22nakamorit\n>> <bt22nakamorit@oss.nttdata.com> wrote in\n>>> Hi,\n>>> \n>>> \"\"\"\\set ON_ERROR_STOP on\"\"\" stops any subsequent incoming query that\n>>> comes after an error of an SQL, but does not stop after a shell \n>>> script\n>>> ran by \"\"\"\\! 
<some command>\"\"\" returning values other than 0, -1, or\n>>> 127, which suggests a failure in the result of the shell script.\n>>> \n>>> For example, suppose that below is an SQL file.\n>>> \\set ON_ERROR_STOP on\n>>> SELECT 1;\n>>> \\! false\n>>> SELECT 2;\n>>> \n>>> The current design allows SELECT 2 even though the shell script\n>>> returns a value indicating a failure.\n>> \n>> Since the \"false\" command did not \"error out\"?\n>> \n>>> I thought that this action is rather unexpected since, based on the\n>>> word \"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of shell\n>>> scripts should halt the incoming instructions as well.\n>>> One clear solution is to let failures of shell script stop incoming\n>>> queries just like how errors of SQLs do currently. Thoughts?\n>> \n>> I'm not sure we want to regard any exit status from a succssful run as\n>> a failure.\n>> \n>> On the other hand, the proposed behavior seems useful to me.\n>> \n>> So +1 from me to the proposal, assuming the corresponding edit of the\n>> documentation happens.\n>> \n>> regards.\n> \n>> Since the \"false\" command did not \"error out\"?\n> \"false\" command returns 1 which is an exit status code that indicates\n> failure, but not error.\n> I think it does not \"error out\" if that is what you mean.\n> \n>> So +1 from me to the proposal, assuming the corresponding edit of the\n>> documentation happens.\n> I will work on editing the document and share further updates.\n> \n> Thank you!\n> Tatsu\n\nI edited the documentation for ON_ERROR_STOP.\nAny other suggestions?\n\nTatsu", "msg_date": "Tue, 20 Sep 2022 15:15:39 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "\n\nOn 2022/09/20 15:15, bt22nakamorit wrote:\n>>>> I thought that this action is rather unexpected since, based on the\n>>>> word \"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of 
shell\n>>>> scripts should halt the incoming instructions as well.\n>>>> One clear solution is to let failures of shell script stop incoming\n>>>> queries just like how errors of SQLs do currently. Thoughts?\n\n+1\n\n\n> I edited the documentation for ON_ERROR_STOP.\n> Any other suggestions?\n\nThanks for the patch!\nCould you add it to the next CommitFest so that we don't forget it?\n\n\nWe can execute the shell commands via psql in various ways\nother than \\! meta-command. For example,\n\n* `command`\n* \\g | command\n* \\gx | command\n* \\o | command\n* \\w | command\n* \\copy ... program 'command'\n\nON_ERROR_STOP should handle not only \\! but also all the above in the same way?\n\n\nOne concern about this patch is that some applications already depend on\nthe current behavior of ON_ERROR_STOP, i.e., psql doesn't stop even when\nthe shell command returns non-zero exit code. If so, we might need to\nextend ON_ERROR_STOP so that it accepts the following setting values.\n\n* off - don't stop even when either sql or shell fails (same as the current behavior)\n* on or sql - stop only whensql fails (same as the current behavior)\n* shell - stop only when shell fails\n* all - stop when either sql or shell fails\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 21 Sep 2022 11:45:07 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "At Wed, 21 Sep 2022 11:45:07 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2022/09/20 15:15, bt22nakamorit wrote:\n> >>>> I thought that this action is rather unexpected since, based on the\n> >>>> word \"\"\"ON_ERROR_STOP\"\"\", ones may expect that failures of shell\n> >>>> scripts should halt the incoming instructions as well.\n> >>>> One clear solution is to let 
failures of shell script stop incoming\n> >>>> queries just like how errors of SQLs do currently. Thoughts?\n> \n> +1\n> \n> \n> > I edited the documentation for ON_ERROR_STOP.\n> > Any other suggestions?\n> \n> Thanks for the patch!\n> Could you add it to the next CommitFest so that we don't forget it?\n> \n> \n> We can execute the shell commands via psql in various ways\n> other than \\! meta-command. For example,\n> \n> * `command`\n> * \\g | command\n> * \\gx | command\n> * \\o | command\n> * \\w | command\n> * \\copy ... program 'command'\n> \n> ON_ERROR_STOP should handle not only \\! but also all the above in the\n> same way?\n\n+1\n\n> One concern about this patch is that some applications already depend\n> on\n> the current behavior of ON_ERROR_STOP, i.e., psql doesn't stop even\n> when\n> the shell command returns non-zero exit code. If so, we might need to\n> extend ON_ERROR_STOP so that it accepts the following setting values.\n> \n> * off - don't stop even when either sql or shell fails (same as the\n> * current behavior)\n> * on or sql - stop only whensql fails (same as the current behavior)\n> * shell - stop only when shell fails\n> * all - stop when either sql or shell fails\n> \n> Thought?\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 27 Sep 2022 12:34:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "Fujii Masao:\n> One concern about this patch is that some applications already depend on\n> the current behavior of ON_ERROR_STOP, i.e., psql doesn't stop even when\n> the shell command returns non-zero exit code. If so, we might need to\n> extend ON_ERROR_STOP so that it accepts the following setting values.\n\nI just got bitten by this and I definitely consider this a bug. I expect \npsql to stop when a shell script fails and I have ON_ERROR_STOP set. 
I \ndon't think this should be made more complicated with different settings.\n\nIf someone needs to have ON_ERROR_STOP set, but continue execution after \na certain shell command, they could still do something like this:\n\n\\! might_fail || true\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Wed, 28 Sep 2022 08:36:00 +0200", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On 2022-09-20 15:15, bt22nakamorit wrote:\n\n> I edited the documentation for ON_ERROR_STOP.\n> Any other suggestions?\n\nThanks for the patch!\n\n> if (result == 127 || result == -1)\n> {\n> pg_log_error(\"\\\\!: failed\");\n> return false;\n> }\n> else if (result != 0) {\n> pg_log_error(\"command failed\");\n> return false;\n\nSince it would be hard to understand the cause of failures from these \ntwo messages, it might be better to clarify them in the messages.\n\nThe former comes from failures of child process creation or execution on \nit and the latter occurs when child process creation and execution \nsucceeded but the return code is not 0, doesn't it?\n\n\nI also felt it'd be natural that the latter message also begins with \n\"\\\\!\" since both message concerns with \\!.\n\nHow do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 28 Sep 2022 21:49:30 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "2022-09-28 21:49 に torikoshia さんは書きました:\n>> if (result == 127 || result == -1)\n>> {\n>> pg_log_error(\"\\\\!: failed\");\n>> return false;\n>> }\n>> else if (result != 0) {\n>> pg_log_error(\"command failed\");\n>> return false;\n> \n> Since it would be hard to understand the cause of failures from these\n> two messages, it might be better to clarify them in the messages.\n> \n> The former comes from failures of child 
process creation or execution\n> on it and the latter occurs when child process creation and execution\n> succeeded but the return code is not 0, doesn't it?\n> \n> \n> I also felt it'd be natural that the latter message also begins with\n> \"\\\\!\" since both message concerns with \\!.\n> \n> How do you think?\n\nThank you for the feedback!\nI agree that the messages should be more clear.\n\\\\!: command was not executed\n\\\\!: command failed\nWould these two messages be enough to describe the two cases?\n\nTatsu\n\n\n", "msg_date": "Thu, 29 Sep 2022 11:29:40 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "At Thu, 29 Sep 2022 11:29:40 +0900, bt22nakamorit <bt22nakamorit@oss.nttdata.com> wrote in \n> 2022-09-28 21:49 に torikoshia さんは書きました:\n> >> if (result == 127 || result == -1)\n> >> {\n> >> pg_log_error(\"\\\\!: failed\");\n> >> return false;\n> >> }\n> >> else if (result != 0) {\n> >> pg_log_error(\"command failed\");\n> >> return false;\n> > Since it would be hard to understand the cause of failures from these\n> > two messages, it might be better to clarify them in the messages.\n> > The former comes from failures of child process creation or execution\n> > on it and the latter occurs when child process creation and execution\n> > succeeded but the return code is not 0, doesn't it?\n> > I also felt it'd be natural that the latter message also begins with\n> > \"\\\\!\" since both message concerns with \\!.\n> > How do you think?\n> \n> Thank you for the feedback!\n> I agree that the messages should be more clear.\n> \\\\!: command was not executed\n> \\\\!: command failed\n> Would these two messages be enough to describe the two cases?\n\nFWIW, I would spell these as something like this:\n\n> \\\\!: command execution failure: %m\n> \\\\!: command returned failure status: %d\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open 
Source Software Center\n\n\n", "msg_date": "Thu, 29 Sep 2022 12:35:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "At Thu, 29 Sep 2022 12:35:04 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Thank you for the feedback!\n> > I agree that the messages should be more clear.\n> > \\\\!: command was not executed\n> > \\\\!: command failed\n> > Would these two messages be enough to describe the two cases?\n> \n> FWIW, I would spell these as something like this:\n> \n> > \\\\!: command execution failure: %m\n\nThe following might be more complient to our policy.\n\n> \\\\!: failed to execute command \\\"%s\\\": %m\n\n\n> > \\\\!: command returned failure status: %d\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 29 Sep 2022 13:51:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "2022-09-21 11:45 に Fujii Masao wrote:\n> We can execute the shell commands via psql in various ways\n> other than \\! meta-command. For example,\n> \n> * `command`\n> * \\g | command\n> * \\gx | command\n> * \\o | command\n> * \\w | command\n> * \\copy ... program 'command'\n> \n> ON_ERROR_STOP should handle not only \\! but also all the above in the \n> same way?\n> \n> \n> One concern about this patch is that some applications already depend \n> on\n> the current behavior of ON_ERROR_STOP, i.e., psql doesn't stop even \n> when\n> the shell command returns non-zero exit code. 
If so, we might need to\n> extend ON_ERROR_STOP so that it accepts the following setting values.\n> \n> * off - don't stop even when either sql or shell fails (same as the\n> current behavior)\n> * on or sql - stop only whensql fails (same as the current behavior)\n> * shell - stop only when shell fails\n> * all - stop when either sql or shell fails\n> \n> Thought?\n> \n> Regards,\n\nI agree that some applications may depend on the behavior of previous \nON_ERROR_STOP.\nI created a patch that implements off, on, shell, and all option for \nON_ERROR_STOP.\nI also edited the code for \\g, \\o, \\w, and \\set in addition to \\! to \nreturn exit status of shell commands for ON_ERROR_STOP.\n\nThere were discussions regarding the error messages for when shell \ncommand fails.\nI have found that \\copy already handles exit status of shell commands \nwhen it executes one, so I copied the messages from there.\nMore specifically, I referred to \"\"\"bool do_copy(const char *args)\"\"\" in \nsrc/bin/psql/copy.c\n\nAny feedback would be very much appreciated.\n\nTatsu", "msg_date": "Fri, 30 Sep 2022 16:54:06 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "\n\nOn 2022/09/30 16:54, bt22nakamorit wrote:\n> 2022-09-21 11:45 に Fujii Masao wrote:\n>> We can execute the shell commands via psql in various ways\n>> other than \\! meta-command. For example,\n>>\n>> * `command`\n>> * \\g | command\n>> * \\gx | command\n>> * \\o | command\n>> * \\w | command\n>> * \\copy ... program 'command'\n>>\n>> ON_ERROR_STOP should handle not only \\! but also all the above in the same way?\n>>\n>>\n>> One concern about this patch is that some applications already depend on\n>> the current behavior of ON_ERROR_STOP, i.e., psql doesn't stop even when\n>> the shell command returns non-zero exit code. 
If so, we might need to\n>> extend ON_ERROR_STOP so that it accepts the following setting values.\n>>\n>> * off - don't stop even when either sql or shell fails (same as the\n>> current behavior)\n>> * on or sql - stop only when sql fails (same as the current behavior)\n>> * shell - stop only when shell fails\n>> * all - stop when either sql or shell fails\n>>\n>> Thought?\n>>\n>> Regards,\n> \n> I agree that some applications may depend on the behavior of previous ON_ERROR_STOP.\n> I created a patch that implements off, on, shell, and all option for ON_ERROR_STOP.\n> I also edited the code for \\g, \\o, \\w, and \\set in addition to \\! to return exit status of shell commands for ON_ERROR_STOP.\n> \n> There were discussions regarding the error messages for when shell command fails.\n> I have found that \\copy already handles exit status of shell commands when it executes one, so I copied the messages from there.\n> More specifically, I referred to \"\"\"bool do_copy(const char *args)\"\"\" in src/bin/psql/copy.c\n> \n> Any feedback would be very much appreciated.\n\nThanks for updating the patch!\n\nThe patch failed to be applied into the master cleanly. Could you rebase it?\n\npatching file src/bin/psql/common.c\nHunk #1 succeeded at 94 (offset 4 lines).\nHunk #2 succeeded at 104 (offset 4 lines).\nHunk #3 succeeded at 133 (offset 4 lines).\nHunk #4 succeeded at 1869 with fuzz 1 (offset 1162 lines).\nHunk #5 FAILED at 2624.\n1 out of 5 hunks FAILED -- saving rejects to file src/bin/psql/common.c.rej\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 7 Oct 2022 17:16:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "2022-10-07 17:16 Fujii Masao wrote:\n> The patch failed to be applied into the master cleanly. 
Could you \n> rebase it?\n> \n> patching file src/bin/psql/common.c\n> Hunk #1 succeeded at 94 (offset 4 lines).\n> Hunk #2 succeeded at 104 (offset 4 lines).\n> Hunk #3 succeeded at 133 (offset 4 lines).\n> Hunk #4 succeeded at 1869 with fuzz 1 (offset 1162 lines).\n> Hunk #5 FAILED at 2624.\n> 1 out of 5 hunks FAILED -- saving rejects to file \n> src/bin/psql/common.c.rej\n\nThank you for checking.\nI edited the patch so that it would apply to the latest master branch.\nPlease mention if there are any other problems.\n\nBest,\nTatsuhiro Nakamori", "msg_date": "Fri, 07 Oct 2022 19:41:16 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "There was a mistake in the error message for \\! so I updated the patch.\n\nBest,\nTatsuhiro Nakamori", "msg_date": "Wed, 12 Oct 2022 18:13:07 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On Fri, Sep 16, 2022 at 03:55:33PM +0900, bt22nakamorit wrote:\n> Hi,\n> \n> \"\"\"\\set ON_ERROR_STOP on\"\"\" stops any subsequent incoming query that comes\n> after an error of an SQL, but does not stop after a shell script ran by\n> \"\"\"\\! <some command>\"\"\" returning values other than 0, -1, or 127, which\n> suggests a failure in the result of the shell script.\n\nActually, I think this could be described as a wider problem (not just\nON_ERROR_STOP). The shell's exit status is being ignored (except for -1\nand 127).\n\nShouldn't the user be able to do something with the exit status ?\nRight now, it seems like they'd need to wrap the shellscript with\n\"if ! 
...; then echo failed; fi\"\nand then \\gset and compare with \"failed\"\n\nI think it'd be a lot better to expose the script status to psql.\n(without having to write \"foo; echo status=$?\").\n\nAnother consideration is that shellscripts can exit with a nonzero\nstatus due to the most recent conditional (like: if || &&).\n\nFor example, consider shell command like:\n\"if foo; then bar; fi\" or \"foo && bar\"\n\nIf foo has nonzero status, then bar isn't run.\n\nIf that's the entire shell script, the shell will *also* exit with foo's\nnonzero status. (That's the reason why people write \"exit 0\" as the\nlast line of a shell script. It's easy to believe that it was going to\n\"exit 0\" in any case; but, what it was actually going to do was to \"exit\n$?\", and $? can be nonzero after conditionals, even in \"set -e\" mode).\n\nSo a psql script like this would start to report as a failure any time\n\"foo\" was false, even if that's the normal/typical case.\n\n-- \nJustin", "msg_date": "Wed, 2 Nov 2022 06:58:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": ">\n> I think it'd be a lot better to expose the script status to psql.\n> (without having to write \"foo; echo status=$?\").\n>\n\nI agree, and I hacked up a proof of concept, but started another thread at\nhttps://www.postgresql.org/message-id/CADkLM=cWao2x2f+UDw15W1JkVFr_bsxfstw=NGea7r9m4j-7rQ@mail.gmail.com\nso as not to clutter up this one.", "msg_date": "Fri, 4 Nov 2022 05:10:53 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", 
"msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "------- Original Message -------\nOn Tuesday, November 22nd, 2022 at 20:10, bt22nakamorit <bt22nakamorit@oss.nttdata.com> wrote:\n\n\n> There was a mistake in the error message for \\! so I updated the patch.\n> \n> Best,\n> Tatsuhiro Nakamori\n\nHi\n\nI was checking your patch and seems that it failed to be applied into the\nmaster cleanly. Could you please rebase it?\n\n--\nMatheus Alcantara\n\n\n", "msg_date": "Tue, 22 Nov 2022 23:16:27 +0000", "msg_from": "Matheus Alcantara <mths.dev@pm.me>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On Tue, Nov 22, 2022 at 6:16 PM Matheus Alcantara <mths.dev@pm.me> wrote:\n\n> ------- Original Message -------\n> On Tuesday, November 22nd, 2022 at 20:10, bt22nakamorit <\n> bt22nakamorit@oss.nttdata.com> wrote:\n>\n>\n> > There was a mistake in the error message for \\! so I updated the patch.\n> >\n> > Best,\n> > Tatsuhiro Nakamori\n>\n> Hi\n>\n> I was checking your patch and seems that it failed to be applied into the\n> master cleanly. Could you please rebase it?\n>\n\nYes. My apologies, I had several life events get in the way.", "msg_date": "Sun, 4 Dec 2022 00:16:01 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On 2022-10-12 2:13 a.m., bt22nakamorit wrote:\n> There was a mistake in the error message for \\! so I updated the patch.\n>\nTried to apply this patch to the master branch, but got the error like \nbelow.\n$ git apply --check patch-view.diff\nerror: patch failed: src/bin/psql/command.c:2693\nerror: src/bin/psql/command.c: patch does not apply\n\nI think there are some tests related with \"ON_ERROR_STOP\" in \nsrc/bin/psql/t/001_basic.pl, and we should consider to add corresponding \ntest cases for \"on/off/shell/all\" to this patch.\n\n\nBest regards,\n\nDavid\n\n\n\n\n", "msg_date": "Tue, 13 Dec 2022 16:40:02 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On 23/11/2022 00:16, Matheus Alcantara wrote:\n> ------- Original Message -------\n> On Tuesday, November 22nd, 2022 at 20:10, bt22nakamorit <bt22nakamorit@oss.nttdata.com> wrote:\n>\n>\n>> There was a mistake in the error message for \\! so I updated the patch.\n>>\n>> Best,\n>> Tatsuhiro Nakamori\n> Hi\n>\n> I was checking your patch and seems that it failed to be applied into the\n> master cleanly. 
Could you please rebase it?\n\nWas also looking into this patch.\n\nTatsuhiro: can you please send a rebased version?\n\n\nThanks\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project\n\n\n", "msg_date": "Thu, 16 Feb 2023 20:33:16 +0100", "msg_from": "Andreas 'ads' Scherbaum <ads@pgug.de>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On 16/02/2023 20:33, Andreas 'ads' Scherbaum wrote:\n> On 23/11/2022 00:16, Matheus Alcantara wrote:\n>> ------- Original Message -------\n>> On Tuesday, November 22nd, 2022 at 20:10, bt22nakamorit \n>> <bt22nakamorit@oss.nttdata.com> wrote:\n>>\n>>\n>>> There was a mistake in the error message for \\! so I updated the patch.\n>>>\n>>> Best,\n>>> Tatsuhiro Nakamori\n>> Hi\n>>\n>> I was checking your patch and seems that it failed to be applied into \n>> the\n>> master cleanly. Could you please rebase it?\n>\n> Was also looking into this patch.\n>\n> Tatsuhiro: can you please send a rebased version?\n\nThe email address is bouncing. That might be why ...\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\nGerman PostgreSQL User Group\nEuropean PostgreSQL User Group - Board of Directors\nVolunteer Regional Contact, Germany - PostgreSQL Project\n\n\n\n", "msg_date": "Thu, 16 Feb 2023 20:36:37 +0100", "msg_from": "Andreas 'ads' Scherbaum <ads@pgug.de>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "So I took a look at this patch. 
If you write to a pipe\nthat ends in grep and it happens to produce no matching rows you may\nactually be quite surprised when that causes your script to fail...\n\nBut if you remove that failing hunk the resulting patch does apply. I\ndon't see any tests so ... I don't know if the behaviour is still\nsensible. A quick read gives me the impression that now it's actually\ninconsistent in the other direction where it stops sometimes more\noften than the user might expect.\n\nI also don't understand the difference between ON_ERROR_STOP_ON and\nON_ERROR_STOP_ALL. Nor why we would want ON_ERROR_STOP_SHELL which\nstops only on shell errors, rather than, say, ON_ERROR_STOP_SQL to do\nthe converse which would at least match the historical behaviour?", "msg_date": "Mon, 20 Mar 2023 14:31:51 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "Pruning bouncing email address -- please respond from this point in\nthread, not previous messages.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 14:34:39 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On Mon, 20 Mar 2023 at 14:34, Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n>\n> Pruning bouncing email address -- please respond from this point in\n> thread, not previous messages.\n\nOh for heaven's sake. Trying again to prune bouncing email address.\nPlease respond from *here* on the thread.... Sorry\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 14:36:33 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On 20.03.23 19:31, Greg Stark wrote:\n> So I took a look at this patch. 
The conflict is with 2fe3bdbd691\n> committed by Peter Eisentraut which added error checks for pipes.\n> Afaics the behaviour is now for pipe commands returning non-zero to\n> cause an error*always* regardless of the setting of ON_ERROR_STOP.\n> \n> I'm not entirely sure that's sensible actually. If you write to a pipe\n> that ends in grep and it happens to produce no matching rows you may\n> actually be quite surprised when that causes your script to fail...\n\nThe only thing that that patch changed in psql was the \\w command, and \nAFAICT, ON_ERROR_STOP is still respected:\n\n$ cat test.sql\n\\w |foo\n\n$ psql -f test.sql\nsh: foo: command not found\npsql:test.sql:1: error: |foo: command not found\n$ echo $?\n0\n\n$ psql -f test.sql -v ON_ERROR_STOP=1\nsh: foo: command not found\npsql:test.sql:1: error: |foo: command not found\n$ echo $?\n3\n\n\n\n", "msg_date": "Fri, 24 Mar 2023 16:07:42 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On Fri, Mar 24, 2023 at 11:07 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 20.03.23 19:31, Greg Stark wrote:\n> > So I took a look at this patch. The conflict is with 2fe3bdbd691\n> > committed by Peter Eisentraut which added error checks for pipes.\n> > Afaics the behaviour is now for pipe commands returning non-zero to\n> > cause an error*always* regardless of the setting of ON_ERROR_STOP.\n>\n\nCommit b0d8f2d983cb25d1035fae1cd7de214dd67809b4 adds SHELL_ERROR as a set\nto 'true' whenever a \\! or backtick command has a nonzero exit code. So\nit'll need some rebasing to remove the duplicated work.\n\nSo it's now possible to do this:\n\n\\set result = `some command`\n\\if :SHELL_ERROR\n -- maybe test SHELL_EXIT_CODE to see what kind of error\n \\echo some command failed\n -- nah, just quit\n \\q\n\\endif\n\n\n> > I'm not entirely sure that's sensible actually. 
If you write to a pipe\n> > that ends in grep and it happens to produce no matching rows you may\n> > actually be quite surprised when that causes your script to fail...\n>\n\nI agree that that would be quite surprising, and this feature would be a\nnon-starter for them. But if we extended the SHELL_ERROR and\nSHELL_EXIT_CODE patch to handle output pipes (which maybe we should have\ndone in the first place), the handling would look like this:\n\nSELECT ... \\g grep Frobozz\n\\if :SHELL_ERROR\n SELECT :SHELL_EXIT_CODE = 1 AS grep_found_nothing \\gset\n \\if :grep_found_nothing\n ...not-a-real-error handling...\n \\else\n ...regular error handling...\n \\endif\n\\endif\n\n...and that would be the solution for people who wanted to do something\nmore nuanced than ON_ERROR_STOP.\n\nOn Fri, Mar 24, 2023 at 11:07 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 20.03.23 19:31, Greg Stark wrote:\n> So I took a look at this patch. The conflict is with 2fe3bdbd691\n> committed by Peter Eisentraut which added error checks for pipes.\n> Afaics the behaviour is now for pipe commands returning non-zero to\n> cause an error*always*  regardless of the setting of ON_ERROR_STOP.Commit b0d8f2d983cb25d1035fae1cd7de214dd67809b4 adds SHELL_ERROR as a set to 'true' whenever a \\! or backtick command has a nonzero exit code. So it'll need some rebasing to remove the duplicated work.So it's now possible to do this:\\set result = `some command`\\if :SHELL_ERROR   -- maybe test SHELL_EXIT_CODE to see what kind of error   \\echo some command failed   -- nah, just quit   \\q\\endif \n> I'm not entirely sure that's sensible actually. If you write to a pipe\n> that ends in grep and it happens to produce no matching rows you may\n> actually be quite surprised when that causes your script to fail...I agree that that would be quite surprising, and this feature would be a non-starter for them. 
But if we extended the SHELL_ERROR and SHELL_EXIT_CODE patch to handle output pipes (which maybe we should have done in the first place), the handling would look like this:SELECT ... \\g grep Frobozz\\if :SHELL_ERROR  SELECT :SHELL_EXIT_CODE = 1 AS grep_found_nothing \\gset  \\if :grep_found_nothing  ...not-a-real-error handling...  \\else  ...regular error handling...  \\endif\\endif...and that would be the solution for people who wanted to do something more nuanced than ON_ERROR_STOP.", "msg_date": "Fri, 24 Mar 2023 14:16:05 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "On Fri, Mar 24, 2023 at 2:16 PM Corey Huinker <corey.huinker@gmail.com>\nwrote:\n\n>\n>\n> On Fri, Mar 24, 2023 at 11:07 AM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>> On 20.03.23 19:31, Greg Stark wrote:\n>> > So I took a look at this patch. The conflict is with 2fe3bdbd691\n>> > committed by Peter Eisentraut which added error checks for pipes.\n>> > Afaics the behaviour is now for pipe commands returning non-zero to\n>> > cause an error*always* regardless of the setting of ON_ERROR_STOP.\n>>\n>\n> Commit b0d8f2d983cb25d1035fae1cd7de214dd67809b4 adds SHELL_ERROR as a set\n> to 'true' whenever a \\! or backtick command has a nonzero exit code. So\n> it'll need some rebasing to remove the duplicated work.\n>\n> So it's now possible to do this:\n>\n> \\set result = `some command`\n> \\if :SHELL_ERROR\n> -- maybe test SHELL_EXIT_CODE to see what kind of error\n> \\echo some command failed\n> -- nah, just quit\n> \\q\n> \\endif\n>\n>\n>> > I'm not entirely sure that's sensible actually. 
If you write to a pipe\n>> > that ends in grep and it happens to produce no matching rows you may\n>> > actually be quite surprised when that causes your script to fail...\n>>\n>\n> I agree that that would be quite surprising, and this feature would be a\n> non-starter for them. But if we extended the SHELL_ERROR and\n> SHELL_EXIT_CODE patch to handle output pipes (which maybe we should have\n> done in the first place), the handling would look like this:\n>\n> SELECT ... \\g grep Frobozz\n> \\if :SHELL_ERROR\n> SELECT :SHELL_EXIT_CODE = 1 AS grep_found_nothing \\gset\n> \\if :grep_found_nothing\n> ...not-a-real-error handling...\n> \\else\n> ...regular error handling...\n> \\endif\n> \\endif\n>\n> ...and that would be the solution for people who wanted to do something\n> more nuanced than ON_ERROR_STOP.\n>\n>\nDangit. Replied to Peter's email thinking he had gone off Greg's culling of\nthe recipients. Re-culled.\n\nOn Fri, Mar 24, 2023 at 2:16 PM Corey Huinker <corey.huinker@gmail.com> wrote:On Fri, Mar 24, 2023 at 11:07 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:On 20.03.23 19:31, Greg Stark wrote:\n> So I took a look at this patch. The conflict is with 2fe3bdbd691\n> committed by Peter Eisentraut which added error checks for pipes.\n> Afaics the behaviour is now for pipe commands returning non-zero to\n> cause an error*always*  regardless of the setting of ON_ERROR_STOP.Commit b0d8f2d983cb25d1035fae1cd7de214dd67809b4 adds SHELL_ERROR as a set to 'true' whenever a \\! or backtick command has a nonzero exit code. So it'll need some rebasing to remove the duplicated work.So it's now possible to do this:\\set result = `some command`\\if :SHELL_ERROR   -- maybe test SHELL_EXIT_CODE to see what kind of error   \\echo some command failed   -- nah, just quit   \\q\\endif \n> I'm not entirely sure that's sensible actually. 
If you write to a pipe\n> that ends in grep and it happens to produce no matching rows you may\n> actually be quite surprised when that causes your script to fail...I agree that that would be quite surprising, and this feature would be a non-starter for them. But if we extended the SHELL_ERROR and SHELL_EXIT_CODE patch to handle output pipes (which maybe we should have done in the first place), the handling would look like this:SELECT ... \\g grep Frobozz\\if :SHELL_ERROR  SELECT :SHELL_EXIT_CODE = 1 AS grep_found_nothing \\gset  \\if :grep_found_nothing  ...not-a-real-error handling...  \\else  ...regular error handling...  \\endif\\endif...and that would be the solution for people who wanted to do something more nuanced than ON_ERROR_STOP.Dangit. Replied to Peter's email thinking he had gone off Greg's culling of the recipients. Re-culled.", "msg_date": "Fri, 24 Mar 2023 14:20:47 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" }, { "msg_contents": "This patch hasn't applied for quite some time, has been waiting on author since\nDecember, and the thread has stalled. I'm marking this Returned with Feedback\nfor now, please feel free to resubmit to a future CF when there is renewed\ninterest in working on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Thu, 6 Jul 2023 11:57:07 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Make ON_ERROR_STOP stop on shell script failure" } ]
[ { "msg_contents": "Hi Hackers,\n\nI see in the texteq() function calls to DatumGetTextPP() are followed\nby conditional calls to PG_FREE_IF_COPY. e.g.\n\nhttps://github.com/postgres/postgres/blob/master/src/backend/utils/adt/varlena.c#L1792\n\n    text *targ1 = DatumGetTextPP(arg1);\n    text *targ2 = DatumGetTextPP(arg2);\n    result = (memcmp(VARDATA_ANY(targ1), VARDATA_ANY(targ2), len1 -\nVARHDRSZ) == 0);\n    PG_FREE_IF_COPY(targ1, 0);\n    PG_FREE_IF_COPY(targ2, 1);\n\nHowever, in textlike(), PG_FREE_IF_COPY calls are missing.\n\nhttps://github.com/postgres/postgres/blob/master/src/backend/utils/adt/like.c#L283\n\nIs this a memory leak bug?\n\nRegards,\n-cktan", "msg_date": "Thu, 15 Sep 2022 23:56:43 -0700", "msg_from": "CK Tan <cktan@vitessedata.com>", "msg_from_op": true, "msg_subject": "missing PG_FREE_IF_COPY in textlike() and textnlike() ?" }, { "msg_contents": "CK Tan <cktan@vitessedata.com> writes:\n> I see in the texteq() function calls to DatumGetTextPP() are followed\n> by conditional calls to PG_FREE_IF_COPY. e.g.\n\nThat's because texteq() is potentially usable as a btree index\nfunction, and btree isn't too forgiving about leaks.\n\n> However, in textlike(), PG_FREE_IF_COPY calls are missing.\n> https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/like.c#L283\n\ntextlike() isn't a member of any btree opclass.\n\n> Is this a memory leak bug?\n\nNot unless you can demonstrate a case where it causes problems.\nFor the most part, such functions run in short-lived contexts.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 16 Sep 2022 03:03:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: missing PG_FREE_IF_COPY in textlike() and textnlike() ?" }, { "msg_contents": "Got it. 
It is a leak-by-design for efficiency.\n\nThanks,\n-cktan\n\nOn Fri, Sep 16, 2022 at 12:03 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> CK Tan <cktan@vitessedata.com> writes:\n> > I see in the texteq() function calls to DatumGetTextPP() are followed\n> > by conditional calls to PG_FREE_IF_COPY. e.g.\n>\n> That's because texteq() is potentially usable as a btree index\n> function, and btree isn't too forgiving about leaks.\n>\n> > However, in textlike(), PG_FREE_IF_COPY calls are missing.\n> > https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/like.c#L283\n>\n> textlike() isn't a member of any btree opclass.\n>\n> > Is this a memory leak bug?\n>\n> Not unless you can demonstrate a case where it causes problems.\n> For the most part, such functions run in short-lived contexts.\n>\n> regards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 00:29:15 -0700", "msg_from": "CK Tan <cktan@vitessedata.com>", "msg_from_op": true, "msg_subject": "Re: missing PG_FREE_IF_COPY in textlike() and textnlike() ?" } ]
[ { "msg_contents": "I have a question about displaying NestLoopParam. In the plan below,\n\n# explain (costs off)\nselect * from a, lateral (select sum(i) as i from b where exists (select\nsum(c.i) from c where c.j = a.j and c.i = b.i) ) ss where a.i = ss.i;\n QUERY PLAN\n--------------------------------------------------------------------\n Nested Loop\n -> Seq Scan on a\n -> Subquery Scan on ss\n Filter: (a.i = ss.i)\n -> Aggregate\n -> Seq Scan on b\n Filter: (SubPlan 1)\n SubPlan 1\n -> Aggregate\n -> Seq Scan on c\n Filter: ((j = $0) AND (i = b.i))\n(11 rows)\n\nThere are three Params. Param 0 (a.j) and param 2 (a.i) are from\nnestParams of the NestLoop. Param 1 (b.i) is from parParam of the\nSubPlan. As we can see, param 1 and param 2 are displayed as the\ncorresponding expressions, while param 0 is displayed as $0.\n\nI'm not saying this is a bug, but just curious why param 0 cannot be\ndisplayed as the referenced expression. And I find the reason is that in\nfunction find_param_referent(), we have the 'in_same_plan_level' flag\ncontrolling that if we have emerged from a subplan, i.e. not the same\nplan level any more, we would not look further for the matching\nNestLoopParam. Param 0 suits this situation.\n\nAnd there is a comment there also saying,\n\n /*\n * NestLoops transmit params to their inner child only; also, once\n * we've crawled up out of a subplan, this couldn't possibly be\n * the right match.\n */\n\nMy question is why is that?\n\nThanks\nRichard\n\nI have a question about displaying NestLoopParam. 
In the plan below,# explain (costs off)select * from a, lateral (select sum(i) as i from b where exists (select sum(c.i) from c where c.j = a.j and c.i = b.i) ) ss where a.i = ss.i;                             QUERY PLAN-------------------------------------------------------------------- Nested Loop   ->  Seq Scan on a   ->  Subquery Scan on ss         Filter: (a.i = ss.i)         ->  Aggregate               ->  Seq Scan on b                     Filter: (SubPlan 1)                     SubPlan 1                       ->  Aggregate                             ->  Seq Scan on c                                   Filter: ((j = $0) AND (i = b.i))(11 rows)There are three Params. Param 0 (a.j) and param 2 (a.i) are fromnestParams of the NestLoop. Param 1 (b.i) is from parParam of theSubPlan. As we can see, param 1 and param 2 are displayed as thecorresponding expressions, while param 0 is displayed as $0.I'm not saying this is a bug, but just curious why param 0 cannot bedisplayed as the referenced expression. And I find the reason is that infunction find_param_referent(), we have the 'in_same_plan_level' flagcontrolling that if we have emerged from a subplan, i.e. not the sameplan level any more, we would not look further for the matchingNestLoopParam. Param 0 suits this situation.And there is a comment there also saying,    /*     * NestLoops transmit params to their inner child only; also, once     * we've crawled up out of a subplan, this couldn't possibly be     * the right match.     */My question is why is that?ThanksRichard", "msg_date": "Fri, 16 Sep 2022 17:59:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "About displaying NestLoopParam" }, { "msg_contents": "On Fri, Sep 16, 2022 at 5:59 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I'm not saying this is a bug, but just curious why param 0 cannot be\n> displayed as the referenced expression. 
And I find the reason is that in\n> function find_param_referent(), we have the 'in_same_plan_level' flag\n> controlling that if we have emerged from a subplan, i.e. not the same\n> plan level any more, we would not look further for the matching\n> NestLoopParam. Param 0 suits this situation.\n>\n> And there is a comment there also saying,\n>\n> /*\n> * NestLoops transmit params to their inner child only; also, once\n> * we've crawled up out of a subplan, this couldn't possibly be\n> * the right match.\n> */\n>\n\nAfter thinking of this for more time, I still don't see the reason why\nwe cannot display NestLoopParam after we've emerged from a subplan.\n\nIt seems these params are from parameterized subqueryscan and their\nvalues are supplied by an upper nestloop. These params should have been\nprocessed in process_subquery_nestloop_params() that we just add the\nPlannerParamItem entries to root->curOuterParams, in the form of\nNestLoopParam, using the same PARAM_EXEC slots.\n\nSo I propose the patch attached to remove the 'in_same_plan_level' flag\nso that we can display NestLoopParam across subplan. Please correct me\nif I'm wrong.\n\nThanks\nRichard", "msg_date": "Tue, 20 Sep 2022 16:55:11 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: About displaying NestLoopParam" }, { "msg_contents": "So I guess I don't have much to add since I don't really understand\nthe Param infrastructure, certainly not any better than you seem to.\n\nI do note that the code in question was added in this commit in 2010.\nThat predates the addition of LATERAL in 2013. I suppose those\ncomments may be talking about InitPlans for things like constant\nsubqueries that have been pulled up to InitPlans in queries like:\n\nexplain verbose select * from x join y on (x.i=y.j) where y.j+1=(select 5) ;\n\nWhich your patch doesn't eliminate the $0 in. 
I don't know if the code\nyou're removing is just for efficiency -- to avoid trawling through\nnodes of the plan that can't be relevant -- or for correctness.\n\nFwiw your patch applied for me and built without warnings and seems to\nwork for all the queries I've thrown at it so far. That's hardly an\nexhaustive test of course.\n\ncommit 1cc29fe7c60ba643c114979dbe588d3a38005449\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Jul 13 20:57:19 2010 +0000\n\n Teach EXPLAIN to print PARAM_EXEC Params as the referenced expressions,\n rather than just $N. This brings the display of nestloop-inner-indexscan\n plans back to where it's been, and incidentally improves the display of\n SubPlan parameters as well. In passing, simplify the EXPLAIN code by\n having it deal primarily in the PlanState tree rather than separately\n searching Plan and PlanState trees. This is noticeably cleaner for\n subplans, and about a wash elsewhere.\n\n One small difference from previous behavior is that EXPLAIN will no longer\n qualify local variable references in inner-indexscan plan nodes, since it\n no longer sees such nodes as possibly referencing multiple tables. Vars\n referenced through PARAM_EXEC Params are still forcibly qualified, though,\n so I don't think the display is any more confusing than before. Adjust a\n couple of examples in the documentation to match this behavior.\n\nOn Tue, 20 Sept 2022 at 05:00, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Fri, Sep 16, 2022 at 5:59 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>> I'm not saying this is a bug, but just curious why param 0 cannot be\n>> displayed as the referenced expression. And I find the reason is that in\n>> function find_param_referent(), we have the 'in_same_plan_level' flag\n>> controlling that if we have emerged from a subplan, i.e. not the same\n>> plan level any more, we would not look further for the matching\n>> NestLoopParam. 
Param 0 suits this situation.\n>>\n>> And there is a comment there also saying,\n>>\n>> /*\n>> * NestLoops transmit params to their inner child only; also, once\n>> * we've crawled up out of a subplan, this couldn't possibly be\n>> * the right match.\n>> */\n>\n>\n> After thinking of this for more time, I still don't see the reason why\n> we cannot display NestLoopParam after we've emerged from a subplan.\n>\n> It seems these params are from parameterized subqueryscan and their\n> values are supplied by an upper nestloop. These params should have been\n> processed in process_subquery_nestloop_params() that we just add the\n> PlannerParamItem entries to root->curOuterParams, in the form of\n> NestLoopParam, using the same PARAM_EXEC slots.\n>\n> So I propose the patch attached to remove the 'in_same_plan_level' flag\n> so that we can display NestLoopParam across subplan. Please correct me\n> if I'm wrong.\n>\n> Thanks\n> Richard\n\n\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 16 Nov 2022 16:13:27 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: About displaying NestLoopParam" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> I do note that the code in question was added in this commit in 2010.\n> That predates the addition of LATERAL in 2013.\n\nYeah. It's pretty clear from the comments that I was concerned about\nfalse matches of PARAM_EXEC numbers. I think that was a live issue\nat the time but is so no longer, cf. 46c508fbc and 1db5667ba.\nThe possibility of LATERAL references is what makes it interesting\nto search higher in the plan tree, so there wasn't any real reason to\ntake any risk of a false match.\n\nI think I might've also been concerned about printing misleading\nnames for any Vars we did find, due to them belonging to a different\nquery level. 
That's probably a dead issue too now that ruleutils\nassigns unique aliases to all RTEs in the query (I'm not sure if\nit did at the time).\n\nLooking at this now, it seems a little weird to me that we allow\nLATERAL values to be passed down directly into the subplan rather\nthan having them go through the parParam mechanism. (If they did,\nruleutils' restriction would be fine.) I don't know of a reason\nto change that, though.\n\n> I suppose those\n> comments may be talking about InitPlans for things like constant\n> subqueries that have been pulled up to InitPlans in queries like:\n> explain verbose select * from x join y on (x.i=y.j) where y.j+1=(select 5) ;\n> Which your patch doesn't eliminate the $0 in.\n\nNo, because that $0 is for a subplan/initplan output, which we don't\nhave any other sort of name for. Your example produces output that\nexplains what it is:\n\n InitPlan 1 (returns $0)\n ...\n Filter: ((y.j + 1) = $0)\n\nalthough I'm not sure that we document that anywhere user-facing.\n\n> Fwiw your patch applied for me and built without warnings and seems to\n> work for all the queries I've thrown at it so far. That's hardly an\n> exhaustive test of course.\n\nI'm content to apply this (although I quibble with removal of some\nof the commentary). Worst case, somebody will find an example where\nit produces wrong/misleading output, and we can revert it. But\nthe regression test changes show that it does produce useful output\nin at least some cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Nov 2022 19:22:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: About displaying NestLoopParam" } ]
[ { "msg_contents": "Someone on general list recently complained that the error message\nfrom trying to use options on a partitioned table was misleading,\nwhich it definitely is:\n\nCREATE TABLE parted_col_comment (a int, b text) PARTITION BY LIST (a)\nWITH (fillfactor=100);\nERROR: unrecognized parameter \"fillfactor\"\n\nWhich is verified by patch 001.\n\nPatch 002 replaces this with a more meaningful error message, which\nmatches our fine manual.\nhttps://www.postgresql.org/docs/current/sql-createtable.html\n\n ERROR: cannot specify storage options for a partitioned table\n HINT: specify storage options on leaf partitions instead\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 16 Sep 2022 13:13:51 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Error for WITH options on partitioned tables" }, { "msg_contents": "\nOn Fri, 16 Sep 2022 at 20:13, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> Someone on general list recently complained that the error message\n> from trying to use options on a partitioned table was misleading,\n> which it definitely is:\n>\n> CREATE TABLE parted_col_comment (a int, b text) PARTITION BY LIST (a)\n> WITH (fillfactor=100);\n> ERROR: unrecognized parameter \"fillfactor\"\n>\n> Which is verified by patch 001.\n>\n> Patch 002 replaces this with a more meaningful error message, which\n> matches our fine manual.\n> https://www.postgresql.org/docs/current/sql-createtable.html\n>\n> ERROR: cannot specify storage options for a partitioned table\n> HINT: specify storage options on leaf partitions instead\n\nLooks good. Does this means we don't need the partitioned_table_reloptions()\nfunction and remove the reloptions validation in DefineRelation() for\npartitioned table. 
Or we can ereport() in partitioned_table_reloptions().\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 16 Sep 2022 20:48:37 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "\nI wrote:\n> On Fri, 16 Sep 2022 at 20:13, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>> Patch 002 replaces this with a more meaningful error message, which\n>> matches our fine manual.\n>> https://www.postgresql.org/docs/current/sql-createtable.html\n>>\n>> ERROR: cannot specify storage options for a partitioned table\n>> HINT: specify storage options on leaf partitions instead\n>\n> Looks good. Does this means we don't need the partitioned_table_reloptions()\n> function and remove the reloptions validation in DefineRelation() for\n> partitioned table. Or we can ereport() in partitioned_table_reloptions().\n\nI want to know why we should do validation for partitioned tables even if they\ndon't support storage parameters.\n\n /*\n * There are no options for partitioned tables yet, but this is able to do\n * some validation.\n */\n return (bytea *) build_reloptions(reloptions, validate,\n RELOPT_KIND_PARTITIONED,\n 0, NULL, 0);\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 16 Sep 2022 20:59:58 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "Hi, Simon!\n\nThe new error message looks better. But I believe it is better to use\n"parameters" instead of "options" as it is called "storage parameters"\nin the documentation. 
I also believe it is better to report the error in\npartitioned_table_reloptions() (thanks to Japin Li for mentioning this\nfunction) as it also fixes the error message in the following situation:\n\ntest=# CREATE TABLE parted_col_comment (a int, b text) PARTITION BY LIST\n(a);\nCREATE TABLE\ntest=# ALTER TABLE parted_col_comment SET (fillfactor=100);\nERROR: unrecognized parameter "fillfactor"\n\nI attached the new versions of patches.\n\nI'm not sure about the errcode. Maybe it is better to report the error with\nERRCODE_INVALID_OBJECT_DEFINITION for CREATE TABLE and with\nERRCODE_WRONG_OBJECT_TYPE for ALTER TABLE (as when you try "ALTER TABLE\npartitioned INHERIT nonpartitioned;" an ERROR with ERRCODE_WRONG_OBJECT_TYPE\nis reported). Then either the desired code should be passed to\npartitioned_table_reloptions() or similar checks and ereport calls should be\nplaced in two different places. I'm not sure whether it is a good idea to\nchange the code in one of these ways just to change the error code.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/", "msg_date": "Fri, 14 Oct 2022 18:16:56 +0300", "msg_from": "Karina Litskevich <litskevichkarina@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "Hi Karina,\n\nI am not very clear about why `build_reloptions` is removed in patch \n`v2-0002-better-error-message-for-setting-parameters-for-p.patch`, if \nyou can help explain would be great.\n\n From my observation, it seems the WITH option has different behavior \nwhen creating partitioned table and index. 
For example,\n\npgbench -i --partitions=2 --partition-method=range -d postgres\n\npostgres=# create index idx_bid on pgbench_accounts using btree(bid) \nwith (fillfactor = 90);\nCREATE INDEX\n\npostgres=# select relname, relkind, reloptions from pg_class where \nrelnamespace=2200 order by oid;\n           relname           | relkind |    reloptions\n----------------------------+---------+------------------\n  idx_bid                    | I       | {fillfactor=90}\n  pgbench_accounts_1_bid_idx | i       | {fillfactor=90}\n  pgbench_accounts_2_bid_idx | i       | {fillfactor=90}\n\nI can see the `fillfactor` parameter has been added to the indexes, \nhowever, if I try to alter `fillfactor`, I got an error like below.\npostgres=# alter index idx_bid set (fillfactor=40);\nERROR:  ALTER action SET cannot be performed on relation \"idx_bid\"\nDETAIL:  This operation is not supported for partitioned indexes.\n\nThis generic error message is reported by \n`errdetail_relkind_not_supported`, and there is a similar error message \nfor partitioned tables. Anyone knows if this can be an option for \nreporting this `fillfactor` parameter not supported error.\n\n\nBest regards,\n\nDavid\n\nOn 2022-10-14 8:16 a.m., Karina Litskevich wrote:\n> Hi, Simon!\n>\n> The new error message looks better. But I believe it is better to use\n> \"parameters\" instead of \"options\" as it is called \"storage parameters\"\n> in the documentation. I also believe it is better to report error in\n> partitioned_table_reloptions() (thanks to Japin Li for mentioning this\n> function) as it also fixes the error message in the following situation:\n>\n> test=# CREATE TABLE parted_col_comment (a int, b text) PARTITION BY \n> LIST (a);\n> CREATE TABLE\n> test=# ALTER TABLE parted_col_comment SET (fillfactor=100);\n> ERROR:  unrecognized parameter \"fillfactor\"\n>\n> I attached the new versions of patches.\n>\n> I'm not sure about the errcode. 
May be it is better to report error with\n> ERRCODE_INVALID_OBJECT_DEFINITION for CREATE TABLE and with\n> ERRCODE_WRONG_OBJECT_TYPE for ALTER TABLE (as when you try \"ALTER TABLE\n> partitioned INHERIT nonpartitioned;\" an ERROR with \n> ERRCODE_WRONG_OBJECT_TYPE\n> is reported). Then either the desired code should be passed to\n> partitioned_table_reloptions() or similar checks and ereport calls \n> should be\n> placed in two different places. I'm not sure whether it is a good idea to\n> change the code in one of these ways just to change the error code.\n>\n> Best regards,\n> Karina Litskevich\n> Postgres Professional: http://postgrespro.com/\n\n\n\n", "msg_date": "Fri, 28 Oct 2022 14:21:18 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "Apologies, I only just noticed these messages. I have set WoA until I\nhave read, understood and can respond to your detailed input, thanks.\n\nOn Fri, 28 Oct 2022 at 22:21, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> Hi Karina,\n>\n> I am not very clear about why `build_reloptions` is removed in patch\n> `v2-0002-better-error-message-for-setting-parameters-for-p.patch`, if\n> you can help explain would be great.\n>\n> From my observation, it seems the WITH option has different behavior\n> when creating partitioned table and index. 
For example,\n>\n> pgbench -i --partitions=2 --partition-method=range -d postgres\n>\n> postgres=# create index idx_bid on pgbench_accounts using btree(bid)\n> with (fillfactor = 90);\n> CREATE INDEX\n>\n> postgres=# select relname, relkind, reloptions from pg_class where\n> relnamespace=2200 order by oid;\n> relname | relkind | reloptions\n> ----------------------------+---------+------------------\n> idx_bid | I | {fillfactor=90}\n> pgbench_accounts_1_bid_idx | i | {fillfactor=90}\n> pgbench_accounts_2_bid_idx | i | {fillfactor=90}\n>\n> I can see the `fillfactor` parameter has been added to the indexes,\n> however, if I try to alter `fillfactor`, I got an error like below.\n> postgres=# alter index idx_bid set (fillfactor=40);\n> ERROR: ALTER action SET cannot be performed on relation \"idx_bid\"\n> DETAIL: This operation is not supported for partitioned indexes.\n>\n> This generic error message is reported by\n> `errdetail_relkind_not_supported`, and there is a similar error message\n> for partitioned tables. Anyone knows if this can be an option for\n> reporting this `fillfactor` parameter not supported error.\n>\n>\n> Best regards,\n>\n> David\n>\n> On 2022-10-14 8:16 a.m., Karina Litskevich wrote:\n> > Hi, Simon!\n> >\n> > The new error message looks better. But I believe it is better to use\n> > \"parameters\" instead of \"options\" as it is called \"storage parameters\"\n> > in the documentation. I also believe it is better to report error in\n> > partitioned_table_reloptions() (thanks to Japin Li for mentioning this\n> > function) as it also fixes the error message in the following situation:\n> >\n> > test=# CREATE TABLE parted_col_comment (a int, b text) PARTITION BY\n> > LIST (a);\n> > CREATE TABLE\n> > test=# ALTER TABLE parted_col_comment SET (fillfactor=100);\n> > ERROR: unrecognized parameter \"fillfactor\"\n> >\n> > I attached the new versions of patches.\n> >\n> > I'm not sure about the errcode. 
May be it is better to report error with\n> > ERRCODE_INVALID_OBJECT_DEFINITION for CREATE TABLE and with\n> > ERRCODE_WRONG_OBJECT_TYPE for ALTER TABLE (as when you try "ALTER TABLE\n> > partitioned INHERIT nonpartitioned;" an ERROR with\n> > ERRCODE_WRONG_OBJECT_TYPE\n> > is reported). Then either the desired code should be passed to\n> > partitioned_table_reloptions() or similar checks and ereport calls\n> > should be\n> > placed in two different places. I'm not sure whether it is a good idea to\n> > change the code in one of these ways just to change the error code.\n> >\n> > Best regards,\n> > Karina Litskevich\n> > Postgres Professional: http://postgrespro.com/\n>\n\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 1 Nov 2022 23:58:20 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "Hi David,\n\n> I am not very clear about why `build_reloptions` is removed in patch\n> `v2-0002-better-error-message-for-setting-parameters-for-p.patch`, if\n> you can help explain would be great.\n\n"build_reloptions" parses "reloptions" and takes for it a list of allowed\noptions defined by the 5th argument "relopt_elems" and the 6th argument\n"num_relopt_elems", which are NULL and 0 in the removed call. If "validate"\nis false, it ignores options, which are not in the list, while parsing. If\n"validate" is true, it "elog"s ERROR when it meets an option, which is not in\nthe\nallowed list.\n\nAs in the deleted call "build_reloptions" is called with an empty list of\nallowed options, it does nothing (returns NULL) when "validate" is false,\nand\n"elog"s ERROR when "validate" is true and "reloptions" is not empty. That is\nwhat the deleted comment above the deleted call is about. This call is there\nonly\nfor validation. 
So as I wanted to make a specific error message for the\ncase of\npartitioned tables, I added the validation in "partitioned_table_reloptions"\nand saw no reason to call "build_reloptions" any more because it would just\nreturn NULL in other cases.\n\n> This generic error message is reported by\n> `errdetail_relkind_not_supported`, and there is a similar error message\n> for partitioned tables. Anyone knows if this can be an option for\n> reporting this `fillfactor` parameter not supported error.\n\nThis error is reported by "ATSimplePermissions" and, as we can see in the\nbeginning of this function, it makes no difference between regular relations\nand partitioned tables now. To make it report errors for partitioned tables\nwe\nshould add a new "alter table target-type flag" and add it to a mask of each\n"AlterTableType" if partitioned table is an allowed target for it (see that\nhuge switch-case in function "ATPrepCmd"). Then\n"partitioned_table_reloptions"\nmay become useless and we also should check whether some other functions\nbecome\nuseless. Maybe that is the right way, but it looks much harder than the\nexisting solutions, so I believe, before anyone begins going this way, it's\nbetter to know whether there are any pitfalls there.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/
If\"validate\" is true, it \"elog\"s ERROR when it meets option, which is not in theallowed list.As in the deleted call \"build_reloptions\" is called with an empty list ofallowed options, it does nothing (returns NULL) when \"validate\" is false, and\"elog\"s ERROR when \"validate\" is true and \"reloptions\" is not empty. That iswhat the deleted comment above the deleted call about. This call is there onlyfor validation. So as I wanted to make a specific error message for the case ofpartitioned tables, I added the validation in \"partitioned_table_reloptions\"and saw no reason to call \"build_reloptions\" any more because it would justreturn NULL in other cases.> This generic error message is reported by > `errdetail_relkind_not_supported`, and there is a similar error message > for partitioned tables. Anyone knows if this can be an option for > reporting this `fillfactor` parameter not supported error.This error is reported by \"ATSimplePermissions\" and, as we can see in thebeginning of this function, it makes no difference between regular relationsand partitioned tables now. To make it report errors for partitioned tables weshould add new \"alter table target-type flag\" and add it to a mask of each\"AlterTableType\" if partitioned table is an allowed target for it (see thathuge switch-case in function \"ATPrepCmd\"). Then \"partitioned_table_reloptions\"may become useless and we also should check weather some other functions becomeuseless. 
", "msg_date": "Mon, 7 Nov 2022 11:55:32 +0300", "msg_from": "Karina Litskevich <litskevichkarina@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "On Mon, 7 Nov 2022 at 08:55, Karina Litskevich\n<litskevichkarina@gmail.com> wrote:\n>\n> Hi David,\n>\n> > I am not very clear about why `build_reloptions` is removed in patch\n> > `v2-0002-better-error-message-for-setting-parameters-for-p.patch`, if\n> > you can help explain would be great.\n>\n> "build_reloptions" parses "reloptions" and takes for it a list of allowed\n> options defined by the 5th argument "relopt_elems" and the 6th argument\n> "num_relopt_elems", which are NULL and 0 in the removed call. If "validate"\n> is false, it ignores options, which are not in the list, while parsing. If\n> "validate" is true, it "elog"s ERROR when it meets option, which is not in the\n> allowed list.\n\nKarina's changes make sense to me, so +1.\n\nThis is a minor patch, so I will set this as Ready For Committer.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 7 Nov 2022 11:10:06 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Error for WITH options on partitioned tables" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> Karina's changes make sense to me, so +1.\n> This is a minor patch, so I will set this as Ready For Committer.\n\nPushed with minor fiddling:\n\n* I concur with Karina's thought that ERRCODE_WRONG_OBJECT_TYPE\nis the most on-point errcode for this. 
The complaint is specifically\nabout the table relkind and has nothing to do with the storage\nparameter per se. I also agree that it's not worth trying to use\na different errcode for CREATE vs. ALTER.\n\n* The HINT message wasn't per project style (it should be formatted as\na complete sentence), and I thought using \"parameters for\" in the\nmain message but \"parameters on\" in the hint was oddly inconsistent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Nov 2022 12:33:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error for WITH options on partitioned tables" } ]
[ { "msg_contents": "A user asked me whether we prune never visible changes, such as\n\nBEGIN;\nINSERT...\nUPDATE.. (same row)\nCOMMIT;\n\nOnce committed, the original insert is no longer visible to anyone, so\n\"ought to be able to be pruned\", sayeth the user. And they also say\nthat changing the app is much harder, as ever.\n\nAfter some thought, Yes, we can prune, but not in all cases - only if\nthe never visible tuple is at the root end of the update chain. The\nonly question is can that be done cheaply enough to bother with. The\nanswer in one specific case is Yes, in other cases No.\n\nThis patch adds a new test for this use case, and code to remove the\nnever visible row when the changes are made by the same xid.\n\n(I'm pretty sure there used to be a test for this some years back and\nI'm guessing it was removed because it isn't always possible to remove\nthe tuple, which this new patch honours.)\n\nPlease let me know what you think.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 16 Sep 2022 13:33:23 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Pruning never visible changes" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> A user asked me whether we prune never visible changes, such as\n> BEGIN;\n> INSERT...\n> UPDATE.. (same row)\n> COMMIT;\n\nDidn't we just have this discussion in another thread? 
You cannot\ndo that, at least not without checking that the originating\ntransaction has no snapshots that could see the older row version.\nI'm not sure whether or not snapmgr.c has enough information to\ndetermine that, but in any case this formulation is surely\nunsafe, because it isn't even checking whether that transaction is\nour own, let alone asking snapmgr.c.\n\nI'm dubious that a safe version would fire often enough to be\nworth the cycles spent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 10:26:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Fri, 16 Sept 2022 at 15:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > A user asked me whether we prune never visible changes, such as\n> > BEGIN;\n> > INSERT...\n> > UPDATE.. (same row)\n> > COMMIT;\n>\n> Didn't we just have this discussion in another thread?\n\nNot that I was aware of, but it sounds like a different case anyway.\n\n> You cannot\n> do that, at least not without checking that the originating\n> transaction has no snapshots that could see the older row version.\n\nI'm saying this is possible only AFTER the end of the originating\nxact, so there are no issues with additional snapshots.\n\ni.e. 
the never visible row has to be judged RECENTLY_DEAD before we even check.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 16 Sep 2022 17:56:42 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Fri, 16 Sept 2022 at 15:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You cannot\n>> do that, at least not without checking that the originating\n>> transaction has no snapshots that could see the older row version.\n\n> I'm saying this is possible only AFTER the end of the originating\n> xact, so there are no issues with additional snapshots.\n\nI see, so the point is just that we can prune even if the originating\nxact hasn't yet passed the global xmin horizon. I agree that's safe,\nbut will it fire often enough to be worth the trouble? Also, why\ndoes it need to be restricted to certain shapes of HOT chains ---\nthat is, why can't we do exactly what we'd do if the xact *were*\npast the xmin horizon?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 13:37:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Fri, 2022-09-16 at 10:26 -0400, Tom Lane wrote:\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > A user asked me whether we prune never visible changes, such as\n> > BEGIN;\n> > INSERT...\n> > UPDATE.. 
(same row)\n> > COMMIT;\n> \n> Didn't we just have this discussion in another thread?\n\nFor reference: that was\nhttps://postgr.es/m/f6a491b32cb44bb5daaafec835364f7149348fa1.camel@cybertec.at\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 16 Sep 2022 22:07:48 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Fri, 16 Sept 2022 at 21:07, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2022-09-16 at 10:26 -0400, Tom Lane wrote:\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > A user asked me whether we prune never visible changes, such as\n> > > BEGIN;\n> > > INSERT...\n> > > UPDATE.. (same row)\n> > > COMMIT;\n> >\n> > Didn't we just have this discussion in another thread?\n>\n> For reference: that was\n> https://postgr.es/m/f6a491b32cb44bb5daaafec835364f7149348fa1.camel@cybertec.at\n\nThanks. I confirm I hadn't seen that, and indeed, I wrote the patch on\n5 Sept before you posted.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 17 Sep 2022 07:33:07 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Fri, 16 Sept 2022 at 18:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Fri, 16 Sept 2022 at 15:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> You cannot\n> >> do that, at least not without checking that the originating\n> >> transaction has no snapshots that could see the older row version.\n>\n> > I'm saying this is possible only AFTER the end of the originating\n> > xact, so there are no issues with additional snapshots.\n>\n> I see, so the point is just that we can prune even if the originating\n> xact hasn't yet passed the global xmin horizon. 
I agree that's safe,\n> but will it fire often enough to be worth the trouble?\n\nIt is an edge case with limited utility, I agree, which is why I\nlooked for a cheap test (xmin == xmax only).\n\nThis additional test is also valid, but too expensive to apply:\nTransactionIdGetTopmostTransactionId(xmax) ==\nTransactionIdGetTopmostTransactionId(xmin)\n\n> Also, why\n> does it need to be restricted to certain shapes of HOT chains ---\n> that is, why can't we do exactly what we'd do if the xact *were*\n> past the xmin horizon?\n\nI thought it important to maintain the integrity of the HOT chain, in\nthat the xmax of one tuple version matches the xmin of the next. So\npruning only from the root of the chain allows us to maintain that\nvalidity check.\n\nI'm on the fence myself, which is why I didn't post it immediately after I\nhad written it.\n\nIf not, it would be useful to add a note in comments to the code to\nexplain why we don't do this.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 17 Sep 2022 07:46:58 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Fri, 16 Sept 2022 at 10:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > A user asked me whether we prune never visible changes, such as\n> > BEGIN;\n> > INSERT...\n> > UPDATE.. (same row)\n> > COMMIT;\n>\n> Didn't we just have this discussion in another thread?\n\nWell..... not \"just\" :)\n\ncommit 44e4bbf75d56e643b6afefd5cdcffccb68cce414\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Fri Apr 29 16:29:42 2011 -0400\n\n Remove special case for xmin == xmax in HeapTupleSatisfiesVacuum().\n\n VACUUM was willing to remove a committed-dead tuple immediately if it was\n deleted by the same transaction that inserted it. 
The idea is that such a\n tuple could never have been visible to any other transaction, so we don't\n need to keep it around to satisfy MVCC snapshots. However, there was\n already an exception for tuples that are part of an update chain, and this\n exception created a problem: we might remove TOAST tuples (which are never\n part of an update chain) while their parent tuple stayed around (if it was\n part of an update chain). This didn't pose a problem for most things,\n since the parent tuple is indeed dead: no snapshot will ever consider it\n visible. But MVCC-safe CLUSTER had a problem, since it will try to copy\n RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST\n data too, and would fail if VACUUM had already removed the toast tuples.\n\n Easiest fix is to get rid of the special case for xmin == xmax. This may\n delay reclaiming dead space for a little bit in some cases, but it's by far\n the most reliable way to fix the issue.\n\n Per bug #5998 from Mark Reid. Back-patch to 8.3, which is the oldest\n version with MVCC-safe CLUSTER.\n\n\n", "msg_date": "Sun, 18 Sep 2022 19:16:00 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Mon, 19 Sept 2022 at 01:16, Greg Stark <stark@mit.edu> wrote:\n>\n> On Fri, 16 Sept 2022 at 10:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > A user asked me whether we prune never visible changes, such as\n> > > BEGIN;\n> > > INSERT...\n> > > UPDATE.. (same row)\n> > > COMMIT;\n> >\n> > Didn't we just have this discussion in another thread?\n>\n> Well..... 
not \"just\" :)\n\nThis recent thread [0] mentioned the same, and I mentioned it in [1]\ntoo last year.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/2031521.1663076724%40sss.pgh.pa.us#01542683a64a312e5c21541fecd50e63\nSubject: Re: Tuples inserted and deleted by the same transaction\nDate: 2022-09-13 14:13:44\n\n[1] https://www.postgresql.org/message-id/CAEze2Whjnhg96Wt2-DxtBydhmMDmVm2WfWOX3aGcB2C2Hbry0Q%40mail.gmail.com\nSubject: Re: pg14b1 stuck in lazy_scan_prune/heap_page_prune of pg_statistic\nDate: 2021-06-14 09:53:47\n(in a thread about a PS comment)\n\n\n", "msg_date": "Mon, 19 Sep 2022 17:29:14 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Mon, 19 Sept 2022 at 00:16, Greg Stark <stark@mit.edu> wrote:\n>\n> On Fri, 16 Sept 2022 at 10:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > A user asked me whether we prune never visible changes, such as\n> > > BEGIN;\n> > > INSERT...\n> > > UPDATE.. (same row)\n> > > COMMIT;\n> >\n> > Didn't we just have this discussion in another thread?\n>\n> Well..... not \"just\" :)\n>\n> commit 44e4bbf75d56e643b6afefd5cdcffccb68cce414\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Fri Apr 29 16:29:42 2011 -0400\n>\n> Remove special case for xmin == xmax in HeapTupleSatisfiesVacuum().\n>\n> VACUUM was willing to remove a committed-dead tuple immediately if it was\n> deleted by the same transaction that inserted it. The idea is that such a\n> tuple could never have been visible to any other transaction, so we don't\n> need to keep it around to satisfy MVCC snapshots. 
However, there was\n> already an exception for tuples that are part of an update chain, and this\n> exception created a problem: we might remove TOAST tuples (which are never\n> part of an update chain) while their parent tuple stayed around (if it was\n> part of an update chain). This didn't pose a problem for most things,\n> since the parent tuple is indeed dead: no snapshot will ever consider it\n> visible. But MVCC-safe CLUSTER had a problem, since it will try to copy\n> RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST\n> data too, and would fail if VACUUM had already removed the toast tuples.\n>\n> Easiest fix is to get rid of the special case for xmin == xmax. This may\n> delay reclaiming dead space for a little bit in some cases, but it's by far\n> the most reliable way to fix the issue.\n>\n> Per bug #5998 from Mark Reid. Back-patch to 8.3, which is the oldest\n> version with MVCC-safe CLUSTER.\n\nGood research Greg, thank you. Only took 10 years for me to notice it\nwas gone ;-)\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Sep 2022 14:10:07 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On 2022-Sep-22, Simon Riggs wrote:\n\n> On Mon, 19 Sept 2022 at 00:16, Greg Stark <stark@mit.edu> wrote:\n\n> > VACUUM was willing to remove a committed-dead tuple immediately if it was\n> > deleted by the same transaction that inserted it. The idea is that such a\n> > tuple could never have been visible to any other transaction, so we don't\n> > need to keep it around to satisfy MVCC snapshots. However, there was\n> > already an exception for tuples that are part of an update chain, and this\n> > exception created a problem: we might remove TOAST tuples (which are never\n> > part of an update chain) while their parent tuple stayed around (if it was\n> > part of an update chain). 
This didn't pose a problem for most things,\n> > since the parent tuple is indeed dead: no snapshot will ever consider it\n> > visible. But MVCC-safe CLUSTER had a problem, since it will try to copy\n> > RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST\n> > data too, and would fail if VACUUM had already removed the toast tuples.\n\n> Good research Greg, thank you. Only took 10 years for me to notice it\n> was gone ;-)\n\nBut this begs the question: is the proposed change safe, given that\nancient consideration? I don't think TOAST issues have been mentioned\nin this thread so far, so I wonder if there is a test case that verifies\nthat this problem doesn't occur for some reason.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Sep 2022 16:15:50 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Pruning never visible changes" }, { "msg_contents": "On Thu, 22 Sept 2022 at 15:16, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-22, Simon Riggs wrote:\n>\n> > On Mon, 19 Sept 2022 at 00:16, Greg Stark <stark@mit.edu> wrote:\n>\n> > > VACUUM was willing to remove a committed-dead tuple immediately if it was\n> > > deleted by the same transaction that inserted it. The idea is that such a\n> > > tuple could never have been visible to any other transaction, so we don't\n> > > need to keep it around to satisfy MVCC snapshots. However, there was\n> > > already an exception for tuples that are part of an update chain, and this\n> > > exception created a problem: we might remove TOAST tuples (which are never\n> > > part of an update chain) while their parent tuple stayed around (if it was\n> > > part of an update chain). This didn't pose a problem for most things,\n> > > since the parent tuple is indeed dead: no snapshot will ever consider it\n> > > visible. 
But MVCC-safe CLUSTER had a problem, since it will try to copy\n> > > RECENTLY_DEAD tuples to the new table. It then has to copy their TOAST\n> > > data too, and would fail if VACUUM had already removed the toast tuples.\n>\n> > Good research Greg, thank you. Only took 10 years for me to notice it\n> > was gone ;-)\n>\n> But this begs the question: is the proposed change safe, given that\n> ancient consideration? I don't think TOAST issues have been mentioned\n> in this thread so far, so I wonder if there is a test case that verifies\n> that this problem doesn't occur for some reason.\n\nOh, completely agreed.\n\nI will submit a modified patch that adds a test case and just a\ncomment to explain why we can't remove such rows.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 22 Sep 2022 21:04:35 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Pruning never visible changes" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe release team is planning to release PostgreSQL 15 Release Candidate \r\n1 (RC1) on 2022-09-29. Please ensure all open items[1] are resolved no \r\nlater than 2022-09-24 0:00 AoE.\r\n\r\nFollowing recent release patterns, we are planning for 2022-10-06 to be the \r\nGA date. This may change based on what reports we receive from testing \r\nover the next few weeks.\r\n\r\nPlease let us know if you have any questions.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items", "msg_date": "Fri, 16 Sep 2022 09:17:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 RC1 + GA dates" } ]
[ { "msg_contents": "According to\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=2127503\n\nbleeding-edge clang complains thusly:\n\nllvmjit_inline.cpp: In function 'std::unique_ptr<llvm::ModuleSummaryIndex> llvm_load_summary(llvm::StringRef)':\nllvmjit_inline.cpp:771:37: error: incomplete type 'llvm::MemoryBuffer' used in nested name specifier\n 771 | llvm::MemoryBuffer::getFile(path);\n | ^~~~~~~\nIn file included from /usr/include/c++/12/memory:76,\n from /usr/include/llvm/ADT/SmallVector.h:28,\n from /usr/include/llvm/ADT/ArrayRef.h:14,\n from /usr/include/llvm/ADT/SetVector.h:23,\n from llvmjit_inline.cpp:48:\n/usr/include/c++/12/bits/unique_ptr.h: In instantiation of 'void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = llvm::MemoryBuffer]':\n/usr/include/c++/12/bits/unique_ptr.h:396:17: required from 'std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = llvm::MemoryBuffer; _Dp = std::default_delete<llvm::MemoryBuffer>]'\n/usr/include/llvm/Support/ErrorOr.h:142:34: required from 'llvm::ErrorOr<T>::~ErrorOr() [with T = std::unique_ptr<llvm::MemoryBuffer>]'\nllvmjit_inline.cpp:771:35: required from here\n/usr/include/c++/12/bits/unique_ptr.h:93:23: error: invalid application of 'sizeof' to incomplete type 'llvm::MemoryBuffer'\n 93 | static_assert(sizeof(_Tp)>0,\n | ^~~~~~~~~~~\n\nI suspect this is less about clang and more about LLVM APIs,\nbut anyway it seems like we gotta fix something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 11:40:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "clang 15 doesn't like our JIT code" }, { "msg_contents": "Hi,\n\n\nOn 2022-09-16 11:40:46 -0400, Tom Lane wrote:\n> According to\n> \n> https://bugzilla.redhat.com/show_bug.cgi?id=2127503\n> \n> bleeding-edge clang complains thusly:\n> \n> llvmjit_inline.cpp: In function 'std::unique_ptr<llvm::ModuleSummaryIndex> llvm_load_summary(llvm::StringRef)':\n> llvmjit_inline.cpp:771:37: error: incomplete type 
'llvm::MemoryBuffer' used in nested name specifier\n> 771 | llvm::MemoryBuffer::getFile(path);\n> | ^~~~~~~\n> In file included from /usr/include/c++/12/memory:76,\n> from /usr/include/llvm/ADT/SmallVector.h:28,\n> from /usr/include/llvm/ADT/ArrayRef.h:14,\n> from /usr/include/llvm/ADT/SetVector.h:23,\n> from llvmjit_inline.cpp:48:\n> /usr/include/c++/12/bits/unique_ptr.h: In instantiation of 'void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = llvm::MemoryBuffer]':\n> /usr/include/c++/12/bits/unique_ptr.h:396:17: required from 'std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = llvm::MemoryBuffer; _Dp = std::default_delete<llvm::MemoryBuffer>]'\n> /usr/include/llvm/Support/ErrorOr.h:142:34: required from 'llvm::ErrorOr<T>::~ErrorOr() [with T = std::unique_ptr<llvm::MemoryBuffer>]'\n> llvmjit_inline.cpp:771:35: required from here\n> /usr/include/c++/12/bits/unique_ptr.h:93:23: error: invalid application of 'sizeof' to incomplete type 'llvm::MemoryBuffer'\n> 93 | static_assert(sizeof(_Tp)>0,\n> | ^~~~~~~~~~~\n> \n> I suspect this is less about clang and more about LLVM APIs,\n> but anyway it seems like we gotta fix something.\n\nYea, there's definitely a bunch of llvm 15 issues that need to be fixed - this\nparticular failure is pretty easy to fix, but there's some others that are\nharder. They redesigned a fairly core part of the IR representation. 
Thomas\nhas a WIP fix, I think.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Sep 2022 11:45:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: clang 15 doesn't like our JIT code" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-16 11:40:46 -0400, Tom Lane wrote:\n>> I suspect this is less about clang and more about LLVM APIs,\n>> but anyway it seems like we gotta fix something.\n\n> Yea, there's definitely a bunch of llvm 15 issues that need to be fixed - this\n> particular failure is pretty easy to fix, but there's some others that are\n> harder. They redesigned a fairly core part of the IR representation. Thomas\n> has a WIP fix, I think.\n\nI'm more and more getting the feeling that we're interfacing with LLVM\nat too low a level, because it seems like our code is constantly breaking.\nDo they just not have any stable API at all?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 16:07:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: clang 15 doesn't like our JIT code" }, { "msg_contents": "Hi,\n\nOn 2022-09-16 16:07:51 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-09-16 11:40:46 -0400, Tom Lane wrote:\n> >> I suspect this is less about clang and more about LLVM APIs,\n> >> but anyway it seems like we gotta fix something.\n>\n> > Yea, there's definitely a bunch of llvm 15 issues that need to be fixed - this\n> > particular failure is pretty easy to fix, but there's some others that are\n> > harder. They redesigned a fairly core part of the IR representation. Thomas\n> > has a WIP fix, I think.\n>\n> I'm more and more getting the feeling that we're interfacing with LLVM\n> at too low a level, because it seems like our code is constantly breaking.\n> Do they just not have any stable API at all?\n\nI don't think it's the wrong level. 
While LLVM has a subset of the API that's\nsupposed to be stable, and we mostly use only that subset, they've definitely\nare breaking it more and more frequently. Based on my observation that's\nbecause more and more of the development is done by google and facebook, which\ninternally use monorepos, and vendor LLVM - that kind of model makes API\nchanges much less of an issue. OTOH, the IR breakage (and a few prior related\nones) is about fixing a design issue they've been talking about fixing for 10+\nyears...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 16 Sep 2022 13:16:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: clang 15 doesn't like our JIT code" }, { "msg_contents": "On Sat, Sep 17, 2022 at 6:45 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-09-16 11:40:46 -0400, Tom Lane wrote:\n> > According to\n> >\n> > https://bugzilla.redhat.com/show_bug.cgi?id=2127503\n> >\n> > bleeding-edge clang complains thusly:\n> >\n> > llvmjit_inline.cpp: In function 'std::unique_ptr<llvm::ModuleSummaryIndex> llvm_load_summary(llvm::StringRef)':\n> > llvmjit_inline.cpp:771:37: error: incomplete type 'llvm::MemoryBuffer' used in nested name specifier\n> > 771 | llvm::MemoryBuffer::getFile(path);\n> > | ^~~~~~~\n> > In file included from /usr/include/c++/12/memory:76,\n> > from /usr/include/llvm/ADT/SmallVector.h:28,\n> > from /usr/include/llvm/ADT/ArrayRef.h:14,\n> > from /usr/include/llvm/ADT/SetVector.h:23,\n> > from llvmjit_inline.cpp:48:\n> > /usr/include/c++/12/bits/unique_ptr.h: In instantiation of 'void std::default_delete<_Tp>::operator()(_Tp*) const [with _Tp = llvm::MemoryBuffer]':\n> > /usr/include/c++/12/bits/unique_ptr.h:396:17: required from 'std::unique_ptr<_Tp, _Dp>::~unique_ptr() [with _Tp = llvm::MemoryBuffer; _Dp = std::default_delete<llvm::MemoryBuffer>]'\n> > /usr/include/llvm/Support/ErrorOr.h:142:34: required from 'llvm::ErrorOr<T>::~ErrorOr() [with T = 
std::unique_ptr<llvm::MemoryBuffer>]'\n> > llvmjit_inline.cpp:771:35: required from here\n> > /usr/include/c++/12/bits/unique_ptr.h:93:23: error: invalid application of 'sizeof' to incomplete type 'llvm::MemoryBuffer'\n> > 93 | static_assert(sizeof(_Tp)>0,\n> > | ^~~~~~~~~~~\n> >\n> > I suspect this is less about clang and more about LLVM APIs,\n> > but anyway it seems like we gotta fix something.\n>\n> Yea, there's definitely a bunch of llvm 15 issues that need to be fixed - this\n> particular failure is pretty easy to fix, but there's some others that are\n> harder. They redesigned a fairly core part of the IR representation. Thomas\n> has a WIP fix, I think.\n\nYes, I've been working on this and will try to have a patch on the\nlist in a few days. There are also a few superficial changes to\nnames, arguments, headers etc like the one reported there, but the\nreal problem is that it aborts at runtime when JIT stuff happens, so I\ndidn't want to push changes for the superficial things without\naddressing that or someone might get a nasty surprise. Separately,\nthere's also the walker stuff[1] to address.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKpHPDTv67Y%2Bs6yiC8KH5OXeDg6a-twWo_xznKTcG0kSA%40mail.gmail.com\n\n\n", "msg_date": "Sat, 17 Sep 2022 08:36:08 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: clang 15 doesn't like our JIT code" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> there's also the walker stuff[1] to address.\n\nYeah. I just did some experimentation with that, and it looks like\nneither gcc nor clang will cut you any slack at all for declaring\nan argument as \"void *\": given say\n\ntypedef bool (*tree_walker_callback) (Node *node, void *context);\n\nthe walker functions also have to be declared with exactly \"void *\"\nas their second argument. So it's going to be just as messy and\nfull-of-casts as we feared. 
Still, I'm not sure we have any\nalternative.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 19:15:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: clang 15 doesn't like our JIT code" } ]
[ { "msg_contents": "Hi Hackers,\r\n\r\nA recent observation about the way pg_stat_statements\r\nhandles CURSORS suggests the behavior is undesirable. This\r\nwas also recently brought up by Fujii Masao in the thread [1].\r\n\r\nThe findings are:\r\n1. DECLARE CURSOR statements are not normalized.\r\n2. The statistics are aggregated on the FETCH\r\n   statement.\r\n3. Planning time is not tracked for DECLARE CURSOR\r\n   statements.\r\n\r\nFor #1, the concern is that applications that heavily\r\nuse cursors could end up seeing heavy pgss thrashing if\r\nthe query parameters change often.\r\n\r\nFor #2, since the FETCH statement only deals with a\r\ncursor name, similar cursor names with different\r\nunderlying cursor statements will be lumped into\r\nthe same entry, which is absolutely incorrect from a\r\nstatistics perspective. Even if the same cursor name\r\nis always for the same underlying statement, the pgss\r\nuser has to do extra parsing work to figure out which\r\nunderlying SQL statement is for the cursor.\r\n\r\nFor #3, planning time for cursors is not considered\r\nbecause pgss always sets queryId to 0 for utility statements,\r\nand pgss_planner is taught to ignore queryId's = 0. 
This should\r\nnot be the case if the UTILITY statement has an underlying\r\noptimizable statement.\r\n\r\nI have attached v01-improve-cursor-tracking-in-pg_stat_statements.patch\r\nwhich does the following:\r\n\r\n## without the patch\r\n\r\npostgres=# begin;\r\nBEGIN\r\npostgres=*# declare c1 cursor for select * from foo where id = 1;\r\nDECLARE CURSOR\r\npostgres=*# fetch c1; close c1;\r\nid\r\n----\r\n 1\r\n(1 row)\r\n\r\nCLOSE CURSOR\r\npostgres=*# declare c1 cursor for select * from foo where id = 2;\r\nDECLARE CURSOR\r\npostgres=*# fetch c1; close c1;\r\nid\r\n----\r\n 2\r\n(1 row)\r\n\r\nCLOSE CURSOR\r\npostgres=*# select query, calls from pg_stat_statements where query like '%c1%';\r\n query | calls | plans\r\n----------------------------------------------------------------------+--------+-------\r\ndeclare c1 cursor for select * from foo where id = 1 | 1 | 0\r\ndeclare c1 cursor for select * from foo where id = 2 | 1 | 0\r\nclose c1 | 2 | 0\r\nfetch c1 | 2 | 0\r\n(4 rows)\r\n\r\n### with the patch\r\n\r\npostgres=# begin;\r\nBEGIN\r\npostgres=*# declare c1 cursor for select * from foo where id = 1;\r\nDECLARE CURSOR\r\npostgres=*# fetch c1;\r\nid\r\n----\r\n 1\r\n(1 row)\r\n\r\npostgres=*# close c1;\r\nCLOSE CURSOR\r\npostgres=*# declare c1 cursor for select * from foo where id = 1;\r\nDECLARE CURSOR\r\npostgres=*# fetch c1;\r\nid\r\n----\r\n 1\r\n(1 row)\r\n\r\npostgres=*# close c1;\r\nCLOSE CURSOR\r\npostgres=*# select query, calls from pg_stat_statements where query like '%c1%';\r\npostgres=*# select query, calls, plans from pg_stat_statements where query like '%c1%';\r\n query | calls | plans\r\n------------------------------------------------------------------------+------+-------\r\ndeclare c1 cursor for select * from foo where id = $1 | 2 | 2\r\n(1 row)\r\n\r\nWe can see that:\r\nA. without the patch, planning stats were not considered,\r\n but with the patch they are.\r\nB. 
With the patch, the queries are normalized,\r\n   reducing the # of entries.\r\nC. With the patch, the cursor stats are tracked by the top\r\n   level DECLARE CURSOR statement instead of the FETCH. This\r\n   means all FETCHes to the same cursor, regardless of the\r\n   FETCH options, will be tracked together. This to me is\r\n   more reasonable than lumping all FETCH stats for a cursor,\r\n   even if the cursor's underlying statement is different.\r\nD. The CLOSE <cursor> statement is also not tracked any\r\n   longer with the patch.\r\nE. For queryId Jumbling, an underlying statement will not have\r\n   the same queryId as a similar statement that is not\r\n   run in a cursor. i.e. \"select * from tab\" will have a different\r\n   queryId from \"declare c1 cursor for select * from tab\"\r\n\r\nFeedback on this patch will be appreciated.\r\n\r\nRegards,\r\n\r\n[1] https://www.postgresql.org/message-id/37d32e91-4a08-afaf-a3a8-fd0578e4db50%40oss.nttdata.com\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)", "msg_date": "Fri, 16 Sep 2022 18:32:34 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": true, "msg_subject": "Improve cursor handling in pg_stat_statements" } ]
[ { "msg_contents": "Hi,\n\nI liked this idea, and after reviewing the code I noticed some points \nand would like to ask you some questions.\n\n\nFirstly, I suggest some editing of the commit message. I think it \nturned out more laconic while staying just as clear. I wrote it below \nsince I can't think of any other way to add it.\n\n```\nCurrently, we have to wait for the query execution to finish to check \nits plan.\nThis is not so convenient when investigating long-running queries on \nproduction\nenvironments where we cannot use debuggers.\n\nTo improve this situation, this patch proposes the pg_log_query_plan()\nfunction, which requests logging the plan of the specified backend process.\n\nBy default, only superusers are allowed to request logging of the plan; \notherwise,\nallowing any user to issue this request could cause lots of log \nmessages\nand could lead to denial of service.\n\nAt the next CHECK_FOR_INTERRUPTS(), the target backend logs its plan at\nLOG_SERVER_ONLY level, and therefore this plan will appear in the server \nlog only,\nnot be sent to the client.\n```\n\nSecondly, I have a question about deleting USE_ASSERT_CHECKING in lock.h.\nIt is supposed to be checked in other places of the code by \nmatching values. I am worried about missing errors because the places \nwhere it (GetLockMethodLocalHash) participates go untested without \nthe assert option, and we won't be able to get a core file in segfault \ncases. I might not understand something; if so, can you please explain \nit to me?\n\nThirdly, I don't understand why save_ActiveQueryDesc is \ndeclared in pquery.h. It seems to me that save_ActiveQueryDesc is used \nonly once, in the ExecutorRun function, so its declaration there is \nsuperfluous. I addressed this in the attached patch.\n\nFourthly, it seems to me there are not enough explanatory comments in \nthe code. 
I also added them in the attached patch.\n\nLastly, I am unsure about the signal handling, since I have not worked \nwith it before. Could another signal prevent this signal from logging \nthe query plan? I noticed this signal is checked last in the \nprocsignal_sigusr1_handler function.\n\nRegards,\n\n--\nAlena Rybakina\nPostgres Professional", "msg_date": "Fri, 16 Sep 2022 21:51:01 +0300", "msg_from": "\"a.rybakina\" <a.rybakina@postgrespro.ru>", "msg_from_op": true, "msg_subject": "RFC: Logging plan of the running query" } ]
[ { "msg_contents": "Hi y'all, I've got a proposed clarification to the documentation on the\nnuances of RLS behavior for update policies, and maybe a (humble) request\nfor a change in behavior to make it more intuitive. I am starting with\npgsql-docs since I think the documentation change is a good starting point.\n\nWe use RLS policies to hide \"soft deleted\" objects from certain DB roles.\nWe recently tried to add the ability to let a user \"soft delete\" a row.\nIdeally, we can write an RLS policy that allows a user to \"soft delete\" a\nrow, but then hides the row from that same user once it is soft deleted.\n\nHere's the setup on a fresh Postgres db for my example. I'm executing these\nqueries as the database owner:\n```\ncreate role some_user_type;\ncreate table foo(id int primary key, soft_deleted_at timestamptz,\nsome_other_field text);\nalter table foo enable row level security;\ngrant select on table foo to some_user_type;\ngrant update(soft_deleted_at) on table foo to some_user_type;\ninsert into foo(id) values (1);\ninsert into foo(id, soft_deleted_at) values (2, now());\n```\n\nThe behavior I'm trying to encode in RLS is that users with the role\nsome_user_type can see all rows where soft_deleted_at is null, can update\nrows where BOTH the original soft_deleted_at is null AND the updated row\nhas a non-null soft_deleted_at. 
Basically, the only thing this user can do\nto this row is to soft delete it.\n\nWe'll use a restrictive policy to get better error messages when we do an\nupdate later on:\n```\ncreate policy pol_1 on foo for select to some_user_type using (true);\ncreate policy pol_1_res on foo as restrictive for select to some_user_type\nusing (soft_deleted_at is null);\n```\n\nAnd just to verify it's working:\n```\nchandler@localhost:chandler> begin; set local role some_user_type; select *\nfrom foo; rollback;\nBEGIN\nSET\n╒════╤═════════════════╤══════════════════╕\n│ id │ soft_deleted_at │ some_other_field │\n╞════╪═════════════════╪══════════════════╡\n│ 1 │ ¤ │ ¤ │\n╘════╧═════════════════╧══════════════════╛\nSELECT 1\nROLLBACK\n```\n\nNow the important bit, the update policy:\n```\ncreate policy pol_2 on foo for update to some_user_type using\n(soft_deleted_at is null) with check (soft_deleted_at is not null);\n```\n\nIf we update a row without a where clause that touches the row, the update\nsucceeds:\n```\nchandler@localhost:chandler> begin; set local role some_user_type; update\nfoo set soft_deleted_at = now() where true; rollback;\nBEGIN\nSET\nUPDATE 1\nROLLBACK\n```\n\nBut if we update a row with a where clause that uses the existing row, the\nupdate fails:\n```\nchandler@localhost:chandler> begin; set local role some_user_type; update\nfoo set soft_deleted_at = now() where id = 1; rollback;\nBEGIN\nSET\nnew row violates row-level security policy \"pol_1_res\" for table \"foo\"\n```\n\nThis was very unintuitive to me. My understanding is that when USING and\nWITH CHECK are both used for an update policy, the USING is tested against\nthe original row, and the WITH CHECK clause against the new row. 
This is\nstated in the [documentation](\nhttps://www.postgresql.org/docs/current/sql-createpolicy.html):\n\n> The USING expression determines which records the UPDATE command will see\nto operate against, while the WITH CHECK expression defines which modified\nrows are allowed to be stored back into the relation\n\nSo why is it that the addition of where id = 1 violates the select policy?\nLater in the same doc:\n\n> Typically an UPDATE command also needs to read data from columns in the\nrelation being updated (e.g., in a WHERE clause or a RETURNING clause, or\nin an expression on the right hand side of the SET clause). In this case,\nSELECT rights are also required on the relation being updated, and the\nappropriate SELECT or ALL policies will be applied in addition to the\nUPDATE policies. Thus the user must have access to the row(s) being updated\nthrough a SELECT or ALL policy in addition to being granted permission to\nupdate the row(s) via an UPDATE or ALL policy.\n\nIf you read this documentation perfectly literally, it does describe the\nbehavior shown my examples above, but it is a bit ambiguous. I think a more\nclear explanation of this behavior would be the following:\n\n> Typically an UPDATE command also needs to read data from columns in the\nrelation being updated (e.g., in a WHERE clause or a RETURNING clause, or\nin an expression on the right hand side of the SET clause). In this case,\nSELECT rights are also required on the relation being updated, and the\nappropriate SELECT or ALL policies will be applied in addition to the\nUPDATE policies. Thus the user must have access to the row(s) operated\nagainst and the rows being stored back into the relation via a SELECT or\nALL policy in addition to being granted permission to update the row(s) via\nan UPDATE or ALL policy.\n\nHowever, it seems like there is an opportunity to change the behavior to be\nmore intuitive, and to support the use case I am attempting. 
It seems like\nuse of a WHERE clause that uses the existing row should result in the\nselect policies only being applied to the row being operated against, not\nthe row being stored back into the relation. If the caller adds a RETURNING\nclause, the existing behavior makes sense, because you are asking to access\nthe row being stored back into the relation.\n\nAm I thinking about this correctly? I know this would be a breaking change,\nalbeit a small one. I'm not familiar with Postgres's approach to breaking\nchanges to know if it would even be considered. At a minimum, does my\nproposed documentation change seem reasonable? It would have saved us lots\nof time in discovering the behavior.\n\nIf this behavior is intentional, I'm also just curious about the rationale\nbehind it if someone happens to know? Maybe there's a use case that this is\nbuilt for that I'm not thinking of.\n", "msg_date": "Fri, 16 Sep 2022 11:57:25 -0700", "msg_from": "Chandler Gonzales <jcgsville@gmail.com>", "msg_from_op": true, "msg_subject": "Clarifying docs on nuance of select and update policies" }, { "msg_contents": "\nThread moved to hackers since it involves complex queries. Can someone\nlike Stephen comment on the existing docs and feature behavior? Thanks.\n\n---------------------------------------------------------------------------\n\nOn Fri, Sep 16, 2022 at 11:57:25AM -0700, Chandler Gonzales wrote:\n> Hi y'all, I've got a proposed clarification to the documentation on the nuances\n> of RLS behavior for update policies, and maybe a (humble) request for a change\n> in behavior to make it more intuitive. 
I am starting with pgsql-docs since I\n> think the documentation change is a good starting point.\n> \n> We use RLS policies to hide \"soft deleted\" objects from certain DB roles. We\n> recently tried to add the ability to let a user \"soft delete\" a row. Ideally,\n> we can write an RLS policy that allows a user to \"soft delete\" a row, but then\n> hides the row from that same user once it is soft deleted.\n> \n> Here's the setup on a fresh Postgres db for my example. I'm executing these\n> queries as the database owner:\n> ```\n> create role some_user_type;\n> create table foo(id int primary key, soft_deleted_at timestamptz,\n> some_other_field text);\n> alter table foo enable row level security;\n> grant select on table foo to some_user_type;\n> grant update(soft_deleted_at) on table foo to some_user_type;\n> insert into foo(id) values (1);\n> insert into foo(id, soft_deleted_at) values (2, now());\n> ```\n> \n> The behavior I'm trying to encode in RLS is that users with the role\n> some_user_type can see all rows where soft_deleted_at is null, can update rows\n> where BOTH the original soft_deleted_at is null AND the updated row has a\n> non-null soft_deleted_at. 
Basically, the only thing this user can do to this\n> row is to soft delete it.\n> \n> We'll use a restrictive policy to get better error messages when we do an\n> update later on:\n> ```\n> create policy pol_1 on foo for select to some_user_type using (true);\n> create policy pol_1_res on foo as restrictive for select to some_user_type\n> using (soft_deleted_at is null);\n> ```\n> \n> And just to verify it's working:\n> ```\n> chandler@localhost:chandler> begin; set local role some_user_type; select *\n> from foo; rollback;\n> BEGIN\n> SET\n> ╒════╤═════════════════╤══════════════════╕\n> │ id │ soft_deleted_at │ some_other_field │\n> ╞════╪═════════════════╪══════════════════╡\n> │ 1  │ ¤               │ ¤                │\n> ╘════╧═════════════════╧══════════════════╛\n> SELECT 1\n> ROLLBACK\n> ```\n> \n> Now the important bit, the update policy:\n> ```\n> create policy pol_2 on foo for update to some_user_type using (soft_deleted_at\n> is null) with check (soft_deleted_at is not null);\n> ```\n> \n> If we update a row without a where clause that touches the row, the update\n> succeeds:\n> ```\n> chandler@localhost:chandler> begin; set local role some_user_type; update foo\n> set soft_deleted_at = now() where true; rollback;\n> BEGIN\n> SET\n> UPDATE 1\n> ROLLBACK\n> ```\n> \n> But if we update a row with a where clause that uses the existing row, the\n> update fails:\n> ```\n> chandler@localhost:chandler> begin; set local role some_user_type; update foo\n> set soft_deleted_at = now() where id = 1; rollback;\n> BEGIN\n> SET\n> new row violates row-level security policy \"pol_1_res\" for table \"foo\"\n> ```\n> \n> This was very unintuitive to me. My understanding is that when USING and WITH\n> CHECK are both used for an update policy, the USING is tested against the\n> original row, and the WITH CHECK clause against the new row.  
This is stated in\n> the [documentation](https://www.postgresql.org/docs/current/\n> sql-createpolicy.html):\n> \n> > The USING expression determines which records the UPDATE command will see to\n> operate against, while the WITH CHECK expression defines which modified rows\n> are allowed to be stored back into the relation\n> \n> So why is it that the addition of where id = 1 violates the select policy?\n> Later in the same doc:\n> \n> > Typically an UPDATE command also needs to read data from columns in the\n> relation being updated (e.g., in a WHERE clause or a RETURNING clause, or in an\n> expression on the right hand side of the SET clause). In this case, SELECT\n> rights are also required on the relation being updated, and the appropriate\n> SELECT or ALL policies will be applied in addition to the UPDATE policies. Thus\n> the user must have access to the row(s) being updated through a SELECT or ALL\n> policy in addition to being granted permission to update the row(s) via an\n> UPDATE or ALL policy.\n> \n> If you read this documentation perfectly literally, it does describe the\n> behavior shown my examples above, but it is a bit ambiguous. I think a more\n> clear explanation of this behavior would be the following:\n> \n> > Typically an UPDATE command also needs to read data from columns in the\n> relation being updated (e.g., in a WHERE clause or a RETURNING clause, or in an\n> expression on the right hand side of the SET clause). In this case, SELECT\n> rights are also required on the relation being updated, and the appropriate\n> SELECT or ALL policies will be applied in addition to the UPDATE policies. 
Thus\n> the user must have access to the row(s) operated against and the rows being\n> stored back into the relation via a SELECT or ALL policy in addition to being\n> granted permission to update the row(s) via an UPDATE or ALL policy.\n> \n> However, it seems like there is an opportunity to change the behavior to be\n> more intuitive, and to support the use case I am attempting. It seems like use\n> of a WHERE clause that uses the existing row should result in the select\n> policies only being applied to the row being operated against, not the row\n> being stored back into the relation. If the caller adds a RETURNING clause, the\n> existing behavior makes sense, because you are asking to access the row being\n> stored back into the relation.\n> \n> Am I thinking about this correctly? I know this would be a breaking change,\n> albeit a small one. I'm not familiar with Postgres's approach to breaking\n> changes to know if it would even be considered. At a minimum, does my proposed\n> documentation change seem reasonable? It would have saved us lots of time in\n> discovering the behavior.\n> \n> If this behavior is intentional, I'm also just curious about the rationale\n> behind it if someone happens to know? Maybe there's a use case that this is\n> built for that I'm not thinking of.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Tue, 11 Oct 2022 13:00:36 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Clarifying docs on nuance of select and update policies" } ]
[ { "msg_contents": "I applied clang-tidy's\nreadability-inconsistent-declaration-parameter-name check with\n\"readability-inconsistent-declaration-parameter-name.Strict:true\" [1]\nto write the attached refactoring patch series. The patch series makes\nparameter names consistent between each function's definition and\ndeclaration. The check made the whole process of getting everything to\nmatch straightforward.\n\nThe total number of lines changed worked out at less than you might\nguess it would, since we mostly tend to do this already:\n\n 178 files changed, 593 insertions(+), 582 deletions(-)\n\nI have to admit that these inconsistencies are a pet peeve of mine. I\nfind them distracting, and have a history of fixing them on an ad-hoc\nbasis. But there are real practical arguments in favor of being strict\nabout it as a matter of policy -- it's not *just* neatnikism.\n\nFirst there is a non-zero potential for bugs by allowing\ninconsistencies. Consider the example of the function check_usermap(),\nfrom hba.c. The bool argument named \"case_insensitive\" is inverted in\nthe declaration, where it is spelled \"case_sensitive\". At first I\nthought that this might be a security bug, and reported it to\n-security as such. It's harmless, but is still arguably something that\nmight have led to a real bug.\n\nThen there is the \"automated refactoring\" argument. It would be nice\nto make automated refactoring tools work a little better by always (or\nalmost always) having a clean slate to start with.\n\nIn general refactoring work might involve writing a patch that starts\nwith the declarations that appear in some .h file of interest in one\npass, and working backwards from there. It might be necessary to switch\ndozens of functions over to some new naming convention or parameter\norder, so you really want to start with the high level interface in\nsuch a scenario. 
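To make the check_usermap() hazard concrete, here is a minimal sketch (a hypothetical function, not the actual hba.c code). The point is that C never cross-checks parameter names between a declaration and its definition, so an inversion like this compiles without complaint:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include <strings.h>

/* What callers read in the header: "true" looks like "case sensitive". */
bool		names_match(const char *a, const char *b, bool case_sensitive);

/*
 * The definition inverts the name, so the flag means the opposite of what
 * the declaration suggests.  The compiler has no opinion on this.
 */
bool
names_match(const char *a, const char *b, bool case_insensitive)
{
	if (case_insensitive)
		return strcasecmp(a, b) == 0;
	return strcmp(a, b) == 0;
}
```

A caller who trusts the header and passes true, expecting a case-sensitive comparison, silently gets the opposite behavior.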
It's rather nice to be able to use clang-tidy to make\nsure that there are no newly introduced inconsistencies -- which have\nthe potential to be live bugs. It's possible to use clang-tidy for\nthis process right now, but it's not as easy as it could be because\nyou have to ignore any preexisting minor inconsistencies. We don't\nquite have a clean slate to start from, which makes it more error\nprone.\n\nIMV there is a lot to be said for making this a largely mechanical\nprocess, with built in guard rails. Why not lean on the tooling that's\nwidely available already?\n\nIntroducing a project policy around consistent parameter names would\nmake this easy. I believe that it would be practical and unobtrusive\n-- we almost do this already, without any policy in place (it only\ntook me a few hours to come up with the patch series). I don't think\nthat we need to create new work for committers to do this.\n\n[1] https://releases.llvm.org/14.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/readability-inconsistent-declaration-parameter-name.html\n-- \nPeter Geoghegan", "msg_date": "Fri, 16 Sep 2022 15:42:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I have to admit that these inconsistencies are a pet peeve of mine. I\n> find them distracting, and have a history of fixing them on an ad-hoc\n> basis. But there are real practical arguments in favor of being strict\n> about it as a matter of policy -- it's not *just* neatnikism.\n\nI agree, this has always been a pet peeve of mine as well. 
I would\nhave guessed there were fewer examples than you found, because I've\ngenerally fixed any such cases I happened to notice.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 19:19:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 4:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree, this has always been a pet peeve of mine as well. I would\n> have guessed there were fewer examples than you found, because I've\n> generally fixed any such cases I happened to notice.\n\nIf you actually go through them all one by one you'll see that the\nvast majority of individual cases involve an inconsistency that\nfollows some kind of recognizable pattern. For example, a Relation\nparameter might be spelled \"relation\" in one place and \"rel\" in\nanother. I find these more common cases much less noticeable --\nperhaps that's why there are more than you thought there'd be?\n\nIt's possible to configure the clang-tidy tooling to tolerate various\ninconsistencies, below some kind of threshold -- it is totally\ncustomizable. But I think that a strict, simple rule is the way to go\nhere. (Though without creating busy work for committers that don't\nwant to use clang-tidy all the time.)\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Sep 2022 16:36:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> It's possible to configure the clang-tidy tooling to tolerate various\n> inconsistencies, below some kind of threshold -- it is totally\n> customizable. 
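For concreteness, the strict setup amounts to only a couple of lines of .clang-tidy configuration (a sketch, using the Strict option mentioned upthread -- exact spelling per the clang-tidy release in use):

```yaml
Checks: 'readability-inconsistent-declaration-parameter-name'
CheckOptions:
  - key: readability-inconsistent-declaration-parameter-name.Strict
    value: true
```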
But I think that a strict, simple rule is the way to go\n> here.\n\nAgreed; I see no need to tolerate any inconsistency.\n\n> (Though without creating busy work for committers that don't\n> want to use clang-tidy all the time.)\n\nYeah. I'd be inclined to handle it about like cpluspluscheck:\nprovide a script that people can run from time to time, but\ndon't insist that it's a commit-blocker. (I wouldn't be unhappy\nto see the cfbot include this in its compiler warnings suite,\nthough, once we get rid of the existing instances.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 19:49:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 4:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Agreed; I see no need to tolerate any inconsistency.\n\nThe check that I used to write the patches doesn't treat unnamed\nparameters in a function declaration as an inconsistency, even when\n\"strict\" is used. Another nearby check *could* be used to catch\nunnamed parameters [1] if that was deemed useful, though. How do you\nfeel about unnamed parameters?\n\nMany of the function declarations from reorderbuffer.h will be\naffected if we decide that we don't want to allow unnamed parameters\n-- it's quite noticeable there. I myself lean towards not allowing\nunnamed parameters. (Though perhaps I should reserve judgement until\nafter I've measured just how prevalent unnamed parameters are.)\n\n> Yeah. 
I'd be inclined to handle it about like cpluspluscheck:\n> provide a script that people can run from time to time, but\n> don't insist that it's a commit-blocker.\n\nMy thoughts exactly.\n\n[1] https://releases.llvm.org/14.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/readability-named-parameter.html\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Sep 2022 17:15:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The check that I used to write the patches doesn't treat unnamed\n> parameters in a function declaration as an inconsistency, even when\n> \"strict\" is used. Another nearby check *could* be used to catch\n> unnamed parameters [1] if that was deemed useful, though. How do you\n> feel about unnamed parameters?\n\nI think they're easily Stroustrup's worst idea ever. You're basically\nthrowing away an opportunity for documentation, and that documentation\nis often sorely needed. Handy example:\n\nextern void ReorderBufferCommitChild(ReorderBuffer *, TransactionId, TransactionId,\n XLogRecPtr commit_lsn, XLogRecPtr end_lsn);\n\nWhich TransactionId parameter is which? You might be tempted to guess,\nif you think you remember how the function works, and that is a recipe\nfor bugs.\n\nI'd view the current state of reorderbuffer.h as pretty unacceptable on\nstylistic grounds no matter which position you take. Having successive\ndeclarations randomly using named or unnamed parameters is seriously\nugly and distracting, at least to my eye. 
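For comparison, a fully named version of that ReorderBufferCommitChild() declaration leaves nothing to guess. The sketch below is deliberately self-contained: the typedefs and the toy definition are stand-ins, not the real PostgreSQL code, and the xid/subxid parameter names are an assumption about what the definition uses:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in typedefs; illustration only, not the real PostgreSQL types. */
typedef uint32_t TransactionId;
typedef uint64_t XLogRecPtr;
typedef struct ReorderBuffer
{
	TransactionId last_committed_child; /* toy field for the demo */
} ReorderBuffer;

/* With every parameter named, there is nothing left to guess: */
void		ReorderBufferCommitChild(ReorderBuffer *rb, TransactionId xid,
									 TransactionId subxid,
									 XLogRecPtr commit_lsn,
									 XLogRecPtr end_lsn);

/* Toy definition, purely so the sketch compiles and runs. */
void
ReorderBufferCommitChild(ReorderBuffer *rb, TransactionId xid,
						 TransactionId subxid,
						 XLogRecPtr commit_lsn, XLogRecPtr end_lsn)
{
	(void) xid;
	(void) commit_lsn;
	(void) end_lsn;
	rb->last_committed_child = subxid;
}
```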
We don't need such blatant\nreminders of how many cooks have stirred this broth.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 16 Sep 2022 21:20:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think they're easily Stroustrup's worst idea ever. You're basically\n> throwing away an opportunity for documentation, and that documentation\n> is often sorely needed.\n\nHe could at least point to C++ pure virtual functions, where omitting\na parameter name in the base class supposedly conveys useful\ninformation. I don't find that argument particularly convincing\nmyself, even in a C++ context, but at least it's an argument. Doesn't\napply here in any case.\n\n> I'd view the current state of reorderbuffer.h as pretty unacceptable on\n> stylistic grounds no matter which position you take. Having successive\n> declarations randomly using named or unnamed parameters is seriously\n> ugly and distracting, at least to my eye. We don't need such blatant\n> reminders of how many cooks have stirred this broth.\n\nI'll come up with a revision that deals with that too, then. 
Shouldn't\nbe too much more work.\n\nI suppose that I ought to backpatch a fix for the really egregious\nissue in hba.h, and leave it at that on stable branches.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 16 Sep 2022 18:48:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 06:48:36PM -0700, Peter Geoghegan wrote:\n> On Fri, Sep 16, 2022 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd view the current state of reorderbuffer.h as pretty unacceptable on\n>> stylistic grounds no matter which position you take. Having successive\n>> declarations randomly using named or unnamed parameters is seriously\n>> ugly and distracting, at least to my eye. We don't need such blatant\n>> reminders of how many cooks have stirred this broth.\n> \n> I'll come up with a revision that deals with that too, then. Shouldn't\n> be too much more work.\n\nBeing able to catch unnamed parameters in function declarations would\nbe really nice. Thanks for looking at that.\n\n> I suppose that I ought to backpatch a fix for the really egregious\n> issue in hba.h, and leave it at that on stable branches.\n\nIf check_usermap() is used in a bugfix, that could be a risk, so this\nbit warrants a backpatch in my opinion.\n--\nMichael", "msg_date": "Sat, 17 Sep 2022 15:59:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 6:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Sep 16, 2022 at 6:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I think they're easily Stroustrup's worst idea ever. 
You're basically\n> > throwing away an opportunity for documentation, and that documentation\n> > is often sorely needed.\n>\n> He could at least point to C++ pure virtual functions, where omitting\n> a parameter name in the base class supposedly conveys useful\n> information. I don't find that argument particularly convincing\n> myself, even in a C++ context, but at least it's an argument. Doesn't\n> apply here in any case.\n\nSeveral files from src/timezone and from src/backend/regex make use of\nunnamed parameters in function declarations. It wouldn't be difficult\nto fix everything and call it a day, but I wonder if there are any\nspecial considerations here. I don't think that Henry Spencer's regex\ncode is considered vendored code these days (if it ever was), so that\nseems clear cut. I'm less sure about the timezone code.\n\nNote that regcomp.c has a relatively large number of function\ndeclarations that need to be fixed (regexec.c has some too), since the\nregex code was written in a style that makes unnamed parameters in\ndeclarations the standard -- we're talking about changing every static\nfunction declaration. The timezone code is just inconsistent about its\nuse of unnamed parameters, kind of like reorderbuffer.h.\n\nI don't see any reason to treat this quasi-vendored code as special,\nbut I don't really know anything about your workflow with the timezone\nfiles.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 17 Sep 2022 11:05:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Several files from src/timezone and from src/backend/regex make use of\n> unnamed parameters in function declarations. It wouldn't be difficult\n> to fix everything and call it a day, but I wonder if there are any\n> special considerations here. 
I don't think that Henry Spencer's regex\n> code is considered vendored code these days (if it ever was), so that\n> seems clear cut. I'm less sure about the timezone code.\n\nYeah, bringing the regex code into line with our standards is fine.\nI don't really see a reason not to do it with the timezone code\neither, as long as there aren't too many changes there. We are\ncarrying a pretty large number of diffs from upstream already.\n\n(Which reminds me that I need to do another update pass on that\ncode soon. Not right now, though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Sep 2022 14:26:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sat, Sep 17, 2022 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, bringing the regex code into line with our standards is fine.\n> I don't really see a reason not to do it with the timezone code\n> either, as long as there aren't too many changes there. We are\n> carrying a pretty large number of diffs from upstream already.\n\nI'd be surprised if this created more than 3 minutes of extra work for\nyou when updating the timezone code.\n\nThere are a few places where I had to apply a certain amount of\nsubjective judgement (rather than just mechanically normalizing the\ndeclarations), but the timezone code wasn't one of those places. 
Plus\nthere just aren't that many affected timezone-related function\ndeclarations, and they're concentrated in only 3 distinct areas.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 17 Sep 2022 11:36:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sat, Sep 17, 2022 at 11:36 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'd be surprised if this created more than 3 minutes of extra work for\n> you when updating the timezone code.\n\nAttached revision adds a new, third patch. This fixes all the warnings\nfrom clang-tidy's \"readability-named-parameter\" check. The extent of\nthe code churn seems acceptable to me.\n\nBTW, there are just a couple of remaining unfixed\n\"readability-inconsistent-declaration-parameter-name\" warnings --\nlegitimately tricky cases. These are related to simplehash.h client\ncode, where we use the C preprocessor to simulate C++ templates\n(Bjarne strikes again). It would be possible to suppress the warnings\nby making the client code use matching generic function-style\nparameter names, but that doesn't seem like an improvement.\n\nIf we're going to adopt a project policy around parameter names, then\nwe'll need a workaround -- probably just by suppressing a handful of\ntricky warnings. For now my focus is cleaning things up on HEAD.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 17 Sep 2022 12:58:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Fri, Sep 16, 2022 at 11:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> If check_usermap() is used in a bugfix, that could be a risk, so this\n> bit warrants a backpatch in my opinion.\n\nMakes sense. 
Committed and backpatched a fix for check_usermap() just now\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 17 Sep 2022 16:55:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, 18 Sept 2022 at 07:59, Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision adds a new, third patch. This fixes all the warnings\n> from clang-tidy's \"readability-named-parameter\" check. The extent of\n> the code churn seems acceptable to me.\n\n+1 to the idea of aligning the parameter names between function\ndeclarations and definitions.\n\nI had a look at the v2-0001 patch and noted down a few things while reading:\n\n1. In getJsonPathVariable you seem to have mistakenly removed a\nparameter from the declaration.\n\n2. You changed the name of the parameter in the definition of\nScanCKeywordLookup(). Is it not better to keep the existing name there\nso that that function is consistent with ScanKeywordLookup()?\n\n3. Why did you rename the parameter in the definition of\nnocachegetattr()? Wouldn't it be better just to rename in the\ndeclaration. To me, \"tup\" does not really seem better than \"tuple\"\nhere.\n\n4. In the definition of ExecIncrementalSortInitializeWorker() you've\nrenamed pwcxt to pcxt, but it seems that the other *InitializeWorker()\nfunctions call this pwcxt. Is it better to keep those consistent? I\nunderstand that you've done this for consistency with *InitializeDSM()\nand *Estimate() functions, but I'd rather see it remain consistent\nwith the other *InitializeWorker() functions instead. (I'd not be\nagainst a wider rename so all those functions use the same name.)\n\n5. In md.c I see you've renamed a few \"forkNum\" variables to\n\"formnum\". 
Maybe it's worth also doing the same in mdexists().\nmdcreate() is also external and got the rename, so I'm not quite sure\nwhy mdexists() would be left.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Sep 2022 11:38:26 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, Sep 18, 2022 at 4:38 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> 1. In getJsonPathVariable you seem to have mistakenly removed a\n> parameter from the declaration.\n\nThat was left behind following a recent rebase. Will fix.\n\nEvery other issue you've raised is some variant of:\n\n\"I see that you've made a subjective decision to resolve this\nparticular inconsistency on the declaration side by following this\nparticular approach. Why did you do it that way?\"\n\nThis is perfectly reasonable, and it's possible that I made clear\nmistakes in some individual cases. But overall it's not surprising\nthat somebody else wouldn't handle it in exactly the same way. There\nis no question that some of these decisions are a little arbitrary.\n\n> 2. You changed the name of the parameter in the definition of\n> ScanCKeywordLookup(). Is it not better to keep the existing name there\n> so that that function is consistent with ScanKeywordLookup()?\n\nBecause it somehow felt slightly preferable to introducing a .h\nlevel inconsistency between ScanECPGKeywordLookup() and\nScanCKeywordLookup(). This is about as hard to justify as justifying\nwhy one prefers a slightly different shade of beige when comparing two\npages from a book of wallpaper samples.\n\n> 3. Why did you rename the parameter in the definition of\n> nocachegetattr()? Wouldn't it be better just to rename in the\n> declaration. To me, \"tup\" does not really seem better than \"tuple\"\n> here.\n\nAgain, greater consistency at the .h level won out here. 
Granted it's\nstill not perfectly consistent, since I didn't take that to its\nlogical conclusion and make sure that the .h file was consistent,\nbecause then we'd be talking about why I did that. :-)\n\n> 4. In the definition of ExecIncrementalSortInitializeWorker() you've\n> renamed pwcxt to pcxt, but it seems that the other *InitializeWorker()\n> functions call this pwcxt. Is it better to keep those consistent? I\n> understand that you've done this for consistency with *InitializeDSM()\n> and *Estimate() functions, but I'd rather see it remain consistent\n> with the other *InitializeWorker() functions instead. (I'd not be\n> against a wider rename so all those functions use the same name.)\n\nAgain, I was looking at this at the level of the .h file (in this case\nnodeIncrementalSort.h). It never occurred to me to consider other\n*InitializeWorker() functions.\n\nOffhand I think that we should change all of the other\n*InitializeWorker() functions. I think that things got like this\nbecause somebody randomly made one of them pwcxt at some point, which\nwas copied later on.\n\n> 5. In md.c I see you've renamed a few \"forkNum\" variables to\n> \"formnum\". Maybe it's worth also doing the same in mdexists().\n> mdcreate() is also external and got the rename, so I'm not quite sure\n> why mdexists() would be left.\n\nYeah, I think that we might as well be perfectly consistent.\n\nMaking automated refactoring tools work better here is of course a\ngoal of mine -- which is especially useful for making everything\nconsistent at the whole-interface (or header file) level. 
I wasn't\nsure how much of that to do up front vs in a later commit.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 18 Sep 2022 17:08:02 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, Sep 18, 2022 at 5:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Again, I was looking at this at the level of the .h file (in this case\n> nodeIncrementalSort.h). It never occurred to me to consider other\n> *InitializeWorker() functions.\n>\n> Offhand I think that we should change all of the other\n> *InitializeWorker() functions. I think that things got like this\n> because somebody randomly made one of them pwcxt at some point, which\n> was copied later on.\n\nOn second thought I definitely got this wrong (it's not subjective\nafter all). I didn't notice that there are actually 2 different\ndatatypes involved here, justifying a different naming convention for\neach. In other words, the problem really was in the .h file, not in\nthe .c file, so I should simply fix the declaration of\nExecIncrementalSortInitializeWorker() and call it a day.\n\nThere is no reason why ExecIncrementalSortInitializeWorker() ought to\nbe consistent with other functions that appear in the same header\nfile, since (if you squint) you'll notice that the data types are also\ndifferent.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 18 Sep 2022 17:26:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, Sep 18, 2022 at 5:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> That was left behind following a recent rebase. 
Will fix.\n\nAttached revision fixes this issue, plus the\nExecIncrementalSortInitializeWorker() issue.\n\nIt also adds a lot more fixes which were missed earlier because I\ndidn't use \"strict\" when running clang-tidy from the command line\n(just in my editor). This includes a fix for the mdexists() issue that\nyou highlighted.\n\nI expect that this patch series will bitrot frequently, so there is no\ngood reason to not post revisions frequently.\n\nThe general structure of the patchset is now a little more worked out.\nAlthough it's still not close to being commitable, it should give you\na better idea of the kind of structure that I'm aiming for. I think\nthat this should be broken into a few different parts based on the\narea of the codebase affected (not the type of check used). Even that\naspect needs more work, because there is still one massive patch --\nthis is now the sixth and final patch.\n\nIt seems like a good idea to at least have separate commits for both\nthe regex code and the timezone code, since these are \"quasi-vendored\"\nareas of the code base.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 18 Sep 2022 20:04:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Mon, 19 Sept 2022 at 15:04, Peter Geoghegan <pg@bowt.ie> wrote:\n> The general structure of the patchset is now a little more worked out.\n> Although it's still not close to being commitable, it should give you\n> a better idea of the kind of structure that I'm aiming for. I think\n> that this should be broken into a few different parts based on the\n> area of the codebase affected (not the type of check used). 
Even that\n> aspect needs more work, because there is still one massive patch --\n> this is now the sixth and final patch.\n\nThanks for updating the patches.\n\nI'm slightly confused about \"still not close to being commitable\"\nalong with \"this is now the sixth and final patch.\". That seems to\nimply that you're not planning to send any more patches but you don't\nthink this is commitable. I'm assuming I've misunderstood that.\n\nI don't have any problems with 0001, 0002 or 0003.\n\nLooking at 0004 I see a few issues:\n\n1. ConnectDatabase() seems to need a bit more work in the header\ncomment. There's a reference to AH and AHX. The parameter is now\ncalled \"A\".\n\n2. setup_connection() still references AH->use_role in the comments\n(line 1102). Similar problem on line 1207 with AH->sync_snapshot_id\n\n3. setupDumpWorker() still makes references to AH->sync_snapshot_id\nand AH->use_role in the comments. The parameter is now called \"A\".\n\n4. dumpSearchPath() still has a comment which references AH->searchpath\n\n0005 looks fine.\n\nI've not looked at 0006 again.\n\nDavid\n\n\n", "msg_date": "Mon, 19 Sep 2022 16:07:18 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, Sep 18, 2022 at 9:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I'm slightly confused about \"still not close to being commitable\"\n> along with \"this is now the sixth and final patch.\". That seems to\n> imply that you're not planning to send any more patches but you don't\n> think this is commitable. I'm assuming I've misunderstood that.\n\nI meant that the \"big patch\" now has a new order -- it is sixth/last in\nthe newly revised patchset, v3. 
I don't know how many more patch\nrevisions will be required, but at least one or two more revisions\nseem likely.\n\nHere is the stuff that it less ready, or at least seems ambiguous:\n\n1. The pg_dump patch is relatively opinionated about how to resolve\ninconsistencies, and makes quite a few changes to the .c side.\n\nSeeking out the \"lesser inconsistency\" resulted in more lines being\nchanged. Maybe you won't see it the same way (maybe you'll prefer the\nother trade-off). That's just how it ended up.\n\n2. The same thing is true to a much smaller degree with the jsonb patch.\n\n3. The big patch itself is...well, very big. And written on autopilot,\nto a certain degree. So that one just needs more careful examination,\non general principle.\n\n> Looking at 0004 I see a few issues:\n>\n> 1. ConnectDatabase() seems to need a bit more work in the header\n> comment. There's a reference to AH and AHX. The parameter is now\n> called \"A\".\n>\n> 2. setup_connection() still references AH->use_role in the comments\n> (line 1102). Similar problem on line 1207 with AH->sync_snapshot_id\n>\n> 3. setupDumpWorker() still makes references to AH->sync_snapshot_id\n> and AH->use_role in the comments. The parameter is now called \"A\".\n>\n> 4. dumpSearchPath() still has a comment which references AH->searchpath\n\nWill fix all those in the next revision. Thanks.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sun, 18 Sep 2022 21:17:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Sun, Sep 18, 2022 at 9:17 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Will fix all those in the next revision. Thanks.\n\nAttached revision v4 fixes those pg_dump patch items.\n\nIt also breaks out the ecpg changes into their own patch. 
Looks like\necpg requires the same treatment as the timezone code and the regex\ncode -- it generally doesn't use named parameters in function\ndeclarations, so the majority of its function declarations need to be\nadjusted. The overall code churn impact is higher than it was with the\nother two modules.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 19 Sep 2022 23:36:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Mon, Sep 19, 2022 at 11:36 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision v4 fixes those pg_dump patch items.\n>\n> It also breaks out the ecpg changes into their own patch.\n\nI pushed much of this just now. All that remains to bring the entire\ncodebase into compliance is the ecpg patch and the pg_dump patch.\nThose two areas are relatively tricky. But it's now unlikely that I'll\nneed to push a commit that makes relatively many CF patches stop\napplying against HEAD -- that part is over.\n\nOnce we're done with ecpg and pg_dump, we can talk about the actual\npracticalities of formally adopting a project policy on consistent\nparameter names. I mostly use clang-tidy via my editor's support for\nthe clangd language server -- clang-tidy is primarily a linter, so it\nisn't necessarily run in bulk all that often. I'll need to come up\nwith instructions for running clang-tidy from the command line that\nare easy to follow.\n\nI've found that the run_clang_tidy script (AKA run-clang-tidy.py)\nworks, but the whole experience feels hobbled together. I think that\nwe really need something like a build target for this -- something\ncomparable to what we do to support GCOV. That would also allow us to\nuse additional clang-tidy checks, which might be useful. We might even\nfind it useful to come up with some novel check of our own. 
Apparently\nit's not all that difficult to write one from scratch, to implement\ncustom rules. There are already custom rules for big open source\nprojects such as the Linux Kernel, Chromium, and LLVM itself.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 20 Sep 2022 13:51:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Tue, Sep 20, 2022 at 1:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I pushed much of this just now. All that remains to bring the entire\n> codebase into compliance is the ecpg patch and the pg_dump patch.\n> Those two areas are relatively tricky. But it's now unlikely that I'll\n> need to push a commit that makes relatively many CF patches stop\n> applying against HEAD -- that part is over.\n\nAttached revision shows where I'm at with this. Would be nice to get\nit all out of the way before too long.\n\nTurns out that we'll need a new patch for contrib, which was missed\nbefore now due to an issue with how I build a compilation database\nusing bear [1]. The new patch for contrib isn't very different to the\nother patches, though. The most notable changes are in pgcrypto and\noid2name. Fairly minor stuff, overall.\n\n[1] https://github.com/rizsotto/Bear\n-- \nPeter Geoghegan", "msg_date": "Wed, 21 Sep 2022 18:58:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Wed, Sep 21, 2022 at 6:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision shows where I'm at with this. Would be nice to get\n> it all out of the way before too long.\n\nAttached is v6, which now consists of only one single patch, which\nfixes things up in pg_dump. 
(This is almost though not quite identical\nto the same patch from v5.)\n\nI would like to give another 24 hours for anybody to lodge final\nobjections to what I've done in this patch. It seems possible that\nthere will be concerns about how this might affect backpatching, or\nsomething like that. This patch goes relatively far in the direction\nof refactoring to make things consistent at the module level -- unlike\nmost of the patches, which largely consisted of mechanical adjustments\nthat were obviously correct, both locally and at the whole-module level.\n\nBTW, I notice that meson seems to have built-in support for running\nscan-build, a tool that performs static analysis using clang. I'm\npretty sure that it's possible to use scan-build to run clang-tidy\nchecks (though I've just been using run-clang-tidy myself). Perhaps it\nwould make sense to use meson's support for scan-build to make it easy\nfor everybody to run the clang-tidy checks locally.\n\n--\nPeter Geoghegan", "msg_date": "Thu, 22 Sep 2022 14:41:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I would like to give another 24 hours for anybody to lodge final\n> objections to what I've done in this patch. It seems possible that\n> there will be concerns about how this might affect backpatching, or\n> something like that. This patch goes relatively far in the direction\n> of refactoring to make things consistent at the module level -- unlike\n> most of the patches, which largely consisted of mechanical adjustments\n> that were obviously correct, both locally and at the whole-module level.\n\nYeah. I'm not much on board with the AHX->A and AH->A changes you made;\nthose seem extremely invasive and it's not real clear that they add a\nlot of value.\n\nI've never thought that the Archive vs. 
ArchiveHandle separation in\npg_dump was very well thought out. I could perhaps get behind a patch\nto eliminate that bit of \"abstraction\"; but I'd still try to avoid\nwholesale changes in local-variable names from it. I don't think that\nwould buy anything that's worth the back-patching pain. Just accepting\nthat Archive[Handle] variables might be named either AH or AHX depending\non historical accident does not seem that bad to me. We have lots more\nand worse naming inconsistencies in our tree.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 17:55:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Thu, Sep 22, 2022 at 2:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. I'm not much on board with the AHX->A and AH->A changes you made;\n> those seem extremely invasive and it's not real clear that they add a\n> lot of value.\n\nThat makes it easy, then. I'll just take the least invasive approach\npossible with pg_dump: treat the names from function definitions as\nauthoritative, and mechanically adjust the function declarations as\nneeded to make everything agree.\n\nThe commit message for this will note in passing that the\ninconsistency that this creates at the header file level is a known\nissue.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Sep 2022 15:14:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> That makes it easy, then. 
I'll just take the least invasive approach\n> possible with pg_dump: treat the names from function definitions as\n> authoritative, and mechanically adjust the function declarations as\n> needed to make everything agree.\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 18:20:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" }, { "msg_contents": "On Thu, Sep 22, 2022 at 3:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> WFM.\n\nOkay, pushed a minimally invasive commit to fix the inconsistencies in\npg_dump related code just now. That's the last of them. Now the only\nremaining clang-tidy warnings about inconsistent parameter names are\nthose that seem practically impossible to fix (these are mostly just\ncases involving flex/bison).\n\nIt still seems like a good idea to formally create a new coding\nstandard around C function parameter names. We really need a simple\nclang-tidy workflow to be able to do that. I'll try to get to that\nsoon. Part of the difficulty there will be finding a way to ignore the\nwarnings that we really can't do anything about.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 22 Sep 2022 16:46:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Making C function declaration parameter names consistent with\n corresponding definition names" } ]
[ { "msg_contents": "On Fri, Sep 16, 2022 at 02:54:14PM +0700, John Naylor wrote:\n> v6 demonstrates why this should have been put off towards the end. (more below)\n\nSince the SIMD code is fresh in my mind, I wanted to offer my review for\n0001 in the \"Improve dead tuple storage for lazy vacuum\" thread [0].\nHowever, I agree with John that the SIMD part of that work should be left\nfor the end, and I didn't want to distract from the radix tree part too\nmuch. So, here is a new thread for just the SIMD part.\n\n>> I've updated the radix tree patch. It's now separated into two patches.\n>>\n>> 0001 patch introduces pg_lsearch8() and pg_lsearch8_ge() (we may find\n>> better names) that are similar to the pg_lfind8() family but they\n>> return the index of the key in the vector instead of true/false. The\n>> patch includes regression tests.\n\nI don't think it's clear that the \"lfind\" functions return whether there is\na match while the \"lsearch\" functions return the index of the first match.\nIt might be better to call these something like \"pg_lfind8_idx\" and\n\"pg_lfind8_ge_idx\" instead.\n\n> +/*\n> + * Return the index of the first element in the vector that is greater than\n> + * or eual to the given scalar. Return sizeof(Vector8) if there is no such\n> + * element.\n>\n> That's a bizarre API to indicate non-existence.\n\n+1. It should probably just return -1 in that case.\n\n> + *\n> + * Note that this function assumes the elements in the vector are sorted.\n> + */\n>\n> That is *completely* unacceptable for a general-purpose function.\n\n+1\n\n> +#else /* USE_NO_SIMD */\n> + Vector8 r = 0;\n> + uint8 *rp = (uint8 *) &r;\n> +\n> + for (Size i = 0; i < sizeof(Vector8); i++)\n> + rp[i] = (((const uint8 *) &v1)[i] == ((const uint8 *) &v2)[i]) ? 0xFF : 0;\n>\n> I don't think we should try to force the non-simd case to adopt the\n> special semantics of vector comparisons. 
It's much easier to just use\n> the same logic as the assert builds.\n\n+1\n\n> +#ifdef USE_SSE2\n> + return (uint32) _mm_movemask_epi8(v);\n> +#elif defined(USE_NEON)\n> + static const uint8 mask[16] = {\n> + 1 << 0, 1 << 1, 1 << 2, 1 << 3,\n> + 1 << 4, 1 << 5, 1 << 6, 1 << 7,\n> + 1 << 0, 1 << 1, 1 << 2, 1 << 3,\n> + 1 << 4, 1 << 5, 1 << 6, 1 << 7,\n> + };\n> +\n> + uint8x16_t masked = vandq_u8(vld1q_u8(mask), (uint8x16_t)\n> vshrq_n_s8(v, 7));\n> + uint8x16_t maskedhi = vextq_u8(masked, masked, 8);\n> +\n> + return (uint32) vaddvq_u16((uint16x8_t) vzip1q_u8(masked, maskedhi));\n>\n> For Arm, we need to be careful here. This article goes into a lot of\n> detail for this situation:\n>\n> https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/porting-x86-vector-bitmask-optimizations-to-arm-neon\n\nThe technique demonstrated in this article seems to work nicely.\n\nFor these kinds of patches, I find the best way to review them is to try\nout my proposed changes as I'm reading through the patch. I hope you don't\nmind that I've done so here and attached a new version of the patch. In\naddition to addressing the aforementioned feedback, I made the following\nchanges:\n\n* I renamed the vector8_search_* functions to vector8_find() and\nvector8_find_ge(). IMO this is more in the spirit of existing function\nnames like vector8_has().\n\n* I simplified vector8_find_ge() by essentially making it do the opposite\nof what vector8_has_le() does (i.e., using saturating subtraction to find\nmatching bytes). This removes the need for vector8_min(), and since\nvector8_find_ge() can just call vector8_search() to find any 0 bytes,\nvector8_highbit_mask() can be removed as well.\n\n* I simplified the test for pg_lfind8_ge_idx() by making it look a little\nmore like the test for pg_lfind32(). 
I wasn't sure about the use of rand()\nand qsort(), and overall it just felt a little too complicated to me.\n\nI've tested all three code paths (i.e., SSE2, Neon, and USE_NO_SIMD), but I\nhaven't done any performance analysis yet.\n\n[0] https://postgr.es/m/CAD21AoD3w76wERs_Lq7_uA6%2BgTaoOERPji%2BYz8Ac6aui4JwvTg%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 16 Sep 2022 22:29:03 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "introduce optimized linear search functions that return index of\n matching element" }, { "msg_contents": "On Sat, Sep 17, 2022 at 12:29 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> On Fri, Sep 16, 2022 at 02:54:14PM +0700, John Naylor wrote:\n> > v6 demonstrates why this should have been put off towards the end.\n(more below)\n>\n> Since the SIMD code is fresh in my mind, I wanted to offer my review for\n> 0001 in the \"Improve dead tuple storage for lazy vacuum\" thread [0].\n> However, I agree with John that the SIMD part of that work should be left\n> for the end\n\nAs I mentioned in the radix tree thread, I don't believe this level of\nabstraction is appropriate for the intended use case. We'll want to\nincorporate some of the low-level simd.h improvements later, so you should\nget authorship credit for those. I've marked the entry \"returned with\nfeedback\".\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Sat, Sep 17, 2022 at 12:29 PM Nathan Bossart <nathandbossart@gmail.com> wrote:>> On Fri, Sep 16, 2022 at 02:54:14PM +0700, John Naylor wrote:> > v6 demonstrates why this should have been put off towards the end. 
(more below)>> Since the SIMD code is fresh in my mind, I wanted to offer my review for> 0001 in the \"Improve dead tuple storage for lazy vacuum\" thread [0].> However, I agree with John that the SIMD part of that work should be left> for the endAs I mentioned in the radix tree thread, I don't believe this level of abstraction is appropriate for the intended use case. We'll want to incorporate some of the low-level simd.h improvements later, so you should get authorship credit for those.  I've marked the entry \"returned with feedback\".--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Wed, 12 Oct 2022 12:57:16 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: introduce optimized linear search functions that return index of\n matching element" } ]
[ { "msg_contents": "Hi.\n\nThere are already multiple places that are building the subscription\norigin name, and there are more coming with some new patches [1] doing\nthe same:\n\ne.g.\nsnprintf(originname, sizeof(originname), \"pg_%u\", subid);\n\n~~\n\nIMO it is better to encapsulate this name formatting in common code\ninstead of the format string being scattered around the place.\n\nPSA a patch to add a common function ReplicationOriginName. This is\nthe equivalent of a similar function for tablesync\n(ReplicationOriginNameForTablesync) which already existed.\n\n------\n[1] https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 19 Sep 2022 12:35:25 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Add common function ReplicationOriginName." }, { "msg_contents": "Hi Peter,\n\n> PSA a patch to add a common function ReplicationOriginName\n\nThe patch looks good to me.\n\nOne nitpick I have is that the 2nd argument of snprintf is size_t\nwhile we are passing int's. Your patch is consistent with the current\nimplementation of ReplicationOriginNameForTablesync() and similar\nfunctions in tablesync.c. However I would like to mention this in case\nthe committer will be interested in replacing ints with Size's while\non it.\n\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 19 Sep 2022 11:57:09 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Mon, Sep 19, 2022 at 2:27 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Peter,\n>\n> > PSA a patch to add a common function ReplicationOriginName\n>\n> The patch looks good to me.\n>\n> One nitpick I have is that the 2nd argument of snprintf is size_t\n> while we are passing int's. 
Your patch is consistent with the current\n> implementation of ReplicationOriginNameForTablesync() and similar\n> functions in tablesync.c.\n>\n\nI think it is better to use Size. Even though, it may not fail now as\nthe size of names for origin will always be much lesser but it is\nbetter if we are consistent. If we agree with this, then as a first\npatch, we can make it to use Size in existing places and then\nintroduce this new function.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Sep 2022 12:20:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "Hi Amit,\n\n> I think it is better to use Size. Even though, it may not fail now as\n> the size of names for origin will always be much lesser but it is\n> better if we are consistent. If we agree with this, then as a first\n> patch, we can make it to use Size in existing places and then\n> introduce this new function.\n\nOK, here is the updated patchset.\n\n* 0001 replaces int's with Size's in the existing code\n* 0002 applies Peter's patch on top of 0001\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 20 Sep 2022 11:35:58 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Tue, Sep 20, 2022 at 6:36 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Amit,\n>\n> > I think it is better to use Size. Even though, it may not fail now as\n> > the size of names for origin will always be much lesser but it is\n> > better if we are consistent. 
If we agree with this, then as a first\n> > patch, we can make it to use Size in existing places and then\n> > introduce this new function.\n>\n> OK, here is the updated patchset.\n>\n> * 0001 replaces int's with Size's in the existing code\n> * 0002 applies Peter's patch on top of 0001\n>\n\nLGTM. Thanks!\n\n-----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 20 Sep 2022 19:03:06 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Tue, Sep 20, 2022 at 2:06 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Amit,\n>\n> > I think it is better to use Size. Even though, it may not fail now as\n> > the size of names for origin will always be much lesser but it is\n> > better if we are consistent. If we agree with this, then as a first\n> > patch, we can make it to use Size in existing places and then\n> > introduce this new function.\n>\n> OK, here is the updated patchset.\n>\n> * 0001 replaces int's with Size's in the existing code\n>\n\nPushed this one.\n\n> * 0002 applies Peter's patch on top of 0001\n>\n\nCan't we use the existing function ReplicationOriginNameForTablesync()\nby passing relid as InvalidOid for this purpose? We need a check\ninside to decide which name to construct, otherwise, it should be\nfine. If we agree with this, then we can change the name of the\nfunction to something like ReplicationOriginNameForLogicalRep or\nReplicationOriginNameForLogicalRepWorkers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Sep 2022 10:53:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." 
}, { "msg_contents": "On Wed, Sep 21, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n...\n\n> Can't we use the existing function ReplicationOriginNameForTablesync()\n> by passing relid as InvalidOid for this purpose? We need a check\n> inside to decide which name to construct, otherwise, it should be\n> fine. If we agree with this, then we can change the name of the\n> function to something like ReplicationOriginNameForLogicalRep or\n> ReplicationOriginNameForLogicalRepWorkers.\n>\n\nThis suggestion attaches special meaning to the reild param.\n\nWon't it seem a bit strange for the non-tablesync callers (who\nprobably have a perfectly valid 'relid') to have to pass an InvalidOid\nrelid just so they can format the correct origin name?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 21 Sep 2022 19:38:58 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Wed, Sep 21, 2022 at 3:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Sep 21, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> ...\n>\n> > Can't we use the existing function ReplicationOriginNameForTablesync()\n> > by passing relid as InvalidOid for this purpose? We need a check\n> > inside to decide which name to construct, otherwise, it should be\n> > fine. If we agree with this, then we can change the name of the\n> > function to something like ReplicationOriginNameForLogicalRep or\n> > ReplicationOriginNameForLogicalRepWorkers.\n> >\n>\n> This suggestion attaches special meaning to the reild param.\n>\n> Won't it seem a bit strange for the non-tablesync callers (who\n> probably have a perfectly valid 'relid') to have to pass an InvalidOid\n> relid just so they can format the correct origin name?\n>\n\nFor non-tablesync workers, relid should always be InvalidOid. 
See, how\nwe launch apply workers in ApplyLauncherMain(). Do you see any case\nfor non-tablesync workers where relid is not InvalidOid?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Sep 2022 15:38:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Wed, Sep 21, 2022 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 21, 2022 at 3:09 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Sep 21, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > ...\n> >\n> > > Can't we use the existing function ReplicationOriginNameForTablesync()\n> > > by passing relid as InvalidOid for this purpose? We need a check\n> > > inside to decide which name to construct, otherwise, it should be\n> > > fine. If we agree with this, then we can change the name of the\n> > > function to something like ReplicationOriginNameForLogicalRep or\n> > > ReplicationOriginNameForLogicalRepWorkers.\n> > >\n> >\n> > This suggestion attaches special meaning to the reild param.\n> >\n> > Won't it seem a bit strange for the non-tablesync callers (who\n> > probably have a perfectly valid 'relid') to have to pass an InvalidOid\n> > relid just so they can format the correct origin name?\n> >\n>\n> For non-tablesync workers, relid should always be InvalidOid. See, how\n> we launch apply workers in ApplyLauncherMain(). Do you see any case\n> for non-tablesync workers where relid is not InvalidOid?\n>\n\nHmm, my mistake. I was thinking more of all the calls coming from the\nsubscriptioncmds.c, but now that I look at it maybe none of those has\nany relid either.\n\nOK, I can unify the 2 functions as you suggested. 
I will post another\npatch in a few days.\n\n------\nKind Regards,\nPeter Smith,\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 21 Sep 2022 20:22:42 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Wed, Sep 21, 2022 at 8:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n...\n> > > On Wed, Sep 21, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > ...\n> > >\n> > > > Can't we use the existing function ReplicationOriginNameForTablesync()\n> > > > by passing relid as InvalidOid for this purpose? We need a check\n> > > > inside to decide which name to construct, otherwise, it should be\n> > > > fine. If we agree with this, then we can change the name of the\n> > > > function to something like ReplicationOriginNameForLogicalRep or\n> > > > ReplicationOriginNameForLogicalRepWorkers.\n> > > >\n...\n>\n> OK, I can unify the 2 functions as you suggested. I will post another\n> patch in a few days.\n>\n\nPSA patch v3 to combine the different replication origin name\nformatting in a single function ReplicationOriginNameForLogicalRep as\nsuggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 26 Sep 2022 13:15:08 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Tue, Sep 20, 2022 at 6:36 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Amit,\n>\n> > I think it is better to use Size. Even though, it may not fail now as\n> > the size of names for origin will always be much lesser but it is\n> > better if we are consistent. 
If we agree with this, then as a first\n> > patch, we can make it to use Size in existing places and then\n> > introduce this new function.\n>\n> OK, here is the updated patchset.\n>\n> * 0001 replaces int's with Size's in the existing code\n> * 0002 applies Peter's patch on top of 0001\n>\n\nHi Aleksander.\n\nFYI - although it is outside the scope of this thread, I did notice at\nleast one other example where you might want to substitute Size for\nint in the same way as your v2-0001 patch did.\n\ne.g. Just searching code for 'snprintf' where there is some parameter\nfor the size I quickly found:\n\nFile: src/bin/pg_dump/pg_dump_sort.c:\n\nstatic void\ndescribeDumpableObject(DumpableObject *obj, char *buf, int bufsize)\n\ncaller:\ndescribeDumpableObject(loop[i], buf, sizeof(buf));\n\n~~\n\nI expect you can find more like just this if you look harder than I did.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 27 Sep 2022 11:47:33 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "Hi Peter,\n\n> PSA patch v3 to combine the different replication origin name\n> formatting in a single function ReplicationOriginNameForLogicalRep as\n> suggested.\n\nLGTM except for minor issues with the formatting; fixed.\n\n> I expect you can find more like just this if you look harder than I did.\n\nThanks. You are right, there are more places that pass int as the\nsecond argument of *nprintf(). I used a command:\n\n$ grep -r nprintf ./ | perl -lne 'print if($_ !~\n/nprintf\\([^\\,]+,\\s*(sizeof|([0-9A-Z_ \\-]+\\,))/ )' > found.txt\n\n... and then re-checked the results manually. This excludes patterns\nlike *nprintf(..., sizeof(...)) and *nprintf(..., MACRO) and leaves\nonly something like *nprintf(..., variable). 
The cases where we\nsubtract an integer from a Size, etc were ignored.\n\nI don't have a strong opinion on whether we should be really worried\nby this. But in case we do, here is the patch. The order of 0001 and\n0002 doesn't matter.\n\nAs I understand, ecpg uses size_t rather than Size, so for this\nlibrary I used size_t. Not 100% sure if the changes I made to\nsrc/backend/utils/adt/jsonb.c add much value. I leave this to the\ncommitter to decide.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 27 Sep 2022 14:33:50 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Tue, Sep 27, 2022 at 5:04 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi Peter,\n>\n> > PSA patch v3 to combine the different replication origin name\n> > formatting in a single function ReplicationOriginNameForLogicalRep as\n> > suggested.\n>\n> LGTM except for minor issues with the formatting; fixed.\n>\n\nLGTM as well. I'll push this tomorrow unless there are any more comments.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 10 Oct 2022 19:09:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "On Mon, Oct 10, 2022 at 7:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 27, 2022 at 5:04 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi Peter,\n> >\n> > > PSA patch v3 to combine the different replication origin name\n> > > formatting in a single function ReplicationOriginNameForLogicalRep as\n> > > suggested.\n> >\n> > LGTM except for minor issues with the formatting; fixed.\n> >\n>\n> LGTM as well. 
I'll push this tomorrow unless there are any more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 Oct 2022 13:06:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "Hi Amit,\n\n> Pushed.\n\nThanks!\n\n> I don't have a strong opinion on whether we should be really worried\n> by this. But in case we do, here is the patch.\n\nThis leaves us one patch to deal with.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 11 Oct 2022 11:02:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> This leaves us one patch to deal with.\n> [ v4-0001-Pass-Size-size_t-as-a-2nd-argument-of-snprintf.patch ]\n\nI looked at this and am inclined to reject it. None of these\nplaces realistically need to deal with strings longer than\nMAXPATHLEN or so, let alone multiple gigabytes. So it's just\ncode churn, creating backpatch hazards (admittedly not big ones)\nfor no real gain.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 05 Nov 2022 10:59:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." }, { "msg_contents": "Hi Tom,\n\n> I looked at this and am inclined to reject it. [...]\n\nOK, thanks. Then we are done with this thread. I closed the\ncorresponding CF entry.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 7 Nov 2022 11:16:32 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Add common function ReplicationOriginName." } ]
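The int-versus-Size question in the thread above comes down to the C snprintf contract: its size parameter is size_t, and its return value is the length that would have been written, which is how truncation is detected. Below is a standalone sketch of that contract; the function name and the "pg_%d_%d" name pattern are illustrative stand-ins, not the actual PostgreSQL code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Format an origin-style name into buf.  szbuf is size_t, matching
 * snprintf's own signature, so no narrowing conversion can occur.
 * Returns false if the result did not fit.
 */
static bool
format_origin_name(char *buf, size_t szbuf, int suboid, int reloid)
{
	/*
	 * snprintf returns the length it would have written; any value
	 * >= szbuf means the output was truncated (but still NUL-terminated).
	 */
	int			needed = snprintf(buf, szbuf, "pg_%d_%d", suboid, reloid);

	return needed >= 0 && (size_t) needed < szbuf;
}
```

With a 16-byte buffer, format_origin_name(buf, sizeof(buf), 16384, 20001) succeeds and yields "pg_16384_20001"; with an 8-byte buffer the same call reports truncation instead of overflowing.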
[ { "msg_contents": "Hi,\n\nWhile working on some other patches, I found serval typos(duplicate words and\nincorrect function name reference) in the code comments. Here is a small patch\nto fix them.\n\nBest regards,\nHou zhijie", "msg_date": "Mon, 19 Sep 2022 02:44:12 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix typos in code comments" }, { "msg_contents": "On Mon, Sep 19, 2022 at 8:14 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> While working on some other patches, I found serval typos(duplicate words and\n> incorrect function name reference) in the code comments. Here is a small patch\n> to fix them.\n>\n\nThanks, the patch looks good to me. I'll push this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Sep 2022 08:27:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" }, { "msg_contents": "Hi\nOn Sep 19, 2022, 10:57 +0800, Amit Kapila <amit.kapila16@gmail.com>, wrote:\n> On Mon, Sep 19, 2022 at 8:14 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > While working on some other patches, I found serval typos(duplicate words and\n> > incorrect function name reference) in the code comments. Here is a small patch\n> > to fix them.\n> >\n>\n> Thanks, the patch looks good to me. I'll push this.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n>\nGood catch. There is a similar typo in doc, runtime.sgml.\n\n```using TLS protocols enabled by by setting the parameter```\n\n\n\n\n\n\n\nHi\n\n\nOn Sep 19, 2022, 10:57 +0800, Amit Kapila <amit.kapila16@gmail.com>, wrote:\nOn Mon, Sep 19, 2022 at 8:14 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n\nWhile working on some other patches, I found serval typos(duplicate words and\nincorrect function name reference) in the code comments. Here is a small patch\nto fix them.\n\n\nThanks, the patch looks good to me. 
I'll push this.\n\n--\nWith Regards,\nAmit Kapila.\n\n\nGood catch. There is a similar typo in doc, runtime.sgml.\n\n```using TLS protocols enabled by by setting the parameter```", "msg_date": "Mon, 19 Sep 2022 11:05:24 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" }, { "msg_contents": "On Mon, Sep 19, 2022 at 02:44:12AM +0000, houzj.fnst@fujitsu.com wrote:\n> While working on some other patches, I found serval typos(duplicate words and\n> incorrect function name reference) in the code comments. Here is a small patch\n> to fix them.\n\nThanks.\n\nOn Mon, Sep 19, 2022 at 11:05:24AM +0800, Zhang Mingli wrote:\n> Good catch. There is a similar typo in doc, runtime.sgml.\n> ```using�TLS�protocols enabled�by by�setting the parameter```\n\nThat one should be backpatched to v15.\n\nFind below some others.\n\ndiff --git a/src/backend/executor/execPartition.c b/src/backend/executor/execPartition.c\nindex 901dd435efd..160296e1daf 100644\n--- a/src/backend/executor/execPartition.c\n+++ b/src/backend/executor/execPartition.c\n@@ -2155,7 +2155,7 @@ InitPartitionPruneContext(PartitionPruneContext *context,\n * Current values of the indexes present in PartitionPruneState count all the\n * subplans that would be present before initial pruning was done. 
If initial\n * pruning got rid of some of the subplans, any subsequent pruning passes will\n- * will be looking at a different set of target subplans to choose from than\n+ * be looking at a different set of target subplans to choose from than\n * those in the pre-initial-pruning set, so the maps in PartitionPruneState\n * containing those indexes must be updated to reflect the new indexes of\n * subplans in the post-initial-pruning set.\ndiff --git a/src/backend/utils/activity/pgstat.c b/src/backend/utils/activity/pgstat.c\nindex 6224c498c21..5b0f26e3b07 100644\n--- a/src/backend/utils/activity/pgstat.c\n+++ b/src/backend/utils/activity/pgstat.c\n@@ -556,7 +556,7 @@ pgstat_initialize(void)\n * suggested idle timeout is returned. Currently this is always\n * PGSTAT_IDLE_INTERVAL (10000ms). Callers can use the returned time to set up\n * a timeout after which to call pgstat_report_stat(true), but are not\n- * required to to do so.\n+ * required to do so.\n *\n * Note that this is called only when not within a transaction, so it is fair\n * to use transaction stop time as an approximation of current time.\ndiff --git a/src/backend/utils/activity/pgstat_replslot.c b/src/backend/utils/activity/pgstat_replslot.c\nindex b77c05ab5fa..9a59012a855 100644\n--- a/src/backend/utils/activity/pgstat_replslot.c\n+++ b/src/backend/utils/activity/pgstat_replslot.c\n@@ -8,7 +8,7 @@\n * storage implementation and the details about individual types of\n * statistics.\n *\n- * Replication slot stats work a bit different than other other\n+ * Replication slot stats work a bit different than other\n * variable-numbered stats. Slots do not have oids (so they can be created on\n * physical replicas). Use the slot index as object id while running. However,\n * the slot index can change when restarting. 
That is addressed by using the\ndiff --git a/src/test/ssl/t/SSL/Server.pm b/src/test/ssl/t/SSL/Server.pm\nindex 62f54dcbf16..0a9e5da01e4 100644\n--- a/src/test/ssl/t/SSL/Server.pm\n+++ b/src/test/ssl/t/SSL/Server.pm\n@@ -257,7 +257,7 @@ The certificate file to use. Implementation is SSL backend specific.\n \n =item keyfile => B<value>\n \n-The private key to to use. Implementation is SSL backend specific.\n+The private key file to use. Implementation is SSL backend specific.\n \n =item crlfile => B<value>\n \n\n\n", "msg_date": "Mon, 19 Sep 2022 06:10:00 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" },
{ "msg_contents": "On Mon, 19 Sept 2022 at 23:10, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Find below some others.\n\nThanks. Pushed.\n\nDavid\n\n\n", "msg_date": "Tue, 20 Sep 2022 08:38:14 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" },
{ "msg_contents": "On Mon, Sep 19, 2022 at 06:10:00AM -0500, Justin Pryzby wrote:\n> On Mon, Sep 19, 2022 at 11:05:24AM +0800, Zhang Mingli wrote:\n> > Good catch. There is a similar typo in doc, runtime.sgml.\n> > ```using TLS protocols enabled by by setting the parameter```\n> \n> That one should be backpatched to v15.\n\nThis is still needed -- \"by by\"\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 23 Sep 2022 17:30:57 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" },
{ "msg_contents": "On Sat, Sep 24, 2022 at 4:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 06:10:00AM -0500, Justin Pryzby wrote:\n> > On Mon, Sep 19, 2022 at 11:05:24AM +0800, Zhang Mingli wrote:\n> > > Good catch.
There is a similar typo in doc, runtime.sgml.\n> > > ```using TLS protocols enabled by by setting the parameter```\n> >\n> > That one should be backpatched to v15.\n>\n> This is still needed -- \"by by\"\n>\n\nThanks for the reminder. I have pushed the fix.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 11:55:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix typos in code comments" } ]
[ { "msg_contents": "\nHi hackers,\n\nWhen I read the heapam_visibility.c, I found that there is some code in\nHeapTupleSatisfiesXXX that has a similar code for pre-9.0 binary upgrades.\n\nHeapTupleSatisfiesSelf, HeapTupleSatisfiesToast and HeapTupleSatisfiesDirty\nhave the same code for pre-9.0 binary upgrades. HeapTupleSatisfiesUpdate\nhas a similar code, except it returns TM_Result other than boolean.\n\nHeapTupleSatisfiesMVCC also has a similar code like HeapTupleSatisfiesSelf,\nexpect it use XidInMVCCSnapshot other than TransactionIdIsInProgress.\n\nThe most different is HeapTupleSatisfiesVacuumHorizon.\n\nCould we encapsulate the code for pre-9.0 binary upgrades? This idea comes from [1].\n\nAny thoughts?\n\n\n[1] https://www.postgresql.org/message-id/8a855f33-2581-66bf-85f7-0b99239edbda%40postgrespro.ru\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 19 Sep 2022 15:54:57 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Code clean for pre-9.0 binary upgrades in HeapTupleSatisfiesXXX()" } ]
[ { "msg_contents": "Hi,\n\n\nIn preprocess_aggref(), list same_input_transnos is used to track compatible transnos.\n\nFree it if we don’t need it anymore.\n\n```\n\n/*\n * 2. See if this aggregate can share transition state with another\n * aggregate that we've initialized already.\n */\n transno = find_compatible_trans(root, aggref, shareable,\n aggtransfn, aggtranstype,\n transtypeLen, transtypeByVal,\n aggcombinefn,\n aggserialfn, aggdeserialfn,\n initValue, initValueIsNull,\n same_input_transnos);\n list_free(same_input_transnos);\n\n```\n\nNot sure if it worths as it will be freed sooner or later when current context ends.\n\nBut as in find_compatible_agg(), the list is freed if we found a compatible Agg.\n\nThis patch helps a little when there are lots of incompatible aggs because we will try to find the compatible transnos again and again.\n\nEach iteration will keep an unused list memory.\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 19 Sep 2022 18:19:07 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Free list same_input_transnos in preprocess_aggref" }, { "msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> In preprocess_aggref(), list same_input_transnos is used to track compatible transnos.\n> Free it if we don’t need it anymore.\n\nVery little of the planner bothers with freeing small allocations\nlike that. 
Can you demonstrate a case where this would actually\nmake a meaningful difference?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Sep 2022 11:14:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Free list same_input_transnos in preprocess_aggref" },
{ "msg_contents": "Regards,\nZhang Mingli\nOn Sep 19, 2022, 23:14 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n> Very little of the planner bothers with freeing small allocations\n> like that.\nI think so too, as said, not sure if it worths.\n> Can you demonstrate a case where this would actually\n> make a meaningful difference?\nOffhand, an example may help a little:\n\ncreate table t1(id int);\nexplain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n\n Modify codes to test:\n\n@@ -139,6 +139,7 @@ preprocess_aggref(Aggref *aggref, PlannerInfo *root)\n int16 transtypeLen;\n Oid inputTypes[FUNC_MAX_ARGS];\n int numArguments;\n+ static size_t accumulate_list_size = 0;\n\n Assert(aggref->agglevelsup == 0);\n\n@@ -265,7 +266,7 @@ preprocess_aggref(Aggref *aggref, PlannerInfo *root)\n aggserialfn, aggdeserialfn,\n initValue, initValueIsNull,\n same_input_transnos);\n- list_free(same_input_transnos);\n+ accumulate_list_size += sizeof(int) * list_length(same_input_transnos);\n\nGdb and print accumulate_list_size for each iteration:\n\nSaveBytes = Sum results of accumulate_list_size: 32(4+4+8+8), as we have 5 aggs in sql.\n\nIf there were N sets of that aggs (more columns as id, with above aggs ), the bytes will be N*SaveBytes.\n\nSeems we don’t have so many agg functions that could share the same trans function, Does it worth?", "msg_date": "Tue, 20 Sep 2022 00:27:30 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Free list same_input_transnos in preprocess_aggref" },
{ "msg_contents": "Regards,\nZhang Mingli\nOn Sep 20, 2022, 00:27 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n>\n> SaveBytes = Sum results of accumulate_list_size: 32(4+4+8+8), as we have 5 aggs in sql\nCorrection: SaveBytes = Sum results of accumulate_list_size: 24(4+4+8+8),", "msg_date": "Tue, 20 Sep 2022 00:31:29 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op":
true, "msg_subject": "Re: Free list same_input_transnos in preprocess_aggref" }, { "msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> Correction: SaveBytes = Sum results of accumulate_list_size: 24(4+4+8+8),\n\nWhat I did was to stick in\n\n\telog(LOG, \"leaking list of length %d\", list_length(same_input_transnos));\n\nat the end of preprocess_aggref. What I see on your five-aggregate\nexample is\n\n2022-11-06 14:59:25.666 EST [3046253] LOG: leaking list of length 0\n2022-11-06 14:59:25.666 EST [3046253] STATEMENT: explain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n2022-11-06 14:59:25.666 EST [3046253] LOG: leaking list of length 1\n2022-11-06 14:59:25.666 EST [3046253] STATEMENT: explain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n2022-11-06 14:59:25.666 EST [3046253] LOG: leaking list of length 0\n2022-11-06 14:59:25.666 EST [3046253] STATEMENT: explain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n2022-11-06 14:59:25.666 EST [3046253] LOG: leaking list of length 1\n2022-11-06 14:59:25.666 EST [3046253] STATEMENT: explain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n2022-11-06 14:59:25.666 EST [3046253] LOG: leaking list of length 0\n2022-11-06 14:59:25.666 EST [3046253] STATEMENT: explain select max(id), min(id), sum(id), count(id), avg(id) from t1;\n\nThe NIL lists are of course occupying no storage. The two one-element\nlists are absolutely, completely negligible in the context of planning\nany nontrivial statement. 
Even the aggtransinfos list that is the\nprimary output of preprocess_aggref will dwarf that; and we leak\nsimilarly small data structures in probably many hundred places in\nthe planner.\n\nI went a bit further and ran the core regression tests, then aggregated\nthe results:\n\n$ grep 'leaking list' postmaster.log | sed 's/.*] //' | sort | uniq -c \n 4516 LOG: leaking list of length 0\n 95 LOG: leaking list of length 1\n 15 LOG: leaking list of length 2\n\nYou can quibble of course about how representative the regression tests\nare, but there's sure no evidence at all here that we'd be saving\nanything measurable.\n\nIf anything, I'd be inclined to get rid of the\n\n\t\t\tlist_free(*same_input_transnos);\n\nin find_compatible_agg, because it seems like a waste of code on\nthe same grounds. Instrumenting that in the same way, I find\nthat it's not reached at all in your example, while the\nregression tests give\n\n 49 LOG: freeing list of length 0\n 2 LOG: freeing list of length 1\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Nov 2022 15:12:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Free list same_input_transnos in preprocess_aggref" }, { "msg_contents": "HI,\n\n\nOn Nov 7, 2022, 04:12 +0800, Tom Lane <tgl@sss.pgh.pa.us>, wrote:\n>\n> The NIL lists are of course occupying no storage. The two one-element\n> lists are absolutely, completely negligible in the context of planning\n> any nontrivial statement. 
Even the aggtransinfos list that is the\n> primary output of preprocess_aggref will dwarf that; and we leak\n> similarly small data structures in probably many hundred places in\n> the planner.\n>\n> I went a bit further and ran the core regression tests, then aggregated\n> the results:\n>\n> $ grep 'leaking list' postmaster.log | sed 's/.*] //' | sort | uniq -c\n> 4516 LOG: leaking list of length 0\n> 95 LOG: leaking list of length 1\n> 15 LOG: leaking list of length 2\n>\n> You can quibble of course about how representative the regression tests\n> are, but there's sure no evidence at all here that we'd be saving\n> anything measurable.\n>\n> If anything, I'd be inclined to get rid of the\n>\n> list_free(*same_input_transnos);\n>\n> in find_compatible_agg, because it seems like a waste of code on\n> the same grounds. Instrumenting that in the same way, I find\n> that it's not reached at all in your example, while the\n> regression tests give\n>\n> 49 LOG: freeing list of length 0\n> 2 LOG: freeing list of length 1\n>\nThanks for the investigation.\nYeah, this patch is negligible. I’ll withdraw it in CF.\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 7 Nov 2022 12:51:33 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Free list same_input_transnos in preprocess_aggref" } ]
[ { "msg_contents": "Hi,\n\nI'm sorry,if this message is duplicated previous this one, but the \nprevious message is sent incorrectly. I sent it from email address \nlena.ribackina@yandex.ru.\n\nI liked this idea and after reviewing code I noticed some moments and \nI'd rather ask you some questions.\n\nFirstly, I suggest some editing in the comment of commit. I think, it is \nturned out the more laconic and the same clear. I wrote it below since I \ncan't think of any other way to add it.\n\n```\nCurrently, we have to wait for finishing of the query execution to check \nits plan.\nThis is not so convenient in investigation long-running queries on \nproduction\nenvironments where we cannot use debuggers.\n\nTo improve this situation there is proposed the patch containing the \npg_log_query_plan()\nfunction which request to log plan of the specified backend process.\n\nBy default, only superusers are allowed to request log of the plan \notherwise\nallowing any users to issue this request could create cause lots of log \nmessages\nand it can lead to denial of service.\n\nAt the next requesting CHECK_FOR_INTERRUPTS(), the target backend logs \nits plan at\nLOG_SERVER_ONLY level and therefore this plan will appear in the server \nlog only,\nnot to be sent to the client.\n```\n\nSecondly, I have question about deleting USE_ASSERT_CHECKING in lock.h?\nIt supposed to have been checked in another placed of the code by \nmatching values. I am worry about skipping errors due to untesting with \nassert option in the places where it (GetLockMethodLocalHash) \nparticipates and we won't able to get core file in segfault cases. I \nmight not understand something, then can you please explain to me?\n\nThirdly, I have incomprehension of the point why save_ActiveQueryDesc is \ndeclared in the pquery.h? I am seemed to save_ActiveQueryDesc to be used \nin an once time in the ExecutorRun function and  its declaration \nsuperfluous. 
I added it in the attached patch.\n\nFourthly, it seems to me there are not enough explanatory comments in \nthe code. I also added them in the attached patch.\n\nLastly, I have incomprehension about handling signals since have been \nunused it before. Could another signal disabled calling this signal to \nlog query plan? I noticed this signal to be checked the latest in \nprocsignal_sigusr1_handler function.\n\nRegards,\n\n-- \nAlena Rybakina\nPostgres Professional\n> 19.09.2022, 11:01, \"torikoshia\" <torikoshia@oss.nttdata.com>:\n>\n> On 2022-05-16 17:02, torikoshia wrote:\n>\n>  On 2022-03-09 19:04, torikoshia wrote:\n>\n>  On 2022-02-08 01:13, Fujii Masao wrote:\n>\n>  AbortSubTransaction() should reset ActiveQueryDesc to\n>  save_ActiveQueryDesc that ExecutorRun() set, instead\n> of NULL?\n>  Otherwise ActiveQueryDesc of top-level statement will\n> be unavailable\n>  after subtransaction is aborted in the nested statements.\n>\n>\n>  I once agreed above suggestion and made v20 patch making\n>  save_ActiveQueryDesc a global variable, but it caused\n> segfault when\n>  calling pg_log_query_plan() after FreeQueryDesc().\n>\n>  OTOH, doing some kind of reset of ActiveQueryDesc seems\n> necessary\n>  since it also caused segfault when running\n> pg_log_query_plan() during\n>  installcheck.\n>\n>  There may be a better way, but resetting ActiveQueryDesc\n> to NULL seems\n>  safe and simple.\n>  Of course it makes pg_log_query_plan() useless after a\n> subtransaction\n>  is aborted.\n>  However, if it does not often happen that people want to\n> know the\n>  running query's plan whose subtransaction is aborted,\n> resetting\n>  ActiveQueryDesc to NULL would be acceptable.\n>\n>  Attached is a patch that sets ActiveQueryDesc to NULL when a\n>  subtransaction is aborted.\n>\n>  How do you think?\n>\n> Attached new patch to fix patch apply failures again.\n>\n> --\n> Regards,\n>\n> --\n> Atsushi Torikoshi\n> NTT DATA CORPORATION\n>", "msg_date": "Mon, 19 Sep 2022 
13:42:24 +0300", "msg_from": "\"a.rybakina\" <a.rybakina@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: RFC: Logging plan of the running query" } ]
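The signal handling that the review above asks about follows the standard async-signal pattern used around CHECK_FOR_INTERRUPTS(): the handler itself only sets a sig_atomic_t flag, and the expensive work (logging the plan) happens later at a safe point in the main loop. Below is a generic, self-contained sketch of that pattern; it is not the actual procsignal code, and the names are illustrative.

```c
#include <assert.h>
#include <signal.h>
#include <stdbool.h>

static volatile sig_atomic_t log_plan_pending = 0;

/* Signal handler: async-signal-safe, does nothing but set the flag. */
static void
handle_log_plan_request(int signo)
{
	(void) signo;
	log_plan_pending = 1;
}

/*
 * Called at safe points in the main loop, analogous to
 * CHECK_FOR_INTERRUPTS().  Returns true if a pending request was
 * consumed; a real backend would emit its plan at LOG_SERVER_ONLY here.
 */
static bool
consume_log_plan_request(void)
{
	if (!log_plan_pending)
		return false;
	log_plan_pending = 0;
	return true;
}
```

After signal(SIGUSR1, handle_log_plan_request), a raise(SIGUSR1) returns only once the handler has run, so the next consume_log_plan_request() reports the request exactly once.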
[ { "msg_contents": "Hi,\n\nI added xlogreader cache stats (hits/misses) to pg_stat_wal in\nReadPageInternal() for some of my work and ran some tests with logical\nreplication subscribers. I had expected that the new stats generated\nby walsenders serving the subscribers would be accumulated and shown\nvia pg_stat_wal view in another session, but that didn't happen. Upon\nlooking around, I thought adding\npgstat_flush_wal()/pgstat_report_wal() around proc_exit() in\nwalsener.c might work, but that didn't help either. Can anyone please\nlet me know why the stats that I added aren't shown via pg_stat_wal\nview despite the walsender doing pgstat_initialize() ->\npgstat_init_wal()? Am I missing something?\n\nI'm attaching here with the patch that I was using.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Sep 2022 18:46:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pgstat: stats added in ReadPageInternal() aren't getting reported via\n pg_stat_wal" }, { "msg_contents": "At Mon, 19 Sep 2022 18:46:17 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> walsener.c might work, but that didn't help either. Can anyone please\n> let me know why the stats that I added aren't shown via pg_stat_wal\n> view despite the walsender doing pgstat_initialize() ->\n> pgstat_init_wal()? Am I missing something?\n> \n> I'm attaching here with the patch that I was using.\n\npgstat_report_wal() is supposed to be called on user-facing (or query\nexecuting) backends. The function is called while in idle state on\nsuch backends. Other processes need to call one or more friend\nfunctions as needed. Checkpointer calls pgstat_report_checkpoint/wal()\nat the end of an iteration of the main waiting loop. Walwriter does\nthe same with pgstat_report_wal(). 
Thus we need to do the same on\nstartup process somewhere within the redo loop but where that call\ndoesn't cause contention on shared stats area.\n\nAbout what the patch does, it increments the xlogreader_cache_hits\ncounter when the WAL page to read have been already loaded, but it's\nnot a cache but a buffer. The counter would approximately shows the\nnumber about mean-(raw)-records-per-wal-page *\nwal-page-ever-read. (mean-records-per-wal-page = page-size /\nmean-record-length). So I don't think that the value offers something\nvaluable. I guess that the same value (for the moment) can be\ncalculated from the result of pg_get_wal_records_info().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 27 Sep 2022 10:58:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgstat: stats added in ReadPageInternal() aren't getting\n reported via pg_stat_wal" } ]
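The behavior explained above reduces to an accumulate-then-flush pattern: counters are bumped in process-local state, and only an explicit report at a safe point folds them into the shared statistics that other sessions read. The struct and function names below are illustrative stand-ins, not the pgstat API.

```c
#include <assert.h>

typedef struct LocalWalStats
{
	long		pages_read;		/* process-local pending counter */
} LocalWalStats;

typedef struct SharedWalStats
{
	long		pages_read;		/* what a pg_stat_wal-style reader sees */
} SharedWalStats;

static LocalWalStats local_wal_stats;

/* Cheap local bump; invisible to other sessions until flushed. */
static void
count_wal_page_read(void)
{
	local_wal_stats.pages_read++;
}

/*
 * Conceptual equivalent of pgstat_report_wal(): fold the pending local
 * counters into shared state and reset them.  A process that never
 * calls this at its loop's safe points never publishes its counters,
 * which is why the walsender's additions stayed invisible.
 */
static void
flush_wal_stats(SharedWalStats *shared)
{
	shared->pages_read += local_wal_stats.pages_read;
	local_wal_stats.pages_read = 0;
}
```

A reader of the shared struct sees zero until the owning process flushes, no matter how many local bumps have occurred.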
[ { "msg_contents": "Hi,\n\nI was recently asked about converting an offset reported in WAL read\nerror messages[1] to an LSN with which pg_waldump can be used to\nverify the records or WAL file around that LSN (basically one can\nfilter out the output based on LSN). AFAICS, there's no function that\ntakes offset as an input and produces an LSN and I ended up figuring\nout LSN manually. And, for some of my work too, I was hitting errors\nin XLogReaderValidatePageHeader() and adding recptr to those error\nmessages helped me debug issues faster.\n\nWe have a bunch of messages [1] that have an offset, but not LSN in\nthe error message. Firstly, is there an easiest way to figure out LSN\nfrom offset reported in the error messages? If not, is adding LSN to\nthese messages along with offset a good idea? Of course, we can't just\nconvert offset to LSN using XLogSegNoOffsetToRecPtr() and report, but\nsomething meaningful like reporting the LSN of the page that we are\nreading-in or writing-out etc.\n\nThoughts?\n\n[1]\nerrmsg(\"could not read from WAL segment %s, offset %u: %m\",\nerrmsg(\"could not read from WAL segment %s, offset %u: %m\",\nerrmsg(\"could not write to log file %s \"\n \"at offset %u, length %zu: %m\",\nerrmsg(\"unexpected timeline ID %u in WAL segment %s, offset %u\",\nerrmsg(\"could not read from WAL segment %s, offset %u: read %d of %zu\",\npg_log_error(\"received write-ahead log record for offset %u with no file open\",\n\"invalid magic number %04X in WAL segment %s, offset %u\",\n\"invalid info bits %04X in WAL segment %s, offset %u\",\n\"invalid info bits %04X in WAL segment %s, offset %u\",\n\"unexpected pageaddr %X/%X in WAL segment %s, offset %u\",\n\"out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u\",\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Sep 2022 19:26:57 +0530", "msg_from": "Bharath Rupireddy 
<bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Mon, Sep 19, 2022 at 07:26:57PM +0530, Bharath Rupireddy wrote:\n> We have a bunch of messages [1] that have an offset, but not LSN in\n> the error message. Firstly, is there an easiest way to figure out LSN\n> from offset reported in the error messages? If not, is adding LSN to\n> these messages along with offset a good idea? Of course, we can't just\n> convert offset to LSN using XLogSegNoOffsetToRecPtr() and report, but\n> something meaningful like reporting the LSN of the page that we are\n> reading-in or writing-out etc.\n\nIt seems like you want the opposite of pg_walfile_name_offset().
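For readers outside the thread: pg_walfile_name_offset() goes from an LSN to a WAL segment file name plus a byte offset, and the request above is for the inverse. A minimal standalone sketch of that forward direction, with a hypothetical helper name and the name layout (timeline, then the high and low halves of the segment number, each as 8 hex digits); this is an illustration of the arithmetic, not code from the patch:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the pg_walfile_name_offset() direction:
 * LSN -> (segment file name, byte offset within that segment).
 * WAL file names are TTTTTTTTXXXXXXXXYYYYYYYY in hex: timeline ID,
 * then the high and low 32-bit halves of the physical segment number. */
static void
wal_to_name_offset(uint64_t lsn, uint32_t tli, uint64_t seg_size,
                   char name[25], uint32_t *offset)
{
    uint64_t segno = lsn / seg_size;                    /* physical segment number */
    uint64_t segs_per_id = UINT64_C(0x100000000) / seg_size;

    snprintf(name, 25, "%08X%08X%08X",
             tli,
             (uint32_t) (segno / segs_per_id),          /* high half */
             (uint32_t) (segno % segs_per_id));         /* low half */
    *offset = (uint32_t) (lsn % seg_size);              /* byte position in segment */
}
```

Inverting this arithmetic, from a file name and offset back to an LSN, is what the function proposed here would do. For LSN 0/5D001000 on timeline 1 with 16MB segments this yields file "00000001000000000000005D" at offset 0x1000.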
Perhaps\n> we could add a function like pg_walfile_offset_lsn() that accepts a WAL\n> file name and byte offset and returns the LSN.\n\nLike so...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 19 Sep 2022 20:32:38 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On 2022-Sep-19, Bharath Rupireddy wrote:\n\n> We have a bunch of messages [1] that have an offset, but not LSN in\n> the error message. Firstly, is there an easiest way to figure out LSN\n> from offset reported in the error messages? If not, is adding LSN to\n> these messages along with offset a good idea? Of course, we can't just\n> convert offset to LSN using XLogSegNoOffsetToRecPtr() and report, but\n> something meaningful like reporting the LSN of the page that we are\n> reading-in or writing-out etc.\n\nMaybe add errcontext() somewhere that reports the LSN would be\nappropriate. For example, the page_read() callbacks have the LSN\nreadily available, so the ones in backend could install the errcontext\ncallback; or perhaps ReadPageInternal can do it #ifndef FRONTEND.
Not\nsure what is best of those options, but either of those sounds better\nthan sticking the LSN in a lower-level routine that doesn't necessarily\nhave the info already.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 20 Sep 2022 09:26:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Sep 20, 2022 at 9:02 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 03:16:42PM -0700, Nathan Bossart wrote:\n> > It seems like you want the opposite of pg_walfile_name_offset(). Perhaps\n> > we could add a function like pg_walfile_offset_lsn() that accepts a WAL\n> > file name and byte offset and returns the LSN.\n>\n> Like so...\n\nYeah, something like this will be handy for sure, but I'm not sure if\nwe want this to be in core. Let's hear from others.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:38:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Sep 20, 2022 at 12:57 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-19, Bharath Rupireddy wrote:\n>\n> > We have a bunch of messages [1] that have an offset, but not LSN in\n> > the error message. Firstly, is there an easiest way to figure out LSN\n> > from offset reported in the error messages? If not, is adding LSN to\n> > these messages along with offset a good idea? 
Of course, we can't just\n> > convert offset to LSN using XLogSegNoOffsetToRecPtr() and report, but\n> > something meaningful like reporting the LSN of the page that we are\n> > reading-in or writing-out etc.\n>\n> Maybe add errcontext() somewhere that reports the LSN would be\n> appropriate. For example, the page_read() callbacks have the LSN\n> readily available, so the ones in backend could install the errcontext\n> callback; or perhaps ReadPageInternal can do it #ifndef FRONTEND. Not\n> sure what is best of those options, but either of those sounds better\n> than sticking the LSN in a lower-level routine that doesn't necessarily\n> have the info already.\n\nAll of the error messages [1] have the LSN from which offset was\ncalculated, I think we can just append that to the error messages\n(something like \".... offset %u, LSN %X/%X: %m\") and not complicate\nit. Thoughts?\n\n[1]\nerrmsg(\"could not read from WAL segment %s, offset %u: %m\",\nerrmsg(\"could not read from WAL segment %s, offset %u: %m\",\nerrmsg(\"could not write to log file %s \"\n \"at offset %u, length %zu: %m\",\nerrmsg(\"unexpected timeline ID %u in WAL segment %s, offset %u\",\nerrmsg(\"could not read from WAL segment %s, offset %u: read %d of %zu\",\npg_log_error(\"received write-ahead log record for offset %u with no file open\",\n\"invalid magic number %04X in WAL segment %s, offset %u\",\n\"invalid info bits %04X in WAL segment %s, offset %u\",\n\"invalid info bits %04X in WAL segment %s, offset %u\",\n\"unexpected pageaddr %X/%X in WAL segment %s, offset %u\",\n\"out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u\",\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:40:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL 
file\n read/write/validate header failures" }, { "msg_contents": "At Tue, 20 Sep 2022 17:40:36 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Sep 20, 2022 at 12:57 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2022-Sep-19, Bharath Rupireddy wrote:\n> >\n> > > We have a bunch of messages [1] that have an offset, but not LSN in\n> > > the error message. Firstly, is there an easiest way to figure out LSN\n> > > from offset reported in the error messages? If not, is adding LSN to\n> > > these messages along with offset a good idea? Of course, we can't just\n> > > convert offset to LSN using XLogSegNoOffsetToRecPtr() and report, but\n> > > something meaningful like reporting the LSN of the page that we are\n> > > reading-in or writing-out etc.\n> >\n> > Maybe add errcontext() somewhere that reports the LSN would be\n> > appropriate. For example, the page_read() callbacks have the LSN\n> > readily available, so the ones in backend could install the errcontext\n> > callback; or perhaps ReadPageInternal can do it #ifndef FRONTEND. Not\n> > sure what is best of those options, but either of those sounds better\n> > than sticking the LSN in a lower-level routine that doesn't necessarily\n> > have the info already.\n> \n> All of the error messages [1] have the LSN from which offset was\n> calculated, I think we can just append that to the error messages\n> (something like \".... offset %u, LSN %X/%X: %m\") and not complicate\n> it. Thoughts?\n\nIf all error-emitting site knows the LSN, we don't need the context\nmessage. But *I* would like that the additional message looks like\n\"while reading record at LSN %X/%X\" or slightly shorter version of\nit. Because the targetRecPtr is the beginning of the current reading\nrecord, not the LSN for the segment and offset. 
It may point to past\nsegments.\n\n\n> [1]\n> errmsg(\"could not read from WAL segment %s, offset %u: %m\",\n> errmsg(\"could not read from WAL segment %s, offset %u: %m\",\n> errmsg(\"could not write to log file %s \"\n> \"at offset %u, length %zu: %m\",\n> errmsg(\"unexpected timeline ID %u in WAL segment %s, offset %u\",\n> errmsg(\"could not read from WAL segment %s, offset %u: read %d of %zu\",\n> pg_log_error(\"received write-ahead log record for offset %u with no file open\",\n> \"invalid magic number %04X in WAL segment %s, offset %u\",\n> \"invalid info bits %04X in WAL segment %s, offset %u\",\n> \"invalid info bits %04X in WAL segment %s, offset %u\",\n> \"unexpected pageaddr %X/%X in WAL segment %s, offset %u\",\n> \"out-of-sequence timeline ID %u (after %u) in WAL segment %s, offset %u\",\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 27 Sep 2022 12:01:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Sep 27, 2022 at 8:31 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> If all error-emitting site knows the LSN, we don't need the context\n> message. But *I* would like that the additional message looks like\n> \"while reading record at LSN %X/%X\" or slightly shorter version of\n> it. Because the targetRecPtr is the beginning of the current reading\n> record, not the LSN for the segment and offset. 
It may point to past\n> segments.\n\nI think we could just say \"LSN %X/%X, offset %u\", because the overall\ncontext whether it's being read or written is implicit with the other\npart of the message.\n\nPlease see the attached v1 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Sep 2022 19:43:28 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Thu, Sep 29, 2022 at 7:43 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please see the attached v1 patch.\n\nFWIW, I'm attaching Nathan's patch that introduced the new function\npg_walfile_offset_lsn as 0002 in the v1 patch set. Here's the CF entry\n- https://commitfest.postgresql.org/40/3909/.\n\nPlease consider this for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 4 Oct 2022 14:58:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Oct 4, 2022 at 2:58 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Sep 29, 2022 at 7:43 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Please see the attached v1 patch.\n>\n> FWIW, I'm attaching Nathan's patch that introduced the new function\n> pg_walfile_offset_lsn as 0002 in the v1 patch set. 
Here's the CF entry\n> - https://commitfest.postgresql.org/40/3909/.\n>\n> Please consider this for further review.\n\nI'm attaching the v2 patch set after rebasing on to the latest HEAD.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 27 Oct 2022 14:58:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "Hi!\n\nI've watched over the patch and consider it useful. Applies without\nconflicts. The functionality of the patch itself is\nmeets declared functionality.\n\nBut, in my view, some improvements may be proposed. We should be more,\nlet's say, cautious (or a paranoid if you wish),\nin pg_walfile_offset_lsn while dealing with user input values. What I mean\nby that is:\n - to init input vars of sscanf, i.e. tli, log andseg;\n - check the return value of sscanf call;\n - check filename max length.\n\nAnother question arises for me: is this function can be called during\nrecovery? If not, we should ereport about this, should we?\n\nAnd one last note here: pg_walfile_offset_lsn is accepting NULL values and\nreturn NULL in this case. From a theoretical\npoint of view, this is perfectly fine. Actually, I think this is exactly\nhow it supposed to be, but I'm not sure if there are no other opinions here.\n-- \nBest regards,\nMaxim Orlov.\n\nHi!I've watched over the patch and consider it useful. Applies without conflicts. The functionality of the patch itself is meets declared functionality. But, in my view, some improvements may be proposed. We should be more, let's say, cautious (or a paranoid if you wish), in pg_walfile_offset_lsn while dealing with user input values. What I mean by that is: - to init input vars of sscanf, i.e. 
tli, log andseg; - check the return value of sscanf call; - check filename max length.Another question arises for me: is this function can be called during recovery? If not, we should ereport about this, should we?And one last note here: pg_walfile_offset_lsn is accepting NULL values and return NULL in this case. From a theoretical point of view, this is perfectly fine. Actually, I think this is exactly how it supposed to be, but I'm not sure if there are no other opinions here.-- Best regards,Maxim Orlov.", "msg_date": "Fri, 11 Nov 2022 15:21:58 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "Hmm, in 0002, why not return the timeline from the LSN too? It seems a\nwaste not to have it.\n\n+\t\tereport(ERROR,\n+\t\t\t\t(errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+\t\t\t\t errmsg(\"\\\"offset\\\" must not be negative or greater than or \"\n+\t\t\t\t\t\t\"equal to WAL segment size\")));\n\nI don't think the word offset should be in quotes; and please don't cut\nthe line. So I propose\n\n errmsg(\"offset must not be negative or greater than or equal to the WAL segment size\")));\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 11 Nov 2022 18:35:06 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Fri, Nov 11, 2022 at 5:52 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> Hi!\n>\n> I've watched over the patch and consider it useful. Applies without conflicts. The functionality of the patch itself is\n> meets declared functionality.\n\nThanks for reviewing.\n\n> But, in my view, some improvements may be proposed. 
We should be more, let's say, cautious (or a paranoid if you wish),\n> in pg_walfile_offset_lsn while dealing with user input values. What I mean by that is:\n> - to init input vars of sscanf, i.e. tli, log andseg;\n> - check the return value of sscanf call;\n> - check filename max length.\n\nIsXLogFileName() will take care of this. Also, I've added a new inline\nfunction XLogIdFromFileName() that parses file name and returns log\nand seg along with XLogSegNo and timeline id. This new function avoids\nan extra sscanf() call as well.\n\n> Another question arises for me: is this function can be called during recovery? If not, we should ereport about this, should we?\n\nIt's just a utility function and doesn't depend on any of the server's\ncurrent state (unlike pg_walfile_name()), hence it doesn't matter if\nthis function is called during recovery.\n\n> And one last note here: pg_walfile_offset_lsn is accepting NULL values and return NULL in this case. From a theoretical\n> point of view, this is perfectly fine. Actually, I think this is exactly how it supposed to be, but I'm not sure if there are no other opinions here.\n\nThese functions are marked as 'STRICT', meaning a null is returned,\nwithout even calling the function, if any of the input parameters is\nnull. I think we can keep the behaviour the same as its friends.\n\nOn Fri, Nov 11, 2022 at 11:05 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n\nThanks for reviewing.\n\n> Hmm, in 0002, why not return the timeline from the LSN too? It seems a\n> waste not to have it.\n\nYeah, that actually makes sense. We might be tempted to return segno\ntoo, but it's not something that we emit to the user elsewhere,\nwhereas we emit timeline id.\n\n> + ereport(ERROR,\n> + (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> + errmsg(\"\\\"offset\\\" must not be negative or greater than or \"\n> + \"equal to WAL segment size\")));\n>\n> I don't think the word offset should be in quotes; and please don't cut\n> the line. 
So I propose\n>\n> errmsg(\"offset must not be negative or greater than or equal to the WAL segment size\")));\n\nChanged.\n\nWhile on this, I noticed that the pg_walfile_name_offset() isn't\ncovered in tests. I took an opportunity and added a simple test case\nalong with pg_walfile_offset_lsn().\n\nI'm attaching the v3 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 15 Nov 2022 15:32:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, 15 Nov 2022 at 13:02, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Fri, Nov 11, 2022 at 5:52 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n> > But, in my view, some improvements may be proposed. We should be more,\n> let's say, cautious (or a paranoid if you wish),\n> > in pg_walfile_offset_lsn while dealing with user input values. What I\n> mean by that is:\n> > - to init input vars of sscanf, i.e. tli, log andseg;\n> > - check the return value of sscanf call;\n> > - check filename max length.\n>\n> IsXLogFileName() will take care of this. Also, I've added a new inline\n> function XLogIdFromFileName() that parses file name and returns log\n> and seg along with XLogSegNo and timeline id. This new function avoids\n> an extra sscanf() call as well.\n>\n> > Another question arises for me: is this function can be called during\n> recovery? 
If not, we should ereport about this, should we?\n>\n> It's just a utility function and doesn't depend on any of the server's\n> current state (unlike pg_walfile_name()), hence it doesn't matter if\n> this function is called during recovery.\n>\n> > And one last note here: pg_walfile_offset_lsn is accepting NULL values\n> and return NULL in this case. From a theoretical\n> > point of view, this is perfectly fine. Actually, I think this is exactly\n> how it supposed to be, but I'm not sure if there are no other opinions here.\n>\n> These functions are marked as 'STRICT', meaning a null is returned,\n> without even calling the function, if any of the input parameters is\n> null. I think we can keep the behaviour the same as its friends.\n>\n\nThanks for the explanations. I think you are right.\n\n\n> > errmsg(\"offset must not be negative or greater than or equal to the\n> WAL segment size\")));\n>\n> Changed.\n>\n\nConfirm. And a timeline_id is added.\n\n\n> While on this, I noticed that the pg_walfile_name_offset() isn't\n> covered in tests. I took an opportunity and added a simple test case\n> along with pg_walfile_offset_lsn().\n>\n> I'm attaching the v3 patch set for further review.\n>\n\nGreat job! We should mark this patch as RFC, shall we?\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Tue, 15 Nov 2022 at 13:02, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Fri, Nov 11, 2022 at 5:52 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n\n> But, in my view, some improvements may be proposed. We should be more, let's say, cautious (or a paranoid if you wish),\n> in pg_walfile_offset_lsn while dealing with user input values. What I mean by that is:\n>  - to init input vars of sscanf, i.e. tli, log andseg;\n>  - check the return value of sscanf call;\n>  - check filename max length.\n\nIsXLogFileName() will take care of this. 
Also, I've added a new inline\nfunction XLogIdFromFileName() that parses file name and returns log\nand seg along with XLogSegNo and timeline id. This new function avoids\nan extra sscanf() call as well.\n\n> Another question arises for me: is this function can be called during recovery? If not, we should ereport about this, should we?\n\nIt's just a utility function and doesn't depend on any of the server's\ncurrent state (unlike pg_walfile_name()), hence it doesn't matter if\nthis function is called during recovery.\n\n> And one last note here: pg_walfile_offset_lsn is accepting NULL values and return NULL in this case. From a theoretical\n> point of view, this is perfectly fine. Actually, I think this is exactly how it supposed to be, but I'm not sure if there are no other opinions here.\n\nThese functions are marked as 'STRICT', meaning a null is returned,\nwithout even calling the function, if any of the input parameters is\nnull. I think we can keep the behaviour the same as its friends.  Thanks for the explanations. I think you are right. \n>    errmsg(\"offset must not be negative or greater than or equal to the WAL segment size\")));\n\nChanged. Confirm. And a timeline_id is added. \nWhile on this, I noticed that the pg_walfile_name_offset() isn't\ncovered in tests. I took an opportunity and added a simple test case\nalong with pg_walfile_offset_lsn().\n\nI'm attaching the v3 patch set for further review. Great job! 
We should mark this patch as RFC, shall we?-- Best regards,Maxim Orlov.", "msg_date": "Wed, 16 Nov 2022 16:24:51 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Wed, Nov 16, 2022 at 6:55 PM Maxim Orlov <orlovmg@gmail.com> wrote:\n>\n>> These functions are marked as 'STRICT', meaning a null is returned,\n>> without even calling the function, if any of the input parameters is\n>> null. I think we can keep the behaviour the same as its friends.\n>\n> Thanks for the explanations. I think you are right.\n>\n> Confirm. And a timeline_id is added.\n>>\n>> While on this, I noticed that the pg_walfile_name_offset() isn't\n>> covered in tests. I took an opportunity and added a simple test case\n>> along with pg_walfile_offset_lsn().\n>>\n>> I'm attaching the v3 patch set for further review.\n>\n> Great job! We should mark this patch as RFC, shall we?\n\nPlease do, if you feel so. Thanks for your review.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Nov 2022 11:53:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Thu, Nov 17, 2022 at 11:53:23AM +0530, Bharath Rupireddy wrote:\n> Please do, if you feel so. 
Thanks for your review.\n\nI don't really mind the addition of the LSN when operating on a given\nrecord where we are reading a location, like in the five error paths\nfor the header validation or the three ones in ReadRecord()\n\nNow this one looks confusing:\n+ XLogSegNoOffsetToRecPtr(openLogSegNo, startoffset,\n+ wal_segment_size, lsn);\n ereport(PANIC,\n (errcode_for_file_access(),\n errmsg(\"could not write to log file %s \"\n- \"at offset %u, length %zu: %m\",\n- xlogfname, startoffset, nleft)));\n+ \"at offset %u, LSN %X/%X, length %zu: %m\",\n+ xlogfname, startoffset,\n+ LSN_FORMAT_ARGS(lsn), nleft)));\n\nThis does not always refer to an exact LSN of a record as we may be in\nthe middle of a write, so I would leave it as-is.\n\nSimilarly the addition of wre_lsn would be confusing? The offset\nlooks kind of enough to me when referring to the middle of a page in\nWALReadRaiseError().\n--\nMichael", "msg_date": "Fri, 2 Dec 2022 16:20:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Fri, Dec 2, 2022 at 12:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Nov 17, 2022 at 11:53:23AM +0530, Bharath Rupireddy wrote:\n> > Please do, if you feel so. 
Thanks for your review.\n>\n> I don't really mind the addition of the LSN when operating on a given\n> record where we are reading a location, like in the five error paths\n> for the header validation or the three ones in ReadRecord()\n\nThanks for reviewing.\n\n> Now this one looks confusing:\n> + XLogSegNoOffsetToRecPtr(openLogSegNo, startoffset,\n> + wal_segment_size, lsn);\n> ereport(PANIC,\n> (errcode_for_file_access(),\n> errmsg(\"could not write to log file %s \"\n> - \"at offset %u, length %zu: %m\",\n> - xlogfname, startoffset, nleft)));\n> + \"at offset %u, LSN %X/%X, length %zu: %m\",\n> + xlogfname, startoffset,\n> + LSN_FORMAT_ARGS(lsn), nleft)));\n>\n> This does not always refer to an exact LSN of a record as we may be in\n> the middle of a write, so I would leave it as-is.\n>\n> Similarly the addition of wre_lsn would be confusing? The offset\n> looks kind of enough to me when referring to the middle of a page in\n> WALReadRaiseError().\n\nYes, I removed those changes. Even if someone sees an offset of a\nrecord within a WAL file elsewhere, they have the new utility function\n(0002) pg_walfile_offset_lsn().\n\nI'm attaching the v4 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 3 Dec 2022 09:07:38 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Sat, Dec 03, 2022 at 09:07:38AM +0530, Bharath Rupireddy wrote:\n> Yes, I removed those changes. 
Even if someone sees an offset of a\n> record within a WAL file elsewhere, they have the new utility function\n> (0002) pg_walfile_offset_lsn().\n> \n> I'm attaching the v4 patch set for further review.\n\n+ * Compute an LSN and timline ID given a WAL file name and decimal byte offset.\ns/timline/timeline/, exactly two times\n\n+ Datum values[2] = {0};\n+ bool isnull[2] = {0};\nI would not hardcode the number of attributes of the record returned.\n\nRegarding pg_walfile_offset_lsn(), I am not sure that this is the best\nmove we can do as it is possible to compile a LSN from 0/0 with just a\nsegment number, say:\nselect '0/0'::pg_lsn + :segno * setting::int + :offset\n from pg_settings where name = 'wal_segment_size';\n\n+ resultTupleDesc = CreateTemplateTupleDesc(2);\n+ TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, \"lsn\",\n+ PG_LSNOID, -1, 0);\n+ TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, \"timeline_id\",\n+ INT4OID, -1, 0);\nLet's use get_call_result_type() to get the TupleDesc and to avoid a\nduplication between pg_proc.dat and this code.\n\nHence I would tend to let XLogFromFileName do the job, while having a\nSQL function that is just a thin wrapper around it that returns the\nsegment TLI and its number, leaving the offset out of the equation as\nwell as this new XLogIdFromFileName().\n--\nMichael", "msg_date": "Mon, 5 Dec 2022 10:03:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Mon, Dec 5, 2022 at 6:34 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Regarding pg_walfile_offset_lsn(), I am not sure that this is the best\n> move we can do as it is possible to compile a LSN from 0/0 with just a\n> segment number, say:\n> select '0/0'::pg_lsn + :segno * setting::int + :offset\n> from pg_settings where name = 'wal_segment_size';\n\nNice.\n\n> + 
resultTupleDesc = CreateTemplateTupleDesc(2);\n> + TupleDescInitEntry(resultTupleDesc, (AttrNumber) 1, \"lsn\",\n> + PG_LSNOID, -1, 0);\n> + TupleDescInitEntry(resultTupleDesc, (AttrNumber) 2, \"timeline_id\",\n> + INT4OID, -1, 0);\n> Let's use get_call_result_type() to get the TupleDesc and to avoid a\n> duplication between pg_proc.dat and this code.\n>\n> Hence I would tend to let XLogFromFileName do the job, while having a\n> SQL function that is just a thin wrapper around it that returns the\n> segment TLI and its number, leaving the offset out of the equation as\n> well as this new XLogIdFromFileName().\n\nSo, a SQL function pg_dissect_walfile_name (or some other better name)\ngiven a WAL file name returns the tli and seg number. Then the\npg_walfile_offset_lsn can just be a SQL-defined function (in\nsystem_functions.sql) using this new function and pg_settings. If this\nunderstanding is correct, it looks good to me at this point.\n\nThat said, let's also hear from others.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 5 Dec 2022 08:48:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Mon, Dec 05, 2022 at 08:48:25AM +0530, Bharath Rupireddy wrote:\n> So, a SQL function pg_dissect_walfile_name (or some other better name)\n> given a WAL file name returns the tli and seg number. Then the\n> pg_walfile_offset_lsn can just be a SQL-defined function (in\n> system_functions.sql) using this new function and pg_settings. If this\n> understanding is correct, it looks good to me at this point.\n\nI would do without the SQL function that looks at pg_settings, FWIW.\n\n> That said, let's also hear from others.\n\nSure. 
Perhaps my set of suggestions will not get the majority,\nthough..\n--\nMichael", "msg_date": "Mon, 5 Dec 2022 13:13:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "At Mon, 5 Dec 2022 13:13:23 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Dec 05, 2022 at 08:48:25AM +0530, Bharath Rupireddy wrote:\n> > So, a SQL function pg_dissect_walfile_name (or some other better name)\n> > given a WAL file name returns the tli and seg number. Then the\n> > pg_walfile_offset_lsn can just be a SQL-defined function (in\n> > system_functions.sql) using this new function and pg_settings. If this\n> > understanding is correct, it looks good to me at this point.\n> \n> I would do without the SQL function that looks at pg_settings, FWIW.\n\nIf that function may be called at a high frequency, SQL-defined one is\nnot suitable, but I don't think this function is used that way. With\nthat premise, I would prefer SQL-defined as it is far simpler on its\nface.\n\nAt Mon, 5 Dec 2022 10:03:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hence I would tend to let XLogFromFileName do the job, while having a\n> SQL function that is just a thin wrapper around it that returns the\n> segment TLI and its number, leaving the offset out of the equation as\n> well as this new XLogIdFromFileName().\n\nI don't think it could be problematic that the SQL-callable function\nreturns a bogus result for a wrong WAL filename in the correct\nformat. Specifically, I think that the function may return (0/0,0) for\n\"000000000000000000000000\" since that behavior is completely\nharmless. 
If we don't check logid, XLogFromFileName fits instead.\n\n(If we assume that the file names are typed in letter-by-letter, I\n rather prefer to allow lower-case letters:p)\n\n> > That said, let's also hear from others.\n> \n> Sure. Perhaps my set of suggestions will not get the majority,\n> though..\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 06 Dec 2022 16:27:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 6, 2022 at 12:57 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 5 Dec 2022 13:13:23 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > On Mon, Dec 05, 2022 at 08:48:25AM +0530, Bharath Rupireddy wrote:\n> > > So, a SQL function pg_dissect_walfile_name (or some other better name)\n> > > given a WAL file name returns the tli and seg number. Then the\n> > > pg_walfile_offset_lsn can just be a SQL-defined function (in\n> > > system_functions.sql) using this new function and pg_settings. If this\n> > > understanding is correct, it looks good to me at this point.\n> >\n> > I would do without the SQL function that looks at pg_settings, FWIW.\n>\n> If that function may be called at a high frequency, SQL-defined one is\n> not suitable, but I don't think this function is used that way. 
With\n> that premise, I would prefer SQL-defined as it is far simpler on its\n> face.\n>\n> At Mon, 5 Dec 2022 10:03:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > Hence I would tend to let XLogFromFileName do the job, while having a\n> > SQL function that is just a thin wrapper around it that returns the\n> > segment TLI and its number, leaving the offset out of the equation as\n> > well as this new XLogIdFromFileName().\n>\n> I don't think it could be problematic that the SQL-callable function\n> returns a bogus result for a wrong WAL filename in the correct\n> format. Specifically, I think that the function may return (0/0,0) for\n> \"000000000000000000000000\" since that behavior is completely\n> harmless. If we don't check logid, XLogFromFileName fits instead.\n\nIf we were to provide correctness and input invalidations\n(specifically, the validations around 'seg', see below) of the WAL\nfile name typed in by the user, the function pg_walfile_offset_lsn()\nwins the race.\n\n+ XLogIdFromFileName(fname, &tli, &segno, &log, &seg, wal_segment_size);\n+\n+ if (seg >= XLogSegmentsPerXLogId(wal_segment_size) ||\n+ (log == 0 && seg == 0) ||\n+ segno == 0 ||\n+ tli == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid WAL file name \\\"%s\\\"\", fname)));\n\n+SELECT * FROM pg_walfile_offset_lsn('0000000100000000FFFFFFFF', 15);\n+ERROR: invalid WAL file name \"0000000100000000FFFFFFFF\"\n\nThat said, I think we can have a single function\npg_dissect_walfile_name(wal_file_name, offset optional) returning\nsegno (XLogSegNo - physical log file sequence number), tli, lsn (if\noffset is given). This way there is no need for another SQL-callable\nfunction using pg_settings. 
Thoughts?\n\n> (If we assume that the file names are typed in letter-by-letter, I\n> rather prefer to allow lower-case letters:p)\n\nIt's easily doable if we convert the entered WAL file name to\nuppercase using pg_toupper() and then pass it to IsXLogFileName().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 6 Dec 2022 16:46:45 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 6, 2022 at 4:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> That said, I think we can have a single function\n> pg_dissect_walfile_name(wal_file_name, offset optional) returning\n> segno (XLogSegNo - physical log file sequence number), tli, lsn (if\n> offset is given). This way there is no need for another SQL-callable\n> function using pg_settings. Thoughts?\n>\n> > (If we assume that the file names are typed in letter-by-letter, I\n> > rather prefer to allow lower-case letters:p)\n>\n> It's easily doable if we convert the entered WAL file name to\n> uppercase using pg_toupper() and then pass it to IsXLogFileName().\n\nOkay, here's the v5 patch that I could come up with. 
It basically adds\nfunctions for dissecting WAL file names and computing offset from lsn.\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 13 Dec 2022 21:32:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 06, 2022 at 04:27:50PM +0900, Kyotaro Horiguchi wrote:\n> At Mon, 5 Dec 2022 10:03:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> Hence I would tend to let XLogFromFileName do the job, while having a\n>> SQL function that is just a thin wrapper around it that returns the\n>> segment TLI and its number, leaving the offset out of the equation as\n>> well as this new XLogIdFromFileName().\n> \n> I don't think it could be problematic that the SQL-callable function\n> returns a bogus result for a wrong WAL filename in the correct\n> format. Specifically, I think that the function may return (0/0,0) for\n> \"000000000000000000000000\" since that behavior is completely\n> harmless. 
If we don't check logid, XLogFromFileName fits instead.\n\nYeah, I really don't think that it is a big deal either:\nXLogIdFromFileName() just translates what it receives from the\ncaller for the TLI and the segment number.\n\n> (If we assume that the file names are typed in letter-by-letter, I\n> rather prefer to allow lower-case letters:p)\n\nYep, makes sense to enforce a compatible WAL segment name if we can.\n--\nMichael", "msg_date": "Mon, 19 Dec 2022 16:28:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 13, 2022 at 09:32:19PM +0530, Bharath Rupireddy wrote:\n> Okay, here's the v5 patch that I could come up with. It basically adds\n> functions for dissecting WAL file names and computing offset from lsn.\n> Thoughts?\n\nI had a second look at that, and I still have mixed feelings about the\naddition of the SQL function, no real objection about\npg_dissect_walfile_name().\n\nI don't really think that we need a specific handling with a new\nmacro from xlog_internal.h that does its own parsing of the segment\nnumber while XLogFromFileName() can do that based on the user input,\nso I have simplified that.\n\nA second thing is the TLI that had better be returned as int8 and not\nint4 so as we don't have a negative number for a TLI higher than 2B in\na WAL segment name.\n--\nMichael", "msg_date": "Mon, 19 Dec 2022 17:07:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Mon, Dec 19, 2022 at 1:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 13, 2022 at 09:32:19PM +0530, Bharath Rupireddy wrote:\n> > Okay, here's the v5 patch that I could come up with. 
It basically adds\n> > functions for dissecting WAL file names and computing offset from lsn.\n> > Thoughts?\n>\n> I had a second look at that, and I still have mixed feelings about the\n> addition of the SQL function, no real objection about\n> pg_dissect_walfile_name().\n>\n> I don't really think that we need a specific handling with a new\n> macro from xlog_internal.h that does its own parsing of the segment\n> number while XLogFromFileName() can do that based on the user input,\n> so I have simplified that.\n>\n> A second thing is the TLI that had better be returned as int8 and not\n> int4 so as we don't have a negative number for a TLI higher than 2B in\n> a WAL segment name.\n\nThanks. The v6 patch LGTM.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 17:22:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Mon, Dec 19, 2022 at 5:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Dec 19, 2022 at 1:37 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Dec 13, 2022 at 09:32:19PM +0530, Bharath Rupireddy wrote:\n> > > Okay, here's the v5 patch that I could come up with. 
It basically adds\n> > > functions for dissecting WAL file names and computing offset from lsn.\n> > > Thoughts?\n> >\n> > I had a second look at that, and I still have mixed feelings about the\n> > addition of the SQL function, no real objection about\n> > pg_dissect_walfile_name().\n> >\n> > I don't really think that we need a specific handling with a new\n> > macro from xlog_internal.h that does its own parsing of the segment\n> > number while XLogFromFileName() can do that based on the user input,\n> > so I have simplified that.\n> >\n> > A second thing is the TLI that had better be returned as int8 and not\n> > int4 so as we don't have a negative number for a TLI higher than 2B in\n> > a WAL segment name.\n>\n> Thanks. The v6 patch LGTM.\n\nA nitpick - can we also specify a use case for the function\npg_dissect_walfile_name(), that is, computing LSN from offset and WAL\nfile name, something like [1]?\n\n[1]\ndiff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml\nindex 2f05b06f14..c36fcb83c8 100644\n--- a/doc/src/sgml/func.sgml\n+++ b/doc/src/sgml/func.sgml\n@@ -26110,7 +26110,17 @@ LOG: Grand total: 1651920 bytes in 201\nblocks; 622360 free (88 chunks); 1029560\n </para>\n <para>\n Extract the file sequence number and timeline ID from a WAL file\n- name.\n+ name. 
This function is useful to compute LSN from a given offset\n+ and WAL file name, for example:\n+<screen>\n+postgres=# \\set file_name '000000010000000100C000AB'\n+postgres=# \\set offset 256\n+postgres=# SELECT '0/0'::pg_lsn + pd.segno * ps.setting::int +\n:offset AS lsn FROM pg_dissect_walfile_name(:'file_name') pd,\npg_show_all_settings() ps WHERE ps.name = 'wal_segment_size';\n+ lsn\n+---------------\n+ C001/AB000100\n+(1 row)\n+</screen>\n </para></entry>\n </row>\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 19 Dec 2022 17:51:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Mon, Dec 19, 2022 at 05:51:19PM +0530, Bharath Rupireddy wrote:\n> A nitpick - can we also specify a use case for the function\n> pg_dissect_walfile_name(), that is, computing LSN from offset and WAL\n> file name, something like [1]?\n\nYeah, my mind was considering as well yesterday the addition of a note\nin the docs about something among these lines, so fine by me.\n--\nMichael", "msg_date": "Tue, 20 Dec 2022 09:01:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 20, 2022 at 09:01:02AM +0900, Michael Paquier wrote:\n> Yeah, my mind was considering as well yesterday the addition of a note\n> in the docs about something among these lines, so fine by me.\n\nAnd applied that, after tweaking a few tiny things on a last lookup\nwith a catversion bump. Note that the example has been moved at the\nbottom of the table for these functions, which is more consistent with\nthe surroundings. 
\n--\nMichael", "msg_date": "Tue, 20 Dec 2022 13:40:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 20, 2022 at 5:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 20, 2022 at 09:01:02AM +0900, Michael Paquier wrote:\n> > Yeah, my mind was considering as well yesterday the addition of a note\n> > in the docs about something among these lines, so fine by me.\n>\n> And applied that, after tweaking a few tiny things on a last lookup\n> with a catversion bump. Note that the example has been moved at the\n> bottom of the table for these functions, which is more consistent with\n> the surroundings.\n>\n>\nHi!\n\nCaught this thread late. To me, pg_dissect_walfile_name() is a really\nstrange name for a function. Grepping our I code I see the term dissect s\nused somewhere inside the regex code and exactly zero instances elsewhere.\nWhich is why I definitely didn't recognize the term...\n\nWouldn't something like pg_split_walfile_name() be a lot more consistent\nwith the rest of our names?\n\n//Magnus", "msg_date": "Tue, 20 Dec 2022 08:57:41 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 20, 2022 at 1:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, Dec 20, 2022 at 5:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Tue, Dec 20, 2022 at 09:01:02AM +0900, Michael Paquier wrote:\n>> > Yeah, my mind was considering as well yesterday the addition of a note\n>> > in the docs about something among these lines, so fine by me.\n>>\n>> And applied that, after tweaking a few tiny things on a last lookup\n>> with a catversion bump. Note that the example has been moved at the\n>> bottom of the table for these functions, which is more consistent with\n>> the surroundings.\n>>\n>\n> Hi!\n>\n> Caught this thread late. To me, pg_dissect_walfile_name() is a really strange name for a function. Grepping our I code I see the term dissect s used somewhere inside the regex code and exactly zero instances elsewhere. Which is why I definitely didn't recognize the term...\n>\n> Wouldn't something like pg_split_walfile_name() be a lot more consistent with the rest of our names?\n\nHm. 
FWIW, here's the patch.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Dec 2022 18:04:40 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "2022年12月20日(火) 21:35 Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:\n>\n> On Tue, Dec 20, 2022 at 1:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >\n> > On Tue, Dec 20, 2022 at 5:40 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Tue, Dec 20, 2022 at 09:01:02AM +0900, Michael Paquier wrote:\n> >> > Yeah, my mind was considering as well yesterday the addition of a note\n> >> > in the docs about something among these lines, so fine by me.\n> >>\n> >> And applied that, after tweaking a few tiny things on a last lookup\n> >> with a catversion bump. Note that the example has been moved at the\n> >> bottom of the table for these functions, which is more consistent with\n> >> the surroundings.\n> >>\n> >\n> > Hi!\n> >\n> > Caught this thread late. To me, pg_dissect_walfile_name() is a really strange name for a function. Grepping our I code I see the term dissect s used somewhere inside the regex code and exactly zero instances elsewhere. Which is why I definitely didn't recognize the term...\n\nLate to the party too, but I did wonder about the name when I saw it.\n\n> > Wouldn't something like pg_split_walfile_name() be a lot more consistent with the rest of our names?\n>\n> Hm. 
FWIW, here's the patch.\n\nHmm, \"pg_split_walfile_name()\" sounds like it would return three 8\ncharacter strings.\n\nMaybe something like \"pg_walfile_name_elements()\" ?\n\nRegards\n\nIan Barwick\n\n\n", "msg_date": "Wed, 21 Dec 2022 09:06:15 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Tue, Dec 20, 2022 at 06:04:40PM +0530, Bharath Rupireddy wrote:\n> On Tue, Dec 20, 2022 at 1:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n>> Caught this thread late. To me, pg_dissect_walfile_name() is a\n>> really strange name for a function. Grepping our I code I see the\n>> term dissect s used somewhere inside the regex code and exactly\n>> zero instances elsewhere. Which is why I definitely didn't\n>> recognize the term... \n>>\n>> Wouldn't something like pg_split_walfile_name() be a lot more\n>> consistent with the rest of our names? \n\nFine by me to change that if there is little support for the current\nnaming, though the current one does not sound that bad to me either.\n\n> Hm. FWIW, here's the patch.\n\n\"split\" is used a lot for the picksplit functions, but not in any of\nthe existing functions as a name. Some extra options: parse, read,\nextract, calculate, deduce, get. \"parse\" would be something I would\nbe OK with.\n--\nMichael", "msg_date": "Wed, 21 Dec 2022 09:09:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Wed, Dec 21, 2022 at 5:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 20, 2022 at 06:04:40PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Dec 20, 2022 at 1:27 PM Magnus Hagander <magnus@hagander.net> wrote:\n> >> Caught this thread late. 
To me, pg_dissect_walfile_name() is a\n> >> really strange name for a function. Grepping our I code I see the\n> >> term dissect s used somewhere inside the regex code and exactly\n> >> zero instances elsewhere. Which is why I definitely didn't\n> >> recognize the term...\n> >>\n> >> Wouldn't something like pg_split_walfile_name() be a lot more\n> >> consistent with the rest of our names?\n>\n> Fine by me to change that if there is little support for the current\n> naming, though the current one does not sound that bad to me either.\n>\n> > Hm. FWIW, here's the patch.\n>\n> \"split\" is used a lot for the picksplit functions, but not in any of\n> the existing functions as a name. Some extra options: parse, read,\n> extract, calculate, deduce, get. \"parse\" would be something I would\n> be OK with.\n\n\"dissect\", \"split\" and \"parse\" - I'm okay with either of these.\n\nRead somewhere - a saying that goes this way \"the hardest part of\ncoding is to name variables and functions\" :).\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 21 Dec 2022 11:09:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Wed, Dec 21, 2022 at 1:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 20, 2022 at 06:04:40PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Dec 20, 2022 at 1:27 PM Magnus Hagander <magnus@hagander.net>\n> wrote:\n> >> Caught this thread late. To me, pg_dissect_walfile_name() is a\n> >> really strange name for a function. Grepping our I code I see the\n> >> term dissect s used somewhere inside the regex code and exactly\n> >> zero instances elsewhere. 
Which is why I definitely didn't\n> >> recognize the term...\n> >>\n> >> Wouldn't something like pg_split_walfile_name() be a lot more\n> >> consistent with the rest of our names?\n>\n> Fine by me to change that if there is little support for the current\n> naming, though the current one does not sound that bad to me either.\n>\n> > Hm. FWIW, here's the patch.\n>\n> \"split\" is used a lot for the picksplit functions, but not in any of\n> the existing functions as a name. Some extra options: parse, read,\n> extract, calculate, deduce, get. \"parse\" would be something I would\n> be OK with.\n>\n\n\nNot sure what you mean? We certainly have a lot of functions called split\nthat are not the picksplit ones. split_part(). regexp_split_to_array(),\nregexp_split_to_table()... And ther'es things like tuiple_data_split() in\npageinspect.\n\nThere are many other examples outside of postgres as well, e.g. python has\na split() of pathnames, \"almost every language\" has a split() on strings\netc. I don't think I've ever seen dissect in a place like that either\n(though Im sure it exists somewhere, it's hardly common)\n\nBasically, we take one thing and turn it into 3. That very naturally rings\nwith \"split\" to me.\n\nParse might work as well, certainly better than dissect. I'd still prefer\nsplit though.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 21 Dec 2022 22:22:02 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Wed, Dec 21, 2022 at 10:22:02PM +0100, Magnus Hagander wrote:\n> Basically, we take one thing and turn it into 3. That very naturally rings\n> with \"split\" to me.\n> \n> Parse might work as well, certainly better than dissect. I'd still prefer\n> split though.\n\nHonestly, I don't have any counter-arguments, so I am fine to switch\nthe name as you are suggesting. 
And pg_split_walfile_name() it is?\n--\nMichael", "msg_date": "Thu, 22 Dec 2022 11:27:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Thu, Dec 22, 2022 at 7:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 21, 2022 at 10:22:02PM +0100, Magnus Hagander wrote:\n> > Basically, we take one thing and turn it into 3. That very naturally rings\n> > with \"split\" to me.\n> >\n> > Parse might work as well, certainly better than dissect. I'd still prefer\n> > split though.\n>\n> Honestly, I don't have any counter-arguments, so I am fine to switch\n> the name as you are suggesting. And pg_split_walfile_name() it is?\n\n+1. FWIW, a simple patch is here\nhttps://www.postgresql.org/message-id/CALj2ACXdZ7WGRD-_jPPeZugvWLN%2Bgxo3QtV-eZPRicUwjesM%3Dg%40mail.gmail.com.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 10:09:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "At Thu, 22 Dec 2022 10:09:23 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Thu, Dec 22, 2022 at 7:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Dec 21, 2022 at 10:22:02PM +0100, Magnus Hagander wrote:\n> > > Basically, we take one thing and turn it into 3. That very naturally rings\n> > > with \"split\" to me.\n> > >\n> > > Parse might work as well, certainly better than dissect. I'd still prefer\n> > > split though.\n> >\n> > Honestly, I don't have any counter-arguments, so I am fine to switch\n> > the name as you are suggesting. 
And pg_split_walfile_name() it is?\n> \n> +1. FWIW, a simple patch is here\n> https://www.postgresql.org/message-id/CALj2ACXdZ7WGRD-_jPPeZugvWLN%2Bgxo3QtV-eZPRicUwjesM%3Dg%40mail.gmail.com.\n\nBy the way the function is documented as the follows.\n\n> Extracts the file sequence number and timeline ID from a WAL file name.\n\nI didn't find the definition for the workd \"file sequence number\" in\nthe doc. Instead I find \"segment number\" (a bit doubtful, though..).\n\nIn the first place \"file sequence number\" and \"segno\" can hardly be\nassociated by appearance by readers, I think. (Yeah, we can identify\nthat since the another parameter is identifiable alone.) Why don't we\nspell out the parameter simply as \"segment number\"?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:19:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Thu, Dec 22, 2022 at 05:19:24PM +0900, Kyotaro Horiguchi wrote:\n> In the first place \"file sequence number\" and \"segno\" can hardly be\n> associated by appearance by readers, I think. (Yeah, we can identify\n> that since the another parameter is identifiable alone.) 
Why don't we\n> spell out the parameter simply as \"segment number\"?\n\nAs in using \"sequence number\" removing \"file\" from the docs and\nchanging the OUT parameter name to segment_number rather than segno?\nFine by me.\n--\nMichael", "msg_date": "Thu, 22 Dec 2022 20:27:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Thu, Dec 22, 2022 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Dec 22, 2022 at 05:19:24PM +0900, Kyotaro Horiguchi wrote:\n> > In the first place \"file sequence number\" and \"segno\" can hardly be\n> > associated by appearance by readers, I think. (Yeah, we can identify\n> > that since the another parameter is identifiable alone.) Why don't we\n> > spell out the parameter simply as \"segment number\"?\n>\n> As in using \"sequence number\" removing \"file\" from the docs and\n> changing the OUT parameter name to segment_number rather than segno?\n> Fine by me.\n\n+1.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Dec 2022 17:03:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" }, { "msg_contents": "On Thu, Dec 22, 2022 at 05:03:35PM +0530, Bharath Rupireddy wrote:\n> On Thu, Dec 22, 2022 at 4:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> As in using \"sequence number\" removing \"file\" from the docs and\n>> changing the OUT parameter name to segment_number rather than segno?\n>> Fine by me.\n> \n> +1.\n\nOkay, done this way.\n--\nMichael", "msg_date": "Fri, 23 Dec 2022 10:06:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", 
"msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL\n file read/write/validate header failures" }, { "msg_contents": "On Fri, Dec 23, 2022 at 2:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Dec 22, 2022 at 05:03:35PM +0530, Bharath Rupireddy wrote:\n> > On Thu, Dec 22, 2022 at 4:57 PM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >> As in using \"sequence number\" removing \"file\" from the docs and\n> >> changing the OUT parameter name to segment_number rather than segno?\n> >> Fine by me.\n> >\n> > +1.\n>\n> Okay, done this way.\n>\n>\nThanks!\n\n//Magnus", "msg_date": "Fri, 23 Dec 2022 18:07:04 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Add LSN along with offset to error messages reported for WAL file\n read/write/validate header failures" } ]
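
The arithmetic settled in the thread above — split the 24-hex-digit WAL file name into a timeline ID and a segment number, then compute an LSN as segment_number * wal_segment_size + offset — can be sketched outside the server. The following is an illustrative Python translation, not PostgreSQL source code: the helper names and the 16MB default segment size are assumptions made here, and like XLogFromFileName() it does not try to reject every bogus-but-well-formed name. Lower-case input is accepted, as wished for in the thread.

```python
# Illustrative sketch of the WAL file name arithmetic discussed above;
# not PostgreSQL's implementation. The 16MB default matches the stock
# wal_segment_size but is an assumption of this sketch.
WAL_SEG_SIZE_DEFAULT = 16 * 1024 * 1024


def split_walfile_name(fname, wal_segment_size=WAL_SEG_SIZE_DEFAULT):
    """Return (timeline_id, segment_number) for a 24-hex-digit WAL file name.

    The name is TTTTTTTTXXXXXXXXYYYYYYYY: 8 hex digits each for the
    timeline ID, the high ("log") part, and the low ("seg") part.
    """
    if len(fname) != 24 or any(c not in "0123456789abcdefABCDEF" for c in fname):
        raise ValueError(f"invalid WAL file name: {fname!r}")
    tli = int(fname[0:8], 16)
    log = int(fname[8:16], 16)
    seg = int(fname[16:24], 16)
    # Number of segments per "xlogid" (4GB of WAL) at this segment size.
    segments_per_xlogid = 0x100000000 // wal_segment_size
    return tli, log * segments_per_xlogid + seg


def walfile_offset_to_lsn(fname, offset, wal_segment_size=WAL_SEG_SIZE_DEFAULT):
    """Compute an LSN in the textual X/X form from a WAL file name and offset."""
    _tli, segno = split_walfile_name(fname, wal_segment_size)
    lsn = segno * wal_segment_size + offset
    # Format like PostgreSQL's %X/%X pg_lsn output.
    return f"{lsn >> 32:X}/{lsn & 0xFFFFFFFF:X}"
```

With the documentation example quoted earlier in the thread, walfile_offset_to_lsn('000000010000000100C000AB', 256) reproduces the documented result C001/AB000100.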
[ { "msg_contents": "\nAll of a sudden I'm getting repo not found for\nUbuntu 16.04 LTS on the APT repo. Why?\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Mon, 19 Sep 2022 09:46:05 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Ubuntu 16.04: Xenial: Why was it removed from the apt repo?" }, { "msg_contents": "Hello!\n\nBecause it has been removed and moved to the archives, as per the warning\nfrom early July.\n\nSee\nhttps://www.postgresql.org/message-id/flat/YsV8fmomNNC%2BGpIR%40msg.credativ.de\n\n//Magnus\n\n\nOn Mon, Sep 19, 2022 at 4:46 PM Larry Rosenman <ler@lerctr.org> wrote:\n\n>\n> All of a sudden I'm getting repo not found for\n> Ubuntu 16.04 LTS on the APT repo. Why?\n>\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 214-642-9640 E-Mail: ler@lerctr.org\n> US Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n>\n>\n\nHello!Because it has been removed and moved to the archives, as per the warning from early July.See https://www.postgresql.org/message-id/flat/YsV8fmomNNC%2BGpIR%40msg.credativ.de//MagnusOn Mon, Sep 19, 2022 at 4:46 PM Larry Rosenman <ler@lerctr.org> wrote:\nAll of a sudden I'm getting repo not found for\nUbuntu 16.04 LTS on the APT repo.  Why?\n\n-- \nLarry Rosenman                     http://www.lerctr.org/~ler\nPhone: +1 214-642-9640                 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106", "msg_date": "Mon, 19 Sep 2022 16:50:36 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Ubuntu 16.04: Xenial: Why was it removed from the apt repo?" 
}, { "msg_contents": "Hi\nUbuntu 16.04 is EOL from April 2021, over a year ago.\n\nhttps://wiki.postgresql.org/wiki/Apt/FAQ#Where_are_older_versions_of_the_packages.3F\n\nregards, Sergei\n\n\n", "msg_date": "Mon, 19 Sep 2022 17:51:35 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re:Ubuntu 16.04: Xenial: Why was it removed from the apt repo?" } ]
[ { "msg_contents": "Hi All,\n\nCurrently, we have pg_current_wal_insert_lsn and pg_walfile_name sql\nfunctions which gives us information about the next wal insert\nlocation and the WAL file that the next wal insert location belongs\nto. Can we have a binary version of these sql functions? It would be\nlike any other binaries we have for e.g. pg_waldump to which we can\npass the location of the pg_wal directory. This binary would scan\nthrough the directory to return the next wal insert location and the\nwal file the next wal insert pointer belongs to.\n\nThe binary version of these sql functions can be used when the server\nis offline. This can help us to know the overall WAL data that needs\nto be replayed when the server is in recovery. In the control file we\ndo have the redo pointer. Knowing the end pointer would definitely be\nhelpful.\n\nIf you are ok then I will prepare a patch for it and share it. Please\nlet me know your thoughts/comments. thank you.!\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Mon, 19 Sep 2022 20:19:34 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Mon, Sep 19, 2022 at 8:19 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> Currently, we have pg_current_wal_insert_lsn and pg_walfile_name sql\n> functions which gives us information about the next wal insert\n> location and the WAL file that the next wal insert location belongs\n> to. Can we have a binary version of these sql functions?\n\n+1 for the idea in general.\n\nAs said, pg_waldump seems to be the right candidate. I think we want\nthe lsn of the last WAL record and its info and the WAL file name\ngiven an input data directory or just the pg_wal directory or any\ndirectory where WAL files are located. 
For instance, one can use this\non an archive location containing archived WAL files or on a node\nwhere pg_receivewal is receiving WAL files. Am I missing any other\nuse-cases?\n\npg_waldump currently can't understand compressed and partial files. I\nthink that we need to fix this as well.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:13:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Tue, Sep 20, 2022 at 5:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Sep 19, 2022 at 8:19 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Hi All,\n> >\n> > Currently, we have pg_current_wal_insert_lsn and pg_walfile_name sql\n> > functions which gives us information about the next wal insert\n> > location and the WAL file that the next wal insert location belongs\n> > to. Can we have a binary version of these sql functions?\n>\n> +1 for the idea in general.\n>\n> As said, pg_waldump seems to be the right candidate. I think we want\n> the lsn of the last WAL record and its info and the WAL file name\n> given an input data directory or just the pg_wal directory or any\n> directory where WAL files are located. For instance, one can use this\n> on an archive location containing archived WAL files or on a node\n> where pg_receivewal is receiving WAL files. 
Am I missing any other\n> use-cases?\n>\n\nYeah, we can either add this functionality to pg_waldump or maybe add\na new binary itself that would return this information.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Wed, 21 Sep 2022 21:53:06 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Wed, Sep 21, 2022 at 9:53 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Yeah, we can either add this functionality to pg_waldump or maybe add\n> a new binary itself that would return this information.\n\nIMV, a separate tool isn't the way, since pg_waldump already reads WAL\nfiles and decodes WAL records, what's proposed here can be an\nadditional functionality of pg_waldump.\n\nIt will be great if an initial patch is posted here.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 22 Sep 2022 07:40:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Thu, Sep 22, 2022 at 7:41 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Sep 21, 2022 at 9:53 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > Yeah, we can either add this functionality to pg_waldump or maybe add\n> > a new binary itself that would return this information.\n>\n> IMV, a separate tool isn't the way, since pg_waldump already reads WAL\n> files and decodes WAL records, what's proposed here can be an\n> additional functionality of pg_waldump.\n>\n> It will be great if an initial patch is posted here.\n>\n\nPFA that enhances pg_waldump to show the latest LSN and the\ncorresponding WAL file when the -l or --lastLSN option is 
passed an\nargument to pg_waldump. Below is an example:\n\nashu@92893de650ed:~/pgsql$ pg_waldump -l -D ./data-dir\nLatest LSN: 0/148A45F8\nLatest WAL filename: 000000010000000000000014\n\nHow has it been coded?\n\nWhen the user passes the '-l' command line option along with the data\ndirectory path to pg_waldump, it reads the control file from the data\ndirectory. From the control file, it gets information like redo\npointer and current timeline id. The redo pointer is considered to be\nthe start pointer from where the pg_waldump starts reading wal data\nuntil end-of-wal to find the last LSN. For details please check the\nattached patch.\n\nPlease note that for compressed and .partial wal files this doesn't work.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Thu, 22 Sep 2022 22:24:49 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Thu, Sep 22, 2022 at 10:25 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> PFA that enhances pg_waldump to show the latest LSN and the\n> corresponding WAL file when the -l or --lastLSN option is passed an\n> argument to pg_waldump. Below is an example:\n\nThanks for the patch. I have some quick thoughts about it.\n\n> When the user passes the '-l' command line option along with the data\n> directory path to pg_waldump, it reads the control file from the data\n> directory.\n\nI don't think we need a new option for data directory -D. 
pg_waldump's\noption 'p' can be used, please see the comments around\nidentify_target_directory().\n\n> From the control file, it gets information like redo\n> pointer and current timeline id.\n\nIs there any reason for not using get_control_file() from\nsrc/common/controldata_utils.c, but defining the exact same function\nin pg_waldump.c?\n\n> The redo pointer is considered to be\n> the start pointer from where the pg_waldump starts reading wal data\n> until end-of-wal to find the last LSN. For details please check the\n> attached patch.\n\nMaking it dependent on the controlfile limits the usability of this\nfeature. Imagine, using this feature on an archive location or\npg_receivewal target directory where there are WAL files but no\ncontrolfile. I think we can choose the appropriate combinations of\nexisting pg_waldump options, for instance, let users enter the start\nWAL segment via startseg and/or start LSN via --start and the new\noption for end WAL segment and end LSN.\n\n> Please note that for compressed and .partial wal files this doesn't work.\n\nLooking forward to the above capability because it expands the\nusability of this feature.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 23 Sep 2022 06:05:36 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Fri, Sep 23, 2022 at 6:05 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Sep 22, 2022 at 10:25 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >\n> > PFA that enhances pg_waldump to show the latest LSN and the\n> > corresponding WAL file when the -l or --lastLSN option is passed an\n> > argument to pg_waldump. Below is an example:\n>\n> Thanks for the patch. 
I have some quick thoughts about it.\n>\n> > When the user passes the '-l' command line option along with the data\n> > directory path to pg_waldump, it reads the control file from the data\n> > directory.\n>\n> I don't think we need a new option for data directory -D. pg_waldump's\n> option 'p' can be used, please see the comments around\n> identify_target_directory().\n>\n\n-p is the path to the WAL directory. It doesn't necessarily have to be\na data directory, however the user can specify the data directory path\nhere as well using which the path to the WAL directory can be\nrecognized, but as I said it doesn't mean -p will always represent the\ndata directory.\n\n> > From the control file, it gets information like redo\n> > pointer and current timeline id.\n>\n> Is there any reason for not using get_control_file() from\n> src/common/controldata_utils.c, but defining the exact same function\n> in pg_waldump.c?\n>\n\nWill give it a thought on it later. If possible, will try to reuse it.\n\n> > The redo pointer is considered to be\n> > the start pointer from where the pg_waldump starts reading wal data\n> > until end-of-wal to find the last LSN. For details please check the\n> > attached patch.\n>\n> Making it dependent on the controlfile limits the usability of this\n> feature. Imagine, using this feature on an archive location or\n> pg_receivewal target directory where there are WAL files but no\n> controlfile. I think we can choose the appropriate combinations of\n> existing pg_waldump options, for instance, let users enter the start\n> WAL segment via startseg and/or start LSN via --start and the new\n> option for end WAL segment and end LSN.\n>\n\nI have written this patch assuming that the end user is not aware of\nany LSN or any other WAL data and wants to know the last LSN. So all\nhe can do is take the help of the control data to find the redo LSN\nand use that as a reference point (start pointer) to find the last\nLSN. 
And whatever is the WAL directory (be it archive location or wall\ncollected via pg_receivewal or pg_wal directory), we will consider the\nredo pointer as the start pointer. Now, it's possible that the WAL\ncorresponding to the start pointer is not at all available in the WAL\ndirectory like archive location or pg_receivewal directory in which\nthis cannot be used, but this is very unlikely.\n\n> > Please note that for compressed and .partial wal files this doesn't work.\n>\n> Looking forward to the above capability because it expands the\n> usability of this feature.\n>\n\nThis is a different task altogether. We will probably need to work on\nit separately.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 23 Sep 2022 09:28:10 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "PFA v2 patch.\n\nChanges in the v2 patch:\n\n- Reuse the existing get_controlfile function in\nsrc/common/controldata_utils.c instead of adding a new one.\n\n- Set env variable PGDATA with the data directory specified by the user.\n\n--\nWith Regards,\nAshutosh Sharma.\n\n\n", "msg_date": "Fri, 23 Sep 2022 12:24:11 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" }, { "msg_contents": "On Fri, Sep 23, 2022 at 12:24 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> PFA v2 patch.\n>\n> Changes in the v2 patch:\n>\n> - Reuse the existing get_controlfile function in\n> src/common/controldata_utils.c instead of adding a new one.\n>\n> - Set env variable PGDATA with the data directory specified by the user.\n>\n\nForgot to attach the patch with above changes. 
Here it is.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Fri, 23 Sep 2022 12:25:59 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": true, "msg_subject": "Re: binary version of pg_current_wal_insert_lsn and pg_walfile_name\n functions" } ]
[ { "msg_contents": "\nHi hacker,\n\nAs $subject detailed, the tab-complete cannot work such as:\n\n CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres port=6543' \\t\n\nIt seems that the get_previous_words() could not parse the single quote.\n\nOTOH, it works for CREATE SUBSCRIPTION sub CONNECTION xx \\t, should we fix it?\n\n--\nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Tue, 20 Sep 2022 00:19:58 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Tab complete for CREATE SUBSCRIPTION ... CONECTION does not work" }, { "msg_contents": "On Tue, 20 Sep 2022 at 00:19, Japin Li <japinli@hotmail.com> wrote:\n> Hi hacker,\n>\n> As $subject detailed, the tab-complete cannot work such as:\n>\n> CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres port=6543' \\t\n>\n> It seems that the get_previous_words() could not parse the single quote.\n>\n> OTOH, it works for CREATE SUBSCRIPTION sub CONNECTION xx \\t, should we fix it?\n\nAttach fixed this problem. Any thoughts?\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Thu, 22 Sep 2022 23:08:53 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Tab complete for CREATE SUBSCRIPTION ... 
CONECTION does not work" }, { "msg_contents": "On Tue, Dec 26, 2023 at 3:02 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> Hi hacker,\n>\n> As $subject detailed, the tab-complete cannot work such as:\n>\n> CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres port=6543' \\t\n>\n> It seems that the get_previous_words() could not parse the single quote.\n>\n> OTOH, it works for CREATE SUBSCRIPTION sub CONNECTION xx \\t, should we fix it?\n>\nI reviewed the Patch and it looks fine to me.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Tue, 26 Dec 2023 15:40:36 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tab complete for CREATE SUBSCRIPTION ... CONECTION does not work" }, { "msg_contents": "\nOn Tue, 26 Dec 2023 at 18:10, Shubham Khanna <khannashubham1197@gmail.com> wrote:\n> On Tue, Dec 26, 2023 at 3:02 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> Hi hacker,\n>>\n>> As $subject detailed, the tab-complete cannot work such as:\n>>\n>> CREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres port=6543' \\t\n>>\n>> It seems that the get_previous_words() could not parse the single quote.\n>>\n>> OTOH, it works for CREATE SUBSCRIPTION sub CONNECTION xx \\t, should we fix it?\n>>\n> I reviewed the Patch and it looks fine to me.\n>\nThanks for the review.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n", "msg_date": "Wed, 27 Dec 2023 14:31:31 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Tab complete for CREATE SUBSCRIPTION ... CONECTION does not work" } ]
[ { "msg_contents": "Hello,\n\nThe function `pg_create_logical_replication_slot()` is documented to have\na `two_phase` argument(note the underscore), but the function instead\nrequires `twophase`.\n\n```\n\\df pg_catalog.pg_create_logical_replication_slot\nList of functions\n-[ RECORD 1\n]-------+---------------------------------------------------------------------------------------------------------------------------------\n\nSchema | pg_catalog\nName | pg_create_logical_replication_slot\nResult data type | record\nArgument data types | slot_name name, plugin name, temporary boolean\nDEFAULT false, twophase boolean DEFAULT false, OUT slot_name name, OUT lsn\npg_lsn\nType | func\n```\n\nThis was introduced in commit 19890a06.\n\nIMHO we should use the documented argument name `two_phase` and change the\nfunction to accept it.\n\nWhat do you think?\n\nPlease, check the attached patch.\n\n\nCheers,\nFlorin\n--\n*www.enterprisedb.com <http://www.enterprisedb.com/>*", "msg_date": "Mon, 19 Sep 2022 19:02:16 +0200", "msg_from": "Florin Irion <irionr@gmail.com>", "msg_from_op": true, "msg_subject": "pg_create_logical_replication_slot argument incongruency" }, { "msg_contents": "On Mon, Sep 19, 2022 at 07:02:16PM +0200, Florin Irion wrote:\n> This was introduced in commit 19890a06.\n> \n> IMHO we should use the documented argument name `two_phase` and change the\n> function to accept it.\n> \n> What do you think?\n\n19890a0 is included in REL_14_STABLE, and changing an argument name is\nnot acceptable in a stable branch as it would imply a catversion\nbump. 
Let's change the docs so that we document the parameter as\n\"twophase\", instead.\n--\nMichael", "msg_date": "Tue, 20 Sep 2022 10:33:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_create_logical_replication_slot argument incongruency" }, { "msg_contents": "On 20/09/22 03:33, Michael Paquier wrote:\n> On Mon, Sep 19, 2022 at 07:02:16PM +0200, Florin Irion wrote:\n>> This was introduced in commit 19890a06.\n>>\n>> IMHO we should use the documented argument name `two_phase` and change the\n>> function to accept it.\n>>\n>> What do you think?\n> \n> 19890a0 is included in REL_14_STABLE, and changing an argument name is\n> not acceptable in a stable branch as it would imply a catversion\n> bump. Let's change the docs so that we document the parameter as\n> \"twophase\", instead.\n> --\n> Michael\n\nI understand. \n\nOK, patch only for the docs attached.\n\nCheers, \nFlorin\nwww.enterprisedb.com", "msg_date": "Tue, 20 Sep 2022 08:41:56 +0200", "msg_from": "Florin Irion <irionr@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_create_logical_replication_slot argument incongruency" }, { "msg_contents": "On Tue, Sep 20, 2022 at 08:41:56AM +0200, Florin Irion wrote:\n> OK, patch only for the docs attached.\n\nThanks, applied.\n--\nMichael", "msg_date": "Tue, 20 Sep 2022 19:29:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_create_logical_replication_slot argument incongruency" }, { "msg_contents": "Thank you!\n\nOn Tue, Sep 20, 2022 at 12:29 Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 20, 2022 at 08:41:56AM +0200, Florin Irion wrote:\n> > OK, patch only for the docs attached.\n>\n> Thanks, applied.\n> --\n> Michael\n>
", "msg_date": "Tue, 20 Sep 2022 13:33:45 +0200", "msg_from": "Florin Irion <irionr@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_create_logical_replication_slot argument incongruency" } ]
[ { "msg_contents": "While working on the -Wdeprecated-non-prototype fixups discussed\nnearby, I saw that clang 15.0 produces a few other new warnings\n(which are also visible in the buildfarm). Pursuant to our\nusual policy that we should suppress warnings on compilers likely\nto be used for development, here's a patch to silence them.\n\nThere are three groups of these:\n\n* With %pure-parser, Bison makes the \"yynerrs\" variable local\ninstead of static, and then if you don't use it clang notices\nthat it's set but never read. There doesn't seem to be a way\nto persuade Bison not to emit the variable at all, so here I've\njust added \"(void) yynerrs;\" to the topmost production of each\naffected grammar. If anyone has a nicer idea, let's hear it.\n\n* xlog.c's AdvanceXLInsertBuffer has a local variable \"npages\"\nthat is only read in the \"#ifdef WAL_DEBUG\" stanza at the\nbottom. Here I've done the rather ugly and brute-force thing\nof wrapping all the variable's references in \"#ifdef WAL_DEBUG\".\n(I tried marking it PG_USED_FOR_ASSERTS_ONLY, but oddly that\ndid not silence the warning.) I kind of wonder how useful this\nfunction's WAL_DEBUG output is --- maybe just dropping that\naltogether would be better?\n\n* array_typanalyze.c's compute_array_stats counts the number\nof null arrays in the column, but then does nothing with the\nresult. AFAICS this is redundant with what std_compute_stats\nwill do, so I just removed the variable.\n\nAny thoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 19 Sep 2022 15:20:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Silencing the remaining clang 15 warnings" }, { "msg_contents": "On Tue, Sep 20, 2022 at 7:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * With %pure-parser, Bison makes the \"yynerrs\" variable local\n> instead of static, and then if you don't use it clang notices\n> that it's set but never read. 
There doesn't seem to be a way\n> to persuade Bison not to emit the variable at all, so here I've\n> just added \"(void) yynerrs;\" to the topmost production of each\n> affected grammar. If anyone has a nicer idea, let's hear it.\n\n+1. FTR they know about this:\n\nhttps://www.mail-archive.com/bison-patches@gnu.org/msg07836.html\nhttps://github.com/akimd/bison/commit/a166d5450e3f47587b98f6005f9f5627dbe21a5b\n\n... but that YY_ATTRIBUTE_UNUSED hasn't landed in my systems'\n/usr[/local]/share/bison/skeletons/yacc.c yet and it seems harmless\nand also legitimate to reference yynerrs from an action.\n\n> * xlog.c's AdvanceXLInsertBuffer has a local variable \"npages\"\n> that is only read in the \"#ifdef WAL_DEBUG\" stanza at the\n> bottom. Here I've done the rather ugly and brute-force thing\n> of wrapping all the variable's references in \"#ifdef WAL_DEBUG\".\n> (I tried marking it PG_USED_FOR_ASSERTS_ONLY, but oddly that\n> did not silence the warning.) I kind of wonder how useful this\n> function's WAL_DEBUG output is --- maybe just dropping that\n> altogether would be better?\n\nNo opinion on the value of the message, but maybe\npg_attribute_unused() would be better?\n\n> * array_typanalyze.c's compute_array_stats counts the number\n> of null arrays in the column, but then does nothing with the\n> result. AFAICS this is redundant with what std_compute_stats\n> will do, so I just removed the variable.\n\n+1\n\n\n", "msg_date": "Tue, 20 Sep 2022 10:56:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Silencing the remaining clang 15 warnings" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Sep 20, 2022 at 7:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * xlog.c's AdvanceXLInsertBuffer has a local variable \"npages\"\n>> that is only read in the \"#ifdef WAL_DEBUG\" stanza at the\n>> bottom. 
Here I've done the rather ugly and brute-force thing\n>> of wrapping all the variable's references in \"#ifdef WAL_DEBUG\".\n>> (I tried marking it PG_USED_FOR_ASSERTS_ONLY, but oddly that\n>> did not silence the warning.) I kind of wonder how useful this\n>> function's WAL_DEBUG output is --- maybe just dropping that\n>> altogether would be better?\n\n> No opinion on the value of the message, but maybe\n> pg_attribute_unused() would be better?\n\nI realized that the reason PG_USED_FOR_ASSERTS_ONLY didn't help\nis that it expands to empty in an assert-enabled build, which is\nwhat I was testing. So yeah, using pg_attribute_unused() directly\nwould probably work better.\n\n(Also, I tried an assert-disabled build and found one additional new\nwarning; same deal where clang doesn't believe \"foo++;\" is reason to\nconsider foo to be used. That one, PG_USED_FOR_ASSERTS_ONLY can fix.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Sep 2022 19:06:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Silencing the remaining clang 15 warnings" }, { "msg_contents": "HEAD and v15 now compile cleanly for me with clang 15.0.0,\nbut I find that there's still work to do in the back branches:\n\n* There are new(?) -Wunused-but-set-variable warnings in every older\nbranch, which we evidently cleaned up or rewrote at one point or\nanother. I think this is definitely worth fixing in the in-support\nbranches. I'm a bit less sure if it's worth the trouble in the\nout-of-support branches.\n\n* The 9.2 and 9.3 branches spew boatloads of 'undeclared library function'\nwarnings about strlcpy() and related functions. This is evidently\nbecause 16fbac39f was only back-patched as far as 9.4. There are\nenough of these to be pretty annoying if you're trying to build those\nbranches with clang, so I think this is clearly justified for\nback-patching into the older out-of-support branches, assuming that\nthe patch will work there. 
(I see that 9.2 and 9.3 were still on the\nprior version of autoconf, so it might not be an easy change.)\n\nI also observe that 9.2 and 9.3 produce\n\nfloat.c:1278:29: warning: implicit conversion from 'int' to 'float' changes value from 2147483647 to 2147483648 [-Wimplicit-const-int-float-conversion]\n\nThis is because cbdb8b4c0 was only back-patched as far as 9.4.\nHowever, I think that that would *not* be fit material for\nback-patching into out-of-support branches, since our policy\nfor them is \"no behavioral changes\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Sep 2022 11:44:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Silencing the remaining clang 15 warnings" } ]
[ { "msg_contents": "Over in the \"Add last commit LSN to pg_last_committed_xact()\" [1]\nthread this patch had been added as a precursor, but Michael Paquier\nsuggested it be broken out separately, so I'm doing that here.\n\nIt turns out that MSVC supports both noreturn [2] [3] and alignment\n[4] [5] attributes, so this patch adds support for those. MSVC also\nsupports a form of packing, but the implementation as I can tell\nrequires wrapping the entire struct (with a push/pop declaration set)\n[6], which doesn't seem to match the style of macros we're using for\npacking in other compilers, so I opted not to implement that\nattribute.\n\nJames Coleman\n\n1: https://www.postgresql.org/message-id/Yk6UgCGlZKuxRr4n%40paquier.xyz\n2: 2008+ https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/k6ktzx3s(v=vs.90)\n3. 2015+ https://learn.microsoft.com/en-us/cpp/c-language/noreturn?view=msvc-140\n4. 2008+ https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/dabb5z75(v=vs.90)\n5. 2015+ https://learn.microsoft.com/en-us/cpp/cpp/align-cpp?view=msvc-170\n6. https://learn.microsoft.com/en-us/cpp/preprocessor/pack?view=msvc-170", "msg_date": "Mon, 19 Sep 2022 18:21:58 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Mon, Sep 19, 2022 at 06:21:58PM -0400, James Coleman wrote:\n> It turns out that MSVC supports both noreturn [2] [3] and alignment\n> [4] [5] attributes, so this patch adds support for those. MSVC also\n> supports a form of packing, but the implementation as I can tell\n> requires wrapping the entire struct (with a push/pop declaration set)\n> [6], which doesn't seem to match the style of macros we're using for\n> packing in other compilers, so I opted not to implement that\n> attribute.\n\nInteresting. 
Thanks for the investigation.\n\n+/*\n+ * MSVC supports aligned and noreturn\n+ * Packing is also possible but only by wrapping the entire struct definition\n+ * which doesn't fit into our current macro declarations.\n+ */\n+#elif defined(_MSC_VER)\n+#define pg_attribute_aligned(a) __declspec(align(a))\n+#define pg_attribute_noreturn() __declspec(noreturn)\n #else\nNit: I think that the comment should be in the elif block for Visual.\n\npg_attribute_aligned() could be used in generic-msvc.h's\npg_atomic_uint64 as it uses now align.\n\nShouldn't HAVE_PG_ATTRIBUTE_NORETURN be set for the MSVC case as well?\n--\nMichael", "msg_date": "Tue, 20 Sep 2022 09:21:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Mon, Sep 19, 2022 at 8:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 19, 2022 at 06:21:58PM -0400, James Coleman wrote:\n> > It turns out that MSVC supports both noreturn [2] [3] and alignment\n> > [4] [5] attributes, so this patch adds support for those. MSVC also\n> > supports a form of packing, but the implementation as I can tell\n> > requires wrapping the entire struct (with a push/pop declaration set)\n> > [6], which doesn't seem to match the style of macros we're using for\n> > packing in other compilers, so I opted not to implement that\n> > attribute.\n>\n> Interesting. 
Thanks for the investigation.\n>\n> +/*\n> + * MSVC supports aligned and noreturn\n> + * Packing is also possible but only by wrapping the entire struct definition\n> + * which doesn't fit into our current macro declarations.\n> + */\n> +#elif defined(_MSC_VER)\n> +#define pg_attribute_aligned(a) __declspec(align(a))\n> +#define pg_attribute_noreturn() __declspec(noreturn)\n> #else\n> Nit: I think that the comment should be in the elif block for Visual.\n\nI was following the style of the comment outside the \"if\", but I'm not\nattached to that style, so changed in this version.\n\n> pg_attribute_aligned() could be used in generic-msvc.h's\n> pg_atomic_uint64 as it uses now align.\n\nAdded.\n\n> Shouldn't HAVE_PG_ATTRIBUTE_NORETURN be set for the MSVC case as well?\n\nYes, fixed.\n\nJames Coleman", "msg_date": "Mon, 19 Sep 2022 20:51:37 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Mon, Sep 19, 2022 at 08:51:37PM -0400, James Coleman wrote:\n> Yes, fixed.\n\nThe CF bot is failing compilation on Windows:\nhttp://commitfest.cputube.org/james-coleman.html\nhttps://api.cirrus-ci.com/v1/task/5376566577332224/logs/build.log\n\nThere is something going on with noreturn() after applying it to\nelog.h:\n01:11:00.468] c:\\cirrus\\src\\include\\utils\\elog.h(410,45): error C2085:\n'ThrowErrorData': not in formal parameter list (compiling source file\nsrc/common/hashfn.c) [c:\\cirrus\\libpgcommon.vcxproj]\n[01:11:00.468] c:\\cirrus\\src\\include\\mb\\pg_wchar.h(701,80): error\nC2085: 'pgwin32_message_to_UTF16': not in formal parameter list\n(compiling source file src/common/encnames.c)\n[c:\\cirrus\\libpgcommon.vcxproj] \n[01:11:00.468] c:\\cirrus\\src\\include\\utils\\elog.h(411,54): error\nC2085: 'pg_re_throw': not in formal parameter list (compiling source\nfile src/common/hashfn.c) [c:\\cirrus\\libpgcommon.vcxproj] \n\nalign() seems to look fine, 
at least. I'd be tempted to apply the\nalign part first, as that's independently useful for itemptr.h.\n--\nMichael", "msg_date": "Tue, 20 Sep 2022 12:21:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Mon, Sep 19, 2022 at 11:21 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Sep 19, 2022 at 08:51:37PM -0400, James Coleman wrote:\n> > Yes, fixed.\n>\n> The CF bot is failing compilation on Windows:\n> http://commitfest.cputube.org/james-coleman.html\n> https://api.cirrus-ci.com/v1/task/5376566577332224/logs/build.log\n>\n> There is something going on with noreturn() after applying it to\n> elog.h:\n> 01:11:00.468] c:\\cirrus\\src\\include\\utils\\elog.h(410,45): error C2085:\n> 'ThrowErrorData': not in formal parameter list (compiling source file\n> src/common/hashfn.c) [c:\\cirrus\\libpgcommon.vcxproj]\n> [01:11:00.468] c:\\cirrus\\src\\include\\mb\\pg_wchar.h(701,80): error\n> C2085: 'pgwin32_message_to_UTF16': not in formal parameter list\n> (compiling source file src/common/encnames.c)\n> [c:\\cirrus\\libpgcommon.vcxproj]\n> [01:11:00.468] c:\\cirrus\\src\\include\\utils\\elog.h(411,54): error\n> C2085: 'pg_re_throw': not in formal parameter list (compiling source\n> file src/common/hashfn.c) [c:\\cirrus\\libpgcommon.vcxproj]\n>\n> align() seems to look fine, at least. 
I'd be tempted to apply the\n> align part first, as that's independently useful for itemptr.h.\n\n\nI don't have access to a Windows machine for testing, but re-reading\nthe documentation it looks like the issue is that our noreturn macro\nis used after the definition while the MSVC equivalent is used before.\nI've removed that for now (and commented about it); it's not as\nvaluable anyway since it's mostly an indicator for code analysis\n(human and machine).\n\nJames Coleman", "msg_date": "Tue, 20 Sep 2022 08:01:20 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Tue, Sep 20, 2022 at 08:01:20AM -0400, James Coleman wrote:\n> I don't have access to a Windows machine for testing, but re-reading\n> the documentation it looks like the issue is that our noreturn macro\n> is used after the definition while the MSVC equivalent is used before.\n\nA CI setup would do the job for example, see src/tools/ci/README that\nexplains how to set up things.\n\n> I've removed that for now (and commented about it); it's not as\n> valuable anyway since it's mostly an indicator for code analysis\n> (human and machine).\n\nExcept for the fact that the patch missed to define\npg_attribute_noreturn() in the MSVC branch, this looks fine to me. 
I\nhave been looking at what you meant with packing, and I can see the\nbusiness with PACK(), something actually doable with gcc.\n\nThat's a first step, at least, and I see no reason not to do it, so\napplied.\n--\nMichael", "msg_date": "Wed, 21 Sep 2022 10:17:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" }, { "msg_contents": "On Tue, Sep 20, 2022 at 9:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 20, 2022 at 08:01:20AM -0400, James Coleman wrote:\n> > I don't have access to a Windows machine for testing, but re-reading\n> > the documentation it looks like the issue is that our noreturn macro\n> > is used after the definition while the MSVC equivalent is used before.\n>\n> A CI setup would do the job for example, see src/tools/ci/README that\n> explains how to set up things.\n\nThat's a good reminder; I've been meaning to set that up but haven't\ntaken the time yet.\n\n> > I've removed that for now (and commented about it); it's not as\n> > valuable anyway since it's mostly an indicator for code analysis\n> > (human and machine).\n>\n> Except for the fact that the patch missed to define\n> pg_attribute_noreturn() in the MSVC branch, this looks fine to me. I\n> have been looking at what you meant with packing, and I can see the\n> business with PACK(), something actually doable with gcc.\n>\n> That's a first step, at least, and I see no reason not to do it, so\n> applied.\n\nThanks!\n\nJames Coleman\n\n\n", "msg_date": "Wed, 21 Sep 2022 08:21:58 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Support pg_attribute_aligned and noreturn in MSVC" } ]
[ { "msg_contents": "Hello hackers,\n\nAs noted in the source:\n\nhttps://github.com/postgres/postgres/blob/master/src/include/nodes/pg_list.h#L6-L11\n\n * Once upon a time, parts of Postgres were written in Lisp and used real\n * cons-cell lists for major data structures. When that code was rewritten\n * in C, we initially had a faithful emulation of cons-cell lists, which\n * unsurprisingly was a performance bottleneck. A couple of major rewrites\n * later, these data structures are actually simple expansible arrays;\n * but the \"List\" name and a lot of the notation survives.\n\nThe Postgres parser format as described in the wiki page:\n\nhttps://wiki.postgresql.org/wiki/Query_Parsing\n\nlooks almost, but not quite, entirely like JSON:\n\n SELECT * FROM foo where bar = 42 ORDER BY id DESC LIMIT 23;\n (\n {SELECT\n :distinctClause <>\n :intoClause <>\n :targetList (\n {RESTARGET\n :name <>\n :indirection <>\n :val\n {COLUMNREF\n :fields (\n {A_STAR\n }\n )\n :location 7\n }\n :location 7\n }\n )\n :fromClause (\n {RANGEVAR\n :schemaname <>\n :relname foo\n :inhOpt 2\n :relpersistence p\n :alias <>\n :location 14\n }\n )\n ... and so on\n )\n\nThis non-standard format is useful for visual inspection and perhaps\nsimple parsing. Parsers that do exist for it are generally specific\nto some languages. If there were a standard way to parse queries,\ntools like code generators and analysis tools can work with a variety\nof libraries that already handle JSON quite well. Future potential\nwould include exposing this data to command_ddl_start event triggers.\nProviding a JSON Schema would also aid tools that want to validate or\ntransform the json with rule based systems.\n\nI would like to propose a discussion that in a future major release\nPostgres switch\nfrom this custom format to JSON. 
The current format in question is\ngenerated from macros and functions found in\n`src/backend/nodes/readfuncs.c` and `src/backend/nodes/outfuncs.c` and\nconverting them to emit valid JSON would be relatively\nstraightforward.\n\nOne downside would be that this would not be a forward compatible\nbinary change across releases. Since it is unlikely that very much\ncode is reliant on this custom format; this would not be a huge problem\nfor most.\n\nThoughts?\n\n-Michel", "msg_date": "Mon, 19 Sep 2022 17:15:54 -0700", "msg_from": "Michel Pelletier <pelletier.michel@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "Michel Pelletier <pelletier.michel@gmail.com> writes:\n> I would like to propose a discussion that in a future major release\n> Postgres switch from this custom format to JSON.\n\nThere are certainly reasons to think about changing the node tree\nstorage format; but if we change it, I'd like to see it go to something\nmore compact not more verbose. JSON doesn't fit our needs all that\nclosely, so some things like bitmapsets would become a lot longer;\nand even where the semantics are pretty-much-the-same, JSON's\ninsistence on details like quoting field names will add bytes.\nPerhaps making the physical storage be JSONB not JSON would help that\npain point. 
It's still far from ideal though.\n\nMaybe a compromise could be found whereby we provide a conversion\nfunction that converts whatever the catalog storage format is to\nsome JSON equivalent. That would address the needs of external\ncode that doesn't want to write a custom parser, while not tying\nus directly to JSON.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Sep 2022 22:29:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, Sep 19, 2022 at 7:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe a compromise could be found whereby we provide a conversion\n> function that converts whatever the catalog storage format is to\n> some JSON equivalent. That would address the needs of external\n> code that doesn't want to write a custom parser, while not tying\n> us directly to JSON.\n\nThat seems like a perfectly good solution, as long as it can be done\nin a way that doesn't leave consumers of the JSON output at any kind\nof disadvantage.\n\nI find the current node tree format ludicrously verbose, and generally\nhard to work with. But it's not the format itself, really -- that's\nnot the problem. The underlying data structures are typically very\ninformation dense. So having an output format that's a known quantity\nsounds very valuable to me.\n\nWriting declarative @> containment queries against (say) a JSON\nvariant of node tree format seems like it could be a huge quality of\nlife improvement. 
It will make the output format even more verbose,\nbut that might not matter in the same way as it does right now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Sep 2022 20:25:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 12:16 PM Michel Pelletier\n<pelletier.michel@gmail.com> wrote:\n> This non-standard format\n\nFWIW, it derives from Lisp s-expressions, but deviates from Lisp's\ndefault reader/printer behaviour in small ways, including being case\nsensitive and using {NAME :x 1 ...} instead of #S(NAME :x 1 ...) for\nstructs for reasons that are lost AFAIK (there's a dark age between\nthe commit history of the old Berkeley repo and our current repo, and\nit looks like plan nodes were still printed as #(NAME ...) at\nBerkeley). At some point it was probably exchanging data between the\nLisp and C parts of POSTGRES, and you could maybe sorta claim it's\nbased on an ANSI standard (Lisp!), but not with a straight face :-)\n\n\n", "msg_date": "Tue, 20 Sep 2022 15:36:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Writing declarative @> containment queries against (say) a JSON\n> variant of node tree format seems like it could be a huge quality of\n> life improvement.\n\nThere are certainly use-cases for something like that, but let's\nbe clear about it: that's a niche case of interest to developers\nand pretty much nobody else. 
For ordinary users, what matters about\nthe node tree storage format is compactness and speed of loading.\nOur existing format is certainly not great on those metrics, but\nI do not see how \"let's use JSON!\" is a route to improvement.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Sep 2022 23:39:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 7:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michel Pelletier <pelletier.michel@gmail.com> writes:\n> > I would like to propose a discussion that in a future major release\n> > Postgres switch from this custom format to JSON.\n>\n> There are certainly reasons to think about changing the node tree\n> storage format; but if we change it, I'd like to see it go to something\n> more compact not more verbose. JSON doesn't fit our needs all that\n> closely, so some things like bitmapsets would become a lot longer;\n> and even where the semantics are pretty-much-the-same, JSON's\n> insistence on details like quoting field names will add bytes.\n> Perhaps making the physical storage be JSONB not JSON would help that\n> pain point. It's still far from ideal though.\n>\n> Maybe a compromise could be found whereby we provide a conversion\n> function that converts whatever the catalog storage format is to\n> some JSON equivalent. 
That would address the needs of external\n> code that doesn't want to write a custom parser, while not tying\n> us directly to JSON.\n>\n\nI think the DDL deparsing stuff that is being discussed as a base for\nDDL logical replication provides something like what you are saying\n[1][2].\n\n[1] - https://www.postgresql.org/message-id/CAFPTHDaqqGxqncAP42Z%3Dw9GVXDR92HN-57O%3D2Zy6tmayV2_eZw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAAD30U%2BpVmfKwUKy8cbZOnUXyguJ-uBNejwD75Kyo%3DOjdQGJ9g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 20 Sep 2022 09:17:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> FWIW, it derives from Lisp s-expressions, but deviates from Lisp's\n> default reader/printer behaviour in small ways, including being case\n> sensitive and using {NAME :x 1 ...} instead of #S(NAME :x 1 ...) for\n> structs for reasons that are lost AFAIK (there's a dark age between\n> the commit history of the old Berkeley repo and our current repo, and\n> it looks like plan nodes were still printed as #(NAME ...) at\n> Berkeley).\n\nWow, where did you find a commit history for Berkeley's code?\nThere's evidence in the tarballs I have that they were using\nRCS, but I never heard that the repo was made public.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Sep 2022 23:58:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, Sep 19, 2022 at 8:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There are certainly use-cases for something like that, but let's\n> be clear about it: that's a niche case of interest to developers\n> and pretty much nobody else. 
For ordinary users, what matters about\n> the node tree storage format is compactness and speed of loading.\n\nOf course. But is there any reason to think that there has to be even\na tiny cost imposed on users?\n\n> Our existing format is certainly not great on those metrics, but\n> I do not see how \"let's use JSON!\" is a route to improvement.\n\nThe existing format was designed with developer convenience as a goal,\nthough -- despite my complaints, and in spite of your objections. This\nis certainly not a new consideration.\n\nIf it didn't have to be easy (or even practical) for developers to\ndirectly work with the output format, then presumably the format used\ninternally could be replaced with something lower level and faster. So\nit seems like the two goals (developer ergonomics and faster\ninterchange format for users) might actually be complementary.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Sep 2022 20:59:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, Sep 19, 2022 at 8:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Wow, where did you find a commit history for Berkeley's code?\n> There's evidence in the tarballs I have that they were using\n> RCS, but I never heard that the repo was made public.\n\nIt's on Github:\n\nhttps://github.com/kelvich/postgres_pre95\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Sep 2022 21:00:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > FWIW, it derives from Lisp s-expressions, but deviates from Lisp's\n> > default reader/printer behaviour in small ways, including being case\n> > sensitive and using {NAME :x 1 ...} instead of #S(NAME :x 1 ...) 
for\n> > structs for reasons that are lost AFAIK (there's a dark age between\n> > the commit history of the old Berkeley repo and our current repo, and\n> > it looks like plan nodes were still printed as #(NAME ...) at\n> > Berkeley).\n>\n> Wow, where did you find a commit history for Berkeley's code?\n> There's evidence in the tarballs I have that they were using\n> RCS, but I never heard that the repo was made public.\n\nOne of the tarballs at https://dsf.berkeley.edu/postgres.html has the\ncomplete RCS history, but Stas Kelvich imported it to github as Peter\nG has just reported faster than I could.\n\n\n", "msg_date": "Tue, 20 Sep 2022 16:03:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 4:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Sep 20, 2022 at 3:58 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > FWIW, it derives from Lisp s-expressions, but deviates from Lisp's\n> > > default reader/printer behaviour in small ways, including being case\n> > > sensitive and using {NAME :x 1 ...} instead of #S(NAME :x 1 ...) for\n> > > structs for reasons that are lost AFAIK (there's a dark age between\n> > > the commit history of the old Berkeley repo and our current repo, and\n> > > it looks like plan nodes were still printed as #(NAME ...) at\n> > > Berkeley).\n> >\n> > Wow, where did you find a commit history for Berkeley's code?\n> > There's evidence in the tarballs I have that they were using\n> > RCS, but I never heard that the repo was made public.\n>\n> One of the tarballs at https://dsf.berkeley.edu/postgres.html has the\n> complete RCS history, but Stas Kelvich imported it to github as Peter\n> G has just reported faster than I could.\n\nTo explain my earlier guess: reader code for #S(STRUCTNAME ...) 
can\nbe seen here, though it's being lexed as \"PLAN_SYM\" so perhaps the\nauthor of that C already didn't know that was a general syntax for\nLisp structs. (Example: at a Lisp prompt, if you write (defstruct foo\nx y z) then (make-foo :x 1 :y 2 :z 3), the resulting object will be\nprinted as #S(FOO :x 1 :y 2 :z 3), so I'm guessing that the POSTGRES\nLisp code, which sadly (for me) was ripped out before even that repo\nIIUC, must have used defstruct-based plans.)\n\nhttps://github.com/kelvich/postgres_pre95/blob/master/src/backend/lib/lispread.c#L132\n\nIt may still be within the bounds of what a real Lisp could be\nconvinced to read though, given a reader macro to handle {} and maybe\nsome other little tweaks here and there.\n\n\n", "msg_date": "Tue, 20 Sep 2022 16:16:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Mon, Sep 19, 2022 at 8:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Our existing format is certainly not great on those metrics, but\n>> I do not see how \"let's use JSON!\" is a route to improvement.\n\n> The existing format was designed with developer convenience as a goal,\n> though -- despite my complaints, and in spite of your objections.\n\nAs Munro adduces nearby, it'd be a stretch to conclude that the current\nformat was designed with any Postgres-related goals in mind at all.\nI think he's right that it's a variant of some Lisp-y dump format that's\nprobably far hoarier than even Berkeley Postgres.\n\n> If it didn't have to be easy (or even practical) for developers to\n> directly work with the output format, then presumably the format used\n> internally could be replaced with something lower level and faster. 
So\n> it seems like the two goals (developer ergonomics and faster\n> interchange format for users) might actually be complementary.\n\nI think the principal mistake in what we have now is that the storage\nformat is identical to the \"developer friendly\" text format (plus or\nminus some whitespace). First we need to separate those. We could\nhave more than one equivalent text format perhaps, and I don't have\nany strong objection to basing the text format (or one of them) on\nJSON.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Sep 2022 00:48:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, Sep 19, 2022 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As Munro adduces nearby, it'd be a stretch to conclude that the current\n> format was designed with any Postgres-related goals in mind at all.\n> I think he's right that it's a variant of some Lisp-y dump format that's\n> probably far hoarier than even Berkeley Postgres.\n\nThat sounds very much like the 1980s graduate student equivalent of\nJSON to my ears.\n\nJSON is generally manipulated as native Javascript/python/whatever\nlists, maps, and strings. It's an interchange format that tries not to\nbe obtrusive in the same way as things like XML always are, at the\ncost of making things kinda dicey for things like numeric precision\n(unless you can account for everything). Isn't that...basically the\nsame concept as the lisp-y dump format, at a high level?\n\n> I think the principal mistake in what we have now is that the storage\n> format is identical to the \"developer friendly\" text format (plus or\n> minus some whitespace). First we need to separate those. 
We could\n> have more than one equivalent text format perhaps, and I don't have\n> any strong objection to basing the text format (or one of them) on\n> JSON.\n\nAgreed.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 19 Sep 2022 21:58:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 4:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Sep 19, 2022 at 9:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > As Munro adduces nearby, it'd be a stretch to conclude that the current\n> > format was designed with any Postgres-related goals in mind at all.\n> > I think he's right that it's a variant of some Lisp-y dump format that's\n> > probably far hoarier than even Berkeley Postgres.\n>\n> That sounds very much like the 1980s graduate student equivalent of\n> JSON to my ears.\n\nYeah. Easy data interchange on Lisp systems is built in, just write\nobjects into a socket/file/whatever and read them back, as people now\ndo with JSON/XML/whatever. That's the format we see here.\n\n> JSON is generally manipulated as native Javascript/python/whatever\n> lists, maps, and strings. It's an interchange format that tries not to\n> be obtrusive in the same way as things like XML always are, at the\n> cost of making things kinda dicey for things like numeric precision\n> (unless you can account for everything). Isn't that...basically the\n> same concept as the lisp-y dump format, at a high level?\n\nYes, s-expressions and JSON are absolutely the same concept; simple\nrepresentation of simple data structures of a dynamically typed\nlanguage. 
There's even a chain of events connecting the two: JSON is\nroughly the literal data syntax from Javascript's grammar, and\nJavascript is the language that Brendan Eich developed after Netscape\nhired him to do an embedded Lisp (Scheme) for the browser, except they\ndecided at some point to change tack and make their new language have\na surface grammar more like Java, the new hotness. If the goal was to\nmake sure it caught on, it's hard to conclude they were wrong...\n\n\n", "msg_date": "Tue, 20 Sep 2022 18:35:51 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 7:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Mon, Sep 19, 2022 at 8:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Our existing format is certainly not great on those metrics, but\n> >> I do not see how \"let's use JSON!\" is a route to improvement.\n>\n> > The existing format was designed with developer convenience as a goal,\n> > though -- despite my complaints, and in spite of your objections.\n>\n> As Munro adduces nearby, it'd be a stretch to conclude that the current\n> format was designed with any Postgres-related goals in mind at all.\n> I think he's right that it's a variant of some Lisp-y dump format that's\n> probably far hoarier than even Berkeley Postgres.\n>\n> > If it didn't have to be easy (or even practical) for developers to\n> > directly work with the output format, then presumably the format used\n> > internally could be replaced with something lower level and faster. So\n> > it seems like the two goals (developer ergonomics and faster\n> > interchange format for users) might actually be complementary.\n>\n> I think the principal mistake in what we have now is that the storage\n> format is identical to the \"developer friendly\" text format (plus or\n> minus some whitespace). 
First we need to separate those. We could\n> have more than one equivalent text format perhaps, and I don't have\n> any strong objection to basing the text format (or one of them) on\n> JSON.\n\n+1 for considering storage format and text format separately.\n\nLet's consider what our criteria could be for the storage format.\n\n1) Storage effectiveness (shorter is better) and\nserialization/deserialization effectiveness (faster is better). On\nthis criterion, the custom binary format looks perfect.\n2) Robustness in the case of corruption. It seems much easier to\ndetect the data corruption and possibly make some partial manual\nrecovery for textual format.\n3) Standardness. It's better to use something known worldwide or at\nleast used in other parts of PostgreSQL than something completely\ncustom. From this perspective, JSON/JSONB is better than custom\nthings.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 20 Sep 2022 13:00:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 1:00 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Sep 20, 2022 at 7:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > On Mon, Sep 19, 2022 at 8:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Our existing format is certainly not great on those metrics, but\n> > >> I do not see how \"let's use JSON!\" is a route to improvement.\n> >\n> > > The existing format was designed with developer convenience as a goal,\n> > > though -- despite my complaints, and in spite of your objections.\n> >\n> > As Munro adduces nearby, it'd be a stretch to conclude that the current\n> > format was designed with any Postgres-related goals in mind at all.\n> > I think he's right that it's a variant of some Lisp-y dump format that's\n> > probably far hoarier than even Berkeley 
Postgres.\n> >\n> > > If it didn't have to be easy (or even practical) for developers to\n> > > directly work with the output format, then presumably the format used\n> > > internally could be replaced with something lower level and faster. So\n> > > it seems like the two goals (developer ergonomics and faster\n> > > interchange format for users) might actually be complementary.\n> >\n> > I think the principal mistake in what we have now is that the storage\n> > format is identical to the \"developer friendly\" text format (plus or\n> > minus some whitespace). First we need to separate those. We could\n> > have more than one equivalent text format perhaps, and I don't have\n> > any strong objection to basing the text format (or one of them) on\n> > JSON.\n>\n> +1 for considering storage format and text format separately.\n>\n> Let's consider what our criteria could be for the storage format.\n>\n> 1) Storage effectiveness (shorter is better) and\n> serialization/deserialization effectiveness (faster is better). On\n> this criterion, the custom binary format looks perfect.\n> 2) Robustness in the case of corruption. It seems much easier to\n> detect the data corruption and possibly make some partial manual\n> recovery for textual format.\n> 3) Standardness. It's better to use something known worldwide or at\n> least used in other parts of PostgreSQL than something completely\n> custom. From this perspective, JSON/JSONB is better than custom\n> things.\n\n(sorry, I've accidentally cut the last paragraph from the message)\n\nIt seems that there is no perfect fit for this multi-criteria\noptimization, and we should pick what is more important. 
Any\nthoughts?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Tue, 20 Sep 2022 13:02:07 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, 20 Sept 2022 at 12:00, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Sep 20, 2022 at 7:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > On Mon, Sep 19, 2022 at 8:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> Our existing format is certainly not great on those metrics, but\n> > >> I do not see how \"let's use JSON!\" is a route to improvement.\n> >\n> > > The existing format was designed with developer convenience as a goal,\n> > > though -- despite my complaints, and in spite of your objections.\n> >\n> > As Munro adduces nearby, it'd be a stretch to conclude that the current\n> > format was designed with any Postgres-related goals in mind at all.\n> > I think he's right that it's a variant of some Lisp-y dump format that's\n> > probably far hoarier than even Berkeley Postgres.\n> >\n> > > If it didn't have to be easy (or even practical) for developers to\n> > > directly work with the output format, then presumably the format used\n> > > internally could be replaced with something lower level and faster. So\n> > > it seems like the two goals (developer ergonomics and faster\n> > > interchange format for users) might actually be complementary.\n> >\n> > I think the principal mistake in what we have now is that the storage\n> > format is identical to the \"developer friendly\" text format (plus or\n> > minus some whitespace). First we need to separate those. 
We could\n> > have more than one equivalent text format perhaps, and I don't have\n> > any strong objection to basing the text format (or one of them) on\n> > JSON.\n>\n> +1 for considering storage format and text format separately.\n>\n> Let's consider what our criteria could be for the storage format.\n>\n> 1) Storage effectiveness (shorter is better) and\n> serialization/deserialization effectiveness (faster is better). On\n> this criterion, the custom binary format looks perfect.\n> 2) Robustness in the case of corruption. It seems much easier to\n> detect the data corruption and possibly make some partial manual\n> recovery for textual format.\n> 3) Standardness. It's better to use something known worldwide or at\n> least used in other parts of PostgreSQL than something completely\n> custom. From this perspective, JSON/JSONB is better than custom\n> things.\n\nAllow me to add: compressibility\n\nIn the thread surrounding [0] there were complaints about the size of\ncatalogs, and specifically the template database. Significant parts of\nthat (688kB of 8080kB in a fresh PG14 database) are in pg_rewrite, which\nconsists mostly of serialized Nodes. If we're going to replace our\ncurrent NodeToText infrastructure, we'd better know we can effectively\ncompress this data.\n\nIn that same thread, I also suggested that we could try to not emit a\nNode's fields if they contain their default values while serializing;\nsuch as the common `:location -1` or `:mynodefield <>`. Those fields\nstill take up space in the format, while conveying no interesting\ninformation (the absence of that field in the struct definition would\nconvey the same). 
It would be useful if this new serialized format\nwould allow us to do similar tricks cheaply.\n\nAs for JSON vs JSONB for storage:\nI'm fairly certain that JSONB is less compact than JSON (without\ntaking compression into the picture) due to the 4-byte guaranteed\noverhead for each jsonb element; while for JSON that is only 2 bytes\nfor each (up to 3 when you consider separators, plus potential extra\noverhead for escaped values that are unlikely to appear in our catalogs).\nSome numbers can be stored more efficiently in JSONB, but only large\nnumbers and small fractions that we're unlikely to hit in system\nviews: a back-of-the-envelope calculation puts the cutoff point of\nefficient storage between strings-of-decimals and Numeric at >10^12, <\n-10^11, or very precise fractional values.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/CAEze2WgGexDM63dOvndLdAWwA6uSmSsc97jmrCuNmrF1JEDK7w%40mail.gmail.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 13:37:13 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On 2022-Sep-20, Matthias van de Meent wrote:\n\n> Allow me to add: compressibility\n> \n> In the thread surrounding [0] there were complaints about the size of\n> catalogs, and specifically the template database. Significant parts of\n> that (688kB of 8080kB in a fresh PG14 database) are in pg_rewrite, which\n> consists mostly of serialized Nodes. If we're going to replace our\n> current NodeToText infrastructure, we'd better know we can effectively\n> compress this data.\n\nTrue. Currently, the largest ev_action values compress pretty well. 
I\nthink if we wanted this to be more succinct, we would have to invent some\nbinary format -- perhaps something like Protocol Buffers: it'd be stored\nin the binary format in catalogs, but for output it would be converted\ninto something easy to read (we already do this for\npg_statistic_ext_data for example). We'd probably lose compressibility,\nbut that'd be okay because the binary format would already remove most\nof the redundancy by nature.\n\nDo we want to go there?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Java is clearly an example of money oriented programming\" (A. Stepanov)\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:28:57 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, 20 Sept 2022 at 17:29, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-20, Matthias van de Meent wrote:\n>\n> > Allow me to add: compressibility\n> >\n> > In the thread surrounding [0] there were complaints about the size of\n> > catalogs, and specifically the template database. Significant parts of\n> > that (688kB of 8080kB in a fresh PG14 database) are in pg_rewrite, which\n> > consists mostly of serialized Nodes. If we're going to replace our\n> > current NodeToText infrastructure, we'd better know we can effectively\n> > compress this data.\n>\n> True. Currently, the largest ev_action values compress pretty well. 
We'd probably lose compressibility,\n> but that'd be okay because the binary format would already remove most\n> of the redundancy by nature.\n>\n> Do we want to go there?\n\nI don't think that a binary format would be much better for\ndebugging/fixing than an optimization of the current textual format\nwhen combined with compression. As I mentioned in that thread, there\nis a lot of improvement possible with the existing format, and I think\nany debugging of serialized nodes would greatly benefit from using a\ntextual format.\n\nThen again, I also agree that this argument doesn't hold its weight\nwhen storage and output formats are going to be different. I trust\nthat any new tooling introduced as a result of this thread will be\nbetter than what we have right now.\n\nAs for best format: I don't know. The current format is usable, and a\nbetter format would not store any data for default values. JSON can do\nthat, but I could think of many formats that could do the same (Smile,\nBSON, xml, etc.).\n\nI do not think that protobuf is the best choice for storage, though,\nbecause it has its own rules on what it considers a default value and\nwhat it does or does not serialize: zero is considered the only\ndefault for numbers, as is the empty string for text, etc.\nI think it is all right for general use, but with e.g. 
`location: -1`\nin just about every parse node we'd probably want to select our own\nvalues to ignore during (de)serialization of fields.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 21 Sep 2022 20:04:16 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Wed, Sep 21, 2022 at 11:04 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Tue, 20 Sept 2022 at 17:29, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2022-Sep-20, Matthias van de Meent wrote:\n> >\n> > > Allow me to add: compressability\n> > >\n> > > In the thread surrounding [0] there were complaints about the size of\n> > > catalogs, and specifically the template database. Significant parts of\n> > > that (688kB of 8080kB a fresh PG14 database) are in pg_rewrite, which\n> > > consists mostly of serialized Nodes. If we're going to replace our\n> > > current NodeToText infrastructure, we'd better know we can effectively\n> > > compress this data.\n> >\n> > True. Currently, the largest ev_action values compress pretty well. I\n> > think if we wanted this to be more succint, we would have to invent some\n> > binary format -- perhaps something like Protocol Buffers: it'd be stored\n> > in the binary format in catalogs, but for output it would be converted\n> > into something easy to read (we already do this for\n> > pg_statistic_ext_data for example). We'd probably lose compressibility,\n> > but that'd be okay because the binary format would already remove most\n> > of the redundancy by nature.\n> >\n> > Do we want to go there?\n>\n> I don't think that a binary format would be much better for\n> debugging/fixing than an optimization of the current textual format\n> when combined with compression.\n\n\nI agree, JSON is not perfect, but it compresses and it's usable\neverywhere. 
My personal need for this is purely developer experience, and\nTom pointed out, a \"niche\" need for sure, but we are starting to do some\nserious work with Dan Lynch's plpgsql deparser tool to generate RLS\npolicies from meta schema models, and having the same format come out of\nthe parser would make a complete end to end solution for us, especially if\nwe can get this data from a function in a ddl_command_start event trigger.\nDan also writes a popular deparser for Javascript, and unifying the formats\nacross these tools would be a big win for us.\n\n\n> As I mentioned in that thread, there\n> is a lot of improvement possible with the existing format, and I think\n> any debugging of serialized nodes would greatly benefit from using a\n> textual format.\n>\n\nAgreed.\n\n\n> Then again, I also agree that this argument doesn't hold it's weight\n> when storage and output formats are going to be different. I trust\n> that any new tooling introduced as a result of this thread will be\n> better than what we have right now.\n>\n\nSeparating formats seems like a lot of work to me, to get what might not be\na huge improvement over compressing JSON, for what seems unlikely to be\nmore than a few megabytes of parsed SQL.\n\n\n> As for best format: I don't know. The current format is usable, and a\n> better format would not store any data for default values. JSON can do\n> that, but I could think of many formats that could do the same (Smile,\n> BSON, xml, etc.).\n>\n> I do not think that protobuf is the best choice for storage, though,\n> because it has its own rules on what it considers a default value and\n> what it does or does not serialize: zero is considered the only\n> default for numbers, as is the empty string for text, etc.\n> I think it is allright for general use, but with e.g. 
`location: -1`\n> in just about every parse node we'd probably want to select our own\n> values to ignore during (de)serialization of fields.\n>\n\nAgreed.\n\n Thank you everyone who has contributed to this thread, I'm pleased that it\ngot a very spirited debate and I apologize for the delay in getting back to\neveryone.\n\nI'd like to spike on a proposed patch that:\n\n - Converts the existing text format to JSON (or possibly jsonb,\nconsidering feedback from this thread)\n - Can be stored compressed\n - Can be passed to a ddl_command_start event trigger with a function.\n\nThoughts?\n\n-Michel\n\nOn Wed, Sep 21, 2022 at 11:04 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:On Tue, 20 Sept 2022 at 17:29, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-20, Matthias van de Meent wrote:\n>\n> > Allow me to add: compressability\n> >\n> > In the thread surrounding [0] there were complaints about the size of\n> > catalogs, and specifically the template database. Significant parts of\n> > that (688kB of 8080kB a fresh PG14 database) are in pg_rewrite, which\n> > consists mostly of serialized Nodes. If we're going to replace our\n> > current NodeToText infrastructure, we'd better know we can effectively\n> > compress this data.\n>\n> True.  Currently, the largest ev_action values compress pretty well.  I\n> think if we wanted this to be more succint, we would have to invent some\n> binary format -- perhaps something like Protocol Buffers: it'd be stored\n> in the binary format in catalogs, but for output it would be converted\n> into something easy to read (we already do this for\n> pg_statistic_ext_data for example).  
We'd probably lose compressibility,\n> but that'd be okay because the binary format would already remove most\n> of the redundancy by nature.\n>\n> Do we want to go there?\n\nI don't think that a binary format would be much better for\ndebugging/fixing than an optimization of the current textual format\nwhen combined with compression. I agree, JSON is not perfect, but it compresses and it's usable everywhere.  My personal need for this is purely developer experience, and Tom pointed out, a \"niche\" need for sure, but we are starting to do some serious work with Dan Lynch's plpgsql deparser tool to generate RLS policies from meta schema models, and having the same format come out of the parser would make a complete end to end solution for us, especially if we can get this data from a function in a ddl_command_start event trigger.  Dan also writes a popular deparser for Javascript, and unifying the formats across these tools would be a big win for us. As I mentioned in that thread, there\nis a lot of improvement possible with the existing format, and I think\nany debugging of serialized nodes would greatly benefit from using a\ntextual format. Agreed. \nThen again, I also agree that this argument doesn't hold it's weight\nwhen storage and output formats are going to be different. I trust\nthat any new tooling introduced as a result of this thread will be\nbetter than what we have right now.Separating formats seems like a lot of work to me, to get what might not be a huge improvement over compressing JSON, for what seems unlikely to be more than a few megabytes of parsed SQL. \nAs for best format: I don't know. The current format is usable, and a\nbetter format would not store any data for default values. 
JSON can do\nthat, but I could think of many formats that could do the same (Smile,\nBSON, xml, etc.).\n\nI do not think that protobuf is the best choice for storage, though,\nbecause it has its own rules on what it considers a default value and\nwhat it does or does not serialize: zero is considered the only\ndefault for numbers, as is the empty string for text, etc.\nI think it is allright for general use, but with e.g. `location: -1`\nin just about every parse node we'd probably want to select our own\nvalues to ignore during (de)serialization of fields.Agreed. Thank you everyone who has contributed to this thread, I'm pleased that it got a very spirited debate and I apologize for the delay in getting back to everyone.  I'd like to spike on a proposed patch that:  - Converts the existing text format to JSON (or possibly jsonb, considering feedback from this thread)  - Can be stored compressed  - Can be passed to a ddl_command_start event trigger with a function.Thoughts?-Michel", "msg_date": "Thu, 27 Oct 2022 07:38:46 -0700", "msg_from": "Michel Pelletier <pelletier.michel@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "Hi,\n\nOn 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n> There are certainly reasons to think about changing the node tree\n> storage format; but if we change it, I'd like to see it go to something\n> more compact not more verbose.\n\nVery much seconded - the various pg_node_trees are a quite significant\nfraction of the overall size of an empty database. And they're not\nparticularly useful for a human either.\n\nIIRC it's not just catalog storage that's affected, but iirc also relevant for\nparallel query.\n\nMy pet peeve is the way datums are output as individual bytes printed as\nintegers each. 
For narrow fixed-width datums including a lot of 0's for bytes\nthat aren't even used in the datum.\n\n\n> Maybe a compromise could be found whereby we provide a conversion function\n> that converts whatever the catalog storage format is to some JSON\n> equivalent. That would address the needs of external code that doesn't want\n> to write a custom parser, while not tying us directly to JSON.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Oct 2022 16:38:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "\nOn 2022-10-27 Th 19:38, Andres Freund wrote:\n> Hi,\n>\n> On 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n>> Maybe a compromise could be found whereby we provide a conversion function\n>> that converts whatever the catalog storage format is to some JSON\n>> equivalent. That would address the needs of external code that doesn't want\n>> to write a custom parser, while not tying us directly to JSON.\n> +1\n>\n\n\nAgreed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 28 Oct 2022 09:26:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Fri, Oct 28, 2022 at 4:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-10-27 Th 19:38, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n> >> Maybe a compromise could be found whereby we provide a conversion function\n> >> that converts whatever the catalog storage format is to some JSON\n> >> equivalent. 
That would address the needs of external code that doesn't want\n> >> to write a custom parser, while not tying us directly to JSON.\n> > +1\n> >\n>\n>\n> Agreed.\n\n+1\n\nMichel, it seems that you now have a green light to implement node to\njson function.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 31 Oct 2022 15:45:55 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, 31 Oct 2022 at 13:46, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, Oct 28, 2022 at 4:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2022-10-27 Th 19:38, Andres Freund wrote:\n>> > Hi,\n>> >\n>> > On 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n>> >> Maybe a compromise could be found whereby we provide a conversion function\n>> >> that converts whatever the catalog storage format is to some JSON\n>> >> equivalent. That would address the needs of external code that doesn't want\n>> >> to write a custom parser, while not tying us directly to JSON.\n>> > +1\n>>\n>> Agreed.\n>\n> +1\n>\n> Michel, it seems that you now have a green light to implement node to\n> json function.\n\nI think that Tom's proposal that we +1 is on a pg_node_tree to json\nSQL function / cast; which is tangentially related to the \"nodeToJson\n/ changing the storage format of pg_node_tree to json\" proposal, but\nnot the same.\n\nI will add my +1 to Tom's proposal for that function/cast, but I'm not\nsure on changing the storage format of pg_node_tree to json.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 31 Oct 2022 14:15:44 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, Oct 31, 2022 at 6:15 AM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Mon, 31 Oct 2022 at 
13:46, Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> > On Fri, Oct 28, 2022 at 4:27 PM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n> >> On 2022-10-27 Th 19:38, Andres Freund wrote:\n> >> > Hi,\n> >> >\n> >> > On 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n> >> >> Maybe a compromise could be found whereby we provide a conversion\n> function\n> >> >> that converts whatever the catalog storage format is to some JSON\n> >> >> equivalent. That would address the needs of external code that\n> doesn't want\n> >> >> to write a custom parser, while not tying us directly to JSON.\n> >> > +1\n> >>\n> >> Agreed.\n> >\n> > +1\n> >\n> > Michel, it seems that you now have a green light to implement node to\n> > json function.\n>\n> I think that Tom's proposal that we +1 is on a pg_node_tree to json\n> SQL function / cast; which is tangentially related to the \"nodeToJson\n> / changing the storage format of pg_node_tree to json\" proposal, but\n> not the same.\n>\n\nI agree.\n\n\n> I will add my +1 to Tom's proposal for that function/cast, but I'm not\n> sure on changing the storage format of pg_node_tree to json.\n>\n\nI'm going to spike on this function and will get back to the thread with\nany updates.\n\nThank you!\n\n-Michel", "msg_date": "Mon, 31 Oct 2022 07:55:25 -0700", "msg_from": "Michel Pelletier <pelletier.michel@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Tue, Sep 20, 2022 at 4:16 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> To explain my earlier guess: reader code for #S(STRUCTNAME ...) can\n> bee seen here, though it's being lexed as \"PLAN_SYM\" so perhaps the\n> author of that C already didn't know that was a general syntax for\n> Lisp structs. 
(Example: at a Lisp prompt, if you write (defstruct foo\n> x y z) then (make-foo :x 1 :y 2 :z 3), the resulting object will be\n> printed as #S(FOO :x 1 :y 2 :z 3), so I'm guessing that the POSTGRES\n> Lisp code, which sadly (for me) was ripped out before even that repo\n> IIUC, must have used defstruct-based plans.)\n\nThat defstruct guess is confirmed by page 36 and nearby of\nhttps://dsf.berkeley.edu/papers/UCB-MS-zfong.pdf.\n\n\n", "msg_date": "Tue, 10 Oct 2023 12:11:36 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" }, { "msg_contents": "On Mon, 31 Oct 2022 at 15:56, Michel Pelletier\n<pelletier.michel@gmail.com> wrote:\n> On Mon, Oct 31, 2022 at 6:15 AM Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n>> On Mon, 31 Oct 2022 at 13:46, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>> On Fri, Oct 28, 2022 at 4:27 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> On 2022-10-27 Th 19:38, Andres Freund wrote:\n>>>>> Hi,\n>>>>>\n>>>>> On 2022-09-19 22:29:15 -0400, Tom Lane wrote:\n>>>>>> Maybe a compromise could be found whereby we provide a conversion function\n>>>>>> that converts whatever the catalog storage format is to some JSON\n>>>>>> equivalent. 
That would address the needs of external code that doesn't want\n>>>>>> to write a custom parser, while not tying us directly to JSON.\n>>>>> +1\n>>>>\n>>>> Agreed.\n>>>\n>>> +1\n>>>\n>>> Michel, it seems that you now have a green light to implement node to\n>>> json function.\n>>\n>> I think that Tom's proposal that we +1 is on a pg_node_tree to json\n>> SQL function / cast; which is tangentially related to the \"nodeToJson\n>> / changing the storage format of pg_node_tree to json\" proposal, but\n>> not the same.\n>\n>\n> I agree.\n>\n>>\n>> I will add my +1 to Tom's proposal for that function/cast, but I'm not\n>> sure on changing the storage format of pg_node_tree to json.\n>\n>\n> I'm going to spike on this function and will get back to the thread with any updates.\n\nMichel, did you get a result from this spike?\n\nI'm asking because I spiked most of my ideas on updating the node\ntext format, and am working on wrapping it up into a patch (or\npatchset) later this week. The ideas for this are:\n\n1. Don't write fields with default values for their types, such as\nNULL for Node* fields;\n2. Reset location fields before transforming the node tree to text\nwhen we don't have a copy of the original query, which removes\nlocation fields from serialization with step 1;\n3. Add default() node labels to struct fields that do not share the\nfield type's default, allowing more fields to be omitted with step 1;\n4. Add special default_ref() pg_node_attr for node fields that default\nto other node field's values, used in Var's varnosyn/varattnosyn as\nreferring to varno/varattno; and\n5. Truncate trailing 0s in Const's outDatum notation of by-ref types,\nso that e.g. 
The raw size of the ev_action\ncolumn's data (that is, before compression) is reduced by 55% to\n1.18MB (from 2.80MB), and the largest default shipped row (the\ninformation_schema.columns view) in that table is reduced to 'only'\n78kB raw, from 193kB.\n\nRW performance hasn't been tested yet, so that is still to be determined...\n\nKind regards,\n\nMatthias van de Meent\nNeon (https://neon.tech)\n\n\n", "msg_date": "Mon, 4 Dec 2023 17:44:39 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Proposal to use JSON for Postgres Parser format" } ]
[ { "msg_contents": "Hi,\n\nI realized that there are some places where we use XLogRecPtr for\nvariables for replication origin id. The attached patch fixes them to\nuse RepOriginId instead.\n\nRegards,\n\n-- \nMasahiko Sawada\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 20 Sep 2022 14:49:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Fix incorrect variable type for origin_id" }, { "msg_contents": "On Tue, Sep 20, 2022 at 02:49:14PM +0900, Masahiko Sawada wrote:\n> I realized that there are some places where we use XLogRecPtr for\n> variables for replication origin id. The attached patch fixes them to\n> use RepOriginId instead.\n\nRight, good catch. Will fix, thanks!\n--\nMichael", "msg_date": "Tue, 20 Sep 2022 15:06:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix incorrect variable type for origin_id" } ]
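For context on why the mix-up fixed above compiled and ran cleanly: XLogRecPtr is a 64-bit WAL position while RepOriginId is a 16-bit identifier, so a 16-bit id stored in a 64-bit variable still round-trips. A small Python sketch, using struct format codes as stand-ins for the C typedefs (an illustration, not PostgreSQL code):

```python
import struct

# Stand-ins for the C typedefs under discussion: XLogRecPtr is a 64-bit
# WAL position, RepOriginId a 16-bit replication origin identifier.
XLOG_REC_PTR = "<Q"   # uint64 (explicit little-endian for a deterministic demo)
REP_ORIGIN_ID = "<H"  # uint16

assert struct.calcsize(XLOG_REC_PTR) == 8
assert struct.calcsize(REP_ORIGIN_ID) == 2

# A 16-bit origin id held in a 64-bit slot still round-trips, which is why
# the mistyped declarations behaved correctly; the fix is about declaring
# the intended type (and width), not about a live bug.
origin_id = 42
widened = struct.pack(XLOG_REC_PTR, origin_id)
assert struct.unpack(REP_ORIGIN_ID, widened[:2])[0] == origin_id
print("round-trip ok; the widened slot wastes", len(widened) - 2, "bytes")
```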
[ { "msg_contents": "Hi,\nI was looking at this check in src/backend/parser/parse_utilcmd.c w.r.t.\nconstraint:\n\n if (indclass->values[i] != defopclass ||\n attform->attcollation != index_rel->rd_indcollation[i]\n||\n attoptions != (Datum) 0 ||\n index_rel->rd_indoption[i] != 0)\n ereport(ERROR,\n (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n errmsg(\"index \\\"%s\\\" column number %d does not\nhave default sorting behavior\", index_name, i + 1),\n errdetail(\"Cannot create a primary key or\nunique constraint using such an index.\"),\n\nIt seems this first came in via `Indexes with INCLUDE columns and their\nsupport in B-tree`\n\nIf the index has DESC sorting order, why it cannot be used to back a\nconstraint ?\nSome concrete sample would help me understand this.\n\nThanks", "msg_date": "Tue, 20 Sep 2022 09:34:25 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "default sorting behavior for index" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> I was looking at this check in src/backend/parser/parse_utilcmd.c w.r.t.\n> constraint:\n> 
...\n> If the index has DESC sorting order, why it cannot be used to back a\n> constraint ?\n> Some concrete sample would help me understand this.\n\nPlease read the nearby comments, particularly\n\n * Insist on default opclass, collation, and sort options.\n * While the index would still work as a constraint with\n * non-default settings, it might not provide exactly the same\n * uniqueness semantics as you'd get from a normally-created\n * constraint; and there's also the dump/reload problem\n * mentioned above.\n\nThe \"mentioned above\" refers to this:\n\n * Insist on it being a btree. That's the only kind that supports\n * uniqueness at the moment anyway; but we must have an index that\n * exactly matches what you'd get from plain ADD CONSTRAINT syntax,\n * else dump and reload will produce a different index (breaking\n * pg_upgrade in particular).\n\nThe concern about whether the uniqueness semantics are the same probably\nmostly applies to just the opclass and collation properties. However,\nrd_indoption contains AM-specific options, and we have little ability\nto be sure in this code exactly what those bits might do. In any case\nwe'd definitely have a risk of things breaking during pg_upgrade if we\nignore rd_indoption.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 20 Sep 2022 19:38:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: default sorting behavior for index" } ]
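The gatekeeping Tom describes can be mimicked in miniature: refuse any index column whose opclass, collation, per-column options, or per-column flags (such as DESC or NULLS FIRST) differ from the defaults. The following Python sketch uses hypothetical field names standing in for the pg_index/pg_attribute state, not the actual PostgreSQL structures:

```python
# Schematic of the parse_utilcmd.c check quoted in the first message:
# every column must have the default opclass, default collation, no
# attribute options, and no per-column index flags.
def index_usable_for_constraint(columns, default_opclass, default_collation=0):
    for i, col in enumerate(columns):
        if (col["opclass"] != default_opclass
                or col["collation"] != default_collation
                or col["attoptions"] is not None
                or col["indoption"] != 0):  # non-zero encodes DESC / NULLS FIRST
            raise ValueError(
                f"index column number {i + 1} does not have default sorting behavior")
    return True

asc = [{"opclass": 1978, "collation": 0, "attoptions": None, "indoption": 0}]
desc = [{"opclass": 1978, "collation": 0, "attoptions": None, "indoption": 1}]

assert index_usable_for_constraint(asc, default_opclass=1978)
try:
    index_usable_for_constraint(desc, default_opclass=1978)
except ValueError as err:
    print(err)  # the DESC column is rejected, as in Tom's explanation
```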
[ { "msg_contents": "Hopefully I'm not missing something obvious, but as far as I know\nthere's no way to configure auto explain to work when\nstatement_timeout fires.\n\nI'd like to look into this at some point, but I'm wondering if anyone\nhas thought about it before, and, if so, is there any obvious\nimpediment to doing so?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Tue, 20 Sep 2022 13:34:10 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Auto explain after query timeout" }, { "msg_contents": "On Tue Sep 20, 2022 at 10:34 AM PDT, James Coleman wrote:\n> Hopefully I'm not missing something obvious, but as far as I know\n> there's no way to configure auto explain to work when\n> statement_timeout fires.\n\nI believe you're correct.\n\n> I'd like to look into this at some point, but I'm wondering if anyone\n> has thought about it before, and, if so, is there any obvious\n> impediment to doing so?\n\nThis would be a neat feature. Since the changes would be fairly\nlocalized to the contrib module, this would be a great first patch for\nsomeone new to contributing.\n\nThis can be exposed at a new GUC auto_explain.log_on_statement_timeout.\nI wish our conventions allowed for creating hierarchies of GUC\nparameters, e.g. 
auto_explain.when.statement_timeout.\n\nFor someone who would like to achieve this in the field today, I believe\nthey can set auto_explain.log_min_duration equal to, or less than,\nstatement_timeout.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 20 Sep 2022 11:12:52 -0700", "msg_from": "\"Gurjeet\" <singh.gurjeet@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Auto explain after query timeout" }, { "msg_contents": "On Tue, Sep 20, 2022 at 2:12 PM Gurjeet <singh.gurjeet@gmail.com> wrote:\n>\n> On Tue Sep 20, 2022 at 10:34 AM PDT, James Coleman wrote:\n> > Hopefully I'm not missing something obvious, but as far as I know\n> > there's no way to configure auto explain to work when\n> > statement_timeout fires.\n>\n> I believe you're correct.\n>\n> > I'd like to look into this at some point, but I'm wondering if anyone\n> > has thought about it before, and, if so, is there any obvious\n> > impediment to doing so?\n>\n> This would be a neat feature. Since the changes would be fairly\n> localized to the contrib module, this would be a great first patch for\n> someone new to contributing.\n>\n> This can be exposed at a new GUC auto_explain.log_on_statement_timeout.\n> I wish our conventions allowed for creating hierarchies of GUC\n> parameters, e.g. auto_explain.when.statement_timeout.\n>\n> For someone who would like to achieve this in the field today, I believe\n> they can set auto_explain.log_min_duration equal to, or less than,\n> statement_timeout.\n\nEither I'm missing something (and/or this was fixed in a later PG\nversion), but I don't think this is how it works. 
We have this\nspecific problem now: we set auto_explain.log_min_duration to 200 (ms)\nand statement_timeout set to 30s, but when a statement times out we do\nnot get the plan logged with auto-explain.\n\nJames Coleman\n\n\n", "msg_date": "Tue, 20 Sep 2022 14:34:48 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Auto explain after query timeout" }, { "msg_contents": "On Tue, Sep 20, 2022 at 2:35 PM James Coleman <jtc331@gmail.com> wrote:\n> Either I'm missing something (and/or this was fixed in a later PG\n> version), but I don't think this is how it works. We have this\n> specific problem now: we set auto_explain.log_min_duration to 200 (ms)\n> and statement_timeout set to 30s, but when a statement times out we do\n> not get the plan logged with auto-explain.\n\nI think you're correct. auto_explain uses the ExecutorEnd hook, but\nthat hook will not be fired in the case of an error. Indeed, if an\nerror has already been thrown, it would be unsafe to try to\nauto-explain anything. For instance -- and this is just one problem of\nprobably many -- ExplainTargetRel() performs catalog lookups, which is\nnot OK in a failed transaction.\n\nTo make this work, I think you'd need a hook that fires just BEFORE\nthe error is thrown. However, previous attempts to introduce hooks\ninto ProcessInterrupts() have not met with a warm response from Tom, so\nit might be a tough sell. And maybe for good reason. I see at least\ntwo problems. The first is that explaining a query is a pretty\ncomplicated operation that involves catalog lookups and lots of\ncomplicated stuff, and I don't think that it would be safe to do all\nof that at any arbitrary point in the code where ProcessInterrupts()\nhappened to fire. What if one of the syscache lookups misses the cache\nand we have to open the underlying catalog? 
Then\nAcceptInvalidationMessages() will fire, but we don't currently assume\nthat any old CHECK_FOR_INTERRUPTS() can process invalidations. What if\nthe running query has called a user-defined function or procedure\nwhich is running DDL which is in the middle of changing catalog state\nfor some relation involved in the query at the moment that the\nstatement timeout arrives? I feel like there are big problems here.\n\nThe other problem I see is that ProcessInterrupts() is our mechanism\nfor allowing people to escape from queries that would otherwise run\nforever by hitting ^C. But what if the explain code goes crazy and\nitself needs to be interrupted, when we're already inside\nProcessInterrupts()? Maybe that would work out OK... or maybe it\nwouldn't. I'm not really sure. But it's another reason to be very,\nvery cautious about putting complicated logic inside\nProcessInterrupts().\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 15:05:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Auto explain after query timeout" }, { "msg_contents": "On Tue Sep 20, 2022 at 11:34 AM PDT, James Coleman wrote:\n> On Tue, Sep 20, 2022 at 2:12 PM Gurjeet <singh.gurjeet@gmail.com> wrote:\n> >\n> > For someone who would like to achieve this in the field today, I believe\n> > they can set auto_explain.log_min_duration equal to, or less than,\n> > statement_timeout.\n>\n> Either I'm missing something (and/or this was fixed in a later PG\n> version), but I don't think this is how it works. 
We have this\n> specific problem now: we set auto_explain.log_min_duration to 200 (ms)\n> and statement_timeout set to 30s, but when a statement times out we do\n> not get the plan logged with auto-explain.\n\nMy DBA skills are rusty, so I'll take your word for it.\n\nIf this is the current behaviour, I'm inclined to treat this as a bug,\nor at least a desirable improvement, and see if auto_explain can be\nimproved to emit the plan on statment_timeout.\n\n From what I undestand, the behaviour of auto_explain is that it waits\nfor the query to finish before it emits the plan. This information is\nvery useful for diagnosing long-running queries that are still running.\nMany a times, you encounter such queries in production workloads, and\nreproducing such a scenario later on is either undesirable, expensive, or\neven impossible. So being able to see the plan of a query that has\ncrossed auto_explain.log_min_duration as soon as possible, would be highly\ndesirable.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 20 Sep 2022 12:10:07 -0700", "msg_from": "\"Gurjeet\" <singh.gurjeet@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Auto explain after query timeout" }, { "msg_contents": "On Tue, Sep 20, 2022 at 3:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Sep 20, 2022 at 2:35 PM James Coleman <jtc331@gmail.com> wrote:\n> > Either I'm missing something (and/or this was fixed in a later PG\n> > version), but I don't think this is how it works. We have this\n> > specific problem now: we set auto_explain.log_min_duration to 200 (ms)\n> > and statement_timeout set to 30s, but when a statement times out we do\n> > not get the plan logged with auto-explain.\n>\n> I think you're correct. auto_explain uses the ExecutorEnd hook, but\n> that hook will not be fired in the case of an error. Indeed, if an\n> error has already been thrown, it would be unsafe to try to\n> auto-explain anything. 
For instance -- and this is just one problem of\n> probably many -- ExplainTargetRel() performs catalog lookups, which is\n> not OK in a failed transaction.\n>\n> To make this work, I think you'd need a hook that fires just BEFORE\n> the error is thrown. However, previous attempts to introduce hooks\n> into ProcessInterrupts() have not met with a warm response from Tom, so\n> it might be a tough sell. And maybe for good reason. I see at least\n> two problems. The first is that explaining a query is a pretty\n> complicated operation that involves catalog lookups and lots of\n> complicated stuff, and I don't think that it would be safe to do all\n> of that at any arbitrary point in the code where ProcessInterrupts()\n> happened to fire. What if one of the syscache lookups misses the cache\n> and we have to open the underlying catalog? Then\n> AcceptInvalidationMessages() will fire, but we don't currently assume\n> that any old CHECK_FOR_INTERRUPTS() can process invalidations. What if\n> the running query has called a user-defined function or procedure\n> which is running DDL which is in the middle of changing catalog state\n> for some relation involved in the query at the moment that the\n> statement timeout arrives? I feel like there are big problems here.\n>\n> The other problem I see is that ProcessInterrupts() is our mechanism\n> for allowing people to escape from queries that would otherwise run\n> forever by hitting ^C. But what if the explain code goes crazy and\n> itself needs to be interrupted, when we're already inside\n> ProcessInterrupts()? Maybe that would work out OK... or maybe it\n> wouldn't. I'm not really sure. 
But it's another reason to be very,\n> very cautious about putting complicated logic inside\n> ProcessInterrupts().\n\nThis is exactly the kind of background I was hoping someone would\nprovide; thank you, Robert.\n\nIt seems like one could imagine addressing all of these by having one of:\n\n- A safe explain (e.g., disallow catalog access) that is potentially\nmissing information.\n- A safe way to interrupt queries such as \"safe shutdown\" of a node\n(e.g., a seq scan could stop returning tuples early) and allow a\nconfigurable buffer of time after the statement timeout before firing\na hard abort of the query (and transaction).\n\nBoth of those seem like a significant amount of work.\n\nAlternatively I wonder if it's possible (this would maybe assume no\ncatalog changes in the current transaction -- or at least none that\nwould be referenced by the current query) to open a new transaction\n(with the same horizon information) and duplicate the plan over to\nthat transaction and run the explain there. This way you do it *after*\nthe error is raised. 
That's some serious spit-balling -- I'm not\nsaying that's doable, just trying to imagine how one might\ncomprehensively address the concerns.\n\nDoes any of that seem at all like a path you could imagine being fruitful?\n\nThanks,\nJames Coleman\n\n\n", "msg_date": "Tue, 20 Sep 2022 17:08:45 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Auto explain after query timeout" }, { "msg_contents": "On Tue, Sep 20, 2022 at 5:08 PM James Coleman <jtc331@gmail.com> wrote:\n> - A safe explain (e.g., disallow catalog access) that is potentially\n> missing information.\n\nThis would be pretty useless I think, because you'd be missing all\nrelation names.\n\n> - A safe way to interrupt queries such as \"safe shutdown\" of a node\n> (e.g., a seq scan could stop returning tuples early) and allow a\n> configurable buffer of time after the statement timeout before firing\n> a hard abort of the query (and transaction).\n\nThis might be useful, but it seems extremely difficult to get working.\nYou'd not only have to design the safe shutdown mechanism itself, but\nalso find a way to safely engage it at the right times.\n\n> Alternatively I wonder if it's possible (this would maybe assume no\n> catalog changes in the current transaction -- or at least none that\n> would be referenced by the current query) to open a new transaction\n> (with the same horizon information) and duplicate the plan over to\n> that transaction and run the explain there. This way you do it *after*\n> the error is raised. That's some serious spit-balling -- I'm not\n> saying that's doable, just trying to imagine how one might\n> comprehensively address the concerns.\n\nDoesn't work, because the new transaction's snapshot wouldn't be the\nsame as that of the old one. Imagine that you create a table and run a\nquery on it in the same transaction. Then you migrate the plan tree to\na new transaction and try to find out the table name. 
But in the new\ntransaction, that table doesn't exist: it was destroyed by the\nprevious rollback.\n\nHonestly I have no very good ideas how to create the feature you want\nhere. I guess the only thing I can think of is to separate the EXPLAIN\nprocess into two phases: a first phase that runs when the plan tree is\nset up and gathers all of the information that we might need later,\nlike relation names, and then a second phase that runs later when you\nwant to generate the output and does nothing that can fail, or at\nleast no database: maybe it's allowed to allocate memory, for example.\nBut that sounds like a big and perhaps painful refactoring exercise,\nand I can imagine that there might be reasons why it doesn't work out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 20 Sep 2022 18:56:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Auto explain after query timeout" } ]
[ { "msg_contents": "For publication schemas (OBJECT_PUBLICATION_NAMESPACE) and user\nmappings (OBJECT_USER_MAPPING), pg_get_object_address() checked the\narray length of the second argument, but not of the first argument.\nIf the first argument was too long, it would just silently ignore\neverything but the first argument. Fix that by checking the length of\nthe first argument as well.\n\nI wouldn't be surprised if there were more holes like this in this area. \n I just happened to find these while working on something related.", "msg_date": "Tue, 20 Sep 2022 13:44:12 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Tighten pg_get_object_address argument checking" }, { "msg_contents": "On Tue, Sep 20, 2022 at 11:14 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> For publication schemas (OBJECT_PUBLICATION_NAMESPACE) and user\n> mappings (OBJECT_USER_MAPPING), pg_get_object_address() checked the\n> array length of the second argument, but not of the first argument.\n> If the first argument was too long, it would just silently ignore\n> everything but the first argument. Fix that by checking the length of\n> the first argument as well.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Sep 2022 15:31:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Tighten pg_get_object_address argument checking" }, { "msg_contents": "On 21.09.22 12:01, Amit Kapila wrote:\n> On Tue, Sep 20, 2022 at 11:14 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> For publication schemas (OBJECT_PUBLICATION_NAMESPACE) and user\n>> mappings (OBJECT_USER_MAPPING), pg_get_object_address() checked the\n>> array length of the second argument, but not of the first argument.\n>> If the first argument was too long, it would just silently ignore\n>> everything but the first argument. 
Fix that by checking the length of\n>> the first argument as well.\n> \n> LGTM.\n\nCommitted, thanks for checking.\n\n\n\n", "msg_date": "Wed, 21 Sep 2022 10:00:20 -0400", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Tighten pg_get_object_address argument checking" } ]
[ { "msg_contents": "Hello,\n\nI’m working on a patch to support logical replication of large objects\n(LOBs). This is a useful feature when a database in logical\nreplication has lots of tables, functions and other objects that\nchange over time, such as in online cross major version upgrade. As\nan example, this lets users replicate large objects between different\nPostgreSQL major versions.\n\nThe topic was previously discussed in [1]. Moreover, we need to\naddress the following 3 challenges. I worked on some designs and\nappreciate feedback :\n\n1. Replication of the change stream of LOBs\n My suggestion is that we can just add a check when any LOB\nfunction or API is called and executed in the backend, and then add a\nsimple SQL command in WAL files to do the replication . Take lo_unlink\nas example[2] : we can create a “simple” SQL like SELECT\nlo_unlink(<PID>); and log it in WAL, so that replica only needs to\nreplay the “simple” SQL command. We can unlink the LOBs in replica\naccordingly.\n Pros :\n a. We do not need to add much additional functionality.\n b. For most of the LOBs related APIs, we don’t need to touch whole\nLOBs, except for the case creation of LOBs.\n Cons:\n a. For the case creation of LOBs, we may need to split the whole\nLOB content into WAL files which will increase volume of WAL and\nreplicated writes dramatically. This could be prevented if we can make\nsure the original file is publicly shared, like a url from cloud\nstorage, or exists on host on replica as well.\n2. Initializing replication of LOBs\n When a subscription is established, LOBs in the source should be\nreplicated even if they are not created in replica. 
Here are two\noptions to approach this problem:\n Option 1 : Support LOB related system catalogs in logical replication\n We can make an exception in this line[3] in the\n“check_publication_add_schema” function.\n Pros: All required LOBs can be transferred to replica.\n Cons: There is currently no support for allowing logical\nreplication of system catalogs.\n Option 2 : Run a function or a script from source instance when it\ndetects logical replication is established.\n The point is that we can ask the source to replicate whole LOBs\nwhen a new subscription is created.\n Maybe we can copy the whole LOBs related system catalogs and\nreplicate the copy to replica, then restore the original LOBs into\nreplica from the copy.\n Cons : This will increase the volume of WAL and replicated\nwrites dramatically. I currently do not have a preference on either\noption. I would like to see if others have thoughts on how we could\napproach this.\n3. OID conflicts\n A common case is that the OID we want to publish is already used\nin subscriber.\n Option 1 (My recommendation) : Create/Update existing System\ncatalog for mapping the OID if conflict happens\n Maybe we could add another column naming like mapping_oid in\nsystem catalog pg_largeobject_metaqdate on the replica. When the\nreplica detects the OID (E.g. 16400) that source is replicating is\nalready used in replica, replica could store the 16400 as mapping_oid\nand create a new OID (E.g. 
16500) as oid to be used in replica, so\nwhatever operation is done on 16400 in source, in replica we just need\nto perform on 16500.\n Cons : We would need to add additional columns to the system catalog\n Option 2 : Prompt error message in Replica and let user handle it manually\n Cons : This is not user-friendly.\n\nPlease let me know your thoughts.\n\nBorui Yang\nAmazon RDS/Aurora for PostgreSQL\n\n[1] https://www.postgresql.org/message-id/VisenaEmail.16.35d9e854e3626e81.15cd0de93df%40tc7-visena\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/libpq/be-fsstubs.c#l312\n[3] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/catalog/pg_publication.c#l98\n\n\n", "msg_date": "Tue, 20 Sep 2022 12:13:05 -0700", "msg_from": "Borui Yang <boruiyan02@gmail.com>", "msg_from_op": true, "msg_subject": "Support logical replication of large objects" } ]
[ { "msg_contents": "Hi hackers,\n\nIn 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, we change the snapshot name\nwhen exporting snapshot, however, there is one place we missed update the\nsnapshot name in documentation. Attach a patch to fix it.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.", "msg_date": "Wed, 21 Sep 2022 11:01:23 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Fix snapshot name for SET TRANSACTION documentation" }, { "msg_contents": "\n\nOn 2022/09/21 12:01, Japin Li wrote:\n> \n> Hi hackers,\n> \n> In 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, we change the snapshot name\n> when exporting snapshot, however, there is one place we missed update the\n> snapshot name in documentation. Attach a patch to fix it.\n\nThanks for the patch! Looks good to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 21 Sep 2022 14:40:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix snapshot name for SET TRANSACTION documentation" }, { "msg_contents": "\n\nOn 2022/09/21 14:40, Fujii Masao wrote:\n> \n> \n> On 2022/09/21 12:01, Japin Li wrote:\n>>\n>> Hi hackers,\n>>\n>> In 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, we change the snapshot name\n>> when exporting snapshot, however, there is one place we missed update the\n>> snapshot name in documentation.  Attach a patch to fix it.\n> \n> Thanks for the patch! Looks good to me.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Sep 2022 13:00:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix snapshot name for SET TRANSACTION documentation" }, { "msg_contents": "\nOn Thu, 22 Sep 2022 at 12:00, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2022/09/21 14:40, Fujii Masao wrote:\n>> On 2022/09/21 12:01, Japin Li wrote:\n>>>\n>>> Hi hackers,\n>>>\n>>> In 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, we change the snapshot name\n>>> when exporting snapshot, however, there is one place we missed update the\n>>> snapshot name in documentation. Attach a patch to fix it.\n>> Thanks for the patch! Looks good to me.\n>\n> Pushed. Thanks!\n>\n\nThanks!\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Thu, 22 Sep 2022 21:16:21 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix snapshot name for SET TRANSACTION documentation" } ]
[ { "msg_contents": "Hello Hackers!\n\nIs it possible to get the current virtual txid from C somehow?\n\nI've looked through the code, but can't seem to find anything other than\ngetting a NULL when there is no (real) xid assigned. Maybe I'm missing\nsomething?\n\nCheers,\nJames\n", "msg_date": "Wed, 21 Sep 2022 13:58:47 +1000", "msg_from": "James Sewell <james.sewell@gmail.com>", "msg_from_op": true, "msg_subject": "Virtual tx id" }, { "msg_contents": "\nOn Wed, 21 Sep 2022 at 11:58, James Sewell <james.sewell@gmail.com> wrote:\n> Hello Hackers!\n>\n> Is it possible to get the current virtual txid from C somehow?\n>\nThe virtual txid is consisted of MyProc->backendId and MyProc->lxid. Do you\nmean a C function that returns virtual txid?\n\n> I've looked through the code, but can't seem to find anything other than\n> getting a NULL when there is no (real) xid assigned. 
Maybe I'm missing something?\n>\n> Cheers,\n> James\n\nVirtual xid is meaningful only inside a read-only transaction.\n\nIt’s made up of MyProc->BackendId and MyProc->LocalTransactionId.\n\nTo catch it, you can begin a transaction, but don’t exec sqls that could change the db.\n\nAnd gdb on process to see( must exec a read only sql to call StartTransaction, else MyProc->lxid is not assigned).\n\n```\nBegin;\n// do nothing\n\n```\ngdb on it and see\n\n```\np MyProc->lxid\np MyProc->backendId\n\n```\n\n\n\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHI,\n\n\nOn Sep 21, 2022, 11:59 +0800, James Sewell <james.sewell@gmail.com>, wrote:\nHello Hackers!\n\nIs it possible to get the current virtual txid from C somehow?\n\nI've looked through the code, but can't seem to find anything other than getting a NULL when there is no (real) xid assigned. Maybe I'm missing something?\n\nCheers,\nJames\n\nVirtual xid is meaningful only inside a read-only transaction. \n\nIt’s made up of MyProc->BackendId and MyProc->LocalTransactionId.\n\nTo catch it, you can begin a transaction, but don’t exec sqls that could change the db.\n\nAnd gdb on process to see( must exec a read only sql to call StartTransaction, else MyProc->lxid is not assigned).\n\n```\nBegin;\n// do nothing\n\n```\ngdb on it and see\n\n```\np MyProc->lxid\np MyProc->backendId\n\n```\n\n\n\n\nRegards,\nZhang Mingli", "msg_date": "Wed, 21 Sep 2022 12:19:46 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Virtual tx id" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 21, 2022 at 01:58:47PM +1000, James Sewell wrote:\n> Hello Hackers!\n>\n> Is it possible to get the current virtual txid from C somehow?\n>\n> I've looked through the code, but can't seem to find anything other than\n> getting a NULL when there is no (real) xid assigned. 
Maybe I'm missing\n> something?\n\nIt should be MyProc->lxid, and prepend it with MyBackendId if you want to\ncompare something like pg_locks.virtualtransaction.\n\n\n", "msg_date": "Wed, 21 Sep 2022 12:21:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Virtual tx id" } ]
[ { "msg_contents": "Hi hackers,\n\nRecently, we discover that the field of tts_tableOid of TupleTableSlot is\nassigned duplicated in table AM's interface which is not necessary. For\nexample, in table_scan_getnextslot,\n\n```\nstatic inline bool\ntable_scan_getnextslot(TableScanDesc sscan, ScanDirection direction,\nTupleTableSlot *slot)\n{\n slot->tts_tableOid = RelationGetRelid(sscan->rs_rd);\n\n /*\n * We don't expect direct calls to table_scan_getnextslot with valid\n * CheckXidAlive for catalog or regular tables. See detailed\ncomments in\n * xact.c where these variables are declared.\n */\n if (unlikely(TransactionIdIsValid(CheckXidAlive) && !bsysscan))\n elog(ERROR, \"unexpected table_scan_getnextslot call during\nlogical decoding\");\n\n return sscan->rs_rd->rd_tableam->scan_getnextslot(sscan, direction,\nslot);\n}\n```\n\nwe can see that it assigns tts_tableOid, while calling\nsscan->rs_rd->rd_tableam->scan_getnextslot which implemented by\nheap_getnextslot also assigns tts_tableOid in the call of\nExecStoreBufferHeapTuple.\n\n```\nTupleTableSlot *\nExecStoreBufferHeapTuple(HeapTuple tuple,\n TupleTableSlot *slot,\n Buffer buffer)\n{\n /*\n * sanity checks\n */\n Assert(tuple != NULL);\n Assert(slot != NULL);\n Assert(slot->tts_tupleDescriptor != NULL);\n Assert(BufferIsValid(buffer));\n\n if (unlikely(!TTS_IS_BUFFERTUPLE(slot)))\n elog(ERROR, \"trying to store an on-disk heap tuple into\nwrong type of slot\");\n tts_buffer_heap_store_tuple(slot, tuple, buffer, false);\n\n slot->tts_tableOid = tuple->t_tableOid;\n\n return slot;\n}\n```\n\nWe can get the two assigned values are same by reading codes. Maybe we\nshould remove one?\n\nWhat's more, we find that maybe we assign slot->tts_tableOid in outer\ninterface like table_scan_getnextslot may be better and more friendly when\nwe import other pluggable storage formats. 
It can avoid duplicated\nassignments in every implementation of table AM's interfaces.\n\nRegards,\nWenchao\n", "msg_date": "Wed, 21 Sep 2022 14:51:02 +0800", "msg_from": "Wenchao Zhang <zwcpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Assign TupleTableSlot->tts_tableOid duplicated in tale AM." }, { "msg_contents": "On Tue, Sep 20, 2022 at 11:51 PM Wenchao Zhang <zwcpostgres@gmail.com> wrote:\n> We can get the two assigned values are same by reading codes. Maybe we should remove one?\n>\n> What's more, we find that maybe we assign slot->tts_tableOid in outer interface like table_scan_getnextslot may be better and more friendly when we import other pluggable storage formats.\n\nI suppose that you're right; it really should happen in exactly one\nplace, based on some overarching theory about how tts_tableOid works\nwith the table AM interface. I just don't know what that theory is.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Sep 2022 19:46:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Assign TupleTableSlot->tts_tableOid duplicated in tale AM." }, { "msg_contents": "Firstly, thank you for your reply.\nYeah, I think maybe just assigning tts_tableOid of TTS only once\nduring scanning the same table may be better. That really needs\nto be thinked over.\n\nRegards,\nWenchao\n\nPeter Geoghegan <pg@bowt.ie> 于2022年9月28日周三 10:47写道:\n\n> On Tue, Sep 20, 2022 at 11:51 PM Wenchao Zhang <zwcpostgres@gmail.com>\n> wrote:\n> > We can get the two assigned values are same by reading codes. 
Maybe we\n> should remove one?\n> >\n> > What's more, we find that maybe we assign slot->tts_tableOid in outer\n> interface like table_scan_getnextslot may be better and more friendly when\n> we import other pluggable storage formats.\n>\n> I suppose that you're right; it really should happen in exactly one\n> place, based on some overarching theory about how tts_tableOid works\n> with the table AM interface. I just don't know what that theory is.\n>\n> --\n> Peter Geoghegan\n>\n\nFirstly, thank you for your reply. Yeah, I think maybe just assigning tts_tableOid of TTS only onceduring scanning the same table may be better. That really needsto be thinked over.Regards,WenchaoPeter Geoghegan <pg@bowt.ie> 于2022年9月28日周三 10:47写道:On Tue, Sep 20, 2022 at 11:51 PM Wenchao Zhang <zwcpostgres@gmail.com> wrote:\n> We can get the two assigned values are same by reading codes. Maybe we should remove one?\n>\n> What's more, we find that maybe we assign slot->tts_tableOid in outer interface like table_scan_getnextslot may be better and more friendly when we import other pluggable storage formats.\n\nI suppose that you're right; it really should happen in exactly one\nplace, based on some overarching theory about how tts_tableOid works\nwith the table AM interface. I just don't know what that theory is.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 30 Sep 2022 11:30:57 +0800", "msg_from": "Wenchao Zhang <zwcpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assign TupleTableSlot->tts_tableOid duplicated in tale AM." } ]
[ { "msg_contents": "Hi,\n\nWhile working on the “Fast COPY FROM based on batch insert” patch, I\nnoticed this:\n\n else if (proute != NULL && resultRelInfo->ri_TrigDesc != NULL &&\n resultRelInfo->ri_TrigDesc->trig_insert_new_table)\n {\n /*\n * For partitioned tables we can't support multi-inserts when there\n * are any statement level insert triggers. It might be possible to\n * allow partitioned tables with such triggers in the future, but for\n * now, CopyMultiInsertInfoFlush expects that any before row insert\n * and statement level insert triggers are on the same relation.\n */\n insertMethod = CIM_SINGLE;\n }\n\nI think there is a thinko in the comment; “before” should be after.\nPatch attached.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Wed, 21 Sep 2022 16:39:41 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Multi-insert related comment in CopyFrom()" }, { "msg_contents": "On Wed, Sep 21, 2022 at 4:39 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> While working on the “Fast COPY FROM based on batch insert” patch, I\n> noticed this:\n>\n> else if (proute != NULL && resultRelInfo->ri_TrigDesc != NULL &&\n> resultRelInfo->ri_TrigDesc->trig_insert_new_table)\n> {\n> /*\n> * For partitioned tables we can't support multi-inserts when there\n> * are any statement level insert triggers. It might be possible to\n> * allow partitioned tables with such triggers in the future, but for\n> * now, CopyMultiInsertInfoFlush expects that any before row insert\n> * and statement level insert triggers are on the same relation.\n> */\n> insertMethod = CIM_SINGLE;\n> }\n>\n> I think there is a thinko in the comment; “before” should be after.\n> Patch attached.\n\nPushed.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 22 Sep 2022 16:11:29 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Multi-insert related comment in CopyFrom()" } ]
[ { "msg_contents": "when a error occurs when creating proc, it should point out the\nspecific proc kind instead of just printing \"function\".\n\ndiff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c\nindex a9fe45e347..58af4b48ce 100644\n--- a/src/backend/catalog/pg_proc.c\n+++ b/src/backend/catalog/pg_proc.c\n@@ -373,7 +373,11 @@ ProcedureCreate(const char *procedureName,\n if (!replace)\n ereport(ERROR,\n (errcode(ERRCODE_DUPLICATE_FUNCTION),\n- errmsg(\"function \\\"%s\\\"\nalready exists with same argument types\",\n+ errmsg(\"%s \\\"%s\\\" already\nexists with same argument types\",\n+\n(oldproc->prokind == PROKIND_AGGREGATE ? \"aggregate function\" :\n+\noldproc->prokind == PROKIND_PROCEDURE ? \"procedure\" :\n+\noldproc->prokind == PROKIND_WINDOW ? \"window function\" :\n+ \"function\"),\n procedureName)));\n if (!pg_proc_ownercheck(oldproc->oid, proowner))\n aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Wed, 21 Sep 2022 19:45:01 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] polish the error message of creating proc" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 21, 2022 at 07:45:01PM +0800, Junwang Zhao wrote:\n> when a error occurs when creating proc, it should point out the\n> specific proc kind instead of just printing \"function\".\n\nYou should have kept the original thread in copy (1), or at least mention it.\n\n> diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c\n> index a9fe45e347..58af4b48ce 100644\n> --- a/src/backend/catalog/pg_proc.c\n> +++ b/src/backend/catalog/pg_proc.c\n> @@ -373,7 +373,11 @@ ProcedureCreate(const char *procedureName,\n> if (!replace)\n> ereport(ERROR,\n> (errcode(ERRCODE_DUPLICATE_FUNCTION),\n> - errmsg(\"function \\\"%s\\\"\n> already exists with same argument types\",\n> + errmsg(\"%s \\\"%s\\\" already\n> exists with same argument types\",\n> +\n> (oldproc->prokind == 
PROKIND_AGGREGATE ? \"aggregate function\" :\n> +\n> oldproc->prokind == PROKIND_PROCEDURE ? \"procedure\" :\n> +\n> oldproc->prokind == PROKIND_WINDOW ? \"window function\" :\n> + \"function\"),\n> procedureName)));\n> if (!pg_proc_ownercheck(oldproc->oid, proowner))\n> aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,\n\nYou can't put part of the message in parameter, as the resulting string isn't\ntranslatable. You should either use \"routine\" as a generic term or provide 3\ndifferent full messages.\n\n[1] https://www.postgresql.org/message-id/29ea5666.6ce8.1835f4b4992.Coremail.qtds_126@126.com\n\n\n", "msg_date": "Wed, 21 Sep 2022 20:17:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] polish the error message of creating proc" }, { "msg_contents": "On Wed, Sep 21, 2022 at 8:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Sep 21, 2022 at 07:45:01PM +0800, Junwang Zhao wrote:\n> > when a error occurs when creating proc, it should point out the\n> > specific proc kind instead of just printing \"function\".\n>\n> You should have kept the original thread in copy (1), or at least mention it.\n\nYeah, thanks for pointing that out, will do that next time.\n\n>\n> > diff --git a/src/backend/catalog/pg_proc.c b/src/backend/catalog/pg_proc.c\n> > index a9fe45e347..58af4b48ce 100644\n> > --- a/src/backend/catalog/pg_proc.c\n> > +++ b/src/backend/catalog/pg_proc.c\n> > @@ -373,7 +373,11 @@ ProcedureCreate(const char *procedureName,\n> > if (!replace)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_DUPLICATE_FUNCTION),\n> > - errmsg(\"function \\\"%s\\\"\n> > already exists with same argument types\",\n> > + errmsg(\"%s \\\"%s\\\" already\n> > exists with same argument types\",\n> > +\n> > (oldproc->prokind == PROKIND_AGGREGATE ? \"aggregate function\" :\n> > +\n> > oldproc->prokind == PROKIND_PROCEDURE ? \"procedure\" :\n> > +\n> > oldproc->prokind == PROKIND_WINDOW ? 
\"window function\" :\n> > + \"function\"),\n> > procedureName)));\n> > if (!pg_proc_ownercheck(oldproc->oid, proowner))\n> > aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_FUNCTION,\n>\n> You can't put part of the message in parameter, as the resulting string isn't\n> translatable. You should either use \"routine\" as a generic term or provide 3\n> different full messages.\n\nI noticed that there are some translations under the backend/po directory,\ncan we just change\nmsgid \"function \\\"%s\\\" already exists with same argument types\"\nto\nmsgid \"%s \\\"%s\\\" already exists with same argument types\" ?\n\n>\n> [1] https://www.postgresql.org/message-id/29ea5666.6ce8.1835f4b4992.Coremail.qtds_126@126.com\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 21 Sep 2022 22:35:47 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] polish the error message of creating proc" }, { "msg_contents": "On Wed, Sep 21, 2022 at 10:35:47PM +0800, Junwang Zhao wrote:\n> On Wed, Sep 21, 2022 at 8:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > You can't put part of the message in parameter, as the resulting string isn't\n> > translatable. 
You should either use \"routine\" as a generic term or provide 3\n> > different full messages.\n>\n> I noticed that there are some translations under the backend/po directory,\n> can we just change\n> msgid \"function \\\"%s\\\" already exists with same argument types\"\n> to\n> msgid \"%s \\\"%s\\\" already exists with same argument types\" ?\n\nThe problem is that the parameters are passed as-is, so the final emitted\ntranslated string will be a mix of both languages, which isn't acceptable.\n\n\n", "msg_date": "Wed, 21 Sep 2022 22:49:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] polish the error message of creating proc" }, { "msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> I noticed that there are some translations under the backend/po directory,\n> can we just change\n> msgid \"function \\\"%s\\\" already exists with same argument types\"\n> to\n> msgid \"%s \\\"%s\\\" already exists with same argument types\" ?\n\nNo. This doesn't satisfy our message translation guidelines [1].\nThe fact that there are other messages that aren't up to project\nstandard isn't a license to create more.\n\nMore generally: there are probably dozens, if not hundreds, of\nmessages in the backend that say \"function\" but nowadays might\nalso be talking about a procedure. I'm not sure there's value\nin improving just one of them.\n\nI am pretty sure that we made an explicit decision some time back\nthat it is okay to say \"function\" when the object could also be\nan aggregate or window function. So you could at least cut this\nback to just handling \"procedure\" and \"function\". 
Or you could\nchange it to \"routine\" as Julien suggests, but I think a lot of\npeople will not think that's an improvement.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/nls-programmer.html#NLS-GUIDELINES\n\n\n", "msg_date": "Wed, 21 Sep 2022 10:53:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] polish the error message of creating proc" }, { "msg_contents": "On Wed, Sep 21, 2022 at 10:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Junwang Zhao <zhjwpku@gmail.com> writes:\n> > I noticed that there are some translations under the backend/po directory,\n> > can we just change\n> > msgid \"function \\\"%s\\\" already exists with same argument types\"\n> > to\n> > msgid \"%s \\\"%s\\\" already exists with same argument types\" ?\n>\n> No. This doesn't satisfy our message translation guidelines [1].\n> The fact that there are other messages that aren't up to project\n> standard isn't a license to create more.\n>\n> More generally: there are probably dozens, if not hundreds, of\n> messages in the backend that say \"function\" but nowadays might\n> also be talking about a procedure. I'm not sure there's value\n> in improving just one of them.\n>\n> I am pretty sure that we made an explicit decision some time back\n> that it is okay to say \"function\" when the object could also be\n> an aggregate or window function. So you could at least cut this\n> back to just handling \"procedure\" and \"function\". 
Or you could\n> change it to \"routine\" as Julien suggests, but I think a lot of\n> people will not think that's an improvement.\n\nYeah, make sense, will leave it as is.\n\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/nls-programmer.html#NLS-GUIDELINES\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Wed, 21 Sep 2022 22:56:20 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] polish the error message of creating proc" } ]
[ { "msg_contents": "In view of\n\nAuthor: John Naylor <john.naylor@postgresql.org>\nBranch: master [8b878bffa] 2022-09-09 12:55:23 +0700\n\n Bump minimum version of Flex to 2.5.35\n\nI wonder if we should go a little further and get rid of\nsrc/tools/fix-old-flex-code.pl (just in HEAD, to be clear).\nThat does nothing when flex is 2.5.36 or newer, and even with\nolder versions it's just suppressing an \"unused variable\" warning.\nI think there's a reasonable case for not caring about that,\nor at least valuing it lower than saving one build step.\n\nAccording to a recent survey, these are the active buildfarm\nmembers still using 2.5.35:\n\n frogfish | 2022-08-21 17:59:26 | configure: using flex 2.5.35\n hoverfly | 2022-09-02 16:02:01 | configure: using flex 2.5.35\n lapwing | 2022-09-02 16:40:12 | configure: using flex 2.5.35\n skate | 2022-09-02 07:27:10 | configure: using flex 2.5.35\n snapper | 2022-09-02 13:38:22 | configure: using flex 2.5.35\n\nlapwing uses -Werror, so it will be unhappy, but I don't think it's\nunreasonable to say that you should be using fairly modern tools\nif you want to use -Werror.\n\nOr we could leave well enough alone. But the current cycle of\ngetting rid of ancient portability hacks seems like a good climate\nfor this to happen. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Sep 2022 10:21:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Is it time to drop fix-old-flex-code.pl?" }, { "msg_contents": "On Wed, Sep 21, 2022 at 10:21:25AM -0400, Tom Lane wrote:\n>\n> lapwing uses -Werror, so it will be unhappy, but I don't think it's\n> unreasonable to say that you should be using fairly modern tools\n> if you want to use -Werror.\n\nI think that the -Werror helped to find problems multiple times (although less\noften than the ones due to the 32bits arch). 
If it's worth keeping I could see\nif I can get a 2.5.36 flex to compile, if we drop fix-old-flex-code.pl.\n\n\n", "msg_date": "Wed, 21 Sep 2022 23:50:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Is it time to drop fix-old-flex-code.pl?" } ]
[ { "msg_contents": "Hi hackers,\n\nSharing a small patch to remove an unused parameter\nin SnapBuildGetOrBuildSnapshot function in snapbuild.c\n\nWith commit 6c2003f8a1bbc7c192a2e83ec51581c018aa162f,\nSnapBuildBuildSnapshot no longer needs transaction id. This also makes the\nxid parameter in SnapBuildGetOrBuildSnapshot useless.\nI couldn't see a reason to keep it and decided to remove it.\n\nRegards,\nMelih", "msg_date": "Wed, 21 Sep 2022 17:21:57 +0300", "msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>", "msg_from_op": true, "msg_subject": "Removing unused parameter in SnapBuildGetOrBuildSnapshot" }, { "msg_contents": "On Sep 21, 2022, 22:22 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> Hi hackers,\n>\n> Sharing a small patch to remove an unused parameter in SnapBuildGetOrBuildSnapshot function in snapbuild.c\n>\n> With commit 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, SnapBuildBuildSnapshot no longer needs transaction id. This also makes the xid parameter in SnapBuildGetOrBuildSnapshot useless.\n> I couldn't see a reason to keep it and decided to remove it.\n>\n> Regards,\n> Melih\n+1, Good Catch.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nOn Sep 21, 2022, 22:22 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\nHi hackers,\n\nSharing a small patch to remove an unused parameter in SnapBuildGetOrBuildSnapshot function in snapbuild.c\n\nWith commit 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, SnapBuildBuildSnapshot no longer needs transaction id. 
This also makes the xid parameter in SnapBuildGetOrBuildSnapshot useless.\nI couldn't see a reason to keep it and decided to remove it.\n\nRegards,\nMelih\n+1, Good Catch.\n\n\nRegards,\nZhang Mingli", "msg_date": "Wed, 21 Sep 2022 22:41:20 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing unused parameter in SnapBuildGetOrBuildSnapshot" }, { "msg_contents": "On Wed, Sep 21, 2022 at 8:11 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> On Sep 21, 2022, 22:22 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n>\n> Hi hackers,\n>\n> Sharing a small patch to remove an unused parameter in SnapBuildGetOrBuildSnapshot function in snapbuild.c\n>\n> With commit 6c2003f8a1bbc7c192a2e83ec51581c018aa162f, SnapBuildBuildSnapshot no longer needs transaction id. This also makes the xid parameter in SnapBuildGetOrBuildSnapshot useless.\n> I couldn't see a reason to keep it and decided to remove it.\n>\n> Regards,\n> Melih\n>\n> +1, Good Catch.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 23 Sep 2022 08:58:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing unused parameter in SnapBuildGetOrBuildSnapshot" }, { "msg_contents": "On Fri, Sep 23, 2022 at 8:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Sep 21, 2022 at 8:11 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> >\n> > On Sep 21, 2022, 22:22 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> >\n> > Hi hackers,\n> >\n> > Sharing a small patch to remove an unused parameter in SnapBuildGetOrBuildSnapshot function in snapbuild.c\n> >\n> >\n> > +1, Good Catch.\n> >\n>\n> LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 09:52:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Removing unused parameter in SnapBuildGetOrBuildSnapshot" } ]
[ { "msg_contents": "Hi all,\n\nI am working on a project with LLVM ORC that led us to PostgreSQL as a\ntarget application. We were surprised by learning that PGSQL already uses\nLLVM ORC to JIT certain queries.\n\nI would love to know what motivated this feature and for what it is being\ncurrently used for, as it is not enabled by default.\n\nThanks.\n\n-- \nJoão Paulo L. de Carvalho\nPh.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\nPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canada\njoao.carvalho@ic.unicamp.br\njoao.carvalho@ualberta.ca\n\nHi all,I am working on a project with LLVM ORC that led us to PostgreSQL as a target application. We were surprised by learning that PGSQL already uses LLVM ORC to JIT certain queries.I would love to know what motivated this feature and for what it is being currently used for, as it is not enabled by default.Thanks.-- João Paulo L. de CarvalhoPh.D Computer Science |  IC-UNICAMP | Campinas , SP - BrazilPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canadajoao.carvalho@ic.unicamp.brjoao.carvalho@ualberta.ca", "msg_date": "Wed, 21 Sep 2022 10:16:46 -0600", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Query JITing with LLVM ORC" }, { "msg_contents": "On Thu, Sep 22, 2022 at 4:17 AM João Paulo Labegalini de Carvalho\n<jaopaulolc@gmail.com> wrote:\n> I am working on a project with LLVM ORC that led us to PostgreSQL as a target application. We were surprised by learning that PGSQL already uses LLVM ORC to JIT certain queries.\n\nIt JITs expressions but not whole queries. Query execution at the\ntuple-flow level is still done using a C call stack the same shape as\nthe query plan, but it *could* be transformed to a different control\nflow that could be run more efficiently and perhaps JITed. 
CCing\nAndres who developed all this and had some ideas about that...\n\n> I would love to know what motivated this feature and for what it is being currently used for,\n\nhttps://www.postgresql.org/docs/current/jit-reason.html\n\n> as it is not enabled by default.\n\nIt's enabled by default in v12 and higher (if you built with\n--with-llvm, as packagers do), but not always used:\n\nhttps://www.postgresql.org/docs/current/jit-decision.html\n\n\n", "msg_date": "Thu, 22 Sep 2022 07:28:19 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query JITing with LLVM ORC" }, { "msg_contents": "Hi Thomas,\n\nIt JITs expressions but not whole queries.\n\n\nThanks for the clarification.\n\n\n> Query execution at the\n> tuple-flow level is still done using a C call stack the same shape as\n> the query plan, but it *could* be transformed to a different control\n> flow that could be run more efficiently and perhaps JITed.\n\n\nI see, so there is room for extending the use of Orc JIT in PGSQL.\n\n\n> CCing\n> Andres who developed all this and had some ideas about that...\n>\n\nThanks for CCing Andres, it will be great to hear from him.\n\n\n> > I would love to know what motivated this feature and for what it is\n> being currently used for,\n>\n> https://www.postgresql.org/docs/current/jit-reason.html\n\n\nIn that link I found the README under src/backend/jit, which was very\nhelpful.\n\n> as it is not enabled by default.\n>\n> It's enabled by default in v12 and higher (if you built with\n> --with-llvm, as packagers do), but not always used:\n>\n> https://www.postgresql.org/docs/current/jit-decision.html\n>\n\nGood to know. 
I compiled from the REL_14_5 tag and did a simple experiment\nto contrast building with and w/o passing --with-llvm.\nI ran the TPC-C benchmark with 1 warehouse, 10 terminals, 20min of ramp-up,\nand 120 of measurement time.\nThe number of transactions per minute was about the same with & w/o JITing.\nIs this expected? Should I use a different benchmark to observe a\nperformance difference?\n\nRegards,\n\n-- \nJoão Paulo L. de Carvalho\nPh.D Computer Science | IC-UNICAMP | Campinas , SP - Brazil\nPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canada\njoao.carvalho@ic.unicamp.br\njoao.carvalho@ualberta.ca\n\nHi Thomas,\nIt JITs expressions but not whole queries. Thanks for the clarification. Query execution at the\ntuple-flow level is still done using a C call stack the same shape as\nthe query plan, but it *could* be transformed to a different control\nflow that could be run more efficiently and perhaps JITed. I see, so there is room for extending the use of Orc JIT in PGSQL. CCing\nAndres who developed all this and had some ideas about that...Thanks for CCing Andres, it will be great to hear from him. \n> I would love to know what motivated this feature and for what it is being currently used for,\n\nhttps://www.postgresql.org/docs/current/jit-reason.htmlIn that link I found the README under src/backend/jit, which was very helpful. \n> as it is not enabled by default.\n\nIt's enabled by default in v12 and higher (if you built with\n--with-llvm, as packagers do), but not always used:\n\nhttps://www.postgresql.org/docs/current/jit-decision.html\nGood to know. I compiled from the REL_14_5 tag and did a simple experiment to contrast building with and w/o passing --with-llvm.I ran the TPC-C benchmark with 1 warehouse, 10 terminals, 20min of ramp-up, and 120 of measurement time. The number of transactions per minute was about the same with & w/o JITing.Is this expected? 
Should I use a different benchmark to observe a performance difference?Regards,-- João Paulo L. de CarvalhoPh.D Computer Science |  IC-UNICAMP | Campinas , SP - BrazilPostdoctoral Research Fellow | University of Alberta | Edmonton, AB - Canadajoao.carvalho@ic.unicamp.brjoao.carvalho@ualberta.ca", "msg_date": "Wed, 21 Sep 2022 16:04:09 -0600", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Query JITing with LLVM ORC" }, { "msg_contents": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?= <jaopaulolc@gmail.com> writes:\n> Good to know. I compiled from the REL_14_5 tag and did a simple experiment\n> to contrast building with and w/o passing --with-llvm.\n> I ran the TPC-C benchmark with 1 warehouse, 10 terminals, 20min of ramp-up,\n> and 120 of measurement time.\n> The number of transactions per minute was about the same with & w/o JITing.\n> Is this expected? Should I use a different benchmark to observe a\n> performance difference?\n\nTPC-C is mostly short queries, so we aren't likely to choose to use JIT\n(and if we did, it'd likely be slower). You need a long query that will\nexecute the same expressions over and over for it to make sense to\ncompile them. Did you check whether any JIT was happening there?\n\nThere are a bunch of issues in this area concerning whether our cost\nmodels are good enough to accurately predict whether JIT is a good\nidea. But single-row fetches and updates are basically never going\nto use it, nor should they.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Sep 2022 18:35:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query JITing with LLVM ORC" }, { "msg_contents": "On Thu, Sep 22, 2022 at 10:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> =?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?= <jaopaulolc@gmail.com> writes:\n> > Good to know. 
I compiled from the REL_14_5 tag and did a simple experiment\n> > to contrast building with and w/o passing --with-llvm.\n> > I ran the TPC-C benchmark with 1 warehouse, 10 terminals, 20min of ramp-up,\n> > and 120 of measurement time.\n> > The number of transactions per minute was about the same with & w/o JITing.\n> > Is this expected? Should I use a different benchmark to observe a\n> > performance difference?\n>\n> TPC-C is mostly short queries, so we aren't likely to choose to use JIT\n> (and if we did, it'd likely be slower). You need a long query that will\n> execute the same expressions over and over for it to make sense to\n> compile them. Did you check whether any JIT was happening there?\n\nSee also the proposal thread which has some earlier numbers from TPC-H.\n\nhttps://www.postgresql.org/message-id/flat/20170901064131.tazjxwus3k2w3ybh%40alap3.anarazel.de\n\n\n", "msg_date": "Thu, 22 Sep 2022 10:44:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query JITing with LLVM ORC" }, { "msg_contents": "On Thu, Sep 22, 2022 at 10:04 AM João Paulo Labegalini de Carvalho\n<jaopaulolc@gmail.com> wrote:\n>building with and w/o passing --with-llvm\n\nBTW you can also just turn it off with runtime settings, no need to rebuild.\n\n\n", "msg_date": "Thu, 22 Sep 2022 10:54:14 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query JITing with LLVM ORC" }, { "msg_contents": "Tom & Thomas:\n\nThank you so much, those are very useful comments.\n\nI noticed that I didn't make my intentions very clear. 
My team's goal is to\nevaluate if there are any gains in JITing PostgreSQL itself, or at least\nparts of it, and not the expressions or parts of a query.\n\nThe rationale to use PostgreSQL is because DBs are long running\napplications and the cost of JITing can be amortized.\n\nWe have a prototype LLVM IR pass that outlines functions in a program to\nJIT and an ORC-based runtime to re-compile functions. Our goal is to see\nimprovements due to target/sub-target specialization.\n\nThe reason I was looking at benchmarks is to have a workload to profile\nPostgreSQL and find its bottlenecks. The hot functions would then be\noutlined for JITing.\n\n\n\nOn Wed., Sep. 21, 2022, 4:54 p.m. Thomas Munro, <thomas.munro@gmail.com>\nwrote:\n\n> On Thu, Sep 22, 2022 at 10:04 AM João Paulo Labegalini de Carvalho\n> <jaopaulolc@gmail.com> wrote:\n> >building with and w/o passing --with-llvm\n>\n> BTW you can also just turn it off with runtime settings, no need to\n> rebuild.\n>\n\nTom & Thomas:Thank you so much, those are very useful comments.I noticed that I didn't make my intentions very clear. My team's goal is to evaluate if there are any gains in JITing PostgreSQL itself, or at least parts of it, and not the expressions or parts of a query.The rationale to use PostgreSQL is because DBs are long running applications and the cost of JITing can be amortized.We have a prototype LLVM IR pass that outlines functions in a program to JIT and an ORC-based runtime to re-compile functions. Our goal is to see improvements due to target/sub-target specialization.The reason I was looking at benchmarks is to have a workload to profile PostgreSQL and find its bottlenecks. The hot functions would then be outlined for JITing.On Wed., Sep. 21, 2022, 4:54 p.m. 
Thomas Munro, <thomas.munro@gmail.com> wrote:On Thu, Sep 22, 2022 at 10:04 AM João Paulo Labegalini de Carvalho\n<jaopaulolc@gmail.com> wrote:\n>building with and w/o passing --with-llvm\n\nBTW you can also just turn it off with runtime settings, no need to rebuild.", "msg_date": "Wed, 21 Sep 2022 21:53:11 -0600", "msg_from": "=?UTF-8?Q?Jo=C3=A3o_Paulo_Labegalini_de_Carvalho?=\n <jaopaulolc@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Query JITing with LLVM ORC" } ]
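The two practical points from this thread, that JIT use is governed by cost thresholds and that it can be switched off at runtime without rebuilding, fit in a short psql session. This is a sketch, assuming a v12+ server built `--with-llvm`:

```sql
-- Lower the cost threshold so even a cheap query is JIT-compiled,
-- then inspect the plan; assumes a v12+ server built --with-llvm.
SET jit = on;
SET jit_above_cost = 0;
EXPLAIN (ANALYZE) SELECT sum(i) FROM generate_series(1, 1000) AS s(i);
-- When JIT ran, the EXPLAIN ANALYZE output ends with a "JIT:" section
-- (functions emitted, options, timing). Disabling it needs no rebuild:
SET jit = off;
```

With `jit = off`, or with the default `jit_above_cost` threshold left in place, the same EXPLAIN shows no JIT section, which matches the TPC-C result above: short queries never cross the threshold, so building with or without `--with-llvm` performs the same.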
[ { "msg_contents": "Currently, pg_basebackup has\n--create-slot option to create slot if not already exists or\n--slot to use existing slot\n\nWhich means it needs knowledge on if the slot with the given name already\nexists or not before invoking the command. If pg_basebackup --create-slot\n<> command fails for some reason after creating the slot. Re-triggering the\nsame command fails with ERROR slot already exists. Either then need to\ndelete the slot and retrigger Or need to add a check before retriggering\nthe command to check if the slot exists and based on the same alter the\ncommand to avoid passing --create-slot option. This poses\ninconvenience while automating on top of pg_basebackup. As checking for\nslot presence before invoking pg_basebackup unnecessarily involves issuing\nseparate SQL commands. Would be really helpful for such scenarios if\nsimilar to CREATE TABLE, pg_basebackup can have IF NOT EXISTS kind of\nsemantic. (Seems the limitation most likely is coming from CREATE\nREPLICATION SLOT protocol itself), Thoughts?\n\n-- \n*Ashwin Agrawal (VMware)*\n\nCurrently, pg_basebackup has\n--create-slot option to create slot if not already exists or\n--slot to use existing slot\n\nWhich means it needs knowledge on if the slot with the given name already exists or not before invoking the command. If pg_basebackup --create-slot <> command fails for some reason after creating the slot. Re-triggering the same command fails with ERROR slot already exists. Either then need to delete the slot and retrigger Or need to add a check before retriggering the command to check if the slot exists and based on the same alter the command to avoid passing --create-slot option. This poses inconvenience while automating on top of pg_basebackup. As checking for slot presence before invoking pg_basebackup unnecessarily involves issuing separate SQL commands. Would be really helpful for such scenarios if similar to CREATE TABLE, pg_basebackup can have IF NOT EXISTS kind of semantic. 
(Seems the limitation most likely is coming from CREATE REPLICATION SLOT protocol itself), Thoughts?\n\n-- Ashwin Agrawal (VMware)", "msg_date": "Wed, 21 Sep 2022 17:07:06 -0700", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": true, "msg_subject": "pg_basebackup --create-slot-if-not-exists?" }, { "msg_contents": "On Wednesday, September 21, 2022, Ashwin Agrawal <ashwinstar@gmail.com>\nwrote:\n\n> Currently, pg_basebackup has\n> --create-slot option to create slot if not already exists or\n> --slot to use existing slot\n>\n> Which means it needs knowledge on if the slot with the given name already\n> exists or not before invoking the command. If pg_basebackup --create-slot\n> <> command fails for some reason after creating the slot. Re-triggering the\n> same command fails with ERROR slot already exists. Either then need to\n> delete the slot and retrigger Or need to add a check before retriggering\n> the command to check if the slot exists and based on the same alter the\n> command to avoid passing --create-slot option. This poses\n> inconvenience while automating on top of pg_basebackup. As checking for\n> slot presence before invoking pg_basebackup unnecessarily involves issuing\n> separate SQL commands. Would be really helpful for such scenarios if\n> similar to CREATE TABLE, pg_basebackup can have IF NOT EXISTS kind of\n> semantic. (Seems the limitation most likely is coming from CREATE\n> REPLICATION SLOT protocol itself), Thoughts?\n>\n\nWhat’s the use case for automating pg_basebackup with a named replication\nslot created by the pg_basebackup command? Why can you not leverage a\ntemporary replication slot (i.e., omit --slot). 
ISTM the create option is\nbasically obsolete now.\n\nDavid J.\n\nOn Wednesday, September 21, 2022, Ashwin Agrawal <ashwinstar@gmail.com> wrote:\n\nCurrently, pg_basebackup has\n--create-slot option to create slot if not already exists or\n--slot to use existing slot\n\nWhich means it needs knowledge on if the slot with the given name already exists or not before invoking the command. If pg_basebackup --create-slot <> command fails for some reason after creating the slot. Re-triggering the same command fails with ERROR slot already exists. Either then need to delete the slot and retrigger Or need to add a check before retriggering the command to check if the slot exists and based on the same alter the command to avoid passing --create-slot option. This poses inconvenience while automating on top of pg_basebackup. As checking for slot presence before invoking pg_basebackup unnecessarily involves issuing separate SQL commands. Would be really helpful for such scenarios if similar to CREATE TABLE, pg_basebackup can have IF NOT EXISTS kind of semantic. (Seems the limitation most likely is coming from CREATE REPLICATION SLOT protocol itself), Thoughts?\n\nWhat’s the use case for automating pg_basebackup with a named replication slot created by the pg_basebackup command? Why can you not leverage a temporary replication slot (i.e., omit --slot). ISTM the create option is basically obsolete now.\n\nDavid J.", "msg_date": "Wed, 21 Sep 2022 17:34:20 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup --create-slot-if-not-exists?" }, { "msg_contents": "On Wed, Sep 21, 2022 at 05:34:20PM -0700, David G. Johnston wrote:\n> What’s the use case for automating pg_basebackup with a named replication\n> slot created by the pg_basebackup command? Why can you not leverage a\n> temporary replication slot (i.e., omit --slot). 
ISTM the create option is\n> basically obsolete now.\n\n+1.\n\nPerhaps the ask would ease some monitoring around pg_replication_slots\nwith a fixed slot name? One thing that could come into play is the\npossibility to enforce the use of a temporary slot with a name given\nby -S as pg_basebackup makes it permanent in this case, still the\ntemporary slot name being pg_basebackup_${PID} makes the slot\nsearchable with a pattern.\n--\nMichael", "msg_date": "Thu, 22 Sep 2022 09:46:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup --create-slot-if-not-exists?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Sep 21, 2022 at 05:34:20PM -0700, David G. Johnston wrote:\n>> What’s the use case for automating pg_basebackup with a named replication\n>> slot created by the pg_basebackup command? Why can you not leverage a\n>> temporary replication slot (i.e., omit —slot). ISTM the create option is\n>> basically obsolete now.\n\n> +1.\n\nISTM there'd also be some security concerns, ie what if there's a\npre-existing slot (created by a hostile user, perhaps) that has\nproperties different from what you expect? I realize that slot\ncreation is a pretty high-privilege operation, but it's not\nsuperuser-only.\n\nIn any case I agree with the point that --create-slot seems\nrather obsolete. If you are trying to resume in a previous\nreplication stream (which seems like the point of persistent\nslots) then the slot had better already exist. If you are\nsatisfied with just starting replication from the current\ninstant, then a temp slot seems like what you want.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 21 Sep 2022 21:12:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup --create-slot-if-not-exists?" }, { "msg_contents": "On Wed, Sep 21, 2022 at 5:34 PM David G. 
Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Wednesday, September 21, 2022, Ashwin Agrawal <ashwinstar@gmail.com>\n> wrote:\n>\n>> Currently, pg_basebackup has\n>> --create-slot option to create slot if not already exists or\n>> --slot to use existing slot\n>>\n>> Which means it needs knowledge on if the slot with the given name already\n>> exists or not before invoking the command. If pg_basebackup --create-slot\n>> <> command fails for some reason after creating the slot. Re-triggering the\n>> same command fails with ERROR slot already exists. Either then need to\n>> delete the slot and retrigger Or need to add a check before retriggering\n>> the command to check if the slot exists and based on the same alter the\n>> command to avoid passing --create-slot option. This poses\n>> inconvenience while automating on top of pg_basebackup. As checking for\n>> slot presence before invoking pg_basebackup unnecessarily involves issuing\n>> separate SQL commands. Would be really helpful for such scenarios if\n>> similar to CREATE TABLE, pg_basebackup can have IF NOT EXISTS kind of\n>> semantic. (Seems the limitation most likely is coming from CREATE\n>> REPLICATION SLOT protocol itself), Thoughts?\n>>\n>\n> What’s the use case for automating pg_basebackup with a named replication\n> slot created by the pg_basebackup command?\n>\n\nGreenplum runs N (some hundred) number of PostgreSQL instances to form a\nsharded database cluster. Hence, automation/scripts are in place to create\nreplicas, failover failback for these N instances and such. As Michael said\nfor predictable management and monitoring of the slot across these many\ninstances, specific named replication slots are used across all these\ninstances. 
These named replication slots are used both for pg_basebackup\nfollowed by streaming replication.\n\nWhy can you not leverage a temporary replication slot (i.e., omit —slot).\n> ISTM the create option is basically obsolete now.\n>\n\nWe would be more than happy to use a temporary replication slot if it\nprovided full functionality. It might be a gap in my understanding, but I\nfeel a temporary replication slot only protects WAL deletion for the\nduration of pg_basebackup. It doesn't protect the window between\npg_basebackup completion and streaming replication starting.\nWith --write-recovery-conf option \"primary_slot_name\" only gets written\nto postgresql.auto.conf if the named replication slot is provided, which\nmakes sure the same slot will be used for pg_basebackup and streaming\nreplication hence will keep the WAL around till streaming replica connects\nafter pg_basebackup. How to avoid this window with a temp slot?\n\n-- \n*Ashwin Agrawal (VMware)*", "msg_date": "Thu, 22 Sep 2022 17:18:53 -0700", "msg_from": "Ashwin Agrawal <ashwinstar@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup --create-slot-if-not-exists?" 
}, { "msg_contents": "Hi,\n\nOn Wed, Sep 21, 2022 at 09:12:04PM -0400, Tom Lane wrote:\n> In any case I agree with the point that --create-slot seems\n> rather obsolete. If you are trying to resume in a previous\n> replication stream (which seems like the point of persistent\n> slots) then the slot had better already exist. If you are\n> satisfied with just starting replication from the current\n> instant, then a temp slot seems like what you want.\n\nOne advantage of using a permanent slot is that it's getting written\ninto the recovery configuration when you use --write-recovery-conf and\nyou only need to start the standby after initial bootstrap to have it\nconnect using the slot.\n\nNot sure that's worth keeping it around, but it makes automating things\nsomewhat simpler I guess. I do somewhat agree with the thread starter,\nthat --create-slot-if-not-exists would make things even easier, but in\nthe light of your concerns regarding security it's probably not the best\nidea and would make things even more convoluted than they are now.\n\n\nMichael\n\n\n", "msg_date": "Sun, 25 Sep 2022 21:41:45 +0200", "msg_from": "Michael Banck <mbanck@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup --create-slot-if-not-exists?" } ]
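The check-then-create retry logic described at the top of this thread can be sketched in a few lines of shell. This is a hypothetical sketch: the slot name is invented, and `slot_count` is a stub standing in for a real query against the primary such as `psql -XAt -c "SELECT count(*) FROM pg_replication_slots WHERE slot_name = 'base_slot'"`.

```shell
# Sketch of the check-then-create wrapper the thread starter describes;
# all names here are made up for illustration. slot_count is a stub that
# would normally run the pg_replication_slots query shown above.
slot_count() { echo "${FAKE_COUNT:-0}"; }

basebackup_slot_flags() {
    if test "$(slot_count)" = "0"; then
        # First attempt: no slot yet, ask pg_basebackup to create it.
        echo "--create-slot --slot=base_slot"
    else
        # Retry after a failed run: the slot survived, just reuse it.
        echo "--slot=base_slot"
    fi
}

FAKE_COUNT=0
basebackup_slot_flags    # prints: --create-slot --slot=base_slot
FAKE_COUNT=1
basebackup_slot_flags    # prints: --slot=base_slot
```

Re-running the wrapper after a failed first attempt falls back to plain `--slot`, which is exactly the behavior the proposed `--create-slot-if-not-exists` would provide in a single step.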
[ { "msg_contents": "meson: Add initial version of meson based build system\n\nAutoconf is showing its age, fewer and fewer contributors know how to wrangle\nit. Recursive make has a lot of hard to resolve dependency issues and slow\nincremental rebuilds. Our home-grown MSVC build system is hard to maintain for\ndevelopers not using Windows and runs tests serially. While these and other\nissues could individually be addressed with incremental improvements, together\nthey seem best addressed by moving to a more modern build system.\n\nAfter evaluating different build system choices, we chose to use meson, to a\ngood degree based on the adoption by other open source projects.\n\nWe decided that it's more realistic to commit a relatively early version of\nthe new build system and mature it in tree.\n\nThis commit adds an initial version of a meson based build system. It supports\nbuilding postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD,\nSolaris and Windows (however only gcc is supported on aix, solaris). For\nWindows/MSVC postgres can now be built with ninja (faster, particularly for\nincremental builds) and msbuild (supporting the visual studio GUI, but\nbuilding slower).\n\nSeveral aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM\nbitcode generation, documentation adjustments) are done in subsequent commits\nrequiring further review. Other aspects (e.g. not installing test-only\nextensions) are not yet addressed.\n\nWhen building on Windows with msbuild, builds are slower when using a visual\nstudio version older than 2019, because those versions do not support\nMultiToolTask, required by meson for intra-target parallelism.\n\nThe plan is to remove the MSVC specific build system in src/tools/msvc soon\nafter reaching feature parity. However, we're not planning to remove the\nautoconf/make build system in the near future. 
Likely we're going to keep at\nleast the parts required for PGXS to keep working around until all supported\nversions build with meson.\n\nSome initial help for postgres developers is at\nhttps://wiki.postgresql.org/wiki/Meson\n\nWith contributions from Thomas Munro, John Naylor, Stone Tickle and others.\n\nAuthor: Andres Freund <andres@anarazel.de>\nAuthor: Nazir Bilal Yavuz <byavuz81@gmail.com>\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nReviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>\nDiscussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e6927270cd18d535b77cbe79c55c6584351524be\n\nModified Files\n--------------\nconfig/meson.build | 4 +\nconfigure | 6 +\nconfigure.ac | 6 +\ncontrib/adminpack/meson.build | 23 +\ncontrib/amcheck/meson.build | 37 +\ncontrib/auth_delay/meson.build | 5 +\ncontrib/auto_explain/meson.build | 16 +\ncontrib/basebackup_to_shell/meson.build | 22 +\ncontrib/basic_archive/meson.build | 23 +\ncontrib/bloom/meson.build | 36 +\ncontrib/bool_plperl/meson.build | 42 +\ncontrib/btree_gin/meson.build | 54 +\ncontrib/btree_gist/meson.build | 84 +\ncontrib/citext/meson.build | 34 +\ncontrib/cube/meson.build | 53 +\ncontrib/dblink/meson.build | 31 +\ncontrib/dict_int/meson.build | 22 +\ncontrib/dict_xsyn/meson.build | 29 +\ncontrib/earthdistance/meson.build | 23 +\ncontrib/file_fdw/meson.build | 22 +\ncontrib/fuzzystrmatch/meson.build | 26 +\ncontrib/hstore/meson.build | 44 +\ncontrib/hstore_plperl/meson.build | 43 +\ncontrib/hstore_plpython/meson.build | 37 +\ncontrib/intagg/meson.build | 6 +\ncontrib/intarray/meson.build | 37 +\ncontrib/isn/meson.build | 33 +\ncontrib/jsonb_plperl/meson.build | 43 +\ncontrib/jsonb_plpython/meson.build | 36 +\ncontrib/lo/meson.build | 27 +\ncontrib/ltree/meson.build | 44 +\ncontrib/ltree_plpython/meson.build | 37 +\ncontrib/meson.build | 66 +\ncontrib/oid2name/meson.build | 17 
+\ncontrib/old_snapshot/meson.build | 15 +\ncontrib/pageinspect/meson.build | 50 +\ncontrib/passwordcheck/meson.build | 30 +\ncontrib/pg_buffercache/meson.build | 27 +\ncontrib/pg_freespacemap/meson.build | 29 +\ncontrib/pg_prewarm/meson.build | 27 +\ncontrib/pg_stat_statements/meson.build | 35 +\ncontrib/pg_surgery/meson.build | 25 +\ncontrib/pg_trgm/meson.build | 35 +\ncontrib/pg_visibility/meson.build | 26 +\ncontrib/pg_walinspect/meson.build | 27 +\ncontrib/pgcrypto/meson.build | 100 +\ncontrib/pgrowlocks/meson.build | 27 +\ncontrib/pgstattuple/meson.build | 31 +\ncontrib/postgres_fdw/meson.build | 34 +\ncontrib/seg/meson.build | 51 +\ncontrib/sepgsql/meson.build | 34 +\ncontrib/spi/meson.build | 50 +\ncontrib/sslinfo/meson.build | 21 +\ncontrib/tablefunc/meson.build | 24 +\ncontrib/tcn/meson.build | 25 +\ncontrib/test_decoding/meson.build | 63 +\ncontrib/tsm_system_rows/meson.build | 24 +\ncontrib/tsm_system_time/meson.build | 24 +\ncontrib/unaccent/meson.build | 32 +\ncontrib/uuid-ossp/meson.build | 31 +\ncontrib/vacuumlo/meson.build | 17 +\ncontrib/xml2/meson.build | 32 +\ndoc/src/sgml/meson.build | 254 ++\ndoc/src/sgml/version.sgml.in | 2 +\nmeson.build | 3025 ++++++++++++++++++++\nmeson_options.txt | 185 ++\nsrc/backend/access/brin/meson.build | 12 +\nsrc/backend/access/common/meson.build | 18 +\nsrc/backend/access/gin/meson.build | 17 +\nsrc/backend/access/gist/meson.build | 13 +\nsrc/backend/access/hash/meson.build | 12 +\nsrc/backend/access/heap/meson.build | 11 +\nsrc/backend/access/index/meson.build | 6 +\nsrc/backend/access/meson.build | 13 +\nsrc/backend/access/nbtree/meson.build | 13 +\nsrc/backend/access/rmgrdesc/meson.build | 26 +\nsrc/backend/access/spgist/meson.build | 13 +\nsrc/backend/access/table/meson.build | 6 +\nsrc/backend/access/tablesample/meson.build | 5 +\nsrc/backend/access/transam/meson.build | 31 +\nsrc/backend/backup/meson.build | 13 +\nsrc/backend/bootstrap/meson.build | 28 +\nsrc/backend/catalog/meson.build | 44 
+\nsrc/backend/commands/meson.build | 51 +\nsrc/backend/executor/meson.build | 67 +\nsrc/backend/foreign/meson.build | 3 +\nsrc/backend/jit/llvm/meson.build | 73 +\nsrc/backend/jit/meson.build | 3 +\nsrc/backend/lib/meson.build | 12 +\nsrc/backend/libpq/meson.build | 32 +\nsrc/backend/main/meson.build | 2 +\nsrc/backend/meson.build | 190 ++\nsrc/backend/nodes/meson.build | 29 +\nsrc/backend/optimizer/geqo/meson.build | 17 +\nsrc/backend/optimizer/meson.build | 5 +\nsrc/backend/optimizer/path/meson.build | 11 +\nsrc/backend/optimizer/plan/meson.build | 10 +\nsrc/backend/optimizer/prep/meson.build | 7 +\nsrc/backend/optimizer/util/meson.build | 16 +\nsrc/backend/parser/meson.build | 48 +\nsrc/backend/partitioning/meson.build | 5 +\nsrc/backend/po/meson.build | 1 +\nsrc/backend/port/meson.build | 31 +\nsrc/backend/port/win32/meson.build | 6 +\nsrc/backend/postmaster/meson.build | 15 +\nsrc/backend/regex/meson.build | 8 +\n.../replication/libpqwalreceiver/meson.build | 13 +\nsrc/backend/replication/logical/meson.build | 14 +\nsrc/backend/replication/meson.build | 51 +\nsrc/backend/replication/pgoutput/meson.build | 10 +\nsrc/backend/rewrite/meson.build | 9 +\nsrc/backend/snowball/meson.build | 88 +\nsrc/backend/statistics/meson.build | 6 +\nsrc/backend/storage/buffer/meson.build | 7 +\nsrc/backend/storage/file/meson.build | 8 +\nsrc/backend/storage/freespace/meson.build | 5 +\nsrc/backend/storage/ipc/meson.build | 20 +\nsrc/backend/storage/large_object/meson.build | 3 +\nsrc/backend/storage/lmgr/meson.build | 13 +\nsrc/backend/storage/meson.build | 9 +\nsrc/backend/storage/page/meson.build | 5 +\nsrc/backend/storage/smgr/meson.build | 4 +\nsrc/backend/storage/sync/meson.build | 4 +\nsrc/backend/tcop/meson.build | 8 +\nsrc/backend/tsearch/meson.build | 21 +\nsrc/backend/utils/activity/meson.build | 18 +\nsrc/backend/utils/adt/meson.build | 131 +\nsrc/backend/utils/cache/meson.build | 16 +\nsrc/backend/utils/error/meson.build | 6 +\nsrc/backend/utils/fmgr/meson.build | 8 
+\nsrc/backend/utils/hash/meson.build | 4 +\nsrc/backend/utils/init/meson.build | 4 +\nsrc/backend/utils/mb/conversion_procs/meson.build | 36 +\nsrc/backend/utils/mb/meson.build | 9 +\nsrc/backend/utils/meson.build | 17 +\nsrc/backend/utils/misc/meson.build | 35 +\nsrc/backend/utils/mmgr/meson.build | 10 +\nsrc/backend/utils/resowner/meson.build | 3 +\nsrc/backend/utils/sort/meson.build | 9 +\nsrc/backend/utils/time/meson.build | 4 +\nsrc/bin/initdb/meson.build | 30 +\nsrc/bin/initdb/po/meson.build | 1 +\nsrc/bin/meson.build | 20 +\nsrc/bin/pg_amcheck/meson.build | 27 +\nsrc/bin/pg_amcheck/po/meson.build | 1 +\nsrc/bin/pg_archivecleanup/meson.build | 19 +\nsrc/bin/pg_archivecleanup/po/meson.build | 1 +\nsrc/bin/pg_basebackup/meson.build | 61 +\nsrc/bin/pg_basebackup/po/meson.build | 1 +\nsrc/bin/pg_checksums/meson.build | 21 +\nsrc/bin/pg_checksums/po/meson.build | 1 +\nsrc/bin/pg_config/meson.build | 19 +\nsrc/bin/pg_config/po/meson.build | 1 +\nsrc/bin/pg_controldata/meson.build | 19 +\nsrc/bin/pg_controldata/po/meson.build | 1 +\nsrc/bin/pg_ctl/meson.build | 22 +\nsrc/bin/pg_ctl/po/meson.build | 1 +\nsrc/bin/pg_dump/meson.build | 75 +\nsrc/bin/pg_dump/po/meson.build | 1 +\nsrc/bin/pg_resetwal/meson.build | 20 +\nsrc/bin/pg_resetwal/po/meson.build | 1 +\nsrc/bin/pg_rewind/meson.build | 42 +\nsrc/bin/pg_rewind/po/meson.build | 1 +\nsrc/bin/pg_test_fsync/meson.build | 21 +\nsrc/bin/pg_test_fsync/po/meson.build | 1 +\nsrc/bin/pg_test_timing/meson.build | 19 +\nsrc/bin/pg_test_timing/po/meson.build | 1 +\nsrc/bin/pg_upgrade/meson.build | 40 +\nsrc/bin/pg_upgrade/po/meson.build | 1 +\nsrc/bin/pg_verifybackup/meson.build | 33 +\nsrc/bin/pg_verifybackup/po/meson.build | 1 +\nsrc/bin/pg_waldump/meson.build | 30 +\nsrc/bin/pg_waldump/po/meson.build | 1 +\nsrc/bin/pgbench/meson.build | 38 +\nsrc/bin/pgevent/meson.build | 24 +\nsrc/bin/psql/meson.build | 67 +\nsrc/bin/psql/po/meson.build | 1 +\nsrc/bin/scripts/meson.build | 51 +\nsrc/bin/scripts/po/meson.build | 1 
+\nsrc/common/meson.build | 174 ++\nsrc/common/unicode/meson.build | 106 +\nsrc/fe_utils/meson.build | 29 +\nsrc/include/catalog/meson.build | 142 +\nsrc/include/meson.build | 173 ++\nsrc/include/nodes/meson.build | 58 +\nsrc/include/pg_config_ext.h.meson | 7 +\nsrc/include/storage/meson.build | 19 +\nsrc/include/utils/meson.build | 57 +\nsrc/interfaces/ecpg/compatlib/meson.build | 22 +\nsrc/interfaces/ecpg/ecpglib/meson.build | 37 +\nsrc/interfaces/ecpg/ecpglib/po/meson.build | 1 +\nsrc/interfaces/ecpg/include/meson.build | 51 +\nsrc/interfaces/ecpg/meson.build | 9 +\nsrc/interfaces/ecpg/pgtypeslib/meson.build | 30 +\nsrc/interfaces/ecpg/preproc/meson.build | 104 +\nsrc/interfaces/ecpg/preproc/po/meson.build | 1 +\n.../ecpg/test/compat_informix/meson.build | 31 +\nsrc/interfaces/ecpg/test/compat_oracle/meson.build | 20 +\nsrc/interfaces/ecpg/test/connect/meson.build | 20 +\nsrc/interfaces/ecpg/test/meson.build | 84 +\nsrc/interfaces/ecpg/test/pgtypeslib/meson.build | 21 +\nsrc/interfaces/ecpg/test/preproc/meson.build | 37 +\nsrc/interfaces/ecpg/test/sql/meson.build | 46 +\nsrc/interfaces/ecpg/test/thread/meson.build | 21 +\nsrc/interfaces/libpq/meson.build | 108 +\nsrc/interfaces/libpq/po/meson.build | 1 +\nsrc/interfaces/libpq/test/meson.build | 15 +\nsrc/interfaces/meson.build | 2 +\nsrc/meson.build | 12 +\nsrc/pl/meson.build | 5 +\nsrc/pl/plperl/meson.build | 90 +\nsrc/pl/plperl/po/meson.build | 1 +\nsrc/pl/plpgsql/meson.build | 1 +\nsrc/pl/plpgsql/src/meson.build | 84 +\nsrc/pl/plpgsql/src/po/meson.build | 1 +\nsrc/pl/plpython/meson.build | 99 +\nsrc/pl/plpython/po/meson.build | 1 +\nsrc/pl/tcl/meson.build | 55 +\nsrc/pl/tcl/po/meson.build | 1 +\nsrc/port/meson.build | 184 ++\nsrc/test/authentication/meson.build | 11 +\nsrc/test/icu/meson.build | 11 +\nsrc/test/isolation/meson.build | 58 +\nsrc/test/kerberos/meson.build | 15 +\nsrc/test/ldap/meson.build | 11 +\nsrc/test/meson.build | 25 +\nsrc/test/modules/brin/meson.build | 16 
+\nsrc/test/modules/commit_ts/meson.build | 18 +\nsrc/test/modules/delay_execution/meson.build | 18 +\nsrc/test/modules/dummy_index_am/meson.build | 23 +\nsrc/test/modules/dummy_seclabel/meson.build | 23 +\nsrc/test/modules/libpq_pipeline/meson.build | 21 +\nsrc/test/modules/meson.build | 27 +\nsrc/test/modules/plsample/meson.build | 23 +\nsrc/test/modules/snapshot_too_old/meson.build | 14 +\nsrc/test/modules/spgist_name_ops/meson.build | 23 +\n.../modules/ssl_passphrase_callback/meson.build | 48 +\nsrc/test/modules/test_bloomfilter/meson.build | 23 +\nsrc/test/modules/test_ddl_deparse/meson.build | 43 +\nsrc/test/modules/test_extensions/meson.build | 45 +\nsrc/test/modules/test_ginpostinglist/meson.build | 23 +\nsrc/test/modules/test_integerset/meson.build | 23 +\nsrc/test/modules/test_lfind/meson.build | 23 +\nsrc/test/modules/test_misc/meson.build | 12 +\nsrc/test/modules/test_oat_hooks/meson.build | 18 +\nsrc/test/modules/test_parser/meson.build | 23 +\nsrc/test/modules/test_pg_dump/meson.build | 22 +\nsrc/test/modules/test_predtest/meson.build | 23 +\nsrc/test/modules/test_rbtree/meson.build | 23 +\nsrc/test/modules/test_regex/meson.build | 24 +\nsrc/test/modules/test_rls_hooks/meson.build | 17 +\nsrc/test/modules/test_shm_mq/meson.build | 27 +\nsrc/test/modules/unsafe_tests/meson.build | 11 +\nsrc/test/modules/worker_spi/meson.build | 26 +\nsrc/test/perl/meson.build | 12 +\nsrc/test/recovery/meson.build | 43 +\nsrc/test/regress/meson.build | 62 +\nsrc/test/ssl/meson.build | 13 +\nsrc/test/subscription/meson.build | 42 +\nsrc/timezone/meson.build | 48 +\nsrc/timezone/tznames/meson.build | 21 +\nsrc/tools/find_meson | 30 +\nsrc/tools/gen_export.pl | 81 +\nsrc/tools/pgflex | 85 +\nsrc/tools/testwrap | 47 +\n265 files changed, 10962 insertions(+)", "msg_date": "Thu, 22 Sep 2022 05:47:52 +0000", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "pgsql: meson: Add initial version of meson based build system" }, { "msg_contents": 
"Re: Andres Freund\n> https://git.postgresql.org/pg/commitdiff/e6927270cd18d535b77cbe79c55c6584351524be\n\nThis commit broke VPATH builds when the original source directory\ncontains symlinks.\n\nThe $PWD is /home/myon/postgresql/pg/master, but the actual directory\nis /home/myon/projects/postgresql/pg/postgresql. When I\n mkdir build; cd build && ../configure\nthere, I get a build directory missing a lot of files/directories:\n\n$ ls build/\nconfig.log config.status* GNUmakefile meson.build src/\n$ ls build/src/\nbackend/ include/ interfaces/ Makefile.global Makefile.port@\n$ ls build/src/backend/\nport/\n\nGiven there are no other changes I think this bit is at fault:\n\n> Modified Files\n> --------------\n> configure.ac | 6 +\n\n+# Ensure that any meson build directories would reconfigure and see that\n+# there's a conflicting in-tree build and can error out.\n+if test \"$vpath_build\"=\"no\"; then\n+ touch meson.build\n+fi\n\nChristoph\n\n\n", "msg_date": "Wed, 17 Apr 2024 15:42:28 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: meson: Add initial version of meson based build system" }, { "msg_contents": "Hi,\n\nUh, huh.\n\nOn 2024-04-17 15:42:28 +0200, Christoph Berg wrote:\n> Re: Andres Freund\n> > https://git.postgresql.org/pg/commitdiff/e6927270cd18d535b77cbe79c55c6584351524be\n>\n> This commit broke VPATH builds when the original source directory\n> contains symlinks.\n\nI.e. 
a symlink to the source directory, not a symlink inside the source\ndirectory.\n\n\n> Given there are no other changes I think this bit is at fault:\n>\n> > Modified Files\n> > --------------\n> > configure.ac | 6 +\n>\n> +# Ensure that any meson build directories would reconfigure and see that\n> +# there's a conflicting in-tree build and can error out.\n> +if test \"$vpath_build\"=\"no\"; then\n> + touch meson.build\n> +fi\n\nArgh, this is missing spaces around the '=', leading to the branch always\nbeing entered.\n\nWhat I don't understand is how that possibly could affect the prep_buildtree\nstep, that happens earlier.\n\nHm.\n\nUh, I don't think it does? Afaict this failure is entirely unrelated to 'touch\nmeson.build'? From what I can tell the problem is that config/prep_buildtree\nis invoked with the symlinked path, and that that doesn't seem to work:\n\nbash -x /home/andres/src/postgresql-via-symlink/config/prep_buildtree /home/andres/src/postgresql-via-symlink .\n++ basename /home/andres/src/postgresql-via-symlink/config/prep_buildtree\n+ me=prep_buildtree\n+ help='Usage: prep_buildtree sourcetree [buildtree]'\n+ test -z /home/andres/src/postgresql-via-symlink\n+ test x/home/andres/src/postgresql-via-symlink = x--help\n+ unset CDPATH\n++ cd /home/andres/src/postgresql-via-symlink\n++ pwd\n+ sourcetree=/home/andres/src/postgresql-via-symlink\n++ cd .\n++ pwd\n+ buildtree=/tmp/pgs\n++ find /home/andres/src/postgresql-via-symlink -type d '(' '(' -name CVS -prune ')' -o '(' -name .git -prune ')' -o -print ')'\n++ grep -v '/home/andres/src/postgresql-via-symlink/doc/src/sgml/\\+'\n++ find /home/andres/src/postgresql-via-symlink -name Makefile -print -o -name GNUmakefile -print\n++ grep -v /home/andres/src/postgresql-via-symlink/doc/src/sgml/images/\n+ exit 0\n\nNote that the find does not return anything.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Apr 2024 16:00:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true,
"msg_subject": "Re: pgsql: meson: Add initial version of meson based build system" }, { "msg_contents": "Re: Andres Freund\n> > This commit broke VPATH builds when the original source directory\n> > contains symlinks.\n> \n> I.e. a symlink to the source directory, not a symlink inside the source\n> directory.\n\nYes.\n\n> Argh, this is missing spaces around the '=', leading to the branch always\n> being entered.\n\nGlad I found at least something :)\n\n> Uh, I don't think it does? Afaict this failure is entirely unrelated to 'touch\n> meson.build'? From what I can tell the problem is that config/prep_buildtree\n> is invoked with the symlinked path, and that that doesn't seem to work:\n\nApparently I messed up both the git bisect run and manually\nconfirming the problem later. Trying again now, the problem has\nexisted at least since 2002, probably earlier.\n\nI've been using this directory layout for years, apparently so far\nI've always only used non-VPATH builds or dpkg-buildpackage, which\nprobably canonicalizes the path before building, given it works.\n\nSince no one else has been complaining, it might not be worth fixing.\n\nSorry for the noise!\n\nChristoph\n\n\n", "msg_date": "Thu, 18 Apr 2024 10:54:18 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: meson: Add initial version of meson based build system" }, { "msg_contents": "Hi,\n\nOn 2024-04-18 10:54:18 +0200, Christoph Berg wrote:\n> Re: Andres Freund\n> > > This commit broke VPATH builds when the original source directory\n> > > contains symlinks.\n> > Argh, this is missing spaces around the '=', leading to the branch always\n> > being entered.\n> \n> Glad I found at least something :)\n\nYep :). 
I pushed a fix to that now.\n\n\n> I've been using this directory layout for years, apparently so far\n> I've always only used non-VPATH builds or dpkg-buildpackage, which\n> probably canonicalizes the path before building, given it works.\n\nI wonder if perhaps find's behaviour might have changed at some point?\n\n\n> Since no one else has been complaining, it might not be worth fixing.\n\nI'm personally not excited about working on fixing this, but if somebody else\nwants to spend the cycles to make this work reliably...\n\nIt's certainly interesting that we have some code worrying about symlinks in\nconfigure.ac:\n# prepare build tree if outside source tree\n# Note 1: test -ef might not exist, but it's more reliable than `pwd`.\n# Note 2: /bin/pwd might be better than shell's built-in at getting\n# a symlink-free name.\n\nBut we only use this to determine if we're doing a vpath build, not as the\npath passed to prep_buildtree...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Apr 2024 07:58:22 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: pgsql: meson: Add initial version of meson based build system" } ]
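Both shell-level pitfalls identified in this thread can be reproduced outside the PostgreSQL tree; the snippet below is purely illustrative and only uses throwaway temporary directories.

```shell
# 1. The configure.ac bug: without spaces around '=', test receives the
#    single non-empty word "yes=no", which is always true, so the branch
#    is entered unconditionally.
vpath_build=yes
if test "$vpath_build"="no"; then echo "buggy: branch taken anyway"; fi
if test "$vpath_build" = "no"; then :; else echo "fixed: branch skipped"; fi

# 2. The prep_buildtree symptom: by default, find does not follow a
#    symlink given as its starting point, so a symlinked source tree
#    yields no directories at all.
real=$(mktemp -d)
mkdir "$real/sub"
ln -s "$real" "$real.link"
find "$real.link" -type d | wc -l      # 0: the symlink itself is not of type d
find -L "$real.link" -type d | wc -l   # 2: the link is followed, so find descends
```

The second half matches Andres's trace: with the default no-follow behavior, prep_buildtree's find never descends into the symlinked source tree, so no build subdirectories get created. Passing a canonicalized path (as dpkg-buildpackage apparently does) or following the link avoids the problem.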
[ { "msg_contents": "Hi\n\nToday I found new bug\n\n-o llvmjit_wrap.o llvmjit_wrap.cpp -MMD -MP -MF .deps/llvmjit_wrap.Po\nllvmjit.c: In function ‘llvm_resolve_symbols’:\nllvmjit.c:1115:57: error: ‘LLVMJITCSymbolMapPair’ undeclared (first use in\nthis function); did you mean ‘LLVMOrcCSymbolMapPair’?\n 1115 | LLVMOrcCSymbolMapPairs symbols =\npalloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n |\n^~~~~~~~~~~~~~~~~~~~~\n |\nLLVMOrcCSymbolMapPair\nllvmjit.c:1115:57: note: each undeclared identifier is reported only once\nfor each function it appears in\nllvmjit.c: In function ‘llvm_create_jit_instance’:\nllvmjit.c:1233:19: error: too few arguments to function\n‘LLVMOrcCreateCustomCAPIDefinitionGenerator’\n 1233 | ref_gen =\nLLVMOrcCreateCustomCAPIDefinitionGenerator(llvm_resolve_symbols, NULL);\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn file included from llvmjit.c:22:\n/usr/include/llvm-c/Orc.h:997:31: note: declared here\n 997 | LLVMOrcDefinitionGeneratorRef\nLLVMOrcCreateCustomCAPIDefinitionGenerator(\n |\n^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nI have fedora 37\n\nRegards\n\nPavel", "msg_date": "Thu, 22 Sep 2022 12:42:58 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "cannot to compile master in llvm_resolve_symbols" }, { "msg_contents": "On Thu, Sep 22, 2022 at 10:43 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> Today I found new bug\n>\n> -o llvmjit_wrap.o llvmjit_wrap.cpp -MMD -MP -MF .deps/llvmjit_wrap.Po\n> llvmjit.c: In function ‘llvm_resolve_symbols’:\n> llvmjit.c:1115:57: error: ‘LLVMJITCSymbolMapPair’ undeclared (first use in this function); did you mean ‘LLVMOrcCSymbolMapPair’?\n>  1115 |         LLVMOrcCSymbolMapPairs symbols = palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n>       |                                                          ^~~~~~~~~~~~~~~~~~~~~\n>       |                                                          LLVMOrcCSymbolMapPair\n\nHi Pavel,\n\nSome changes are needed for LLVM 15. I'm working on a patch, but it's\nnot quite ready yet. Use LLVM 14 for now. 
There are a few\nsuperficial changes like that that are very easy to fix (that struct's\nname changed), but the real problem is that in LLVM 15 you have to do\nextra work to track the type of pointers and pass them into API calls\nthat we have a lot of. https://llvm.org/docs/OpaquePointers.html\n\n\n", "msg_date": "Thu, 22 Sep 2022 22:54:00 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cannot to compile master in llvm_resolve_symbols" }, { "msg_contents": "čt 22. 9. 2022 v 12:54 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Thu, Sep 22, 2022 at 10:43 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > Today I found new bug\n> >\n> > -o llvmjit_wrap.o llvmjit_wrap.cpp -MMD -MP -MF .deps/llvmjit_wrap.Po\n> > llvmjit.c: In function ‘llvm_resolve_symbols’:\n> > llvmjit.c:1115:57: error: ‘LLVMJITCSymbolMapPair’ undeclared (first use\n> in this function); did you mean ‘LLVMOrcCSymbolMapPair’?\n> > 1115 | LLVMOrcCSymbolMapPairs symbols =\n> palloc0(sizeof(LLVMJITCSymbolMapPair) * LookupSetSize);\n> > |\n> ^~~~~~~~~~~~~~~~~~~~~\n> > |\n> LLVMOrcCSymbolMapPair\n>\n> Hi Pavel,\n>\n> Some changes are needed for LLVM 15. I'm working on a patch, but it's\n> not quite ready yet. Use LLVM 14 for now. There are a few\n> superficial changes like that that are very easy to fix (that struct's\n> name changed), but the real problem is that in LLVM 15 you have to do\n> extra work to track the type of pointers and pass them into API calls\n> that we have a lot of. https://llvm.org/docs/OpaquePointers.html\n\n\nThank you for info\n\nRegards\n\nPavel", "msg_date": "Thu, 22 Sep 2022 12:59:11 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: cannot to compile master in llvm_resolve_symbols" } ]
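Until the LLVM 15 port lands, Thomas's advice to "use LLVM 14 for now" translates into pointing the build at the older toolchain; PostgreSQL's configure honors the LLVM_CONFIG and CLANG variables for exactly this. The fallback paths below are assumptions about Fedora's llvm14 packaging, not something stated in the thread.

```shell
# Hedged sketch: select an LLVM 14 toolchain for a --with-llvm build.
# The fallback paths are guesses (adjust to whatever `llvm-config-14
# --prefix` reports on your system); only the fact that configure honors
# LLVM_CONFIG and CLANG is taken from the PostgreSQL docs.
LLVM_CONFIG=$(command -v llvm-config-14 || echo /usr/lib64/llvm14/bin/llvm-config)
CLANG=$(command -v clang-14 || echo /usr/lib64/llvm14/bin/clang)
echo "./configure --with-llvm LLVM_CONFIG=$LLVM_CONFIG CLANG=$CLANG"
```

The echoed command is what you would run from the source tree; nothing here invokes configure itself.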
[ { "msg_contents": "I was confused about how come the new patches I'd just posted in\nthe 3848 CF item (write/read support for raw parse trees) are\nshowing a mix of passes and fails in the cfbot. I eventually\nrealized that the fail results are all old and stale, because\n(for example) there's no longer a \"FreeBSD - 13\" task to run,\njust \"FreeBSD - 13 - Meson\". This seems tremendously confusing.\nCan we get the cfbot to not display no-longer-applicable results?\n\nAs a quick-fix hack it might do to just flush all the pre-meson-commit\nresults, but I doubt this'll be the last time we make such changes.\nI think an actual filter comparing the result's time to the time of the\nallegedly-being-tested patch would be prudent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 17:44:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "cfbot vs. changes in the set of CI tasks" }, { "msg_contents": "On Fri, Sep 23, 2022 at 9:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I was confused about how come the new patches I'd just posted in\n> the 3848 CF item (write/read support for raw parse trees) are\n> showing a mix of passes and fails in the cfbot. I eventually\n> realized that the fail results are all old and stale, because\n> (for example) there's no longer a \"FreeBSD - 13\" task to run,\n> just \"FreeBSD - 13 - Meson\". This seems tremendously confusing.\n> Can we get the cfbot to not display no-longer-applicable results?\n>\n> As a quick-fix hack it might do to just flush all the pre-meson-commit\n> results, but I doubt this'll be the last time we make such changes.\n> I think an actual filter comparing the result's time to the time of the\n> allegedly-being-tested patch would be prudent.\n\nAh, right. Morning here and I've just spotted that too after the\nfirst results of the Meson era rolled in overnight. 
It shows the\nlatest result for each distinct task name, which now includes extinct\ntasks (until they eventually get garbage collected due to age in a few\ndays). I clearly didn't anticipate tasks going away. Perhaps I\nshould figure out how to show only results corresponding to a single\ncommit ID... looking into that...\n\n\n", "msg_date": "Fri, 23 Sep 2022 10:17:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot vs. changes in the set of CI tasks" }, { "msg_contents": "On Fri, Sep 23, 2022 at 10:17 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Sep 23, 2022 at 9:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I was confused about how come the new patches I'd just posted in\n> > the 3848 CF item (write/read support for raw parse trees) are\n> > showing a mix of passes and fails in the cfbot. I eventually\n> > realized that the fail results are all old and stale, because\n> > (for example) there's no longer a \"FreeBSD - 13\" task to run,\n> > just \"FreeBSD - 13 - Meson\". This seems tremendously confusing.\n> > Can we get the cfbot to not display no-longer-applicable results?\n> >\n> > As a quick-fix hack it might do to just flush all the pre-meson-commit\n> > results, but I doubt this'll be the last time we make such changes.\n> > I think an actual filter comparing the result's time to the time of the\n> > allegedly-being-tested patch would be prudent.\n>\n> Ah, right. Morning here and I've just spotted that too after the\n> first results of the Meson era rolled in overnight. It shows the\n> latest result for each distinct task name, which now includes extinct\n> tasks (until they eventually get garbage collected due to age in a few\n> days). I clearly didn't anticipate tasks going away. Perhaps I\n> should figure out how to show only results corresponding to a single\n> commit ID... 
looking into that...\n\nDone, and looking more sane now.\n\n\n", "msg_date": "Fri, 23 Sep 2022 10:42:23 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cfbot vs. changes in the set of CI tasks" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Sep 23, 2022 at 10:17 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Ah, right. Morning here and I've just spotted that too after the\n>> first results of the Meson era rolled in overnight. It shows the\n>> latest result for each distinct task name, which now includes extinct\n>> tasks (until they eventually get garbage collected due to age in a few\n>> days). I clearly didn't anticipate tasks going away. Perhaps I\n>> should figure out how to show only results corresponding to a single\n>> commit ID... looking into that...\n\n> Done, and looking more sane now.\n\nLooks better to me too. Thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 18:57:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: cfbot vs. changes in the set of CI tasks" }, { "msg_contents": "Hi,\n\nOn 2022-09-22 17:44:56 -0400, Tom Lane wrote:\n> I was confused about how come the new patches I'd just posted in\n> the 3848 CF item (write/read support for raw parse trees) are\n> showing a mix of passes and fails in the cfbot. I eventually\n> realized that the fail results are all old and stale, because\n> (for example) there's no longer a \"FreeBSD - 13\" task to run,\n> just \"FreeBSD - 13 - Meson\". This seems tremendously confusing.\n> Can we get the cfbot to not display no-longer-applicable results?\n\nSomewhat tangentially - it seemed the right thing at the time [TM], but now I\nwonder if tacking on the buildsystem like I did is the best way? 
As long as we\nhave both I think we need it in some form...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Sep 2022 16:46:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: cfbot vs. changes in the set of CI tasks" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-22 17:44:56 -0400, Tom Lane wrote:\n>> I was confused about how come the new patches I'd just posted in\n>> the 3848 CF item (write/read support for raw parse trees) are\n>> showing a mix of passes and fails in the cfbot. I eventually\n>> realized that the fail results are all old and stale, because\n>> (for example) there's no longer a \"FreeBSD - 13\" task to run,\n>> just \"FreeBSD - 13 - Meson\". This seems tremendously confusing.\n>> Can we get the cfbot to not display no-longer-applicable results?\n\n> Somewhat tangentially - it seemed the right thing at the time [TM], but now I\n> wonder if tacking on the buildsystem like I did is the best way? As long as we\n> have both I think we need it in some form...\n\nYeah, I think those names are fine for now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Sep 2022 19:50:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: cfbot vs. changes in the set of CI tasks" } ]
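The filtering change Thomas describes in the thread above — showing only the results that belong to a single (latest) commit ID, rather than the newest result per task name — can be sketched as follows. This is a minimal reconstruction in Python, not cfbot's actual code; the record layout is invented for the example.

```python
# Key the displayed results to the most recently tested commit instead of
# taking the newest result per task name. Records are assumed to look like
# (task_name, commit_id, finished_at, status).

def latest_results(results):
    # The most recent result tells us which commit is current...
    newest_commit = max(results, key=lambda r: r[2])[1]
    # ...and only tasks that ran for that commit are shown, so extinct
    # task names (e.g. the pre-Meson ones) drop out automatically.
    return sorted(r for r in results if r[1] == newest_commit)

results = [
    ("FreeBSD - 13", "abc1", 100, "failed"),          # stale: task was renamed
    ("FreeBSD - 13 - Meson", "def2", 200, "passed"),
    ("Linux - Debian - Meson", "def2", 210, "passed"),
]
for task, commit, ts, status in latest_results(results):
    print(task, status)
```

With the old "latest result per task name" logic, the stale `FreeBSD - 13` failure would still be displayed alongside the new Meson results; keying on the commit makes it disappear.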
[ { "msg_contents": "When using ON CONFLICT DO NOTHING together with RETURNING, the \nconflicted rows are not returned. Sometimes, this would be useful \nthough, for example when generated columns or default values are in play:\n\nCREATE TABLE x (\n id INT PRIMARY KEY,\n created_at TIMESTAMPTZ DEFAULT CURRENT_TIMESTAMP\n);\n\nTo get the created_at timestamp for a certain id **and** at the same \ntime create this id in case it does not exist yet, I can currently do:\n\nINSERT INTO x (id) VALUES (1)\n ON CONFLICT (id) DO UPDATE\n SET id=EXCLUDED.id\n RETURNING created_at;\n\nHowever that will result in a useless UPDATE of the row.\n\nI could probably add a trigger to prevent the UPDATE in that case. Or I \ncould do something in a CTE. Or in multiple statements in plpgsql - this \nis what I currently do in application code.\n\nThe attached patch adds a DO RETURN clause to be able to do this:\n\nINSERT INTO x (id) VALUES (1)\n ON CONFLICT DO RETURN\n RETURNING created_at;\n\nMuch simpler. This will either insert or do nothing - but in both cases \nreturn a row.\n\nThoughts?\n\nBest\n\nWolfgang", "msg_date": "Sun, 25 Sep 2022 17:55:05 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": true, "msg_subject": "Add ON CONFLICT DO RETURN clause" }, { "msg_contents": "On Sun, Sep 25, 2022 at 8:55 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> The attached patch adds a DO RETURN clause to be able to do this:\n>\n> INSERT INTO x (id) VALUES (1)\n> ON CONFLICT DO RETURN\n> RETURNING created_at;\n>\n> Much simpler. 
This will either insert or do nothing - but in both cases\n> return a row.\n\nHow can you tell which it was, though?\n\nI don't see why this statement should ever perform steps for any row\nthat are equivalent to DO NOTHING processing -- it should at least\nlock each and every affected row, if only to conclusively determine\nthat there really must be a conflict.\n\nIn general ON CONFLICT DO UPDATE allows the user to add a WHERE clause\nto back out of updating a row based on an arbitrary predicate. DO\nNOTHING has no such WHERE clause. So DO NOTHING quite literally does\nnothing for any rows that had conflicts, unlike DO UPDATE, which will\nat the very least lock the row (with or without an explicit WHERE\nclause).\n\nThe READ COMMITTED behavior for DO NOTHING is a bit iffy, even\ncompared to DO UPDATE, but the advantages in bulk loading scenarios\ncan be decisive. Or at least they were before we had MERGE.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 25 Sep 2022 10:53:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Add ON CONFLICT DO RETURN clause" }, { "msg_contents": "Peter Geoghegan:\n> On Sun, Sep 25, 2022 at 8:55 AM Wolfgang Walther\n> <walther@technowledgy.de> wrote:\n>> The attached patch adds a DO RETURN clause to be able to do this:\n>>\n>> INSERT INTO x (id) VALUES (1)\n>> ON CONFLICT DO RETURN\n>> RETURNING created_at;\n>>\n>> Much simpler. This will either insert or do nothing - but in both cases\n>> return a row.\n> \n> How can you tell which it was, though?\n\nI guess I can't reliably. 
But isn't that the same in the ON UPDATE case?\n\nIn the use cases I had so far, I didn't need to know.\n\n> I don't see why this statement should ever perform steps for any row\n> that are equivalent to DO NOTHING processing -- it should at least\n> lock each and every affected row, if only to conclusively determine\n> that there really must be a conflict.\n> \n> In general ON CONFLICT DO UPDATE allows the user to add a WHERE clause\n> to back out of updating a row based on an arbitrary predicate. DO\n> NOTHING has no such WHERE clause. So DO NOTHING quite literally does\n> nothing for any rows that had conflicts, unlike DO UPDATE, which will\n> at the very least lock the row (with or without an explicit WHERE\n> clause).\n> \n> The READ COMMITTED behavior for DO NOTHING is a bit iffy, even\n> compared to DO UPDATE, but the advantages in bulk loading scenarios\n> can be decisive. Or at least they were before we had MERGE.\n\nAgreed - it needs to lock the row. I don't think I fully understood what \n\"nothing\" in DO NOTHING extended to.\n\nI guess I want DO RETURN to behave more like a DO SELECT, so with the \nsame semantics as selecting the row?\n\nBest\n\nWolfgang\n\n\n", "msg_date": "Mon, 26 Sep 2022 08:33:28 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": true, "msg_subject": "Re: Add ON CONFLICT DO RETURN clause" }, { "msg_contents": "Wolfgang Walther <walther@technowledgy.de> writes:\n\n> Peter Geoghegan:\n>> On Sun, Sep 25, 2022 at 8:55 AM Wolfgang Walther\n>> <walther@technowledgy.de> wrote:\n>>> The attached patch adds a DO RETURN clause to be able to do this:\n>>>\n>>> INSERT INTO x (id) VALUES (1)\n>>> ON CONFLICT DO RETURN\n>>> RETURNING created_at;\n>>>\n>>> Much simpler. This will either insert or do nothing - but in both cases\n>>> return a row.\n>> How can you tell which it was, though?\n>\n> I guess I can't reliably. 
But isn't that the same in the ON UPDATE case?\n>\n> In the use cases I had so far, I didn't need to know.\n>\n>> I don't see why this statement should ever perform steps for any row\n>> that are equivalent to DO NOTHING processing -- it should at least\n>> lock each and every affected row, if only to conclusively determine\n>> that there really must be a conflict.\n>> In general ON CONFLICT DO UPDATE allows the user to add a WHERE clause\n>> to back out of updating a row based on an arbitrary predicate. DO\n>> NOTHING has no such WHERE clause. So DO NOTHING quite literally does\n>> nothing for any rows that had conflicts, unlike DO UPDATE, which will\n>> at the very least lock the row (with or without an explicit WHERE\n>> clause).\n>> The READ COMMITTED behavior for DO NOTHING is a bit iffy, even\n>> compared to DO UPDATE, but the advantages in bulk loading scenarios\n>> can be decisive. Or at least they were before we had MERGE.\n>\n> Agreed - it needs to lock the row. I don't think I fully understood what\n> \"nothing\" in DO NOTHING extended to.\n>\n> I guess I want DO RETURN to behave more like a DO SELECT, so with the\n> same semantics as selecting the row?\n\nThere was a patch for ON CONFLICT DO SELECT submitted a while back, but\nthe author abandoned it. I haven't read either that patch or yours,\nso I don't know how they compare, but you might want to have a look at\nit:\n\nhttps://commitfest.postgresql.org/16/1241/\n\n> Best\n>\n> Wolfgang\n\n- ilmari", "msg_date": "Mon, 26 Sep 2022 10:40:08 +0100", "msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Add ON CONFLICT DO RETURN clause" } ]
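As background for the thread above: the effect Wolfgang is after can be approximated today with a writable CTE — a common idiom, not part of the proposed patch. A sketch against the example table:

```sql
WITH ins AS (
    INSERT INTO x (id) VALUES (1)
    ON CONFLICT (id) DO NOTHING
    RETURNING created_at
)
SELECT created_at FROM ins                 -- row was just inserted
UNION ALL
SELECT created_at FROM x WHERE id = 1      -- row already existed
LIMIT 1;
```

This shares the READ COMMITTED caveat Peter alludes to: if the conflicting row was inserted by a concurrent transaction that is not yet visible to this statement's snapshot, DO NOTHING skips the insert but the outer SELECT cannot see that row either, so the statement can return zero rows.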
[ { "msg_contents": "Hi,\n\nI am working on adding an \"installcheck\" equivalent mode to the meson\nbuild. One invocation of something like \"installcheck-world\" led to things\ngetting stuck.\n\nLots of log messages like:\n\n2022-09-25 16:16:30.999 PDT [2705454][client backend][28/1112:41269][pg_regress] LOG: still waiting for backend with PID 2705178 to accept ProcSignalBarrier\n2022-09-25 16:16:30.999 PDT [2705454][client backend][28/1112:41269][pg_regress] STATEMENT: DROP DATABASE IF EXISTS \"regression_test_parser_regress\"\n2022-09-25 16:16:31.006 PDT [2705472][client backend][22/3699:41294][pg_regress] LOG: still waiting for backend with PID 2705178 to accept ProcSignalBarrier\n2022-09-25 16:16:31.006 PDT [2705472][client backend][22/3699:41294][pg_regress] STATEMENT: DROP DATABASE IF EXISTS \"regression_test_predtest_regress\"\n\nA stacktrace of 2705178 shows:\n\n(gdb) bt\n#0 0x00007f67d26fe1b3 in __GI___poll (fds=fds@entry=0x7ffebe187c88, nfds=nfds@entry=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29\n#1 0x00007f67cfd03c1c in pqSocketPoll (sock=<optimized out>, forRead=forRead@entry=1, forWrite=forWrite@entry=0, end_time=end_time@entry=-1)\n at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:1125\n#2 0x00007f67cfd04310 in pqSocketCheck (conn=conn@entry=0x562f875a9b70, forRead=forRead@entry=1, forWrite=forWrite@entry=0, end_time=end_time@entry=-1)\n at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:1066\n#3 0x00007f67cfd043fd in pqWaitTimed (forRead=forRead@entry=1, forWrite=forWrite@entry=0, conn=conn@entry=0x562f875a9b70, finish_time=finish_time@entry=-1)\n at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-misc.c:998\n#4 0x00007f67cfcfc47b in connectDBComplete (conn=conn@entry=0x562f875a9b70) at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-connect.c:2166\n#5 0x00007f67cfcfe248 in PQconnectdbParams (keywords=keywords@entry=0x562f87613d20, 
values=values@entry=0x562f87613d70, expand_dbname=expand_dbname@entry=0)\n at ../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-connect.c:659\n#6 0x00007f67cfd29536 in connect_pg_server (server=server@entry=0x562f876139b0, user=user@entry=0x562f87613980)\n at ../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:474\n#7 0x00007f67cfd29910 in make_new_connection (entry=entry@entry=0x562f8758b2c8, user=user@entry=0x562f87613980)\n at ../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:344\n#8 0x00007f67cfd29da0 in GetConnection (user=0x562f87613980, will_prep_stmt=will_prep_stmt@entry=false, state=state@entry=0x562f876136a0)\n at ../../../../home/andres/src/postgresql/contrib/postgres_fdw/connection.c:204\n#9 0x00007f67cfd35294 in postgresBeginForeignScan (node=0x562f87612e70, eflags=<optimized out>)\n\n\nand it turns out that backend can't be gracefully terminated.\n\n\nMaybe I am missing something, but I don't think it's OK for\nconnect_pg_server() to connect in a blocking manner, without accepting\ninterrupts?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 25 Sep 2022 16:22:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nI just re-discovered this issue, via\nhttps://www.postgresql.org/message-id/20221209003607.bz2zdznvfnkq4zz3%40awork3.anarazel.de\n\n\nOn 2022-09-25 16:22:37 -0700, Andres Freund wrote:\n> Maybe I am missing something, but I don't think it's OK for\n> connect_pg_server() to connect in a blocking manner, without accepting\n> interrupts?\n\nIt's definitely not. This basically means network issues or such can lead to\nconnections being unkillable...\n\nWe know how to do better, c.f. libpqrcv_connect(). 
I hacked that up for\npostgres_fdw, and got through quite a few runs without related issues ([1]).\n\nThe same problem is present in two places in dblink.c. Obviously we can copy\nand paste the code to dblink.c as well. But that means we have the same not\nquite trivial code in three different c files. There's already a fair bit of\nduplicated code around AcquireExternalFD().\n\nIt seems we should find a place to put backend specific libpq wrapper code\nsomewhere. Unless we want to relax the rule about not linking libpq into the\nbackend we can't just put it in the backend directly, though.\n\nThe only alternative way to provide a wrapper that I can think of are to\na) introduce a new static library that can be linked to by libpqwalreceiver,\n postgres_fdw, dblink\nb) add a header with static inline functions implementing interrupt-processing\n connection establishment for libpq\n\nNeither really has precedent.\n\nThe attached quick patch just adds and uses libpq_connect_interruptible() in\npostgres_fdw. If we wanted to move this somewhere generic, at least part of\nthe external FD handling should also be moved into it.\n\n\ndblink.c uses a lot of other blocking libpq functions, which obviously also\nisn't ok.\n\n\nPerhaps we could somehow make this easier from within libpq? 
My first thought\nwas a connection parameter that'd provide a different implementation of\npqSocketCheck() or such - but I don't think that'd work, because we might\nthrow errors when processing interrupts, and that'd not be ok from deep within\nlibpq.\n\nGreetings,\n\nAndres Freund\n\n\n[1] I eventually encountered a deadlock in REINDEX, but it didn't involve\n postgres_fw / ProcSignalBarrier", "msg_date": "Thu, 8 Dec 2022 18:08:15 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2022-12-08 18:08:15 -0800, Andres Freund wrote:\n> I just re-discovered this issue, via\n> https://www.postgresql.org/message-id/20221209003607.bz2zdznvfnkq4zz3%40awork3.anarazel.de\n> \n> \n> On 2022-09-25 16:22:37 -0700, Andres Freund wrote:\n> > Maybe I am missing something, but I don't think it's OK for\n> > connect_pg_server() to connect in a blocking manner, without accepting\n> > interrupts?\n> \n> It's definitely not. This basically means network issues or such can lead to\n> connections being unkillable...\n> \n> We know how to do better, c.f. libpqrcv_connect(). I hacked that up for\n> postgres_fdw, and got through quite a few runs without related issues ([1]).\n> \n> The same problem is present in two places in dblink.c. Obviously we can copy\n> and paste the code to dblink.c as well. But that means we have the same not\n> quite trivial code in three different c files. There's already a fair bit of\n> duplicated code around AcquireExternalFD().\n> \n> It seems we should find a place to put backend specific libpq wrapper code\n> somewhere. 
Unless we want to relax the rule about not linking libpq into the\n> backend we can't just put it in the backend directly, though.\n> \n> The only alternative way to provide a wrapper that I can think of are to\n> a) introduce a new static library that can be linked to by libpqwalreceiver,\n> postgres_fdw, dblink\n> b) add a header with static inline functions implementing interrupt-processing\n> connection establishment for libpq\n> \n> Neither really has precedent.\n> \n> The attached quick patch just adds and uses libpq_connect_interruptible() in\n> postgres_fdw. If we wanted to move this somewhere generic, at least part of\n> the external FD handling should also be moved into it.\n> \n> \n> dblink.c uses a lot of other blocking libpq functions, which obviously also\n> isn't ok.\n> \n> \n> Perhaps we could somehow make this easier from within libpq? My first thought\n> was a connection parameter that'd provide a different implementation of\n> pqSocketCheck() or such - but I don't think that'd work, because we might\n> throw errors when processing interrupts, and that'd not be ok from deep within\n> libpq.\n\nAny opinions? 
Due to the simplicity I'm currently leaning to a header-only\nhelper, but I don't feel confident about it.\n\nThe rate of cfbot failures is high enough that it'd be good to do something\nabout this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Dec 2022 09:23:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Fri, Dec 30, 2022 at 6:23 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-08 18:08:15 -0800, Andres Freund wrote:\n> > On 2022-09-25 16:22:37 -0700, Andres Freund wrote:\n> > The only alternative way to provide a wrapper that I can think of are to\n> > a) introduce a new static library that can be linked to by libpqwalreceiver,\n> > postgres_fdw, dblink\n> > b) add a header with static inline functions implementing interrupt-processing\n> > connection establishment for libpq\n> >\n> > Neither really has precedent.\n\n> Any opinions? Due to the simplicity I'm currently leaning to a header-only\n> helper, but I don't feel confident about it.\n\nThe header idea is a little bit sneaky (IIUC: a header that is part of\nthe core tree, but can't be used by core and possibly needs special\ntreatment in 'headercheck' to get the right include search path, can\nonly be used by libpqwalreceiver et al which are allowed to link to\nlibpq), but I think it is compatible with other goals we have\ndiscussed in other threads. I think in the near future we'll probably\nremove the concept of non-threaded server builds (as proposed before\nin the post HP-UX 10 cleanup thread, with patches, but not quite over\nthe line yet). Then I think the server could be allowed to link libpq\ndirectly? And at that point this code wouldn't be sneaky anymore and\ncould optionally move into a .c. 
Does that make sense?\n\n\n", "msg_date": "Fri, 30 Dec 2022 10:31:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The header idea is a little bit sneaky (IIUC: a header that is part of\n> the core tree, but can't be used by core and possibly needs special\n> treatment in 'headercheck' to get the right include search path, can\n> only be used by libpqwalreceiver et al which are allowed to link to\n> libpq), but I think it is compatible with other goals we have\n> discussed in other threads. I think in the near future we'll probably\n> remove the concept of non-threaded server builds (as proposed before\n> in the post HP-UX 10 cleanup thread, with patches, but not quite over\n> the line yet). Then I think the server could be allowed to link libpq\n> directly? And at that point this code wouldn't be sneaky anymore and\n> could optionally move into a .c. 
Does that makes sense?\n\nI don't like the idea of linking libpq directly into the backend.\nIt should remain a dynamically-loaded library to avoid problems \nduring software updates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Dec 2022 16:37:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2022-12-30 10:31:22 +1300, Thomas Munro wrote:\n> On Fri, Dec 30, 2022 at 6:23 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-12-08 18:08:15 -0800, Andres Freund wrote:\n> > > On 2022-09-25 16:22:37 -0700, Andres Freund wrote:\n> > > The only alternative way to provide a wrapper that I can think of are to\n> > > a) introduce a new static library that can be linked to by libpqwalreceiver,\n> > > postgres_fdw, dblink\n> > > b) add a header with static inline functions implementing interrupt-processing\n> > > connection establishment for libpq\n> > >\n> > > Neither really has precedent.\n> \n> > Any opinions? Due to the simplicity I'm currently leaning to a header-only\n> > helper, but I don't feel confident about it.\n> \n> The header idea is a little bit sneaky (IIUC: a header that is part of\n> the core tree, but can't be used by core and possibly needs special\n> treatment in 'headercheck' to get the right include search path, can\n> only be used by libpqwalreceiver et al which are allowed to link to\n> libpq), but I think it is compatible with other goals we have\n> discussed in other threads.\n\nHm, what special search path / headerscheck magic are you thinking of? I think\nsomething like src/include/libpq/libpq-be-fe-helpers.h defining a bunch of\nstatic inlines should \"just\" work?\n\n\nWe likely could guard against that header being included from code ending up\nin the postgres binary directly by #error'ing if BUILDING_DLL is\ndefined. 
That's a very badly named define, but it IIRC has to be iff building\ncode ending up in postgres directly.\n\n\n> I think in the near future we'll probably remove the concept of non-threaded\n> server builds (as proposed before in the post HP-UX 10 cleanup thread, with\n> patches, but not quite over the line yet). Then I think the server could be\n> allowed to link libpq directly? And at that point this code wouldn't be\n> sneaky anymore and could optionally move into a .c. Does that makes sense?\n\nI was wondering about linking in libpq directly as well. But I am not sure\nit's a good idea. I suspect we'd run into some issues around libraries\n(including extensions) linking to different versions of libpq etc - if we\ndirectly link to libpq that'd end up in tears.\n\nIt might be a different story if we had a version of libpq built with\ndifferent symbol names etc. But that's not exactly trivial either.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Dec 2022 13:54:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Fri, Dec 30, 2022 at 10:54 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-30 10:31:22 +1300, Thomas Munro wrote:\n> > On Fri, Dec 30, 2022 at 6:23 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2022-12-08 18:08:15 -0800, Andres Freund wrote:\n> > > > On 2022-09-25 16:22:37 -0700, Andres Freund wrote:\n> > > > The only alternative way to provide a wrapper that I can think of are to\n> > > > a) introduce a new static library that can be linked to by libpqwalreceiver,\n> > > > postgres_fdw, dblink\n> > > > b) add a header with static inline functions implementing interrupt-processing\n> > > > connection establishment for libpq\n> > > >\n> > > > Neither really has precedent.\n> >\n> > > Any opinions? 
Due to the simplicity I'm currently leaning to a header-only\n> > > helper, but I don't feel confident about it.\n> >\n> > The header idea is a little bit sneaky (IIUC: a header that is part of\n> > the core tree, but can't be used by core and possibly needs special\n> > treatment in 'headercheck' to get the right include search path, can\n> > only be used by libpqwalreceiver et al which are allowed to link to\n> > libpq), but I think it is compatible with other goals we have\n> > discussed in other threads.\n>\n> Hm, what special search path / headerscheck magic are you thinking of? I think\n> something like src/include/libpq/libpq-be-fe-helpers.h defining a bunch of\n> static inlines should \"just\" work?\n\nOh, I was imagining something slightly different. Not something under\nsrc/include/libpq, but conceptually a separate header-only library\nthat is above both the backend and libpq. Maybe something like\nsrc/include/febe_util/libpq_connect_interruptible.h. In other words,\nI thought your idea b was a header-only version of your idea a. I\nthink that might be a bit nicer than putting it under libpq?\nSuperficial difference, perhaps...\n\nAnd then I assumed that headerscheck would need to be told to add\nlibpq's header location in -I for that header, but on closer\ninspection it already adds that unconditionally so I retract that\ncomment.\n\n> > I think in the near future we'll probably remove the concept of non-threaded\n> > server builds (as proposed before in the post HP-UX 10 cleanup thread, with\n> > patches, but not quite over the line yet). Then I think the server could be\n> > allowed to link libpq directly? And at that point this code wouldn't be\n> > sneaky anymore and could optionally move into a .c. Does that makes sense?\n>\n> I was wondering about linking in libpq directly as well. But I am not sure\n> it's a good idea. 
I suspect we'd run into some issues around libraries\n> (including extensions) linking to different versions of libpq etc - if we\n> directly link to libpq that'd end up in tears.\n>\n> It might be a different story if we had a version of libpq built with\n> different symbol names etc. But that's not exactly trivial either.\n\nHmm, yeah. Some interesting things to think about. Whether it's a\nfeature or an accident that new backends can pick up new libpq minor\nupdates without restarting the postmaster, and how we'd manage a\nfuture libpq major version/ABI break. Getting a bit off topic for\nthis thread I suppose.\n\n\n", "msg_date": "Fri, 30 Dec 2022 14:14:55 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2022-12-30 14:14:55 +1300, Thomas Munro wrote:\n> Oh, I was imagining something slightly different. Not something under\n> src/include/libpq, but conceptually a separate header-only library\n> that is above both the backend and libpq. Maybe something like\n> src/include/febe_util/libpq_connect_interruptible.h. In other words,\n> I thought your idea b was a header-only version of your idea a. I\n> think that might be a bit nicer than putting it under libpq?\n> Superficial difference, perhaps...\n\nIt doesn't seem entirely right to introduce a top-level \"module\" for\nlibpq-in-extensions to me - we don't do that for other APIs used for\nextensions. But a header only library also doesn't quite seem right. So ...\n\nLooking at turning my patch upthread into something slightly less prototype-y,\nI noticed that libpqwalreceiver doesn't do AcquireExternalFD(), added to other\nbackend uses of libpq in 3d475515a15. 
It's unlikely to matter for\nwalreceiver.c itself, but it seems problematic for the logical replication\ncases?\n\nIt's annoying to introduce PG_TRY/CATCH blocks, just to deal with the\npotential for errors inside WaitLatchOrSocket(), which should never happen. I\nwonder if we should consider making latch.c error out fatally, instead of\nelog(ERROR). If latches are broken, things are bad.\n\n\nThe PG_CATCH() logic in postgres_fdw's GetConnection() looks quite suspicious\nto me. It looks like 32a9c0bdf493 took entirely the wrong path. Instead of\noptionally not throwing or directly re-establishing connections in\nbegin_remote_xact(), the PG_CATCH() hammer was applied.\n\n\n\nThe attached patch adds libpq-be-fe-helpers.h and uses it in libpqwalreceiver,\ndblink, postgres_fdw.\n\nAs I made libpq-be-fe-helpers.h handle reserving external fds,\nlibpqwalreceiver now does so. I briefly looked through its users without\nseeing cases of leaking in case of errors - which would already have been bad,\nsince we'd already have leaked a libpq connection/socket.\n\n\nGiven the lack of field complaints and the size of the required changes, I\ndon't think we should backpatch this, even though it's pretty clearly buggy\nas-is.\n\n\nSome time back Thomas had a patch to introduce a wrapper around\nlibpq-in-extensions that fixed issues caused by some events being\nedge-triggered on windows. It's possible that combining these two efforts\nwould yield something better. I resisted the urge to create a wrapper around\neach connection in this patch, as it'd have ended up being a whole lot more\ninvasive. But... 
Thomas, do you have a reference to that code handy?\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 3 Jan 2023 12:05:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Sun, Sep 25, 2022 at 7:22 PM Andres Freund <andres@anarazel.de> wrote:\n> Maybe I am missing something, but I don't think it's OK for\n> connect_pg_server() to connect in a blocking manner, without accepting\n> interrupts?\n\nI remember noticing various problems in this area years ago. I'm not\nsure whether I noticed this particular one, but the commit message for\nae9bfc5d65123aaa0d1cca9988037489760bdeae mentions a few others that I\ndid notice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 Jan 2023 15:19:43 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2023-01-03 12:05:20 -0800, Andres Freund wrote:\n> The attached patch adds libpq-be-fe-helpers.h and uses it in libpqwalreceiver,\n> dblink, postgres_fdw.\n\n> As I made libpq-be-fe-helpers.h handle reserving external fds,\n> libpqwalreceiver now does so. I briefly looked through its users without\n> seeing cases of leaking in case of errors - which would already have been bad,\n> since we'd already have leaked a libpq connection/socket.\n> \n> \n> Given the lack of field complaints and the size of the required changes, I\n> don't think we should backpatch this, even though it's pretty clearly buggy\n> as-is.\n\nAny comments on this? 
Otherwise I think I'll go with this approach.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Jan 2023 18:05:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Tue, Jan 10, 2023 at 3:05 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-03 12:05:20 -0800, Andres Freund wrote:\n> > The attached patch adds libpq-be-fe-helpers.h and uses it in libpqwalreceiver,\n> > dblink, postgres_fdw.\n>\n> > As I made libpq-be-fe-helpers.h handle reserving external fds,\n> > libpqwalreceiver now does so. I briefly looked through its users without\n> > seeing cases of leaking in case of errors - which would already have been bad,\n> > since we'd already have leaked a libpq connection/socket.\n> >\n> >\n> > Given the lack of field complaints and the size of the required changes, I\n> > don't think we should backpatch this, even though it's pretty clearly buggy\n> > as-is.\n>\n> Any comments on this? Otherwise I think I'll go with this approach.\n\n+1. Not totally convinced about the location but we are free to\nre-organise it any time, and the random CI failures are bad.\n\n\n", "msg_date": "Sat, 14 Jan 2023 14:45:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2023-01-14 14:45:06 +1300, Thomas Munro wrote:\n> On Tue, Jan 10, 2023 at 3:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-03 12:05:20 -0800, Andres Freund wrote:\n> > > The attached patch adds libpq-be-fe-helpers.h and uses it in libpqwalreceiver,\n> > > dblink, postgres_fdw.\n> >\n> > > As I made libpq-be-fe-helpers.h handle reserving external fds,\n> > > libpqwalreceiver now does so. 
I briefly looked through its users without\n> > > seeing cases of leaking in case of errors - which would already have been bad,\n> > > since we'd already have leaked a libpq connection/socket.\n> > >\n> > >\n> > > Given the lack of field complaints and the size of the required changes, I\n> > > don't think we should backpatch this, even though it's pretty clearly buggy\n> > > as-is.\n> >\n> > Any comments on this? Otherwise I think I'll go with this approach.\n> \n> +1. Not totally convinced about the location but we are free to\n> re-organise it any time, and the random CI failures are bad.\n\nCool.\n\nUpdated patch attached. I split it into multiple pieces.\n1) A fix for [1], included here because I encountered it while testing\n2) Introduction of libpq-be-fe-helpers.h\n3) Convert dblink and postgres_fdw to the helper\n4) Convert libpqwalreceiver.c to the helper\n\nEven if we eventually decide to backpatch 3), we'd likely not backpatch 4), as\nthere's no bug (although perhaps the lack of FD handling could be called a\nbug?).\n\nThere's also some light polishing, improving commit message, comments and\nmoving some internal helper functions to later in the file.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20230121011237.q52apbvlarfv6jm6%40awork3.anarazel.de", "msg_date": "Fri, 20 Jan 2023 19:00:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2023-01-20 19:00:08 -0800, Andres Freund wrote:\n> Updated patch attached. 
I split it into multiple pieces.\n> 1) A fix for [1], included here because I encountered it while testing\n> 2) Introduction of libpq-be-fe-helpers.h\n> 3) Convert dblink and postgres_fdw to the helper\n> 4) Convert libpqwalreceiver.c to the helper\n> \n> Even if we eventually decide to backpatch 3), we'd likely not backpatch 4), as\n> there's no bug (although perhaps the lack of FD handling could be called a\n> bug?).\n> \n> There's also some light polishing, improving commit message, comments and\n> moving some internal helper functions to later in the file.\n\nAfter a tiny bit further polishing, and after separately pushing a resource\nleak fix for walrcv_connect(), I pushed this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Jan 2023 19:28:06 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Mon, Jan 23, 2023 at 07:28:06PM -0800, Andres Freund wrote:\n> After a tiny bit further polishing, and after separately pushing a resource\n> leak fix for walrcv_connect(), I pushed this.\n\nMy colleague Robins Tharakan (CC'd) noticed crashes when testing recent\ncommits, and he traced it back to e460248. 
From this information, I\ndiscovered the following typo:\n\ndiff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c\nindex 8982d623d3..78a8bcee6e 100644\n--- a/contrib/dblink/dblink.c\n+++ b/contrib/dblink/dblink.c\n@@ -321,7 +321,7 @@ dblink_connect(PG_FUNCTION_ARGS)\n else\n {\n if (pconn->conn)\n- libpqsrv_disconnect(conn);\n+ libpqsrv_disconnect(pconn->conn);\n pconn->conn = conn;\n }\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:30:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 11:30:08 -0800, Nathan Bossart wrote:\n> On Mon, Jan 23, 2023 at 07:28:06PM -0800, Andres Freund wrote:\n> > After a tiny bit further polishing, and after separately pushing a resource\n> > leak fix for walrcv_connect(), I pushed this.\n> \n> My colleague Robins Tharakan (CC'd) noticed crashes when testing recent\n> commits, and he traced it back to e460248. From this information, I\n> discovered the following typo:\n> \n> diff --git a/contrib/dblink/dblink.c b/contrib/dblink/dblink.c\n> index 8982d623d3..78a8bcee6e 100644\n> --- a/contrib/dblink/dblink.c\n> +++ b/contrib/dblink/dblink.c\n> @@ -321,7 +321,7 @@ dblink_connect(PG_FUNCTION_ARGS)\n> else\n> {\n> if (pconn->conn)\n> - libpqsrv_disconnect(conn);\n> + libpqsrv_disconnect(pconn->conn);\n> pconn->conn = conn;\n> }\n\nUgh. Good catch.\n\nWhy don't the dblink tests catch this? 
Any chance you or Robins could prepare\na patch with fix and test, given that you know how to trigger this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Jan 2023 11:49:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "On Mon, Jan 30, 2023 at 11:49:37AM -0800, Andres Freund wrote:\n> Why don't the dblink tests catch this? Any chance you or Robins could prepare\n> a patch with fix and test, given that you know how to trigger this?\n\nIt's trivially reproducible by calling 1-argument dblink_connect() multiple\ntimes and then calling dblink_disconnect(). Here's a patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 30 Jan 2023 12:00:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi Andres,\n\nOn Tue, 31 Jan 2023 at 06:31, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Jan 30, 2023 at 11:49:37AM -0800, Andres Freund wrote:\n> > Why don't the dblink tests catch this? Any chance you or Robins could prepare\n> > a patch with fix and test, given that you know how to trigger this?\n>\n> It's trivially reproducible by calling 1-argument dblink_connect() multiple\n> times and then calling dblink_disconnect(). Here's a patch.\n>\n\nMy test instance has been running Nathan's patch for the past\nfew hours, and it looks like this should resolve the issue.\n\nA little bit of background, since the past few days I was\nnoticing frequent crashes on a test instance. 
They're not simple\nto reproduce manually, but if left running I can reliably see\ncrashes on an ~hourly basis.\n\nIn trying to trace the origin, I ran a multiple-hour test for\neach commit going back a few commits and noticed that the\ncrashes stopped prior to commit e4602483e9, which is when the\nissue became visible.\n\nThe point is now moot, but let me know if full backtraces still\nhelp (I was concerned looking at the excerpts from some\nof the crashes):\n\n\"double free or corruption (out)\"\n\"munmap_chunk(): invalid pointer\"\n\"free(): invalid pointer\"\nstr->data[0] = '\\0';\n\n\nSome backtraces\n###############\n\nUsing host libthread_db library \"/lib/x86_64-linux-gnu/libthread_db.so.1\".\nCore was generated by `postgres: 6c6d6ba3ee@master@sqith: u21 postgres\n127.0.0.1(45334) SELECT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 __GI___libc_free (mem=0x312f352f65736162) at malloc.c:3102\n#0 __GI___libc_free (mem=0x312f352f65736162) at malloc.c:3102\n#1 0x00007fc0062dfefd in pqDropConnection (conn=0x564bb3e1a080,\n flushInput=true) at fe-connect.c:495\n#2 0x00007fc0062e5cb3 in closePGconn (conn=0x564bb3e1a080)\n at fe-connect.c:4112\n#3 0x00007fc0062e5d55 in PQfinish (conn=0x564bb3e1a080) at fe-connect.c:4134\n#4 0x00007fc0061a442b in libpqsrv_disconnect (conn=0x564bb3e1a080)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n#5 0x00007fc0061a4df1 in dblink_disconnect (fcinfo=0x564bb3d980f0)\n at dblink.c:357\n#6 0x0000564bb0e70aa7 in ExecInterpExpr (state=0x564bb3d98018,\n econtext=0x564bb3d979a0, isnull=0x7ffd60824b0f) at execExprInterp.c:728\n#7 0x0000564bb0e72f36 in ExecInterpExprStillValid (state=0x564bb3d98018,\n econtext=0x564bb3d979a0, isNull=0x7ffd60824b0f) at execExprInterp.c:1838\n\n============\n\n\nCore was generated by `postgres: 6c6d6ba3ee@master@sqith: u52 postgres\n127.0.0.1(58778) SELECT '.\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at 
../sysdeps/unix/sysv/linux/raise.c:50\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007fc021792859 in __GI_abort () at abort.c:79\n#2 0x00007fc0217fd26e in __libc_message (action=action@entry=do_abort,\n fmt=fmt@entry=0x7fc021927298 \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n#3 0x00007fc0218052fc in malloc_printerr (\n str=str@entry=0x7fc021929670 \"double free or corruption (out)\")\n at malloc.c:5347\n#4 0x00007fc021806fa0 in _int_free (av=0x7fc02195cb80 <main_arena>,\n p=0x7fc02195cbf0 <main_arena+112>, have_lock=<optimized out>)\n at malloc.c:4314\n#5 0x00007fc0062e16ed in freePGconn (conn=0x564bb6e36b80)\n at fe-connect.c:3977\n#6 0x00007fc0062e1d61 in PQfinish (conn=0x564bb6e36b80) at fe-connect.c:4135\n#7 0x00007fc0062a142b in libpqsrv_disconnect (conn=0x564bb6e36b80)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n#8 0x00007fc0062a1df1 in dblink_disconnect (fcinfo=0x564bb5647b58)\n at dblink.c:357\n\n============\n\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 resetPQExpBuffer (str=0x559d3af0e838) at pqexpbuffer.c:153\n153 str->data[0] = '\\0';\n#0 resetPQExpBuffer (str=0x559d3af0e838) at pqexpbuffer.c:153\n#1 0x00007f2240b0a876 in PQexecStart (conn=0x559d3af0e410) at fe-exec.c:2320\n#2 0x00007f2240b0a688 in PQexec (conn=0x559d3af0e410,\n query=0x559d56fb8ee8 \"p3\") at fe-exec.c:2227\n#3 0x00007f223ba8c7e4 in dblink_exec (fcinfo=0x559d3b101f58) at dblink.c:1432\n#4 0x0000559d2f003c82 in ExecInterpExpr (state=0x559d3b101ac0,\n econtext=0x559d34e76578, isnull=0x7ffe3d590fdf) at execExprInterp.c:752\n\n\n============\n\nCore was generated by `postgres: 728f86fec6@(HEAD detached at\n728f86fec6)@sqith: u19 postgres 127.0.0.'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x00007f96f5fc6e99 in SSL_shutdown ()\n from /lib/x86_64-linux-gnu/libssl.so.1.1\n#0 0x00007f96f5fc6e99 in SSL_shutdown ()\n from /lib/x86_64-linux-gnu/libssl.so.1.1\n#1 0x00007f96da56027a in pgtls_close 
(conn=0x55d919752fb0)\n at fe-secure-openssl.c:1555\n#2 0x00007f96da558e41 in pqsecure_close (conn=0x55d919752fb0)\n at fe-secure.c:192\n#3 0x00007f96da53dd12 in pqDropConnection (conn=0x55d919752fb0,\n flushInput=true) at fe-connect.c:449\n#4 0x00007f96da543cb3 in closePGconn (conn=0x55d919752fb0)\n at fe-connect.c:4112\n#5 0x00007f96da543d55 in PQfinish (conn=0x55d919752fb0) at fe-connect.c:4134\n#6 0x00007f96d9ebd42b in libpqsrv_disconnect (conn=0x55d919752fb0)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n#7 0x00007f96d9ebddf1 in dblink_disconnect (fcinfo=0x55d91f2692a8)\n at dblink.c:357\n\n\n\n============\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f5f6b632859 in __GI_abort () at abort.c:79\n#2 0x00007f5f6b69d26e in __libc_message (action=action@entry=do_abort,\n fmt=fmt@entry=0x7f5f6b7c7298 \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n#3 0x00007f5f6b6a52fc in malloc_printerr (\n str=str@entry=0x7f5f6b7c91e0 \"munmap_chunk(): invalid pointer\")\n at malloc.c:5347\n#4 0x00007f5f6b6a554c in munmap_chunk (p=<optimized out>) at malloc.c:2830\n#5 0x00007f5f50085efd in pqDropConnection (conn=0x55d12ebcd100,\n flushInput=true) at fe-connect.c:495\n#6 0x00007f5f5008bcb3 in closePGconn (conn=0x55d12ebcd100)\n at fe-connect.c:4112\n#7 0x00007f5f5008bd55 in PQfinish (conn=0x55d12ebcd100) at fe-connect.c:4134\n#8 0x00007f5f5006c42b in libpqsrv_disconnect (conn=0x55d12ebcd100)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n\n\n============\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f5f6b632859 in __GI_abort () at abort.c:79\n#2 0x00007f5f6b69d26e in __libc_message (action=action@entry=do_abort,\n 
fmt=fmt@entry=0x7f5f6b7c7298 \"%s\\n\") at ../sysdeps/posix/libc_fatal.c:155\n#3 0x00007f5f6b6a52fc in malloc_printerr (\n str=str@entry=0x7f5f6b7c54c1 \"free(): invalid pointer\") at malloc.c:5347\n#4 0x00007f5f6b6a6b2c in _int_free (av=<optimized out>, p=<optimized out>,\n have_lock=0) at malloc.c:4173\n#5 0x00007f5f500fe6ed in freePGconn (conn=0x55d142273000)\n at fe-connect.c:3977\n#6 0x00007f5f500fed61 in PQfinish (conn=0x55d142273000) at fe-connect.c:4135\n#7 0x00007f5f501de42b in libpqsrv_disconnect (conn=0x55d142273000)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n#8 0x00007f5f501dedf1 in dblink_disconnect (fcinfo=0x55d1527998f8)\n at dblink.c:357\n\n\n============\n\n\nCore was generated by `postgres: e4602483e9@(HEAD detached at\ne4602483e9)@sqith: u73 postgres 127.0.0.'.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 __GI___libc_realloc (oldmem=0x7f7f7f7f7f7f7f7f, bytes=2139070335)\n at malloc.c:3154\n#0 __GI___libc_realloc (oldmem=0x7f7f7f7f7f7f7f7f, bytes=2139070335)\n at malloc.c:3154\n#1 0x00007fb7bc0a580a in pqCheckOutBufferSpace (bytes_needed=2139062148,\n conn=0x55b191aa9380) at fe-misc.c:329\n#2 0x00007fb7bc0a5b1c in pqPutMsgStart (msg_type=88 'X', conn=0x55b191aa9380)\n at fe-misc.c:476\n#3 0x00007fb7bc097c60 in sendTerminateConn (conn=0x55b191aa9380)\n at fe-connect.c:4076\n#4 0x00007fb7bc097c97 in closePGconn (conn=0x55b191aa9380)\n at fe-connect.c:4096\n#5 0x00007fb7bc097d55 in PQfinish (conn=0x55b191aa9380) at fe-connect.c:4134\n#6 0x00007fb7bc14a42b in libpqsrv_disconnect (conn=0x55b191aa9380)\n at ../../src/include/libpq/libpq-be-fe-helpers.h:117\n#7 0x00007fb7bc14adf1 in dblink_disconnect (fcinfo=0x55b193894f00)\n at dblink.c:357\n\n============\n\nThanks to SQLSmith for helping with this find.\n\n-\nRobins Tharakan\nAmazon Web Services\n\n\n", "msg_date": "Tue, 31 Jan 2023 11:04:58 +1030", "msg_from": "Robins Tharakan <tharakan@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw 
uninterruptible during connection establishment /\n ProcSignalBarrier" }, { "msg_contents": "Hi,\n\nOn 2023-01-30 12:00:55 -0800, Nathan Bossart wrote:\n> On Mon, Jan 30, 2023 at 11:49:37AM -0800, Andres Freund wrote:\n> > Why don't the dblink tests catch this? Any chance you or Robins could prepare\n> > a patch with fix and test, given that you know how to trigger this?\n> \n> It's trivially reproducible by calling 1-argument dblink_connect() multiple\n> times and then calling dblink_disconnect(). Here's a patch.\n\nThanks for the quick patch and for the find. Pushed.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Tue, 31 Jan 2023 18:14:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw uninterruptible during connection establishment /\n ProcSignalBarrier" } ]
[ { "msg_contents": "The max size for the shared memory hash table name is SHMEM_INDEX_KEYSIZE - 1\n (shared hash table name is stored and indexed by ShmemIndex hash table,\n whose key size is SHMEM_INDEX_KEYSIZE), but when the caller uses a\n longer hash table name, it doesn't report any error; instead it just\n uses the first SHMEM_INDEX_KEYSIZE chars as the hash table name.\n\n When some hash tables' names have the same prefix which is longer than\n (SHMEM_INDEX_KEYSIZE - 1), problems arise: those hash tables actually\n are created as the same hash table whose name is the prefix. So add the\n assert to prevent it.", "msg_date": "Mon, 26 Sep 2022 05:08:48 +0000", "msg_from": "Xiaoran Wang <wxiaoran@vmware.com>", "msg_from_op": true, "msg_subject": "Add an assert to the length of shared hashtable name" } ]
[ { "msg_contents": "Hi,\n\npg_stat_statements module distinguishes queries with different \nstructures, but some visibly different MERGE queries were combined as \none pg_stat_statements entry.\nFor example,\nMERGE INTO test1 USING test2 ON test1.id = test2.id WHEN MATCHED THEN \nUPDATE SET var = 1;\nMERGE INTO test1 USING test2 ON test1.id = test2.id WHEN MATCHED THEN \nDELETE;\nThese two queries have different command types after WHEN (UPDATE and \nDELETE), but they were regarded as one entry in pg_stat_statements.\nI think that they should be sampled as distinct queries.\n\nI attached a patch file that adds information about MERGE queries in the \ndocumentation of pg_stat_statements, and lines of code that help with \nthe calculation of queryid hash value to differentiate MERGE queries.\nAny kind of feedback is appreciated.\n\nTatsu", "msg_date": "Mon, 26 Sep 2022 15:12:46 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Differentiate MERGE queries with different structures" }, { "msg_contents": "Hi,\n\nOn Mon, Sep 26, 2022 at 03:12:46PM +0900, bt22nakamorit wrote:\n>\n> pg_stat_statements module distinguishes queries with different structures,\n> but some visibly different MERGE queries were combined as one\n> pg_stat_statements entry.\n> For example,\n> MERGE INTO test1 USING test2 ON test1.id = test2.id WHEN MATCHED THEN UPDATE\n> SET var = 1;\n> MERGE INTO test1 USING test2 ON test1.id = test2.id WHEN MATCHED THEN\n> DELETE;\n> These two queries have different command types after WHEN (UPDATE and\n> DELETE), but they were regarded as one entry in pg_stat_statements.\n> I think that they should be sampled as distinct queries.\n\nAgreed.\n\n> I attached a patch file that adds information about MERGE queries in the\n> documentation of pg_stat_statements, and lines of code that help with the\n> calculation of queryid hash value to differentiate MERGE queries.\n> Any kind of feedback is appreciated.\n\nI didn't 
test the patch (and never looked at MERGE implementation either), but\nI'm wondering if MergeAction->matched and MergeAction->override should be\njumbled too?\n\nAlso, the patch should contain some extra tests to fully cover MERGE jumbling.\n\n\n", "msg_date": "Mon, 26 Sep 2022 17:25:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Differentiate MERGE queries with different structures" }, { "msg_contents": "On 2022-Sep-26, Julien Rouhaud wrote:\n\n> On Mon, Sep 26, 2022 at 03:12:46PM +0900, bt22nakamorit wrote:\n\n> > I attached a patch file that adds information about MERGE queries on the\n> > documentation of pg_stat_statements, and lines of code that helps with the\n> > calculation of queryid hash value to differentiate MERGE queries.\n> > Any kind of feedback is appreciated.\n> \n> I didn't test the patch (and never looked at MERGE implementation either), but\n> I'm wondering if MergeAction->matched and MergeAction->override should be\n> jumbled too?\n\n->matched distinguishes these two queries:\n\nMERGE INTO foo USING bar ON (something)\n WHEN MATCHED THEN DO NOTHING;\nMERGE INTO foo USING bar ON (something)\n WHEN NOT MATCHED THEN DO NOTHING;\n\nbecause only DO NOTHING can be used with both MATCHED and NOT MATCHED,\nthough on the whole the distinction seems pointless. However I think if\nyou sent both these queries and got a single pgss entry with the text of\none of them and not the other, you're going to be confused about where\nthe other went. So +1 for jumbling it too.\n\n->overriding is used in OVERRIDING SYSTEM VALUES (used for GENERATED\ncolumns). I don't think there's any case where it's interesting\ncurrently: if you specify the column it's going to be in the column list\n(which does affect the query ID).\n\n> Also, the patch should contain some extra tests to fully cover MERGE\n> jumbling.\n\nAgreed. 
I struggle to find a balance between not having anything and\ngoing overboard, but I decided to add different queries for the different things\nthat should be jumbled, so that they would all appear in the view.\n\n\nI moved the code around; instead of adding it at the end of the switch,\nI did what the comment says, which is to mirror expression_tree_walker.\nThis made me realize that the latter is using the wrong order for fields\naccording to the struct definition, so I flipped that also.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La gente vulgar sólo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"", "msg_date": "Mon, 26 Sep 2022 13:46:29 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Differentiate MERGE queries with different structures" }, { "msg_contents": "Pushed, thanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Tue, 27 Sep 2022 10:48:15 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Differentiate MERGE queries with different structures" }, { "msg_contents": "On 2022-09-27 17:48, Alvaro Herrera wrote:\n> Pushed, thanks.\nThe code and the tests went fine in my environment.\nThank you Alvaro for your help, and thank you Julien for your review!\n\nBest,\nTatsuhiro Nakamori\n\n\n", "msg_date": "Tue, 27 Sep 2022 22:47:50 +0900", "msg_from": "bt22nakamorit <bt22nakamorit@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Differentiate MERGE queries with different structures" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile reviewing [1], Amit and I noticed that a flag ParallelMessagePending is defined\nas \"volatile bool\", but other flags set by signal handlers are defined as \"volatile sig_atomic_t\".\n\nThe datatype has been defined in standard C,\nand it says that variables referred to by signal handlers should be \"volatile sig_atomic_t\".\n(Please see my observation [2])\n\nThis may not be needed because no failures have been reported,\nbut I thought their datatypes should be the same and attached a small patch.\n\nWhat do you think?\n\n[1]: https://commitfest.postgresql.org/39/3621/\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866C056BB9F81A42B85D20BF54E9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Mon, 26 Sep 2022 06:57:28 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "[small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "On Mon, Sep 26, 2022 at 06:57:28AM +0000, kuroda.hayato@fujitsu.com wrote:\n> While reviewing [1], Amit and I noticed that a flag ParallelMessagePending is defined\n> as \"volatile bool\", but other flags set by signal handlers are defined as \"volatile sig_atomic_t\".\n> \n> What do you think?\n\nYou are right. bool is not usually a problem in a signal handler, but\nsig_atomic_t is the type we ought to use. 
I'll go adjust that.\n\nDone this one. I have scanned the code, but did not notice a similar\nmistake. It is worth noting that we have only one remaining \"volatile\nbool\" in the headers now.\n--\nMichael", "msg_date": "Tue, 27 Sep 2022 09:36:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "Dear Michael,\n\n> Done this one. I have scanned the code, but did not notice a similar\n> mistake. \n\nI found your commit, thanks!\n\n> It is worth noting that we have only one remaining \"volatile\n> bool\" in the headers now.\n\nMaybe you were referring to sigint_interrupt_enabled,\nand it also seems to be modified in the signal handler.\nBut I think no race condition can occur here,\nbecause if the value is set in the handler the code jump will also happen.\n\nOf course it's OK to mark the variable as sig_atomic_t too if there is no problem.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 27 Sep 2022 01:38:26 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "On Tue, Sep 27, 2022 at 01:38:26AM +0000, kuroda.hayato@fujitsu.com wrote:\n> Maybe you were referring to sigint_interrupt_enabled,\n> and it also seems to be modified in the signal handler.\n\nYeah, at least as of the cancel callback psql_cancel_callback() that\nhandle_sigint() would call on SIGINT as this is set by psql. 
So it\ndoes not seem right to use a boolean rather than a sig_atomic_t in\nthis case, as well.\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 10:23:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "Dear Michael,\n\n> Yeah, at least as of the cancel callback psql_cancel_callback() that\n> handle_sigint() would call on SIGINT as this is set by psql. So it\n> does not seem right to use a boolean rather than a sig_atomic_t in\n> this case, as well.\n\nPSA fix patch. Note that PromptInterruptContext.enabled was also fixed\nbecause it is substituted from sigint_interrupt_enabled\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Wed, 28 Sep 2022 04:47:09 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "On Wed, Sep 28, 2022 at 04:47:09AM +0000, kuroda.hayato@fujitsu.com wrote:\n> PSA fix patch. Note that PromptInterruptContext.enabled was also fixed\n> because it is substituted from sigint_interrupt_enabled.\n\nGood point. Thanks for the patch, this looks consistent!\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 16:45:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" }, { "msg_contents": "On Wed, Sep 28, 2022 at 04:45:17PM +0900, Michael Paquier wrote:\n> Good point. 
Thanks for the patch, this looks consistent!\n\nDone as of 5ac9e86.\n--\nMichael", "msg_date": "Thu, 29 Sep 2022 14:33:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [small patch] Change datatype of ParallelMessagePending from\n \"volatile bool\" to \"volatile sig_atomic_t\"" } ]
[ { "msg_contents": "I saw the following message recently added to publicationcmds.c.\n\n(ERROR: cannot use publication column list for relation \"%s.%s\"\")\n> DETAIL: Column list cannot be specified if any schema is part of the publication or specified in the list.\n\nAs my reading, the \"the list\" at the end syntactically means \"Column\nlist\" but that is actually wrong; it could be read as \"the list\nfollowing TABLES IN\" but that doesn't seem reasonable.\n\nIf I am right, it might should be something like the following:\n\n+ Column list cannot be specified if any schema is part of the publication or specified in the command.\n+ Column list cannot be specified if any schema is part of the publication or specified together.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 26 Sep 2022 16:04:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "A doubt about a newly added errdetail" }, { "msg_contents": "On 2022-Sep-26, Kyotaro Horiguchi wrote:\n\n> I saw the following message recently added to publicationcmds.c.\n> \n> (ERROR: cannot use publication column list for relation \"%s.%s\"\")\n> > DETAIL: Column list cannot be specified if any schema is part of the publication or specified in the list.\n> \n> As my reading, the \"the list\" at the end syntactically means \"Column\n> list\" but that is actually wrong; it could be read as \"the list\n> following TABLES IN\" but that doesn't seem reasonable.\n> \n> If I am right, it might should be something like the following:\n> \n> + Column list cannot be specified if any schema is part of the publication or specified in the command.\n> + Column list cannot be specified if any schema is part of the publication or specified together.\n\nI propose\n\nERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\nDETAIL: Column lists cannot be specified in publications
containing FOR TABLES IN SCHEMA elements.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 26 Sep 2022 09:39:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Mon, Sep 26, 2022 at 1:10 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-26, Kyotaro Horiguchi wrote:\n>\n> > I saw the following message recently added to publicationcmds.c.\n> >\n> > (ERROR: cannot use publication column list for relation \"%s.%s\"\")\n> > > DETAIL: Column list cannot be specified if any schema is part of the publication or specified in the list.\n> >\n> > As my reading, the \"the list\" at the end syntactically means \"Column\n> > list\" but that is actually wrong; it could be read as \"the list\n> > following TABLES IN\" but that doesn't seem reasonable.\n> >\n> > If I am right, it might should be something like the following:\n> >\n> > + Column list cannot be specified if any schema is part of the publication or specified in the command.\n> > + Column list cannot be specified if any schema is part of the publication or specified together.\n>\n> I propose\n>\n> ERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\n> DETAIL: Column lists cannot be specified in publications containing FOR TABLES IN SCHEMA elements.\n>\n\nThis looks mostly good to me. BTW, is it a good idea to add \".. in\npublication \"%s\"\" to the error message as this can happen even during\ncreate publication?
If so, I think we can change the nearby message as\nbelow to include the same:\n\nif (!pubviaroot &&\npri->relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\nereport(ERROR,\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\nerrmsg(\"cannot use publication column list for relation \\\"%s\\\"\",\n\nI think even if we don't include the publication name, there won't be\nany confusion because there won't be multiple publications in the\ncommand.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 13:31:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On 2022-Sep-26, Amit Kapila wrote:\n\n> On Mon, Sep 26, 2022 at 1:10 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > ERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\n> > DETAIL: Column lists cannot be specified in publications containing FOR TABLES IN SCHEMA elements.\n> \n> This looks mostly good to me. BTW, is it a good idea to add \".. in\n> publication \"%s\"\" to the error message as this can happen even during\n> create publication?\n\nHmm, I don't see why not.
The publication is new, sure, but it would\nalready have a name, so there's no possible confusion as to what this\nmeans.\n\n(My main change was to take the word \"publication\" out of the phrase\n\"publication column list\", because that seemed a bit strange; it isn't\nthe publication that has a column list, but the relation.)\n\n\n> If so, I think we can change the nearby message as below to include\n> the same:\n> \n> if (!pubviaroot &&\n> pri->relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"cannot use publication column list for relation \\\"%s\\\"\",\n\nWFM.\n\n> I think even if we don't include the publication name, there won't be\n> any confusion because there won't be multiple publications in the\n> command.\n\nTrue, and the whole error report is likely to contain a STATEMENT line.\n\nHowever, you could argue that specifying the publication in errmsg is\nredundant. But then, the \"for relation %s.%s\" is also redundant (since\nthat is *also* in the STATEMENT line), and could even be misleading: if\nyou have a command that specifies *two* relations with column lists, why\nspecify only the first one you find? The user could be angry that they\nremove the column list from that relation and retry, and then receive\nthe exact same error for the next relation with a list that they didn't\nedit. But I think people don't work that way.
So if you wanted to be\nsuper precise you would also omit the relation name unless you scanned\nthe whole list and verified that only this relation is specifying a\ncolumn list; but whom does that help?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n", "msg_date": "Mon, 26 Sep 2022 10:33:12 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Mon, Sep 26, 2022 at 2:03 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Sep-26, Amit Kapila wrote:\n>\n> > On Mon, Sep 26, 2022 at 1:10 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > ERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\n> > > DETAIL: Column lists cannot be specified in publications containing FOR TABLES IN SCHEMA elements.\n> >\n> > This looks mostly good to me. BTW, is it a good idea to add \".. in\n> > publication \"%s\"\" to the error message as this can happen even during\n> > create publication?\n>\n> Hmm, I don't see why not.
The publication is new, sure, but it would\n> already have a name, so there's no possible confusion as to what this\n> means.\n>\n> (My main change was to take the word \"publication\" out of the phrase\n> \"publication column list\", because that seemed a bit strange; it isn't\n> the publication that has a column list, but the relation.)\n>\n\nOkay, that makes sense.\n\n>\n> > If so, I think we can change the nearby message as below to include\n> > the same:\n> >\n> > if (!pubviaroot &&\n> > pri->relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"cannot use publication column list for relation \\\"%s\\\"\",\n>\n> WFM.\n>\n\nOkay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 14:26:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Monday, September 26, 2022 4:57 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Mon, Sep 26, 2022 at 2:03 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\r\n> wrote:\r\n> >\r\n> > On 2022-Sep-26, Amit Kapila wrote:\r\n> >\r\n> > > On Mon, Sep 26, 2022 at 1:10 PM Alvaro Herrera\r\n> <alvherre@alvh.no-ip.org> wrote:\r\n> >\r\n> > > > ERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\r\n> > > > DETAIL: Column lists cannot be specified in publications containing FOR\r\n> TABLES IN SCHEMA elements.\r\n> > >\r\n> > > This looks mostly good to me. BTW, is it a good idea to add \".. in\r\n> > > publication \"%s\"\" to the error message as this can happen even\r\n> > > during create publication?\r\n> >\r\n> > Hmm, I don't see why not.
The publication is new, sure, but it would\r\n> > already have a name, so there's no possible confusion as to what this\r\n> > means.\r\n> >\r\n> > (My main change was to take the word \"publication\" out of the phrase\r\n> > \"publication column list\", because that seemed a bit strange; it isn't\r\n> > the publication that has a column list, but the relation.)\r\n> >\r\n> \r\n> Okay, that makes sense.\r\n\r\n+1\r\n\r\n> >\r\n> > > If so, I think we can change the nearby message as below to include\r\n> > > the same:\r\n> > >\r\n> > > if (!pubviaroot &&\r\n> > > pri->relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\r\n> > > ereport(ERROR,\r\n> > > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > > errmsg(\"cannot use publication column list for relation \\\"%s\\\"\",\r\n> >\r\n> > WFM.\r\n> >\r\n> \r\n> Okay.\r\n\r\nWhile reviewing this part, I notice an unused parameter(queryString) of function\r\nCheckPubRelationColumnList. I feel we can remove that as well while on it. I\r\nplan to post a patch to fix the error message and parameter soon.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Mon, 26 Sep 2022 09:03:22 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A doubt about a newly added errdetail" }, { "msg_contents": "On Monday, September 26, 2022 5:03 PM houzj.fnst@fujitsu.com wrote:\r\n> \r\n> On Monday, September 26, 2022 4:57 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> >\r\n> > On Mon, Sep 26, 2022 at 2:03 PM Alvaro Herrera\r\n> > <alvherre@alvh.no-ip.org>\r\n> > wrote:\r\n> > >\r\n> > > On 2022-Sep-26, Amit Kapila wrote:\r\n> > >\r\n> > > > On Mon, Sep 26, 2022 at 1:10 PM Alvaro Herrera\r\n> > <alvherre@alvh.no-ip.org> wrote:\r\n> > >\r\n> > > > > ERROR: cannot use column list for relation \"%s.%s\" in publication \"%s\"\r\n> > > > > DETAIL: Column lists cannot be specified in publications\r\n> > > > > containing FOR\r\n> TABLES IN SCHEMA elements.\r\n> > > >\r\n> > > >
This looks mostly good to me. BTW, is it a good idea to add \".. in\r\n> > > > publication \"%s\"\" to the error message as this can happen even\r\n> > > > during create publication?\r\n> > >\r\n> > > Hmm, I don't see why not. The publication is new, sure, but it\r\n> > > would already have a name, so there's no possible confusion as to\r\n> > > what this means.\r\n> > >\r\n> > > (My main change was to take the word \"publication\" out of the phrase\r\n> > > \"publication column list\", because that seemed a bit strange; it\r\n> > > isn't the publication that has a column list, but the relation.)\r\n> > >\r\n> >\r\n> > Okay, that makes sense.\r\n> \r\n> +1\r\n> \r\n> > >\r\n> > > > If so, I think we can change the nearby message as below to\r\n> > > > include the same:\r\n> > > >\r\n> > > > if (!pubviaroot &&\t\r\n> > > > pri->relation->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\r\n> > > > ereport(ERROR,\r\n> > > > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > > > errmsg(\"cannot use publication column list for relation \\\"%s\\\"\",\r\n> > >\r\n> > > WFM.\r\n> > >\r\n> >\r\n> > Okay.\r\n> \r\n> While reviewing this part, I notice an unused parameter(queryString) of function\r\n> CheckPubRelationColumnList. I feel we can remove that as well while on it. I plan\r\n> to post a patch to fix the error message and parameter soon.\r\n> \r\n\r\nAttach the patch.
(The patch can apply on both HEAD and PG15)\r\n\r\nBest regards,\r\nHou zj", "msg_date": "Mon, 26 Sep 2022 11:15:48 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A doubt about a newly added errdetail" }, { "msg_contents": "On Mon, Sep 26, 2022 at 4:45 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> Attach the patch. (The patch can apply on both HEAD and PG15)\n>\n\nThe patch looks good to me.\n\n*\n- errmsg(\"cannot add schema to the publication\"),\n+ errmsg(\"cannot add schema to publication \\\"%s\\\"\",\n+ stmt->pubname),\n\nI see that you have improved an additional message in the patch which\nappears okay to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Sep 2022 17:33:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "At Mon, 26 Sep 2022 17:33:46 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Sep 26, 2022 at 4:45 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> >\n> > Attach the patch. (The patch can apply on both HEAD and PG15)\n> >\n> \n> The patch looks good to me.\n> \n> *\n> - errmsg(\"cannot add schema to the publication\"),\n> + errmsg(\"cannot add schema to publication \\\"%s\\\"\",\n> + stmt->pubname),\n> \n> I see that you have improved an additional message in the patch which\n> appears okay to me.\n\nOverall +1 from me, thanks!\n\nBy the way, this is not an issue caused by the proposed patch, I see\nthe following message in the patch.\n\n-\t\t\t\t\t errdetail(\"Column list cannot be used for a partitioned table when %s is false.\",\n+\t\t\t\t\t errdetail(\"Column list cannot be specified for a partitioned table when %s is false.\",\n \t\t\t\t\t\t\t \"publish_via_partition_root\")));\n\nI think that the purpose of such separation of variable names is to\nunify multiple messages differing only by the names (to keep\ntranslation labor (relatively:p) low).
In that sense, that separation\nhere is useless since I see no chance of having the same message with\nanother variable in future.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 27 Sep 2022 10:40:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On 2022-Sep-27, Kyotaro Horiguchi wrote:\n\n> By the way, this is not an issue caused by the proposed patch, I see\n> the following message in the patch.\n> \n> -\t\t\t\t\t errdetail(\"Column list cannot be used for a partitioned table when %s is false.\",\n> +\t\t\t\t\t errdetail(\"Column list cannot be specified for a partitioned table when %s is false.\",\n> \t\t\t\t\t\t\t \"publish_via_partition_root\")));\n> \n> I think that the purpose of such separation of variable names is to\n> unify multiple messages differing only by the names (to keep\n> translation labor (relatively:p) low). In that sense, that separation\n> here is useless since I see no chance of having the same message with\n> another variable in future.\n\nWell, it also reduces chances for typos and such, so while it's not\nstrictly necessary to do it this way, I tend to prefer it on new\nmessages. However, as you say it's not very interesting when there's no\npossibility of duplication, so changing existing messages to this style\nwhen we have no other reason to change the message, is not a useful use\nof time.
In this case we're changing the message in another way too, so\nI think it's okay.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n", "msg_date": "Tue, 27 Sep 2022 09:21:05 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Tuesday, September 27, 2022 3:21 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n> \r\n> On 2022-Sep-27, Kyotaro Horiguchi wrote:\r\n> \r\n> > By the way, this is not an issue caused by the proposed patch, I see\r\n> > the following message in the patch.\r\n> >\r\n> > -\t\t\t\t\t errdetail(\"Column list cannot be used\r\n> for a partitioned table when %s is false.\",\r\n> > +\t\t\t\t\t errdetail(\"Column list cannot be\r\n> specified for a partitioned\r\n> > +table when %s is false.\",\r\n> >\r\n> \"publish_via_partition_root\")));\r\n> >\r\n> > I think that the purpose of such separation of variable names is to\r\n> > unify multiple messages differing only by the names (to keep\r\n> > translation labor (relatively:p) low). In that sense, that separation\r\n> > here is useless since I see no chance of having the same message with\r\n> > another variable in future.\r\n> \r\n> Well, it also reduces chances for typos and such, so while it's not strictly\r\n> necessary to do it this way, I tend to prefer it on new messages. However, as\r\n> you say it's not very interesting when there's no possibility of duplication, so\r\n> changing existing messages to this style when we have no other reason to\r\n> change the message, is not a useful use of time.
In this case we're changing\r\n> the message in another way too, so I think it's okay.\r\n\r\nThanks for reviewing!\r\n\r\nJust in case I misunderstand, it seems you mean the message style[1] is OK, right ?\r\n\r\n[1]\r\nerrdetail(\"Column list cannot be specified for a partitioned table when %s is false.\",\r\n\t \"publish_via_partition_root\")));\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Tue, 27 Sep 2022 09:28:18 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: A doubt about a newly added errdetail" }, { "msg_contents": "On 2022-Sep-27, houzj.fnst@fujitsu.com wrote:\n\n> Thanks for reviewing!\n> \n> Just in case I misunderstand, it seems you mean the message style[1] is OK, right ?\n> \n> [1]\n> errdetail(\"Column list cannot be specified for a partitioned table when %s is false.\",\n> \t \"publish_via_partition_root\")));\n\nYeah, since you're changing another word in that line, it's ok to move\nthe parameter line off-string. (If you were only changing the parameter\nto %s and there was no message duplication, I would reject the patch as\nuseless.)\n\nI'm going over that patch now, I have a few other changes as attached,\nintend to commit soon.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)", "msg_date": "Tue, 27 Sep 2022 12:19:35 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "While reading this code, I noticed that function expr_allowed_in_node()\nhas a very strange API: it doesn't have any return convention at all\nother than \"if we didn't modify errdetail_str then all is good\".
I was\ntempted to add an \"Assert(*errdetail_msg == NULL)\" at the start of it,\njust to make sure that it is not called if a message is already set.\n\nI think it would be much saner to inline the few lines of that function\nin its sole caller, as in the attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"E pur si muove\" (Galileo Galilei)", "msg_date": "Tue, 27 Sep 2022 14:42:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Tue, Sep 27, 2022 at 6:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> While reading this code, I noticed that function expr_allowed_in_node()\n> has a very strange API: it doesn't have any return convention at all\n> other than \"if we didn't modify errdetail_str then all is good\". I was\n> tempted to add an \"Assert(*errdetail_msg == NULL)\" at the start of it,\n> just to make sure that it is not called if a message is already set.\n>\n> I think it would be much saner to inline the few lines of that function\n> in its sole caller, as in the attached.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Sep 2022 08:58:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "At Tue, 27 Sep 2022 12:19:35 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Yeah, since you're changing another word in that line, it's ok to move\n> the parameter line off-string. (If you were only changing the parameter\n> to %s and there was no message duplication, I would reject the patch as\n> useless.)\n\nI'm fine with that.
By the way, related to the area, I found the\nfollowing error messages.\n\n>\t errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n>\t\t\tNameStr(pubform->pubname)),\n>\t errdetail(\"Schemas cannot be added to or dropped from FOR ALL TABLES publications.\")));\n\nIt looks to me that the errmsg and errordetail are reversed. Isn't the following order common?\n\n>\t errmsg(\"schemas cannot be added to or dropped from publication \\\"%s\\\".\",\n>\t\t\tNameStr(pubform->pubname)),\n>\t errdetail(\"The publication is defined as FOR ALL TABLES.\")));\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 28 Sep 2022 15:00:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:30 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 27 Sep 2022 12:19:35 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in\n> > Yeah, since you're changing another word in that line, it's ok to move\n> > the parameter line off-string. (If you were only changing the parameter\n> > to %s and there was no message duplication, I would reject the patch as\n> > useless.)\n>\n> I'm fine with that. By the way, related to the area, I found the\n> following error messages.\n>\n> > errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n> > NameStr(pubform->pubname)),\n> > errdetail(\"Schemas cannot be added to or dropped from FOR ALL TABLES publications.\")));\n>\n> It looks to me that the errmsg and errordetail are reversed.
Isn't the following order common?\n>\n> > errmsg(\"schemas cannot be added to or dropped from publication \\\"%s\\\".\",\n> > NameStr(pubform->pubname)),\n> > errdetail(\"The publication is defined as FOR ALL TABLES.\")));\n>\n\nThis one seems to be matching with the below existing message:\nereport(ERROR,\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\nerrmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\nNameStr(pubform->pubname)),\nerrdetail(\"Tables cannot be added to or dropped from FOR ALL TABLES\npublications.\")));\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Sep 2022 13:47:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "On 2022-Sep-28, Amit Kapila wrote:\n\n> On Wed, Sep 28, 2022 at 11:30 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n\n> > It looks tome that the errmsg and errordetail are reversed. Isn't the following order common?\n> >\n> > > errmsg(\"schemas cannot be added to or dropped from publication \\\"%s\\\".\",\n> > > NameStr(pubform->pubname)),\n> > > errdetail(\"The publication is defined as FOR ALL TABLES.\")));\n> >\n> \n> This one seems to be matching with the below existing message:\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n> NameStr(pubform->pubname)),\n> errdetail(\"Tables cannot be added to or dropped from FOR ALL TABLES\n> publications.\")));\n\nWell, that suggests we should change both together.
I do agree that\nthey look suspicious; they should be more similar to this other one, I\nthink:\n\n ereport(ERROR,\n errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n errmsg(\"cannot add schema to publication \\\"%s\\\"\",\n stmt->pubname),\n errdetail(\"Schemas cannot be added if any tables that specify a column list are already part of the publication.\"));\n\nThe errcodes appear not to agree with each other, also. Maybe that\nneeds some more thought as well. I don't think INVALID_PARAMETER_VALUE\nis the right thing here, and I'm not sure about\nOBJECT_NOT_IN_PREREQUISITE_STATE either.\n\n\nFWIW, the latter is a whole category which is not defined by the SQL\nstandard, so I recall Tom got it from DB2. DB2 chose to subdivide in a\nlot of different cases, see\nhttps://www.ibm.com/docs/en/db2/9.7?topic=messages-sqlstate#rsttmsg__code55\nfor a (current?) table. Maybe we should define some additional 55Pxx\nvalues -- say 55PR1 INCOMPATIBLE PUBLICATION DEFINITION (R for\n\"replication\"-related matters; the P is what we chose for the\nPostgres-specific subcategory).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n", "msg_date": "Wed, 28 Sep 2022 10:46:41 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "At Wed, 28 Sep 2022 13:47:25 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Sep 28, 2022 at 11:30 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I'm fine with that. By the way, related to the area, I found the\n> > following error messages.\n> >\n> > > errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n> > > NameStr(pubform->pubname)),\n> > > errdetail(\"Schemas cannot be added to or dropped from FOR ALL TABLES publications.\")));\n> >\n> > It looks to me that the errmsg and errordetail are reversed.
Isn't the following order common?\n> >\n> > > errmsg(\"schemas cannot be added to or dropped from publication \\\"%s\\\".\",\n> > > NameStr(pubform->pubname)),\n> > > errdetail(\"The publication is defined as FOR ALL TABLES.\")));\n> >\n> \n> This one seems to be matching with the below existing message:\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n> NameStr(pubform->pubname)),\n> errdetail(\"Tables cannot be added to or dropped from FOR ALL TABLES\n> publications.\")));\n\nYeah, so I meant that I'd like to propose to change the both. I just\nwanted to ask people whether that proposal is reasonable or not.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 28 Sep 2022 18:22:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A doubt about a newly added errdetail" }, { "msg_contents": "At Wed, 28 Sep 2022 10:46:41 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Sep-28, Amit Kapila wrote:\n> \n> > On Wed, Sep 28, 2022 at 11:30 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> \n> > > It looks to me that the errmsg and errordetail are reversed. Isn't the following order common?\n> > >\n> > > > errmsg(\"schemas cannot be added to or dropped from publication \\\"%s\\\".\",\n> > > > NameStr(pubform->pubname)),\n> > > > errdetail(\"The publication is defined as FOR ALL TABLES.\")));\n> > >\n> > \n> > This one seems to be matching with the below existing message:\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"publication \\\"%s\\\" is defined as FOR ALL TABLES\",\n> > NameStr(pubform->pubname)),\n> > errdetail(\"Tables cannot be added to or dropped from FOR ALL TABLES\n> > publications.\")));\n> \n> Well, that suggests we should change both together.
I do agree that\n> they look suspicious; they should be more similar to this other one, I\n> think:\n\nAh, yes, and thanks.\n\n> ereport(ERROR,\n> errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"cannot add schema to publication \\\"%s\\\"\",\n> stmt->pubname),\n> errdetail(\"Schemas cannot be added if any tables that specify a column list are already part of the publication.\"));\n> \n> The errcodes appear not to agree with each other, also. Maybe that\n> needs some more thought as well. I don't think INVALID_PARAMETER_VALUE\n> is the right thing here, and I'm not sure about\n> OBJECT_NOT_IN_PREREQUISITE_STATE either.\n\nThe latter seems to fit better than the current one. That being said\nif we change the SQLSTATE for exising erros, that may make existing\nusers confused.\n\n> FWIW, the latter is a whole category which is not defined by the SQL\n> standard, so I recall Tom got it from DB2. DB2 chose to subdivide in a\n> lot of different cases, see\n> https://www.ibm.com/docs/en/db2/9.7?topic=messages-sqlstate#rsttmsg__code55\n> for a (current?) table. Maybe we should define some additional 55Pxx\n> values -- say 55PR1 INCOMPATIBLE PUBLICATION DEFINITION (R for\n> \"replication\"-related matters; the P is what we chose for the\n> Postgres-specific subcategory).\n\nI generally agree to this. But we don't have enough time to fully\nconsider that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 28 Sep 2022 18:41:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A doubt about a newly added errdetail" } ]
[ { "msg_contents": "Hi All,\n\nConsider the below test:\n\npostgres@53130=#create role test WITH login createdb;\nCREATE ROLE\npostgres@53130=#\\c - test\nYou are now connected to database \"postgres\" as user \"test\".\npostgres@53150=#create database test;\nCREATE DATABASE\npostgres@53150=#\\c - rushabh\nYou are now connected to database \"postgres\" as user \"rushabh\".\npostgres@53162=#\npostgres@53162=#\n-- This was working before the below mentioned commit.\npostgres@53162=#drop owned by test;\nERROR: global objects cannot be deleted by doDeletion\n\nCommit 6566133c5f52771198aca07ed18f84519fac1be7 ensure that\npg_auth_members.grantor is always valid. This commit did changes\ninto shdepDropOwned() function and combined the SHARED_DEPENDENCY_ACL\nand SHARED_DEPENDENCY_OWNER. In that process it removed condition for\nlocal object in owner dependency.\n\n case SHARED_DEPENDENCY_OWNER:\n- /* If a local object, save it for deletion below */\n- if (sdepForm->dbid == MyDatabaseId)\n+ /* Save it for deletion below */\n\nCase ending up with above error because of the above removed condition.\n\nPlease find the attached patch which fixes the case.\n\nThanks,\nRushabh Lathia\nwww.EnterpriseDB.com", "msg_date": "Mon, 26 Sep 2022 13:13:53 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "DROP OWNED BY is broken on master branch." }, { "msg_contents": "On Mon, Sep 26, 2022 at 01:13:53PM +0530, Rushabh Lathia wrote:\n> Please find the attached patch which fixes the case.\n\nCould it be possible to stress this stuff in the regression tests?\nThere is a gap here. (I have not looked at what you are proposing.)\n--\nMichael", "msg_date": "Mon, 26 Sep 2022 17:00:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY is broken on master branch."
}, { "msg_contents": "On Mon, Sep 26, 2022 at 3:44 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Commit 6566133c5f52771198aca07ed18f84519fac1be7 ensure that\n> pg_auth_members.grantor is always valid. This commit did changes\n> into shdepDropOwned() function and combined the SHARED_DEPENDENCY_ACL\n> and SHARED_DEPENDENCY_OWNER. In that process it removed condition for\n> local object in owner dependency.\n>\n> case SHARED_DEPENDENCY_OWNER:\n> - /* If a local object, save it for deletion below */\n> - if (sdepForm->dbid == MyDatabaseId)\n> + /* Save it for deletion below */\n>\n> Case ending up with above error because of the above removed condition.\n>\n> Please find the attached patch which fixes the case.\n\nThanks for the report. I think it would be preferable not to duplicate\nthe logic as your version does, though, so here's a slightly different\nversion that avoids that.\n\nPer Michael's suggestion, I have also written a test case and included\nit in this version.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 26 Sep 2022 14:16:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY is broken on master branch." }, { "msg_contents": "On Mon, Sep 26, 2022 at 11:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Sep 26, 2022 at 3:44 AM Rushabh Lathia <rushabh.lathia@gmail.com>\n> wrote:\n> > Commit 6566133c5f52771198aca07ed18f84519fac1be7 ensure that\n> > pg_auth_members.grantor is always valid. This commit did changes\n> > into shdepDropOwned() function and combined the SHARED_DEPENDENCY_ACL\n> > and SHARED_DEPENDENCY_OWNER. 
In that process it removed condition for\n> > local object in owner dependency.\n> >\n> > case SHARED_DEPENDENCY_OWNER:\n> > - /* If a local object, save it for deletion below */\n> > - if (sdepForm->dbid == MyDatabaseId)\n> > + /* Save it for deletion below */\n> >\n> > Case ending up with above error because of the above removed condition.\n> >\n> > Please find the attached patch which fixes the case.\n>\n> Thanks for the report. I think it would be preferable not to duplicate\n> the logic as your version does, though, so here's a slightly different\n> version that avoids that.\n>\n\nYes, I was also thinking to avoid the duplicate logic but couldn't found\na way. I did the quick testing with the patch, and reported test is working\nfine. But \"make check\" is failing with few failures.\n\n\n> Per Michael's suggestion, I have also written a test case and included\n> it in this version.\n>\n\nThanks for this.\n\n\n> Comments?\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRushabh Lathia\n
", "msg_date": "Tue, 27 Sep 2022 12:22:50 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY is broken on master branch." }, { "msg_contents": "On Tue, Sep 27, 2022 at 2:53 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> Yes, I was also thinking to avoid the duplicate logic but couldn't found\n> a way. I did the quick testing with the patch, and reported test is working\n> fine. But \"make check\" is failing with few failures.\n\nOh, woops. 
There was a dumb mistake in that version -- it was testing\n> sdepForm->dbid == SHARED_DEPENDENCY_OWNER, which is nonsense, instead\n> of sdepForm->dbid == MyDatabaseId. Here's a fixed version.\n>\n\nThis seems to fix the issue and in further testing I didn't find anything\nelse.\n\nThanks,\n\n\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRushabh Lathia\n", "msg_date": "Wed, 28 Sep 2022 17:50:53 +0530", "msg_from": "Rushabh Lathia <rushabh.lathia@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP OWNED BY is broken on master branch." }, { "msg_contents": "On Wed, Sep 28, 2022 at 8:21 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n> On Tue, Sep 27, 2022 at 7:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Tue, Sep 27, 2022 at 2:53 AM Rushabh Lathia <rushabh.lathia@gmail.com> wrote:\n>> > Yes, I was also thinking to avoid the duplicate logic but couldn't found\n>> > a way. I did the quick testing with the patch, and reported test is working\n>> > fine. But \"make check\" is failing with few failures.\n>>\n>> Oh, woops. There was a dumb mistake in that version -- it was testing\n>> sdepForm->dbid == SHARED_DEPENDENCY_OWNER, which is nonsense, instead\n>> of sdepForm->dbid == MyDatabaseId. 
Here's a fixed version.\n>\n> This seems to fix the issue and in further testing I didn't find anything else.\n\nOK, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:02:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: DROP OWNED BY is broken on master branch." } ]
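To make the committed fix concrete: the regression was that shdepDropOwned() queued every SHARED_DEPENDENCY_OWNER entry for deletion, including shared objects such as the database itself (which live outside any one database and carry dbid = 0 in pg_shdepend), so doDeletion() bailed out with "global objects cannot be deleted". Restoring the dbid == MyDatabaseId check limits the drop list to local objects. A toy Python model of that filter (the row layout and OID values are invented for illustration; the real code walks pg_shdepend tuples in C and also handles ACL and other dependency types):

```python
MY_DATABASE_ID = 16384          # invented OID for the current database
INVALID_OID = 0                 # shared (global) objects have dbid == 0

def objects_to_drop(shdepend_rows):
    """Model of shdepDropOwned()'s corrected loop: only OWNER
    dependencies local to the current database are saved for
    deletion below; shared objects like the database itself are
    skipped, and ACL entries are revoked rather than dropped."""
    drop = []
    for row in shdepend_rows:
        if row["deptype"] == "SHARED_DEPENDENCY_OWNER" and row["dbid"] == MY_DATABASE_ID:
            drop.append(row["obj"])
    return drop

rows = [
    {"deptype": "SHARED_DEPENDENCY_OWNER", "dbid": MY_DATABASE_ID, "obj": "table t"},
    {"deptype": "SHARED_DEPENDENCY_OWNER", "dbid": INVALID_OID, "obj": "database test"},
    {"deptype": "SHARED_DEPENDENCY_ACL", "dbid": MY_DATABASE_ID, "obj": "view v"},
]
print(objects_to_drop(rows))    # ['table t'] -- the database is not queued
```

With the dbid check dropped (as in the broken commit), the second row would reach doDeletion(), which is exactly the reported failure.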
[ { "msg_contents": "Hi hackers,\n\nI wanted to add ARM CPU darwin to the CI but it seems\nthat kerberos/001_auth fails on ARM CPU darwin.\n\nOS:\nDarwin admins-Virtual-Machine.local 21.6.0 Darwin Kernel Version 21.6.0:\nWed Aug 10 14:26:07 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_VMAPPLE\narm64\n\nError message:\nCan't exec \"kdb5_util\": No such file or directory at\n/Users/admin/pgsql/src/test/perl/PostgreSQL/Test/Utils.pm line 338.\n[02:53:37.177](0.043s) Bail out! failed to execute command \"kdb5_util\ncreate -s -P secret0\": No such file or directory\n\nIt seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\nARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n\nI attached two patches:\n0001-ci-Add-arm-CPU-for-darwin.patch is about adding ARM CPU darwin to the\nCI.\n0002-fix-darwin-ARM-CPU-darwin-krb5-path-fix.patch is about fixing the\nerror.\n\nCI run after ARM CPU darwin is added:\nhttps://cirrus-ci.com/build/5772792711872512\n\nCI run after fix applied:\nhttps://cirrus-ci.com/build/5686842363215872\n\nRegards,\nNazir Bilal Yavuz", "msg_date": "Mon, 26 Sep 2022 13:45:59 +0300", "msg_from": "Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Bilal Yavuz <byavuz81@gmail.com> writes:\n> It seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\n> ARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n\nI think this also needs to account for MacPorts, which would likely\nput it under /opt/local/sbin. (I wonder where /usr/local/opt/krb5\ncame from at all -- that sounds like somebody's manual installation\nrather than a packaged one.) 
Maybe we should first try\n\"krb5-config --prefix\" to see if that gives an answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Sep 2022 07:14:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On 26.09.22 13:14, Tom Lane wrote:\n> Bilal Yavuz<byavuz81@gmail.com> writes:\n>> It seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\n>> ARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n> I think this also needs to account for MacPorts, which would likely\n> put it under /opt/local/sbin. (I wonder where /usr/local/opt/krb5\n> came from at all -- that sounds like somebody's manual installation\n> rather than a packaged one.)\n\n/usr/local/opt/ is used by Homebrew on Intel macOS.\n\n\n", "msg_date": "Mon, 26 Sep 2022 16:39:36 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Hi,\n\n\nOn 9/26/2022 2:14 PM, Tom Lane wrote:\n>\n> Maybe we should first try \"krb5-config --prefix\" to see if that gives an answer.\n\n\nI tested that command on multiple OSes and it was correct for freeBSD, \ndebian and openSUSE.\n\nI don't have macOS so I tried to use CI for running macOS VMs(both arm \nand Intel CPU):\nWhen \"krb5-config\" binary is used from brew or MacPorts installations' \npath it gives the correct path but there is another \"krb5-config\" binary \nat \"/usr/bin/krb5-config\" path on the macOS VMs, when this binary is \nused while running \"krb5-config --prefix\" command run it gives \"/\" as \noutput. 
This issue can be related about the CI VMs but I couldn't check it.\n\nRegards,\nNazir Bilal Yavuz\n\n\n\n", "msg_date": "Mon, 26 Sep 2022 19:39:41 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On 09/26/2022 11:39 am, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> \n> On 9/26/2022 2:14 PM, Tom Lane wrote:\n>> \n>> Maybe we should first try \"krb5-config --prefix\" to see if that gives \n>> an answer.\n> \n> \n> I tested that command on multiple OSes and it was correct for freeBSD, \n> debian and openSUSE.\n> \n> I don't have macOS so I tried to use CI for running macOS VMs(both arm \n> and Intel CPU):\n> When \"krb5-config\" binary is used from brew or MacPorts installations' \n> path it gives the correct path but there is another \"krb5-config\" \n> binary at \"/usr/bin/krb5-config\" path on the macOS VMs, when this \n> binary is used while running \"krb5-config --prefix\" command run it \n> gives \"/\" as output. 
This issue can be related about the CI VMs but I \n> couldn't check it.\n> \n> Regards,\n> Nazir Bilal Yavuz\n\non macOS monterey 12.6:\n~ via 💎 v3.1.2 on ☁️ (us-east-1) on ﴃ WhereTo - Prod\n❯ krb5-config --prefix\n/\n\n~ via 💎 v3.1.2 on ☁️ (us-east-1) on ﴃ WhereTo - Prod\n❯\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Mon, 26 Sep 2022 11:47:32 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> On 09/26/2022 11:39 am, Nazir Bilal Yavuz wrote:\n>> When \"krb5-config\" binary is used from brew or MacPorts installations' \n>> path it gives the correct path but there is another \"krb5-config\" \n>> binary at \"/usr/bin/krb5-config\" path on the macOS VMs, when this \n>> binary is used while running \"krb5-config --prefix\" command run it \n>> gives \"/\" as output. This issue can be related about the CI VMs but I \n>> couldn't check it.\n\n> [ yup, it gives \"/\" ]\n\nYeah, I see the same on my laptop. So we can't trust krb5-config\nunconditionally. But we could do something like checking\n\"-x $config_prefix . '/bin/kinit'\" before believing it's good,\nand maybe also check sbin/krb5kdc. 
We'd want to use similar\nprobes to decide which of the fallback directories to use, anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Sep 2022 13:41:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On Mon, Sep 26, 2022 at 04:39:36PM +0200, Peter Eisentraut wrote:\n> On 26.09.22 13:14, Tom Lane wrote:\n>> Bilal Yavuz<byavuz81@gmail.com> writes:\n>> > It seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\n>> > ARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n>> I think this also needs to account for MacPorts, which would likely\n>> put it under /opt/local/sbin. (I wonder where /usr/local/opt/krb5\n>> came from at all -- that sounds like somebody's manual installation\n>> rather than a packaged one.)\n> \n> /usr/local/opt/ is used by Homebrew on Intel macOS.\n\nHmm. Is that the case with new setups under x86_64? I have a M1\nwhere everything goes through /opt/homebrew/, though it has been set\nvery recently.\n--\nMichael", "msg_date": "Tue, 27 Sep 2022 10:25:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On 09/26/2022 8:25 pm, Michael Paquier wrote:\n> On Mon, Sep 26, 2022 at 04:39:36PM +0200, Peter Eisentraut wrote:\n>> On 26.09.22 13:14, Tom Lane wrote:\n>>> Bilal Yavuz<byavuz81@gmail.com> writes:\n>>> > It seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\n>>> > ARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n>>> I think this also needs to account for MacPorts, which would likely\n>>> put it under /opt/local/sbin. 
(I wonder where /usr/local/opt/krb5\n>>> came from at all -- that sounds like somebody's manual installation\n>>> rather than a packaged one.)\n>> \n>> /usr/local/opt/ is used by Homebrew on Intel macOS.\n> \n> Hmm. Is that the case with new setups under x86_64? I have a M1\n> where everything goes through /opt/homebrew/, though it has been set\n> very recently.\n> --\n> Michael\n\nIntel:\nwf-corporate-chef on master +6 -420 [✘!] on ☁️ (us-east-1) on ﴃ \nWhereTo - Prod\n❯ /usr/local/opt/krb5/bin/krb5-config --prefix\n/usr/local/Cellar/krb5/1.20\n\nwf-corporate-chef on master +6 -420 [✘!] on ☁️ (us-east-1) on ﴃ \nWhereTo - Prod\n❯\n\nSame on my M1 iMac (migrated from an Intel iMac however)\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 214-642-9640 E-Mail: ler@lerctr.org\nUS Mail: 5708 Sabbia Dr, Round Rock, TX 78665-2106\n\n\n", "msg_date": "Mon, 26 Sep 2022 20:30:33 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Hi,\n\nOn 2022-09-27 10:25:07 +0900, Michael Paquier wrote:\n> On Mon, Sep 26, 2022 at 04:39:36PM +0200, Peter Eisentraut wrote:\n> > On 26.09.22 13:14, Tom Lane wrote:\n> >> Bilal Yavuz<byavuz81@gmail.com> writes:\n> >> > It seems that kerberos is installed at the '/opt/homebrew/opt/krb5' path on\n> >> > ARM CPU darwin instances instead of the '/usr/local/opt/krb5' path.\n> >> I think this also needs to account for MacPorts, which would likely\n> >> put it under /opt/local/sbin. (I wonder where /usr/local/opt/krb5\n> >> came from at all -- that sounds like somebody's manual installation\n> >> rather than a packaged one.)\n> > \n> > /usr/local/opt/ is used by Homebrew on Intel macOS.\n> \n> Hmm. Is that the case with new setups under x86_64? 
I have a M1\n> where everything goes through /opt/homebrew/, though it has been set\n> very recently.\n\nYes, it's hardware dependent:\n\nhttps://docs.brew.sh/Installation\n\"This script installs Homebrew to its preferred prefix (/usr/local for macOS\nIntel, /opt/homebrew for Apple Silicon and /home/linuxbrew/.linuxbrew for\nLinux\"\n\n\nMaybe we should rely on PATH, rather than hardcoding OS dependent locations?\nOr at least fall back to seach binaries in PATH? Seems pretty odd to hardcode\nall these locations without a way to influence it from outside the test.\n\nThere has to be something similar to python's shutil.which() in perl.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 18:37:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Maybe we should rely on PATH, rather than hardcoding OS dependent locations?\n\nMy suggestion to consult krb5-config first was meant to allow PATH\nto influence the results. However, if that doesn't work, it's important\nIMO to have a sane list of hardwired places to look in. Personally,\nI do not like to have MacPorts or Homebrew's bin directory in the\nPATH -- at least not for a buildfarm animal -- because there tends\nto be an enormous amount of non-Mac-ish clutter there. So for this\npurpose, I'd like us to be able to find the standard places that\nMacPorts and Homebrew install Kerberos in even if they are not in PATH.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Sep 2022 22:21:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On 27.09.22 03:37, Andres Freund wrote:\n> Maybe we should rely on PATH, rather than hardcoding OS dependent locations?\n> Or at least fall back to seach binaries in PATH? 
Seems pretty odd to hardcode\n> all these locations without a way to influence it from outside the test.\n\nHomebrew intentionally does not install the krb5 and openldap packages \ninto the path, because they conflict with macOS-provided software. \nHowever, those macOS-provided variants don't provide all the pieces we \nneed for the tests.\n\nAlso, on Linux you need /usr/sbin, which is often not in the path.\n\nSo I think there is no good way around hardcoding a lot of these paths.\n\n\n\n", "msg_date": "Tue, 27 Sep 2022 12:29:31 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Hi,\n\nThanks for the reviews!\n\n\nOn 9/27/2022 5:21 AM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Maybe we should rely on PATH, rather than hardcoding OS dependent locations?\n> My suggestion to consult krb5-config first was meant to allow PATH\n> to influence the results. 
However, if that doesn't work, it's important\n> IMO to have a sane list of hardwired places to look in.\n\n\nI updated my patch regarding these reviews.\n\nThe current logic is it will try to find all executables in that \norder(if it finds all executables, it won't try remaining steps):\n\n\n1 - 'krb5-config --prefix'\n\n2 - hardcoded paths(I added arm and MacPorts paths for darwin)\n\n3 - from PATH\n\nAlso, I tried to do some refactoring for adding another paths to search \nin the future and being sure about all executables are found.\n\nCi run after fix is applied:\nhttps://cirrus-ci.com/build/5758254918664192\n\nRegards,\nNazir Bilal Yavuz", "msg_date": "Tue, 27 Sep 2022 18:35:45 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Greetings,\n\n* Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n> On 27.09.22 03:37, Andres Freund wrote:\n> > Maybe we should rely on PATH, rather than hardcoding OS dependent locations?\n> > Or at least fall back to seach binaries in PATH? Seems pretty odd to hardcode\n> > all these locations without a way to influence it from outside the test.\n> \n> Homebrew intentionally does not install the krb5 and openldap packages into\n> the path, because they conflict with macOS-provided software. 
However, those\n> macOS-provided variants don't provide all the pieces we need for the tests.\n\nThe macOS-provided versions are also old and broken, or at least that\nwas the case when I looked into them last.\n\n> Also, on Linux you need /usr/sbin, which is often not in the path.\n> \n> So I think there is no good way around hardcoding a lot of these paths.\n\nYeah, not sure what else to do.\n\nThanks,\n\nStephen", "msg_date": "Thu, 29 Sep 2022 10:39:12 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Peter Eisentraut (peter.eisentraut@enterprisedb.com) wrote:\n>> Homebrew intentionally does not install the krb5 and openldap packages into\n>> the path, because they conflict with macOS-provided software. However, those\n>> macOS-provided variants don't provide all the pieces we need for the tests.\n\n> The macOS-provided versions are also old and broken, or at least that\n> was the case when I looked into them last.\n\nYeah. They also generate tons of deprecation warnings at compile time,\nso it's not like Apple is encouraging you to use them. 
I wonder why\nthey're still there at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 10:49:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "On 27.09.22 17:35, Nazir Bilal Yavuz wrote:\n> I updated my patch regarding these reviews.\n> \n> The current logic is it will try to find all executables in that \n> order(if it finds all executables, it won't try remaining steps):\n> \n> \n> 1 - 'krb5-config --prefix'\n> \n> 2 - hardcoded paths(I added arm and MacPorts paths for darwin)\n> \n> 3 - from PATH\n> \n> Also, I tried to do some refactoring for adding another paths to search \n> in the future and being sure about all executables are found.\n\nThis patch could use some more in-code comments. For example, this\n\n+# get prefix for kerberos executables and try to find them at this path\n+sub test_krb5_paths\n\nis not helpful. What does it \"get\", where does it put it, how does it \n\"try\", and what does it do if it fails? What are the inputs and outputs \nof this function?\n\n+ # remove '\\n' since 'krb5-config --prefix' returns path ends with '\\n'\n+ $krb5_path =~ s/\\n//g;\n\nuse chomp\n\n\n\n", "msg_date": "Sat, 1 Oct 2022 13:12:51 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" }, { "msg_contents": "Hi,\n\nThanks for the review!\n\nOn 10/1/22 14:12, Peter Eisentraut wrote:\n> This patch could use some more in-code comments.  For example, this\n>\n> +# get prefix for kerberos executables and try to find them at this path\n> +sub test_krb5_paths\n>\n> is not helpful.  What does it \"get\", where does it put it, how does it \n> \"try\", and what does it do if it fails?  
What are the inputs and \n> outputs of this function?\n>\n> +   # remove '\\n' since 'krb5-config --prefix' returns path ends with \n> '\\n'\n> +   $krb5_path =~ s/\\n//g;\n>\n> use chomp\n>\n\nI updated patch regarding these comments.\n\nI have a question about my logic:\n+    elsif ($^O eq 'linux')\n+    {\n+        test_krb5_paths('/usr/');\n+    }\n  }\n\nBefore that, test could use krb5kdc, kadmin and kdb5_util from \n'/usr/sbin/'; krb5_config and kinit from $PATH. However, now it will try \nto use all of them from $PATH or from '/usr/sbin/' and '/usr/bin/'. Does \nthat cause a problem?\n\nCi run after fix is applied:\n\nhttps://cirrus-ci.com/build/5359971746447360\n\n\nRegards,\nNazir Bilal Yavuz", "msg_date": "Mon, 10 Oct 2022 17:32:16 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": false, "msg_subject": "Re: kerberos/001_auth test fails on arm CPU darwin" } ]
[ { "msg_contents": "Hi,\n\nPostgres currently can leak memory if a failure occurs during base\nbackup in do_pg_backup_start() or do_pg_backup_stop() or\nperform_base_backup(). The palloc'd memory such as backup_state or\ntablespace_map in xlogfuncs.c or basebackup.c or tablespaceinfo or the\nmemory that gets allocated by bbsink_begin_backup() in\nperform_base_backup() or any other, is left-out which may cause memory\nbloat on the server eventually. To experience this issue, run\npg_basebackup with --label name longer than 1024 characters and\nobserve memory with watch command, the memory usage goes up.\n\nIt looks like the memory leak issue has been there for quite some\ntime, discussed in [1].\n\nI'm proposing a patch that leverages the error callback mechanism and\nmemory context. The design of the patch is as follows:\n1) pg_backup_start() and pg_backup_stop() - the error callback frees\nup the backup_state and tablespace_map variables allocated in\nTopMemoryContext. We don't need a separate memory context here because\ndo_pg_backup_start() and do_pg_backup_stop() don't return any\ndynamically created memory for now. We can choose to create a separate\nmemory context for the future changes that may come, but now it is not\nrequired.\n2) perform_base_backup() - a new memory context has been created that\ngets deleted by the callback upon error.\n\nThe error callbacks are typically called for all the elevels, but we\nneed to free up the memory only when elevel is >= ERROR or ==\nCOMMERROR. The COMMERROR is a common scenario because the server can\nclose the connection to the client or vice versa in which case the\nbase backup fails. For all other elevels like WARNING, NOTICE, DEBUGX,\nINFO etc. 
we don't free up the memory.\n\nI'm attaching v1 patch herewith.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/Yyq15ekNzjZecwMW%40paquier.xyz\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 26 Sep 2022 19:06:35 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Avoid memory leaks during base backups" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> Postgres currently can leak memory if a failure occurs during base\n> backup in do_pg_backup_start() or do_pg_backup_stop() or\n> perform_base_backup(). The palloc'd memory such as backup_state or\n> tablespace_map in xlogfuncs.c or basebackup.c or tablespaceinfo or the\n> memory that gets allocated by bbsink_begin_backup() in\n> perform_base_backup() or any other, is left-out which may cause memory\n> bloat on the server eventually. To experience this issue, run\n> pg_basebackup with --label name longer than 1024 characters and\n> observe memory with watch command, the memory usage goes up.\n\n> It looks like the memory leak issue has been there for quite some\n> time, discussed in [1].\n\n> I'm proposing a patch that leverages the error callback mechanism and\n> memory context.\n\nThis ... seems like inventing your own shape of wheel. The\nnormal mechanism for preventing this type of leak is to put the\nallocations in a memory context that can be reset or deallocated\nin mainline code at the end of the operation. 
I do not think that\nhaving an errcontext callback with side-effects like deallocating\nmemory is even remotely safe, and it's certainly a first-order\nabuse of that mechanism.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Sep 2022 10:04:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Mon, Sep 26, 2022 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > I'm proposing a patch that leverages the error callback mechanism and\n> > memory context.\n>\n> This ... seems like inventing your own shape of wheel. The\n> normal mechanism for preventing this type of leak is to put the\n> allocations in a memory context that can be reset or deallocated\n> in mainline code at the end of the operation.\n\nYes, that's the typical way and the patch attached does it for\nperform_base_backup(). What happens if we allocate some memory in the\nnew memory context and error-out before reaching the end of operation?\nHow do we deallocate such memory?\nBackup related code has simple-to-generate-error paths in between and\nmemory can easily be leaked.\n\nAre you suggesting to use sigsetjmp or some other way to prevent memory leaks?\n\n> I do not think that\n> having an errcontext callback with side-effects like deallocating\n> memory is even remotely safe, and it's certainly a first-order\n> abuse of that mechanism.\n\nAre you saying that the error callback might deallocate the memory\nthat may be needed later in the error processing?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 27 Sep 2022 11:33:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "At Tue, 27 Sep 2022 11:33:56 +0530, Bharath Rupireddy 
<bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Mon, Sep 26, 2022 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This ... seems like inventing your own shape of wheel. The\n> > normal mechanism for preventing this type of leak is to put the\n> > allocations in a memory context that can be reset or deallocated\n> > in mainline code at the end of the operation.\n> \n> Yes, that's the typical way and the patch attached does it for\n> perform_base_backup(). What happens if we allocate some memory in the\n> new memory context and error-out before reaching the end of operation?\n> How do we deallocate such memory?\n\nWhoever directly or indirectly catches the exception can do that. For\nexample, SendBaseBackup() seems to catch exceptions from\nperform_base_backup(). bbsinc_cleanup() already resides there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:32:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Tue, Sep 27, 2022 at 05:32:26PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 27 Sep 2022 11:33:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > On Mon, Sep 26, 2022 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > This ... seems like inventing your own shape of wheel. The\n> > > normal mechanism for preventing this type of leak is to put the\n> > > allocations in a memory context that can be reset or deallocated\n> > > in mainline code at the end of the operation.\n> > \n> > Yes, that's the typical way and the patch attached does it for\n> > perform_base_backup(). What happens if we allocate some memory in the\n> > new memory context and error-out before reaching the end of operation?\n> > How do we deallocate such memory?\n> \n> Whoever directly or indirectly catches the exception can do that. 
For\n> example, SendBaseBackup() seems to catch execptions from\n> perform_base_backup(). bbsinc_cleanup() is already resides there.\n\nEven with that, what's the benefit in using an extra memory context in\nbasebackup.c? backup_label and tablespace_map are mentioned upthread,\nbut we have a tight control of these, and they should be allocated in\nthe memory context created for replication commands (grep for\n\"Replication command context\") anyway. Using a dedicated memory\ncontext for the SQL backup functions under TopMemoryContext could be\ninteresting, on the other hand..\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 13:16:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Sep 28, 2022 at 9:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Sep 27, 2022 at 05:32:26PM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 27 Sep 2022 11:33:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > On Mon, Sep 26, 2022 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > This ... seems like inventing your own shape of wheel. The\n> > > > normal mechanism for preventing this type of leak is to put the\n> > > > allocations in a memory context that can be reset or deallocated\n> > > > in mainline code at the end of the operation.\n> > >\n> > > Yes, that's the typical way and the patch attached does it for\n> > > perform_base_backup(). What happens if we allocate some memory in the\n> > > new memory context and error-out before reaching the end of operation?\n> > > How do we deallocate such memory?\n> >\n> > Whoever directly or indirectly catches the exception can do that. For\n> > example, SendBaseBackup() seems to catch execptions from\n> > perform_base_backup(). bbsinc_cleanup() is already resides there.\n>\n> Even with that, what's the benefit in using an extra memory context in\n> basebackup.c? 
backup_label and tablespace_map are mentioned upthread,\n> but we have a tight control of these, and they should be allocated in\n> the memory context created for replication commands (grep for\n> \"Replication command context\") anyway. Using a dedicated memory\n> context for the SQL backup functions under TopMemoryContext could be\n> interesting, on the other hand..\n\nI had the same opinion. Here's what I think - for backup functions, we\ncan have the new memory context child of TopMemoryContext and for\nperform_base_backup(), we can have the memory context child of\nCurrentMemoryContext. With PG_TRY()-PG_FINALLY()-PG_END_TRY(), we can\ndelete those memory contexts upon ERRORs. This approach works for us\nsince backup-related code doesn't have any FATALs.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 10:09:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> I had the same opinion. Here's what I think - for backup functions, we\n> can have the new memory context child of TopMemoryContext and for\n> perform_base_backup(), we can have the memory context child of\n> CurrentMemoryContext. With PG_TRY()-PG_FINALLY()-PG_END_TRY(), we can\n> delete those memory contexts upon ERRORs. This approach works for us\n> since backup-related code doesn't have any FATALs.\n\nNot following your last point here? 
A process exiting on FATAL\ndoes not especially need to clean up its memory allocations first.\nWhich is good, because \"backup-related code doesn't have any FATALs\"\nseems like an assertion with a very short half-life.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 00:49:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Sep 28, 2022 at 10:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > I had the same opinion. Here's what I think - for backup functions, we\n> > can have the new memory context child of TopMemoryContext and for\n> > perform_base_backup(), we can have the memory context child of\n> > CurrentMemoryContext. With PG_TRY()-PG_FINALLY()-PG_END_TRY(), we can\n> > delete those memory contexts upon ERRORs. This approach works for us\n> > since backup-related code doesn't have any FATALs.\n>\n> Not following your last point here? A process exiting on FATAL\n> does not especially need to clean up its memory allocations first.\n> Which is good, because \"backup-related code doesn't have any FATALs\"\n> seems like an assertion with a very short half-life.\n\nYou're right. My bad. For FATALs, we don't need to clean the memory as\nthe process itself exits.\n\n * Note: an ereport(FATAL) will not be caught by this construct; control will\n * exit straight through proc_exit().\n\n /*\n * Perform error recovery action as specified by elevel.\n */\n if (elevel == FATAL)\n {\n\n /*\n * Do normal process-exit cleanup, then return exit code 1 to indicate\n * FATAL termination. 
The postmaster may or may not consider this\n * worthy of panic, depending on which subprocess returns it.\n */\n proc_exit(1);\n }\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 10:26:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "At Wed, 28 Sep 2022 13:16:36 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Sep 27, 2022 at 05:32:26PM +0900, Kyotaro Horiguchi wrote:\n> > At Tue, 27 Sep 2022 11:33:56 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > > On Mon, Sep 26, 2022 at 7:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > > This ... seems like inventing your own shape of wheel. The\n> > > > normal mechanism for preventing this type of leak is to put the\n> > > > allocations in a memory context that can be reset or deallocated\n> > > > in mainline code at the end of the operation.\n> > > \n> > > Yes, that's the typical way and the patch attached does it for\n> > > perform_base_backup(). What happens if we allocate some memory in the\n> > > new memory context and error-out before reaching the end of operation?\n> > > How do we deallocate such memory?\n> > \n> > Whoever directly or indirectly catches the exception can do that. For\n> > example, SendBaseBackup() seems to catch execptions from\n> > perform_base_backup(). bbsinc_cleanup() is already resides there.\n> \n> Even with that, what's the benefit in using an extra memory context in\n> basebackup.c? backup_label and tablespace_map are mentioned upthread,\n> but we have a tight control of these, and they should be allocated in\n> the memory context created for replication commands (grep for\n> \"Replication command context\") anyway. 
Using a dedicated memory\n> context for the SQL backup functions under TopMemoryContext could be\n> interesting, on the other hand..\n\nIf I understand you correctly, my point was the usage of error\ncallbacks. I meant that we can release that tangling memory blocks in\nSendBaseBackup() even by directly pfree()ing then NULLing the pointer,\nif the pointer were file-scoped static.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 28 Sep 2022 15:12:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Sep 28, 2022 at 10:09 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Here's what I think - for backup functions, we\n> can have the new memory context child of TopMemoryContext and for\n> perform_base_backup(), we can have the memory context child of\n> CurrentMemoryContext. With PG_TRY()-PG_FINALLY()-PG_END_TRY(), we can\n> delete those memory contexts upon ERRORs. This approach works for us\n> since backup-related code doesn't have any FATALs.\n>\n> Thoughts?\n\nI'm attaching the v2 patch designed as described above. Please review it.\n\nI've added an entry in CF - https://commitfest.postgresql.org/40/3915/\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 28 Sep 2022 15:00:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Sep 28, 2022 at 5:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I'm attaching the v2 patch designed as described above. Please review it.\n>\n> I've added an entry in CF - https://commitfest.postgresql.org/40/3915/\n\nThis looks odd to me. 
In the case of a regular backend, the\nsigsetjmp() handler in src/backend/tcop/postgres.c is responsible for\ncleaning up memory. It calls AbortCurrentTransaction() which will call\nCleanupTransaction() which will call AtCleanup_Memory() which will\nblow away TopTransactionContext. I think we ought to do something\nanalogous here, and we almost do already. Some walsender commands are\ngoing to be SQL commands and some aren't. For those that aren't, the\nsame block calls WalSndErrorCleanup() which does similar kinds of\ncleanup, including in some situations calling WalSndResourceCleanup()\nwhich cleans up the resource owner in cases where we have a resource\nowner without a transaction. I feel like we ought to be trying to tie\nthe cleanup into WalSndErrorCleanup() or WalSndResourceCleanup() based\non having the memory context that we ought to be blowing away stored\nin a global variable, rather than using a try/catch block.\n\nLike, maybe there's a function EstablishWalSenderMemoryContext() that\ncommands can call before allocating memory that shouldn't survive an\nerror. And it's deleted after each command if it exists, or if an\nerror occurs then WalSndErrorCleanup() deletes it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:16:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Sep 28, 2022 at 8:46 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I feel like we ought to be trying to tie\n> the cleanup into WalSndErrorCleanup() or WalSndResourceCleanup() based\n> on having the memory context that we ought to be blowing away stored\n> in a global variable, rather than using a try/catch block.\n\nOkay, I got rid of the try-catch block. 
I added two clean up callbacks\n(one for SQL backup functions or on-line backup, another for base\nbackup) that basically delete the respective memory contexts and reset\nthe file-level variables, they get called from PostgresMain()'s error\nhandling code.\n\n> Like, maybe there's a function EstablishWalSenderMemoryContext() that\n> commands can call before allocating memory that shouldn't survive an\n> error. And it's deleted after each command if it exists, or if an\n> error occurs then WalSndErrorCleanup() deletes it.\n\nI don't think we need any of the above. I've used file-level variables\nto hold memory contexts, allocating them whenever needed and cleaning\nthem up either at the end of backup operation or upon error.\n\nPlease review the attached v3 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Sep 2022 13:58:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Sep 29, 2022 at 4:29 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Please review the attached v3 patch.\n\ntemplate1=# select * from pg_backup_start('sdgkljsdgkjdsg', true);\n pg_backup_start\n-----------------\n 0/2000028\n(1 row)\n\ntemplate1=# select 1/0;\nERROR: division by zero\ntemplate1=# select * from pg_backup_stop();\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!?>\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 09:35:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Sep 29, 2022 at 7:05 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 29, 2022 at 4:29 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Please review the attached v3 patch.\n>\n> template1=# select * from pg_backup_start('sdgkljsdgkjdsg', true);\n> pg_backup_start\n> -----------------\n> 0/2000028\n> (1 row)\n>\n> template1=# select 1/0;\n> ERROR: division by zero\n> template1=# select * from pg_backup_stop();\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n> !?>\n\nThanks! I used a variable to define the scope to clean up the backup\nmemory context for SQL functions/on-line backup. We don't have this\nproblem in case of base backup because we don't give control in\nbetween start and stop backup in perform_base_backup().\n\nPlease review the v4 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 29 Sep 2022 22:38:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Sep 29, 2022 at 10:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please review the v4 patch.\n\nI used valgrind for testing. 
Without patch, there's an obvious memory\nleak [1], with patch no memory leak.\n\nI used ALLOCSET_START_SMALL_SIZES instead of ALLOCSET_DEFAULT_SIZES\nfor backup memory context so that it can start small and grow if\nrequired.\n\nI'm attaching v5 patch, please review it further.\n\n[1]\n==00:00:01:36.306 145709== VALGRINDERROR-BEGIN\n==00:00:01:36.306 145709== 24 bytes in 1 blocks are still reachable in\nloss record 122 of 511\n==00:00:01:36.306 145709== at 0x98E501: palloc (mcxt.c:1170)\n==00:00:01:36.306 145709== by 0x9C1795: makeStringInfo (stringinfo.c:45)\n==00:00:01:36.306 145709== by 0x2DE22A: pg_backup_start (xlogfuncs.c:96)\n==00:00:01:36.306 145709== by 0x4D2DB6: ExecMakeTableFunctionResult\n(execSRF.c:234)\n==00:00:01:36.306 145709== by 0x4F08DA: FunctionNext (nodeFunctionscan.c:95)\n==00:00:01:36.306 145709== by 0x4D48EA: ExecScanFetch (execScan.c:133)\n==00:00:01:36.306 145709== by 0x4D4963: ExecScan (execScan.c:182)\n==00:00:01:36.306 145709== by 0x4F0C84: ExecFunctionScan\n(nodeFunctionscan.c:270)\n==00:00:01:36.306 145709== by 0x4D0255: ExecProcNodeFirst\n(execProcnode.c:464)\n==00:00:01:36.306 145709== by 0x4C32D4: ExecProcNode (executor.h:259)\n==00:00:01:36.306 145709== by 0x4C619C: ExecutePlan (execMain.c:1636)\n==00:00:01:36.306 145709== by 0x4C3A0F: standard_ExecutorRun (execMain.c:363)\n==00:00:01:36.306 145709==\n==00:00:01:36.306 145709== VALGRINDERROR-END\n\n==00:00:01:36.334 145709== VALGRINDERROR-BEGIN\n==00:00:01:36.334 145709== 1,024 bytes in 1 blocks are still reachable\nin loss record 426 of 511\n==00:00:01:36.334 145709== at 0x98E501: palloc (mcxt.c:1170)\n==00:00:01:36.334 145709== by 0x9C17CF: initStringInfo (stringinfo.c:63)\n==00:00:01:36.334 145709== by 0x9C17A5: makeStringInfo (stringinfo.c:47)\n==00:00:01:36.334 145709== by 0x2DE22A: pg_backup_start (xlogfuncs.c:96)\n==00:00:01:36.334 145709== by 0x4D2DB6: ExecMakeTableFunctionResult\n(execSRF.c:234)\n==00:00:01:36.334 145709== by 0x4F08DA: FunctionNext 
(nodeFunctionscan.c:95)\n==00:00:01:36.334 145709== by 0x4D48EA: ExecScanFetch (execScan.c:133)\n==00:00:01:36.334 145709== by 0x4D4963: ExecScan (execScan.c:182)\n==00:00:01:36.334 145709== by 0x4F0C84: ExecFunctionScan\n(nodeFunctionscan.c:270)\n==00:00:01:36.334 145709== by 0x4D0255: ExecProcNodeFirst\n(execProcnode.c:464)\n==00:00:01:36.334 145709== by 0x4C32D4: ExecProcNode (executor.h:259)\n==00:00:01:36.334 145709== by 0x4C619C: ExecutePlan (execMain.c:1636)\n==00:00:01:36.334 145709==\n==00:00:01:36.334 145709== VALGRINDERROR-END\n\n==00:00:01:36.335 145709== VALGRINDERROR-BEGIN\n==00:00:01:36.335 145709== 1,096 bytes in 1 blocks are still reachable\nin loss record 431 of 511\n==00:00:01:36.335 145709== at 0x98E766: palloc0 (mcxt.c:1201)\n==00:00:01:36.335 145709== by 0x2DE152: pg_backup_start (xlogfuncs.c:81)\n==00:00:01:36.335 145709== by 0x4D2DB6: ExecMakeTableFunctionResult\n(execSRF.c:234)\n==00:00:01:36.335 145709== by 0x4F08DA: FunctionNext (nodeFunctionscan.c:95)\n==00:00:01:36.335 145709== by 0x4D48EA: ExecScanFetch (execScan.c:133)\n==00:00:01:36.335 145709== by 0x4D4963: ExecScan (execScan.c:182)\n==00:00:01:36.335 145709== by 0x4F0C84: ExecFunctionScan\n(nodeFunctionscan.c:270)\n==00:00:01:36.335 145709== by 0x4D0255: ExecProcNodeFirst\n(execProcnode.c:464)\n==00:00:01:36.335 145709== by 0x4C32D4: ExecProcNode (executor.h:259)\n==00:00:01:36.335 145709== by 0x4C619C: ExecutePlan (execMain.c:1636)\n==00:00:01:36.335 145709== by 0x4C3A0F: standard_ExecutorRun (execMain.c:363)\n==00:00:01:36.335 145709== by 0x4C37FA: ExecutorRun (execMain.c:307)\n==00:00:01:36.335 145709==\n==00:00:01:36.335 145709== VALGRINDERROR-END\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 3 Oct 2022 19:18:21 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { 
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello\r\n\r\nI applied your v5 patch on the current master and run valgrind on it while doing a basebackup with simulated error. No memory leak related to backup is observed. Regression is also passing \r\n\r\nthank you\r\n\r\nCary Huang\r\nHighGo Software Canada", "msg_date": "Fri, 14 Oct 2022 21:56:31 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Fri, Oct 14, 2022 at 09:56:31PM +0000, Cary Huang wrote:\n> I applied your v5 patch on the current master and run valgrind on it\n> while doing a basebackup with simulated error. No memory leak\n> related to backup is observed. Regression is also passing.\n\nEchoing with what I mentioned upthread in [1], I don't quite\nunderstand why this patch needs to touch basebackup.c, walsender.c\nand postgres.c. In the case of a replication command processed by a\nWAL sender, memory allocations happen in the memory context created\nfor replication commands, which is itself, as far as I understand, the\nmessage memory context when we get a 'Q' message for a simple query.\nWhy do we need more code for a cleanup that should be already\nhappening? Am I missing something obvious?\n\nxlogfuncs.c, by storing stuff in the TopMemoryContext of the process\nrunning the SQL commands pg_backup_start/stop() is different, of\ncourse. 
Perhaps the point of centralizing the base backup context in\nxlogbackup.c makes sense, but my guess is that it makes more sense to\nkeep that with the SQL functions as these are the only ones in need of\na cleanup, coming down to the fact that the start and stop functions\nhappen in different queries, aka these are not bound to a message\ncontext.\n\n[1]: https://www.postgresql.org/message-id/YzPKpKEk/JMjhWEz@paquier.xyz\n--\nMichael", "msg_date": "Mon, 17 Oct 2022 16:39:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Mon, Oct 17, 2022 at 1:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 14, 2022 at 09:56:31PM +0000, Cary Huang wrote:\n> > I applied your v5 patch on the current master and run valgrind on it\n> > while doing a basebackup with simulated error. No memory leak\n> > related to backup is observed. Regression is also passing.\n>\n> Echoing with what I mentioned upthread in [1], I don't quite\n> understand why this patch needs to touch basebackup.c, walsender.c\n> and postgres.c. In the case of a replication command processed by a\n> WAL sender, memory allocations happen in the memory context created\n> for replication commands, which is itself, as far as I understand, the\n> message memory context when we get a 'Q' message for a simple query.\n> Why do we need more code for a cleanup that should be already\n> happening? Am I missing something obvious?\n>\n> [1]: https://www.postgresql.org/message-id/YzPKpKEk/JMjhWEz@paquier.xyz\n\nMy bad, I missed that. You are right. We have \"Replication command\ncontext\" as a child of \"MessageContext\" memory context for base backup\nthat gets cleaned upon error in PostgresMain() [1].\n\n> xlogfuncs.c, by storing stuff in the TopMemoryContext of the process\n> running the SQL commands pg_backup_start/stop() is different, of\n> course. 
Perhaps the point of centralizing the base backup context in\n> xlogbackup.c makes sense, but my guess that it makes more sense to\n> keep that with the SQL functions as these are the only ones in need of\n> a cleanup, coming down to the fact that the start and stop functions\n> happen in different queries, aka these are not bind to a message\n> context.\n\nYes, they're not related to \"MessageContext\" memory context.\n\nPlease see the attached v6 patch that deals with memory leaks for\nbackup SQL-callable functions.\n\n[1]\n(gdb) bt\n#0 MemoryContextDelete (context=0x558b7cd0de50) at mcxt.c:378\n#1 0x0000558b7c655733 in MemoryContextDeleteChildren\n(context=0x558b7ccda8c0) at mcxt.c:430\n#2 0x0000558b7c65546d in MemoryContextReset (context=0x558b7ccda8c0)\nat mcxt.c:309\n#3 0x0000558b7c43b5cd in PostgresMain (dbname=0x558b7cd11fb8 \"\",\nusername=0x558b7ccd6298 \"ubuntu\")\n at postgres.c:4358\n#4 0x0000558b7c364a88 in BackendRun (port=0x558b7cd09620) at postmaster.c:4482\n#5 0x0000558b7c36431b in BackendStartup (port=0x558b7cd09620) at\npostmaster.c:4210\n#6 0x0000558b7c3603be in ServerLoop () at postmaster.c:1804\n#7 0x0000558b7c35fb1b in PostmasterMain (argc=3, argv=0x558b7ccd4200)\nat postmaster.c:1476\n#8 0x0000558b7c229a0e in main (argc=3, argv=0x558b7ccd4200) at main.c:197\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Oct 2022 12:33:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Oct 19, 2022 at 3:04 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Echoing with what I mentioned upthread in [1], I don't quite\n> > understand why this patch needs to touch basebackup.c, walsender.c\n> > and postgres.c. 
In the case of a replication command processed by a\n> > WAL sender, memory allocations happen in the memory context created\n> > for replication commands, which is itself, as far as I understand, the\n> > message memory context when we get a 'Q' message for a simple query.\n> > Why do we need more code for a cleanup that should be already\n> > happening? Am I missing something obvious?\n>\n> My bad, I missed that. You are right. We have \"Replication command\n> context\" as a child of \"MessageContext\" memory context for base backup\n> that gets cleaned upon error in PostgresMain() [1].\n\nWell this still touches postgres.c. And I still think it's an awful\nlot of machinery for a pretty trivial problem. As a practical matter,\nnobody is going to open a connection and sit there and try to start a\nbackup over and over again on the same connection. And even if someone\nwrote a client that did that -- why? -- they'd have to be awfully\npersistent to leak any amount of memory that would actually matter. So\nit is not insane to just think of ignoring this problem entirely.\n\nBut if we want to fix it, I think we should do it in some more\nlocalized way. One option is to just have do_pg_start_backup() blow\naway any old memory context before it allocates any new memory, and\nforget about releasing anything in PostgresMain(). That means memory\ncould remain allocated after a failure until you next retry the\noperation, but I don't think that really matters. It's not a lot of\nmemory; we just don't want it to accumulate across many repetitions.\nAnother option, perhaps, is to delete some memory context from within\nthe TRY/CATCH block if non-NULL, although that wouldn't work nicely if\nit might blow away the data we need to generate the error message. 
A\nthird option is to do something useful inside WalSndErrorCleanup() or\nWalSndResourceCleanup() as I suggested previously.\n\nI'm not exactly sure what the right solution is here, but I think you\nneed to put more thought into how to make the code look simple,\nelegant, and non-invasive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Oct 2022 10:40:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Oct 19, 2022 at 8:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Well this still touches postgres.c. And I still think it's an awful\n> lot of machinery for a pretty trivial problem. As a practical matter,\n> nobody is going to open a connection and sit there and try to start a\n> backup over and over again on the same connection. And even if someone\n> wrote a client that did that -- why? -- they'd have to be awfully\n> persistent to leak any amount of memory that would actually matter. So\n> it is not insane to just think of ignoring this problem entirely.\n\nI understand that the amount of memory allocated by pg_backup_start()\nis small compared to the overall RAM, however, I don't think we can\nignore the problem and let postgres cause memory leaks.\n\n> But if we want to fix it, I think we should do it in some more\n> localized way.\n\nAgreed.\n\n> One option is to just have do_pg_start_backup() blow\n> away any old memory context before it allocates any new memory, and\n> forget about releasing anything in PostgresMain(). That means memory\n> could remain allocated after a failure until you next retry the\n> operation, but I don't think that really matters. 
It's not a lot of\n> memory; we just don't want it to accumulate across many repetitions.\n\nThis seems reasonable to me.\n\n> Another option, perhaps, is to delete some memory context from within\n> the TRY/CATCH block if non-NULL, although that wouldn't work nicely if\n> it might blow away the data we need to generate the error message.\n\nRight.\n\n> A third option is to do something useful inside WalSndErrorCleanup() or\n> WalSndResourceCleanup() as I suggested previously.\n\nThese functions will not be called for SQL-callable backup functions\npg_backup_start() and pg_backup_stop(). And the memory leak problem\nwe're trying to solve is for SQL-callable functions, but not for\nbasebaskups as they already have a memory context named \"Replication\ncommand context\" that gets deleted in PostgresMain().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 19 Oct 2022 21:23:51 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Wed, Oct 19, 2022 at 9:23 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 8:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > One option is to just have do_pg_start_backup() blow\n> > away any old memory context before it allocates any new memory, and\n> > forget about releasing anything in PostgresMain(). That means memory\n> > could remain allocated after a failure until you next retry the\n> > operation, but I don't think that really matters. It's not a lot of\n> > memory; we just don't want it to accumulate across many repetitions.\n>\n> This seems reasonable to me.\n\nI tried implementing this, please see the attached v7 patch.\nCurrently, memory allocated in the new memory context is around 4KB\n[1]. 
In the extreme and rarest of the rare cases where somebody\nexecutes select pg_backup_start(repeat('foo', 1024)); or a failure\noccurs before reaching pg_backup_stop() on all of the sessions\n(max_connections) at once, the maximum/peak memory bloat/leak is\naround max_connections*4KB, which will still be way less than the\ntotal amount of RAM. Hence, I think this approach seems very\nreasonable and non-invasive yet can solve the memory leak problem.\nThoughts?\n\n[1]\n(gdb) p *backupcontext\n$4 = {type = T_AllocSetContext, isReset = false, allowInCritSection =\nfalse, mem_allocated = 4232,\n methods = 0x55c925b81f90 <mcxt_methods+240>, parent =\n0x55c92766d2a0, firstchild = 0x0, prevchild = 0x0,\n nextchild = 0x55c92773f1f0, name = 0x55c9258be05c \"on-line backup\ncontext\", ident = 0x0, reset_cbs = 0x0}\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 20 Oct 2022 16:16:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Oct 20, 2022 at 6:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I tried implementing this, please see the attached v7 patch.\n\nI haven't checked this in detail but it looks much more reasonable in\nterms of code footprint. However, we should, I think, set backup_state\n= NULL and tablespace_map = NULL before deleting the memory context.\nAs you have it, I believe that if backup_state = (BackupState *)\npalloc0(sizeof(BackupState)) fails -- say due to running out of memory\n-- then those variables could end up pointing to garbage because the\ncontext had already been reset before initializing them. 
I don't know\nwhether it's possible for that to cause any concrete harm, but nulling\nout the pointers seems like cheap insurance.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Oct 2022 12:18:30 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Oct 20, 2022 at 9:48 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 6:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I tried implementing this, please see the attached v7 patch.\n>\n> I haven't checked this in detail but it looks much more reasonable in\n> terms of code footprint. However, we should, I think, set backup_state\n> = NULL and tablespace_map = NULL before deleting the memory context.\n> As you have it, I believe that if backup_state = (BackupState *)\n> palloc0(sizeof(BackupState)) fails -- say due to running out of memory\n> -- then those variables could end up pointing to garbage because the\n> context had already been reset before initializing them. I don't know\n> whether it's possible for that to cause any concrete harm, but nulling\n> out the pointers seems like cheap insurance.\n\nI think elsewhere in the code we reset dangling pointers either way -\nbefore or after deleting/resetting memory context. But placing them\nbefore would give us extra safety in case memory context\ndeletion/reset fails. Not sure what's the best way. 
However, I'm\nnullifying the dangling pointers after deleting/resetting memory\ncontext.\nMemoryContextDelete(Conf->buildCxt);\nMemoryContextDelete(PostmasterContext);\nMemoryContextDelete(rulescxt);\n\nPlease see the attached v8 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 20 Oct 2022 23:05:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On 2022-Oct-20, Bharath Rupireddy wrote:\n\n> I think elsewhere in the code we reset dangling pointers either ways -\n> before or after deleting/resetting memory context. But placing them\n> before would give us extra safety in case memory context\n> deletion/reset fails. Not sure what's the best way. However, I'm\n> nullifying the dangling pointers after deleting/resetting memory\n> context.\n\nI agree that's a good idea, and the patch looks good to me, but I don't\nthink asserting that they are null afterwards is useful.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 20 Oct 2022 19:47:07 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Oct 20, 2022 at 1:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I think elsewhere in the code we reset dangling pointers either ways -\n> before or after deleting/resetting memory context. But placing them\n> before would give us extra safety in case memory context\n> deletion/reset fails. Not sure what's the best way.\n\nI think it's OK to assume that deallocating memory will always\nsucceed, so it doesn't matter whether you do it just before or just\nafter that. 
But it's not OK to assume that *allocating* memory will\nalways succeed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Oct 2022 14:51:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Oct 20, 2022 at 11:17 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-20, Bharath Rupireddy wrote:\n>\n> > I think elsewhere in the code we reset dangling pointers either ways -\n> > before or after deleting/resetting memory context. But placing them\n> > before would give us extra safety in case memory context\n> > deletion/reset fails. Not sure what's the best way. However, I'm\n> > nullifying the dangling pointers after deleting/resetting memory\n> > context.\n>\n> I agree that's a good idea, and the patch looks good to me, but I don't\n> think asserting that they are null afterwards is useful.\n\n+1. Removed those assertions. Please see the attached v9 patch.\n\nOn Fri, Oct 21, 2022 at 12:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 1:35 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I think elsewhere in the code we reset dangling pointers either ways -\n> > before or after deleting/resetting memory context. But placing them\n> > before would give us extra safety in case memory context\n> > deletion/reset fails. Not sure what's the best way.\n>\n> I think it's OK to assume that deallocating memory will always\n> succeed, so it doesn't matter whether you do it just before or just\n> after that. 
But it's not OK to assume that *allocating* memory will\n> always succeed.\n\nRight.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Oct 2022 11:34:27 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Thu, Oct 20, 2022 at 02:51:21PM -0400, Robert Haas wrote:\n> On Thu, Oct 20, 2022 at 1:35 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I think elsewhere in the code we reset dangling pointers either ways -\n>> before or after deleting/resetting memory context. But placing them\n>> before would give us extra safety in case memory context\n>> deletion/reset fails. Not sure what's the best way.\n> \n> I think it's OK to assume that deallocating memory will always\n> succeed, so it doesn't matter whether you do it just before or just\n> after that. But it's not OK to assume that *allocating* memory will\n> always succeed.\n\nAFAIK, one of the callbacks associated to a memory context could\nfail, see comments before MemoryContextCallResetCallbacks() in\nMemoryContextDelete(). 
I agree that it should not matter here, but I\nthink that it is better to reset the pointers before attempting the\ndeletion of the memory context in this case.\n--\nMichael", "msg_date": "Fri, 21 Oct 2022 15:18:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Fri, Oct 21, 2022 at 11:34:27AM +0530, Bharath Rupireddy wrote:\n> On Fri, Oct 21, 2022 at 12:21 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Thu, Oct 20, 2022 at 1:35 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>> I think elsewhere in the code we reset dangling pointers either ways -\n>>> before or after deleting/resetting memory context. But placing them\n>>> before would give us extra safety in case memory context\n>>> deletion/reset fails. Not sure what's the best way.\n>>\n>> I think it's OK to assume that deallocating memory will always\n>> succeed, so it doesn't matter whether you do it just before or just\n>> after that. But it's not OK to assume that *allocating* memory will\n>> always succeed.\n> \n> Right.\n\nTo be exact, it seems to me that tablespace_map and backup_state\nshould be reset before deleting backupcontext, but the reset of\nbackupcontext should happen after the fact.\n\n+ backup_state = NULL;\n tablespace_map = NULL;\nThese two in pg_backup_start() don't matter, do they? They are\nreallocated a couple of lines down.\n\n+ * across. We keep the memory allocated in this memory context less,\nWhat does \"We keep the memory allocated in this memory context less\"\nmean here? 
\n--\nMichael", "msg_date": "Fri, 21 Oct 2022 15:28:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "At Thu, 20 Oct 2022 19:47:07 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> I agree that's a good idea, and the patch looks good to me, but I don't\n> think asserting that they are null afterwards is useful.\n\n+1 for this direction. And the patch is fine to me.\n\n\n>\toldcontext = MemoryContextSwitchTo(backupcontext);\n>\tAssert(backup_state == NULL);\n>\tAssert(tablespace_map == NULL);\n>\tbackup_state = (BackupState *) palloc0(sizeof(BackupState));\n>\ttablespace_map = makeStringInfo();\n>\tMemoryContextSwitchTo(oldcontext);\n\nWe can use MemoryContextAllocZero() for this purpose, but of couse not\nmandatory.\n\n\n+\t * across. We keep the memory allocated in this memory context less,\n+\t * because any error before reaching pg_backup_stop() can leak the memory\n+\t * until pg_backup_start() is called again. While this is not smart, it\n+\t * helps to keep things simple.\n\nI think the \"less\" is somewhat obscure. I feel we should be more\nexplicitly. And we don't need to put emphasis on \"leak\". I recklessly\npropose this as the draft.\n\n\"The context is intended to be used by this function to store only\nsession-lifetime values. 
It is, if left alone, reset at the next call\nto blow away orphan memory blocks from the previous failed call.\nWhile this is not smart, it helps to keep things simple.\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 21 Oct 2022 15:41:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Fri, Oct 21, 2022 at 2:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n> AFAIK, one of the callbacks associated to a memory context could\n> fail, see comments before MemoryContextCallResetCallbacks() in\n> MemoryContextDelete(). I agree that it should not matter here, but I\n> think that it is better to reset the pointers before attempting the\n> deletion of the memory context in this case.\n\nI think this is nitpicking. There's no real danger here, and if there\nwere, the error handling would have to take it into account somehow,\nwhich it doesn't.\n\nI'd probably do it before resetting the context as a matter of style,\nto make it clear that there's no window in which the pointers are set\nbut referencing invalid memory. But I do not think it makes any\npractical difference at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 21 Oct 2022 10:17:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Fri, Oct 21, 2022 at 11:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> To be exact, it seems to me that tablespace_map and backup_state\n> should be reset before deleting backupcontext, but the reset of\n> backupcontext should happen after the fact.\n>\n> + backup_state = NULL;\n> tablespace_map = NULL;\n> These two in pg_backup_start() don't matter, do they? 
They are\n> reallocated a couple of lines down.\n\nAfter all, that is what is being discussed here; what if palloc down\nbelow fails and they're not reset to NULL after MemoryContextReset()?\n\n> + * across. We keep the memory allocated in this memory context less,\n> What does \"We keep the memory allocated in this memory context less\"\n> mean here?\n\nWe try to keep it less because we don't want to allocate more memory\nand leak it unless pg_start_backup() is called again. Please read the\ndescription. I'll leave it to the committer's discretion whether to\nhave that part or remove it.\n\nOn Fri, Oct 21, 2022 at 12:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n>\n> + * across. We keep the memory allocated in this memory context less,\n> + * because any error before reaching pg_backup_stop() can leak the memory\n> + * until pg_backup_start() is called again. While this is not smart, it\n> + * helps to keep things simple.\n>\n> I think the \"less\" is somewhat obscure. I feel we should be more\n> explicitly. And we don't need to put emphasis on \"leak\". I recklessly\n> propose this as the draft.\n\nI tried to put it simple, please see the attached v10. I'll leave it\nto the committer's discretion for better wording.\n\nOn Fri, Oct 21, 2022 at 7:47 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Oct 21, 2022 at 2:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > AFAIK, one of the callbacks associated to a memory context could\n> > fail, see comments before MemoryContextCallResetCallbacks() in\n> > MemoryContextDelete(). I agree that it should not matter here, but I\n> > think that it is better to reset the pointers before attempting the\n> > deletion of the memory context in this case.\n>\n> I think this is nitpicking. 
There's no real danger here, and if there\n> were, the error handling would have to take it into account somehow,\n> which it doesn't.\n>\n> I'd probably do it before resetting the context as a matter of style,\n> to make it clear that there's no window in which the pointers are set\n> but referencing invalid memory. But I do not think it makes any\n> practical difference at all.\n\nPlease see the attached v10.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 21 Oct 2022 21:02:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Avoid memory leaks during base backups" }, { "msg_contents": "On Fri, Oct 21, 2022 at 09:02:04PM +0530, Bharath Rupireddy wrote:\n> After all, that is what is being discussed here; what if palloc down\n> below fails and they're not reset to NULL after MemoryContextReset()?\n\nIt does not seem to matter much to me for that, so left these as\nproposed.\n\n> On Fri, Oct 21, 2022 at 12:11 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> I think the \"less\" is somewhat obscure. I feel we should be more\n>> explicitly. And we don't need to put emphasis on \"leak\". I recklessly\n>> propose this as the draft.\n> \n> I tried to put it simple, please see the attached v10. I'll leave it\n> to the committer's discretion for better wording.\n\nI am still not sure what \"less\" means when referring to a \"memory\ncontext\". Anyway, I have gone through the comments and finished with\nsomething much more simplified, and applied the whole.\n--\nMichael", "msg_date": "Sat, 22 Oct 2022 18:42:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Avoid memory leaks during base backups" } ]
[ { "msg_contents": "Hi,\r\n\r\nThe PostgreSQL 15 GA release (15.0) is now scheduled for October 13, \r\n2022. The release team changed this from the planned date of October 6 \r\nto allow for additional testing of recent changes.\r\n\r\nPlease let us know if you have any questions. We're excited that we are \r\nvery close to officially releasing PostgreSQL 15.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Mon, 26 Sep 2022 11:06:51 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 GA release date" } ]
[ { "msg_contents": "Hi, hackers\n\nheap_force_kill/heap_force_freeze doesn’t consider other transactions that are using the same tuples even with tuple-locks.\nThe functions may break transaction semantic, ex:\n\nsession1\n```\ncreate table htab(id int);\ninsert into htab values (100), (200), (300), (400), (500);\n```\n\nsession2\n```\nbegin isolation level repeatable read;\nselect * from htab for share;\n id\n-----\n 100\n 200\n 300\n 400\n 500\n(5 rows)\n```\n\nsession1\n```\nselect heap_force_kill('htab'::regclass, ARRAY['(0, 1)']::tid[]);\n heap_force_kill\n-----------------\n\n(1 row)\n```\n\nsession2\n```\nselect * from htab for share;\n id\n-----\n 200\n 300\n 400\n 500\n(4 rows)\n```\n\nsession2 should get the same results as it's repeatable read isolation level.\n\nBy reading the doc:\n```\nThe pg_surgery module provides various functions to perform surgery on a damaged relation. These functions are unsafe by design and using them may corrupt (or further corrupt) your database. For example, these functions can easily be used to make a table inconsistent with its own indexes, to cause UNIQUE or FOREIGN KEY constraint violations, or even to make tuples visible which, when read, will cause a database server crash. 
They should be used with great caution and only as a last resort.\n\n```\nI know they are powerful tools, but also a little surprise with the above example.\n\nShould we add more docs to tell the users that the tool will change the tuples anyway even there are tuple-locks on them?\n\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 26 Sep 2022 23:59:19 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add more docs for pg_surgery?" }, { "msg_contents": "On Mon, Sep 26, 2022 at 9:29 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n>\n> Hi, hackers\n>\n> heap_force_kill/heap_force_freeze doesn’t consider other transactions that are using the same tuples even with tuple-locks.\n> The functions may break transaction semantic, ex:\n>\n> session1\n> ```\n> create table htab(id int);\n> insert into htab values (100), (200), (300), (400), (500);\n> ```\n>\n> session2\n> ```\n> begin isolation level repeatable read;\n> select * from htab for share;\n> id\n> -----\n> 100\n> 200\n> 300\n> 400\n> 500\n> (5 rows)\n> ```\n>\n> session1\n> ```\n> select heap_force_kill('htab'::regclass, ARRAY['(0, 1)']::tid[]);\n> heap_force_kill\n> -----------------\n>\n> (1 row)\n> ```\n>\n> session2\n> ```\n> select * from htab for share;\n> id\n> -----\n> 200\n> 300\n> 400\n> 500\n> (4 rows)\n> ```\n>\n> session2 should get the same results as it's repeatable read isolation level.\n>\n> By reading the doc:\n> ```\n> The pg_surgery module provides various functions to perform surgery on a damaged relation. These functions are unsafe by design and using them may corrupt (or further corrupt) your database. For example, these functions can easily be used to make a table inconsistent with its own indexes, to cause UNIQUE or FOREIGN KEY constraint violations, or even to make tuples visible which, when read, will cause a database server crash. 
They should be used with great caution and only as a last resort.\n>\n> ```\n> I know they are powerful tools, but also a little surprise with the above example.\n>\n> Should we add more docs to tell the users that the tool will change the tuples anyway even there are tuple-locks on them?\n>\n\nAs the name suggests and as documented, heap_force_kill will \"force\nkill\" the tuple, regardless of whether it is visible to another\ntransaction or not. And further it looks like you are doing an\nexperiment on undamaged relation which is not recommended as\ndocumented. If the relation would have been damaged, you probably may\nnot be able to access it.\n\n--\nWith Regards,\nAshutosh Sharma.", "msg_date": "Mon, 26 Sep 2022 22:17:07 +0530", "msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add more docs for pg_surgery?" }, { "msg_contents": "Regards,\nZhang Mingli\nOn Sep 27, 2022, 00:47 +0800, Ashutosh Sharma <ashu.coek88@gmail.com>, wrote:\n>\n> And further it looks like you are doing an\n> experiment on undamaged relation which is not recommended as\n> documented.\nYeah.\n>  If the relation would have been damaged, you probably may\n> not be able to access it.\n>\nThat make some sense.\n> --\n> With Regards,\n> Ashutosh Sharma.", "msg_date": "Tue, 27 Sep 2022 19:18:00 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add more docs for pg_surgery?" } ]
[ { "msg_contents": "Just a reminder that only some days left of \"September 2022 commitfest\"\nAs of now, there are \"295\" patches in total. Out of these 295 patches, \"18\"\npatches required committer attention, and 167 patches needed reviews.\n\nTotal: 295.\nNeeds review: 167.\nWaiting on Author: 44.\nReady for Committer: 18.\nCommitted: 50.\nMoved to next CF: 3.\nReturned with Feedback: 3.\nRejected: 2.\nWithdrawn: 8.\n\n\nOn the last days of Commitfest, I will perform these activities\n\n- For patches marked \"Waiting for Author\" and having at least one review,\n set to \"Returned with Feedback\" and send the appropriate email\n- For patches marked \"Needs review\n * If it received at least one good review, move it to the next CF\n(removing the current reviewer reservation)\n * Otherwise, leave them pending\n\n-- \nIbrar Ahmed", "msg_date": "Tue, 27 Sep 2022 04:26:53 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": true, "msg_subject": "[Commitfest 2022-09] Last days" } ]
[ { "msg_contents": "Hi hackers,\n\nEnums index a number of the GUC tables. This all relies on the\nelements being carefully arranged to be in the same order as those\nenums. There are comments to say what enum index belongs to each table\nelement.\n\nBut why not use designated initializers to enforce what the comments\nare hoping for?\n\n~~\n\nPSA a patch for the same.\n\nDoing this also exposed a minor typo in the comments.\n\"ERROR_HANDLING\" -> \"ERROR_HANDLING_OPTIONS\"\n\nFurthermore, with this change, now the GUC table elements are able to\nbe rearranged into any different order - eg alphabetical - if that\nwould be useful (my patch does not do this).\n\n~~\n\nIn passing, I also made a 0002 patch to remove some inconsistent\nwhitespace noticed in those config tables.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Tue, 27 Sep 2022 09:27:48 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "GUC tables - use designated initializers" }, { "msg_contents": "On Tue, Sep 27, 2022 at 09:27:48AM +1000, Peter Smith wrote:\n> But why not use designated initializers to enforce what the comments\n> are hoping for?\n\nThis is a C99 thing as far as I understand, adding one safety net.\nWhy not for these cases..\n\n> Doing this also exposed a minor typo in the comments.\n> \"ERROR_HANDLING\" -> \"ERROR_HANDLING_OPTIONS\"\n\nRight.\n--\nMichael", "msg_date": "Tue, 27 Sep 2022 10:12:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC tables - use designated initializers" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Enums index a number of the GUC tables. This all relies on the\n> elements being carefully arranged to be in the same order as those\n> enums. 
There are comments to say what enum index belongs to each table\n> element.\n> But why not use designated initializers to enforce what the comments\n> are hoping for?\n\nInteresting proposal, but it's not clear to me that this solution makes\nthe code more bulletproof rather than less so. Yeah, you removed the\norder dependency, but the other concern here is that the array gets\nupdated at all when adding a new enum entry. This method seems like\nit'd help disguise such an oversight. In particular, the adjacent\nStaticAssertDecls about the array lengths are testing something different\nthan they used to, and I fear they lost some of their power. Can we\nimprove those checks so they'd catch a missing entry again?\n\n> Furthermore, with this change, now the GUC table elements are able to\n> be rearranged into any different order - eg alphabetical - if that\n> would be useful (my patch does not do this).\n\nIf anything, that's an anti-feature IMV. I quite dislike code where\nthe same set of items are dealt with in randomly different orders\nin different places.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Sep 2022 12:21:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC tables - use designated initializers" }, { "msg_contents": "On Wed, Sep 28, 2022 at 2:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > Enums index a number of the GUC tables. This all relies on the\n> > elements being carefully arranged to be in the same order as those\n> > enums. There are comments to say what enum index belongs to each table\n> > element.\n> > But why not use designated initializers to enforce what the comments\n> > are hoping for?\n>\n> Interesting proposal, but it's not clear to me that this solution makes\n> the code more bulletproof rather than less so. 
Yeah, you removed the\n> order dependency, but the other concern here is that the array gets\n> updated at all when adding a new enum entry. This method seems like\n> it'd help disguise such an oversight. In particular, the adjacent\n> StaticAssertDecls about the array lengths are testing something different\n> than they used to, and I fear they lost some of their power.\n\nThanks for the feedback!\n\nThe current code StaticAssertDecl asserts that the array length is the\nsame as the number of enums by using hardwired knowledge of what enum\nis the \"last\" one (e.g. DEVELOPER_OPTIONS in the example below).\n\nStaticAssertDecl(lengthof(config_group_names) == (DEVELOPER_OPTIONS + 2),\n\"array length mismatch\");\n\nHmmm. I think maybe I found the example to justify your fear. It's a\nbit subtle and AFAIK the HEAD code would not suffer this problem ---\nimagine if the developer adds the new enum before the \"last\" one (e.g.\nADD_NEW_BEFORE_LAST comes before DEVELOPER_OPTIONS) and at the same\ntime they *forgot* to update the table elements, then that designated\nindex [DEVELOPER_OPTIONS] will still ensure the table becomes the\ncorrect increased length (so the StaticAssertDecl will be ok) except\nnow there will be an undetected \"hole\" in the table at the forgotten\n[ADD_NEW_BEFORE_LAST] index.\n\n> Can we\n> improve those checks so they'd catch a missing entry again?\n\nThinking...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 28 Sep 2022 12:04:00 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC tables - use designated initializers" }, { "msg_contents": "On Wed, Sep 28, 2022 at 12:04 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Sep 28, 2022 at 2:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> > > Enums index a number of the GUC tables. 
This all relies on the\n> > > elements being carefully arranged to be in the same order as those\n> > > enums. There are comments to say what enum index belongs to each table\n> > > element.\n> > > But why not use designated initializers to enforce what the comments\n> > > are hoping for?\n> >\n> > Interesting proposal, but it's not clear to me that this solution makes\n> > the code more bulletproof rather than less so. Yeah, you removed the\n> > order dependency, but the other concern here is that the array gets\n> > updated at all when adding a new enum entry. This method seems like\n> > it'd help disguise such an oversight. In particular, the adjacent\n> > StaticAssertDecls about the array lengths are testing something different\n> > than they used to, and I fear they lost some of their power.\n>\n> Thanks for the feedback!\n>\n> The current code StaticAssertDecl asserts that the array length is the\n> same as the number of enums by using hardwired knowledge of what enum\n> is the \"last\" one (e.g. DEVELOPER_OPTIONS in the example below).\n>\n> StaticAssertDecl(lengthof(config_group_names) == (DEVELOPER_OPTIONS + 2),\n> \"array length mismatch\");\n>\n> Hmmm. I think maybe I found the example to justify your fear. It's a\n> bit subtle and AFAIK the HEAD code would not suffer this problem ---\n> imagine if the developer adds the new enum before the \"last\" one (e.g.\n> ADD_NEW_BEFORE_LAST comes before DEVELOPER_OPTIONS) and at the same\n> time they *forgot* to update the table elements, then that designated\n> index [DEVELOPER_OPTIONS] will still ensure the table becomes the\n> correct increased length (so the StaticAssertDecl will be ok) except\n> now there will be an undetected \"hole\" in the table at the forgotten\n> [ADD_NEW_BEFORE_LAST] index.\n>\n> > Can we\n> > improve those checks so they'd catch a missing entry again?\n>\n> Thinking...\n>\n\nAlthough adding designated initializers did fix some behaviour of the\ncurrent code (e.g. 
which assumed array element order, but cannot\nenforce it), it also introduces some new unwanted quirks (e.g.\naccidentally omitting table elements can now be undetectable).\n\nI can't see any good coming from exchanging one kind of problem for a\nnew kind of problem, so I am abandoning the idea of using designated\ninitializers in these GUC tables.\n\n~\n\nThe v2 patches are updated as follows:\n\n0001 - Now this patch only fixes a comment that had a wrong enum name.\n0002 - Removes unnecessary whitespace (same as v1-0002)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 4 Oct 2022 16:20:36 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC tables - use designated initializers" }, { "msg_contents": "On Tue, Oct 04, 2022 at 04:20:36PM +1100, Peter Smith wrote:\n> The v2 patches are updated as follows:\n> \n> 0001 - Now this patch only fixes a comment that had a wrong enum name.\n\nThis was wrong, so fixed.\n\n> 0002 - Removes unnecessary whitespace (same as v1-0002)\n\nThis one does not seem worth doing, though..\n--\nMichael", "msg_date": "Tue, 4 Oct 2022 15:48:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC tables - use designated initializers" }, { "msg_contents": "On Tue, Oct 4, 2022 at 5:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 04, 2022 at 04:20:36PM +1100, Peter Smith wrote:\n> > The v2 patches are updated as follows:\n> >\n> > 0001 - Now this patch only fixes a comment that had a wrong enum name.\n>\n> This was wrong, so fixed.\n\nThanks for pushing!\n\n>\n> > 0002 - Removes unnecessary whitespace (same as v1-0002)\n>\n> This one does not seem worth doing, though..\n\nYeah, fair enough. 
I didn't really expect much support for that one,\nbut I thought I'd post it anyway when I saw it removed 250 lines from\nthe already long source file.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 Oct 2022 18:01:16 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC tables - use designated initializers" } ]
[ { "msg_contents": "Hi hackers.\n\nI have a question about the recommended way to declare the C variables\nused for the GUC values.\n\nHere are some examples from the code:\n\n~\n\nThe GUC boot values are defined in src/backend/utils/misc/guc_tables.c\n\ne.g. See the 4, and 2 below\n\n{\n{\"max_logical_replication_workers\",\nPGC_POSTMASTER,\nREPLICATION_SUBSCRIBERS,\ngettext_noop(\"Maximum number of logical replication worker processes.\"),\nNULL,\n},\n&max_logical_replication_workers,\n4, 0, MAX_BACKENDS,\nNULL, NULL, NULL\n},\n\n{\n{\"max_sync_workers_per_subscription\",\nPGC_SIGHUP,\nREPLICATION_SUBSCRIBERS,\ngettext_noop(\"Maximum number of table synchronization workers per\nsubscription.\"),\nNULL,\n},\n&max_sync_workers_per_subscription,\n2, 0, MAX_BACKENDS,\nNULL, NULL, NULL\n},\n\n~~\n\nMeanwhile, the associated C variables are declared in their respective modules.\n\ne.g. src/backend/replication/launcher.c\n\nint max_logical_replication_workers = 4;\nint max_sync_workers_per_subscription = 2;\n\n~~\n\nIt seems confusing to me that for the above code the initial value is\n\"hardwired\" in multiple places. Specifically, it looks tempting to\njust change the variable declaration value, but IIUC that's going to\nachieve nothing because it will just be overwritten by the\n\"boot-value\" during the GUC mechanism start-up.\n\nFurthermore, there seems to be no consistency with how these C variables are\nauto-initialized:\n\na) Sometimes the static variable is assigned some (dummy?) 
value that\nis not the same as the boot value\n- See src/backend/utils/misc/guc_tables.c, max_replication_slots boot\nvalue is 10\n- See src/backend/replication/slot.c, int max_replication_slots = 0;\n\nb) Sometimes the static value is assigned the same hardwired value as\nthe GUC boot value\n- See src/backend/utils/misc/guc_tables.c,\nmax_logical_replication_workers boot value is 4\n- See src/backend/replication/launcher.c, int\nmax_logical_replication_workers = 4;\n\nc) Sometimes the GUC C variables don't even have a comment saying that\nthey are GUC variables, so it is not at all obvious their initial\nvalues are going to get overwritten by some external mechanism.\n- See src/backend/replication/launcher.c, int\nmax_logical_replication_workers = 4;\n\n~\n\nI would like to know what is the recommended way/convention to write\nthe C variable declarations for the GUC values.\n\nIMO I felt the launcher.c code as shown would be greatly improved simply\nby starting with 0 values, and including an explanatory comment.\n\ne.g.\n\nCURRENT\nint max_logical_replication_workers = 4;\nint max_sync_workers_per_subscription = 2;\n\nSUGGESTION\n/*\n * GUC variables. Initial values are assigned at startup via\nInitializeGUCOptions.\n */\nint max_logical_replication_workers = 0;\nint max_sync_workers_per_subscription = 0;\n\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 27 Sep 2022 09:51:12 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "GUC values - recommended way to declare the C variables?" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> It seems confusing to me that for the above code the initial value is\n> \"hardwired\" in multiple places. 
Specifically, it looks tempting to\n> just change the variable declaration value, but IIUC that's going to\n> achieve nothing because it will just be overwritten by the\n> \"boot-value\" during the GUC mechanism start-up.\n\nWell, if you try that you'll soon discover it doesn't work ;-)\n\nIIRC, the primary argument for hand-initializing GUC variables is to\nensure that they have a sane value even before InitializeGUCOptions runs.\nObviously, that only matters for some subset of the GUCs that could be\nconsulted very early in startup ... but it's not perfectly clear just\nwhich ones it matters for.\n\n> a) Sometimes the static variable is assigned some (dummy?) value that\n> is not the same as the boot value\n> - See src/backend/utils/misc/guc_tables.c, max_replication_slots boot\n> value is 10\n> - See src/backend/replication/slot.c, int max_replication_slots = 0;\n\nThat seems pretty bogus. I think if we're not initializing a GUC to\nthe \"correct\" value then we should just leave it as not explicitly\ninitialized.\n\n> c) Sometimes the GUC C variables don't even have a comment saying that\n> they are GUC variables, so it is not at all obvious their initial\n> values are going to get overwritten by some external mechanism.\n\nThat's flat out sloppy commenting. There are a lot of people around\nhere who seem to think comments are optional :-(\n\n> SUGGESTION\n> /*\n> * GUC variables. Initial values are assigned at startup via\n> InitializeGUCOptions.\n> */\n> int max_logical_replication_workers = 0;\n> int max_sync_workers_per_subscription = 0;\n\n1. Comment far wordier than necessary. In most places we just\nannotate these as \"GUC variables\", and I think that's sufficient.\nYou're going to have a hard time getting people to write more\nthan that anyway.\n\n2. 
I don't agree with explicitly initializing to a wrong value.\nIt'd be sufficient to do\n\nint max_logical_replication_workers;\nint max_sync_workers_per_subscription;\n\nwhich would also make it clearer that initialization happens\nthrough some other mechanism.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 26 Sep 2022 20:08:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Tue, Sep 27, 2022 at 10:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > It seems confusing to me that for the above code the initial value is\n> > \"hardwired\" in multiple places. Specifically, it looks tempting to\n> > just change the variable declaration value, but IIUC that's going to\n> > achieve nothing because it will just be overwritten by the\n> > \"boot-value\" during the GUC mechanism start-up.\n>\n> Well, if you try that you'll soon discover it doesn't work ;-)\n>\n> IIRC, the primary argument for hand-initializing GUC variables is to\n> ensure that they have a sane value even before InitializeGUCOptions runs.\n> Obviously, that only matters for some subset of the GUCs that could be\n> consulted very early in startup ... but it's not perfectly clear just\n> which ones it matters for.\n>\n> > a) Sometimes the static variable is assigned some (dummy?) value that\n> > is not the same as the boot value\n> > - See src/backend/utils/misc/guc_tables.c, max_replication_slots boot\n> > value is 10\n> > - See src/backend/replication/slot.c, int max_replication_slots = 0;\n>\n> That seems pretty bogus. 
I think if we're not initializing a GUC to\n> the \"correct\" value then we should just leave it as not explicitly\n> initialized.\n>\n> > c) Sometimes the GUC C variables don't even have a comment saying that\n> > they are GUC variables, so it is not at all obvious their initial\n> > values are going to get overwritten by some external mechanism.\n>\n> That's flat out sloppy commenting. There are a lot of people around\n> here who seem to think comments are optional :-(\n>\n> > SUGGESTION\n> > /*\n> > * GUC variables. Initial values are assigned at startup via\n> > InitializeGUCOptions.\n> > */\n> > int max_logical_replication_workers = 0;\n> > int max_sync_workers_per_subscription = 0;\n>\n> 1. Comment far wordier than necessary. In most places we just\n> annotate these as \"GUC variables\", and I think that's sufficient.\n> You're going to have a hard time getting people to write more\n> than that anyway.\n>\n> 2. I don't agree with explicitly initializing to a wrong value.\n> It'd be sufficient to do\n>\n> int max_logical_replication_workers;\n> int max_sync_workers_per_subscription;\n>\n> which would also make it clearer that initialization happens\n> through some other mechanism.\n>\n\nThanks for your advice.\n\nI will try to post a patch in the next few days to address (per your\nsuggestions) some of the variables that I am more familiar with.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 27 Sep 2022 11:07:53 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Tue, Sep 27, 2022 at 11:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n...\n\n> I will try to post a patch in the new few days to address (per your\n> suggestions) some of the variables that I am more familiar with.\n>\n\nPSA a small patch to tidy a few of the GUC C variables - adding\ncomments and removing unnecessary declaration assignments.\n\nmake check-world passed OK.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.", "msg_date": "Wed, 28 Sep 2022 10:13:22 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Wed, Sep 28, 2022 at 10:13:22AM +1000, Peter Smith wrote:\n> PSA a small patch to tidy a few of the GUC C variables - adding\n> comments and removing unnecessary declaration assignments.\n> \n> make check-world passed OK.\n\nLooks reasonable to me. I've marked this as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 12 Oct 2022 12:12:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Wed, Oct 12, 2022 at 12:12:15PM -0700, Nathan Bossart wrote:\n> Looks reasonable to me. I've marked this as ready-for-committer.\n\nSo, the initial values of max_wal_senders and max_replication_slots\nbecame out of sync with their defaults in guc_tables.c. 
FWIW, I would\nargue the opposite way: rather than removing the initializations, I\nwould fix and keep them as these references can be useful when\nbrowsing the area of the code related to such GUCs, without having to\nlook at guc_tables.c for this information.\n--\nMichael", "msg_date": "Thu, 13 Oct 2022 09:47:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 13, 2022 at 09:47:25AM +0900, Michael Paquier wrote:\n> So, the initial values of max_wal_senders and max_replication_slots\n> became out of sync with their defaults in guc_tables.c. FWIW, I would\n> argue the opposite way: rather than removing the initializations, I\n> would fix and keep them as these references can be useful when\n> browsing the area of the code related to such GUCs, without having to\n> look at guc_tables.c for this information.\n\nWell, those initializations are only useful when they are kept in sync,\nwhich, as demonstrated by this patch, isn't always the case. I don't have\na terribly strong opinion about this, but I'd lean towards reducing the\nnumber of places that track the default value of GUCs.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 14:26:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Fri, Oct 14, 2022 at 8:26 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Oct 13, 2022 at 09:47:25AM +0900, Michael Paquier wrote:\n> > So, the initial values of max_wal_senders and max_replication_slots\n> > became out of sync with their defaults in guc_tables.c. 
FWIW, I would\n> > argue the opposite way: rather than removing the initializations, I\n> > would fix and keep them as these references can be useful when\n> > browsing the area of the code related to such GUCs, without having to\n> > look at guc_tables.c for this information.\n>\n> Well, those initializations are only useful when they are kept in sync,\n> which, as demonstrated by this patch, isn't always the case. I don't have\n> a terribly strong opinion about this, but I'd lean towards reducing the\n> number of places that track the default value of GUCs.\n>\n\nI agree if constants are used in both places then there will always be\nsome risk they can get out of sync again.\n\nBut probably it is no problem to just add #defines (e.g. in\nlogicallauncher.h?) to be commonly used for the C variable declaration\nand also in the guc_tables. I chose not to do that way only because it\ndidn't seem to be the typical convention for all the other numeric\nGUCs I looked at, but it's fine by me if that way is preferred\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Fri, 14 Oct 2022 13:15:58 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> I agree if constants are used in both places then there will always be\n> some risk they can get out of sync again.\n\nYeah.\n\n> But probably it is no problem to just add #defines (e.g. in\n> logicallauncher.h?) 
to be commonly used for the C variable declaration\n> and also in the guc_tables.\n\nThe problem is exactly that there's no great place to put those #define's,\nat least not without incurring a lot of fresh #include bloat.\n\nAlso, if you did it like that, then it doesn't really address Michael's\ndesire to see the default value in the variable declaration.\n\nI do lean towards having the data available, mainly because of the\nfear I mentioned upthread that some GUCs may be accessed before\nInitializeGUCOptions runs.\n\nCould we fix the out-of-sync risk by having InitializeGUCOptions insist\nthat the pre-existing value of the variable match what is in guc_tables.c?\nThat may not work for string values but I think we could insist on it\nfor other GUC data types. For strings, maybe the rule could be \"the\nold value must be NULL or strcmp-equal to the boot_val\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Oct 2022 23:14:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 13, 2022 at 11:14:57PM -0400, Tom Lane wrote:\n> Could we fix the out-of-sync risk by having InitializeGUCOptions insist\n> that the pre-existing value of the variable match what is in guc_tables.c?\n> That may not work for string values but I think we could insist on it\n> for other GUC data types. For strings, maybe the rule could be \"the\n> old value must be NULL or strcmp-equal to the boot_val\".\n\npg_strcasecmp()'d would be more flexible here? Sometimes the\ncharacter casing on the values is not entirely consistent, but no\nobjections to use something stricter, either.\n--\nMichael", "msg_date": "Fri, 14 Oct 2022 12:56:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, Oct 13, 2022 at 11:14:57PM -0400, Tom Lane wrote:\n>> For strings, maybe the rule could be \"the\n>> old value must be NULL or strcmp-equal to the boot_val\".\n\n> pg_strcasecmp()'d would be more flexible here?\n\nDon't see the point for that. The case we're talking about is\nwhere the variable is declared like\n\nchar *my_guc_variable = \"foo_bar\";\n\nwhere the initialization value is going to be a compile-time\nconstant. I don't see why we'd need to allow any difference\nbetween that constant and the one used in guc_tables.c.\n\nOn the other hand, we could insist that string values be strcmp-equal with\nno allowance for NULL. But that probably results in duplicate copies of\nthe string constant, and I'm not sure it buys anything in most places.\nAllowing NULL doesn't seem like it creates any extra hazard for early\nreferences, because they'd just crash if they try to use the value\nwhile it's still NULL.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Oct 2022 00:07:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "PSA v2* patches.\n\nPatch 0001 is just a minor tidy of the GUC C variables of logical\nreplication. The C variable initial values are present again, how\nMichael preferred them [1].\n\nPatch 0002 adds a sanity-check function called by\nInitializeGUCOptions, as suggested by Tom [2]. This is to ensure that\nthe GUC C variable initial values are sensible and/or have not gone\nstale compared with the compiled-in defaults of guc_tables.c. 
This\npatch also changes some GUC C variable initial values which were\nalready found (by this sanity-checker) to be different.\n\n~~~\n\nFYI, here are examples of errors when (contrived) mismatched values\nare detected:\n\n[postgres@CentOS7-x64 ~]$ pg_ctl -D ./MYDATAOSS/ start\nwaiting for server to start....FATAL: GUC (PGC_INT)\nmax_replication_slots, boot_val=10, C-var=999\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n[postgres@CentOS7-x64 ~]$ pg_ctl -D ./MYDATAOSS/ start\nwaiting for server to start....FATAL: GUC (PGC_BOOL)\nenable_partitionwise_aggregate, boot_val=0, C-var=1\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n[postgres@CentOS7-x64 ~]$ pg_ctl -D ./MYDATAOSS/ start\nwaiting for server to start....FATAL: GUC (PGC_REAL)\ncpu_operator_cost, boot_val=0.0025, C-var=99.99\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n[postgres@CentOS7-x64 ~]$ pg_ctl -D ./MYDATAOSS/ start\nwaiting for server to start....FATAL: GUC (PGC_STRING)\narchive_command, boot_val=, C-var=banana\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n[postgres@CentOS7-x64 ~]$ pg_ctl -D ./MYDATAOSS/ start\nwaiting for server to start....FATAL: GUC (PGC_ENUM) wal_level,\nboot_val=1, C-var=99\n stopped waiting\npg_ctl: could not start server\nExamine the log output.\n\n------\n[1] prefer to have C initial values -\nhttps://www.postgresql.org/message-id/Y0dgHfEGvvay5nle%40paquier.xyz\n[2] sanity-check idea -\nhttps://www.postgresql.org/message-id/1113448.1665717297%40sss.pgh.pa.us\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 20 Oct 2022 11:56:58 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Thu, Oct 20, 2022 at 11:56:58AM +1100, Peter Smith wrote:\n> Patch 0002 adds a sanity-check function called by\n> InitializeGUCOptions, as suggested by Tom [2]. This is to ensure that\n> the GUC C variable initial values are sensible and/or have not gone\n> stale compared with the compiled-in defaults of guc_tables.c. This\n> patch also changes some GUC C variable initial values which were\n> already found (by this sanity-checker) to be different.\n\nI like it.\n\nHowever it's fails on windows:\n\nhttps://cirrus-ci.com/task/5545965036765184\n\nrunning bootstrap script ... FATAL: GUC (PGC_BOOL) update_process_title, boot_val=0, C-var=1\n\nMaybe you need to exclude dynamically set gucs ?\nSee also this other thread, where I added a flag identifying exactly\nthat. https://commitfest.postgresql.org/40/3736/\nI need to polish that patch some, but maybe it'll be useful for you, too.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 19 Oct 2022 23:15:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 20, 2022 at 3:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 11:56:58AM +1100, Peter Smith wrote:\n> > Patch 0002 adds a sanity-check function called by\n> > InitializeGUCOptions, as suggested by Tom [2]. This is to ensure that\n> > the GUC C variable initial values are sensible and/or have not gone\n> > stale compared with the compiled-in defaults of guc_tables.c. This\n> > patch also changes some GUC C variable initial values which were\n> > already found (by this sanity-checker) to be different.\n>\n> I like it.\n>\n> However it's fails on windows:\n>\n> https://cirrus-ci.com/task/5545965036765184\n>\n> running bootstrap script ... 
FATAL: GUC (PGC_BOOL) update_process_title, boot_val=0, C-var=1\n>\n> Maybe you need to exclude dynamically set gucs ?\n> See also this other thread, where I added a flag identifying exactly\n> that. https://commitfest.postgresql.org/40/3736/\n> I need to polish that patch some, but maybe it'll be useful for you, too.\n>\n\nGreat, this looks very helpful. I will try again tomorrow by skipping\nover such GUCs.\n\nAnd I noticed a couple of other C initial values I had changed\ncoincide with what you've marked as GUC_DYNAMIC_DEFAULT so I'll\nrestore those to how they were before too.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 20 Oct 2022 18:52:00 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 20, 2022 at 6:52 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 3:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Oct 20, 2022 at 11:56:58AM +1100, Peter Smith wrote:\n> > > Patch 0002 adds a sanity-check function called by\n> > > InitializeGUCOptions, as suggested by Tom [2]. This is to ensure that\n> > > the GUC C variable initial values are sensible and/or have not gone\n> > > stale compared with the compiled-in defaults of guc_tables.c. This\n> > > patch also changes some GUC C variable initial values which were\n> > > already found (by this sanity-checker) to be different.\n> >\n> > I like it.\n> >\n> > However it's fails on windows:\n> >\n> > https://cirrus-ci.com/task/5545965036765184\n> >\n> > running bootstrap script ... FATAL: GUC (PGC_BOOL) update_process_title, boot_val=0, C-var=1\n> >\n> > Maybe you need to exclude dynamically set gucs ?\n> > See also this other thread, where I added a flag identifying exactly\n> > that. 
https://commitfest.postgresql.org/40/3736/\n> > I need to polish that patch some, but maybe it'll be useful for you, too.\n> >\n\nPSA patch set v3.\n\nThis is essentially the same as before except now, utilizing the\nGUC_DEFAULT_COMPILE flag added by Justin's patch [1], the sanity-check\nskips over any dynamic compiler-dependent GUCs.\n\nPatch 0001 - GUC trivial mods to logical replication GUC C var declarations\nPatch 0002 - (TMP) Justin's patch adds the GUC_DEFAULT_COMPILE flag\nsupport -- this is now a prerequisite for 0003\nPatch 0003 - GUC sanity-check comparisons of GUC C var declarations\nwith the GUC defaults from guc_tables.c\n\n------\n[1] Justin's patch of 24/Oct -\nhttps://www.postgresql.org/message-id/20221024220544.GJ16921%40telsasoft.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 25 Oct 2022 14:43:43 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Tue, Oct 25, 2022 at 02:43:43PM +1100, Peter Smith wrote:\n> This is essentially the same as before except now, utilizing the\n> GUC_DEFAULT_COMPILE flag added by Justin's patch [1], the sanity-check\n> skips over any dynamic compiler-dependent GUCs.\n\nYeah, this is a self-reminder that I should try to look at what's on\nthe other thread.\n\n> Patch 0001 - GUC trivial mods to logical replication GUC C var declarations\n\nThis one seems fine, so done.\n--\nMichael", "msg_date": "Tue, 25 Oct 2022 14:09:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Tue, Oct 25, 2022 at 4:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 25, 2022 at 02:43:43PM +1100, Peter Smith wrote:\n> > This is essentially the same as before except now, utilizing the\n> > GUC_DEFAULT_COMPILE flag added by Justin's patch [1], the sanity-check\n> > skips over any dynamic compiler-dependent GUCs.\n>\n> Yeah, this is a self-reminder that I should try to look at what's on\n> the other thread.\n>\n> > Patch 0001 - GUC trivial mods to logical replication GUC C var declarations\n>\n> This one seems fine, so done.\n> --\n\nThanks for pushing v3-0001.\n\nPSA v4. Rebased the remaining 2 patches so the cfbot can still work.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 25 Oct 2022 17:11:12 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "+#ifdef USE_ASSERT_CHECKING\n+ sanity_check_GUC_C_var(hentry->gucvar);\n+#endif\n\n=> You can conditionally define that as an empty function so #ifdefs\naren't needed in the caller:\n\nvoid sanity_check_GUC_C_var()\n{\n#ifdef USE_ASSERT_CHECKING\n\t...\n#endif\n}\n\n+ /* Skip checking for dynamic (compiler-dependent) GUCs. */\n\n=> This should say that the GUC's default is determined at compile-time.\n\nBut actually, I don't think you should use my patch. You needed to\nexclude update_process_title:\n\nsrc/backend/utils/misc/ps_status.c:bool update_process_title = true;\n...\nsrc/backend/utils/misc/guc_tables.c-#ifdef WIN32\nsrc/backend/utils/misc/guc_tables.c- false,\nsrc/backend/utils/misc/guc_tables.c-#else\nsrc/backend/utils/misc/guc_tables.c- true,\nsrc/backend/utils/misc/guc_tables.c-#endif\nsrc/backend/utils/misc/guc_tables.c- NULL, NULL, NULL\n\nMy patch would also exclude the 16 other GUCs with compile-time defaults\nfrom your check. 
It'd be better not to exclude them; I think the right\nsolution is to change the C variable initialization to a compile-time\nconstant:\n\n#ifdef WIN32\nbool update_process_title = false;\n#else\nbool update_process_title = true;\n#endif\n\nOr something more indirect like:\n\n#ifdef WIN32\n#define\tDEFAULT_PROCESS_TITLE false\n#else\n#define\tDEFAULT_PROCESS_TITLE true\n#endif\n\nbool update_process_title = DEFAULT_PROCESS_TITLE;\n\nI suspect there's not many GUCs that would need to change - this might\nbe the only one. If this GUC were defined in the inverse (bool\nskip_process_title), it wouldn't need special help, either.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 25 Oct 2022 15:04:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "Thanks for the feedback. PSA the v5 patch.\n\nOn Wed, Oct 26, 2022 at 7:04 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> +#ifdef USE_ASSERT_CHECKING\n> + sanity_check_GUC_C_var(hentry->gucvar);\n> +#endif\n>\n> => You can conditionally define that as an empty function so #ifdefs\n> aren't needed in the caller:\n>\n> void sanity_check_GUC_C_var()\n> {\n> #ifdef USE_ASSERT_CHECKING\n> ...\n> #endif\n> }\n>\n\nFixed as suggested.\n\n> But actually, I don't think you should use my patch. You needed to\n> exclude update_process_title:\n>\n> src/backend/utils/misc/ps_status.c:bool update_process_title = true;\n> ...\n> src/backend/utils/misc/guc_tables.c-#ifdef WIN32\n> src/backend/utils/misc/guc_tables.c- false,\n> src/backend/utils/misc/guc_tables.c-#else\n> src/backend/utils/misc/guc_tables.c- true,\n> src/backend/utils/misc/guc_tables.c-#endif\n> src/backend/utils/misc/guc_tables.c- NULL, NULL, NULL\n>\n> My patch would also exclude the 16 other GUCs with compile-time defaults\n> from your check. 
It'd be better not to exclude them; I think the right\n> solution is to change the C variable initialization to a compile-time\n> constant:\n>\n> #ifdef WIN32\n> bool update_process_title = false;\n> #else\n> bool update_process_title = true;\n> #endif\n>\n> Or something more indirect like:\n>\n> #ifdef WIN32\n> #define DEFAULT_PROCESS_TITLE false\n> #else\n> #define DEFAULT_PROCESS_TITLE true\n> #endif\n>\n> bool update_process_title = DEFAULT_PROCESS_TITLE;\n>\n> I suspect there's not many GUCs that would need to change - this might\n> be the only one. If this GUC were defined in the inverse (bool\n> skip_process_title), it wouldn't need special help, either.\n>\n\nI re-checked all the GUC C vars which your patch flags as\nGUC_DEFAULT_COMPILE. For some of them, where it was not any trouble, I\nmade the C var assignment use the same preprocessor rules as used by\nguc_tables. For others (mostly the string ones) I left the GUC C var\nuntouched because the sanity checker function already has a rule not\nto complain about int GUC C vars which are 0 or string GUC vars which\nare NULL.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 26 Oct 2022 18:31:56 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Wed, Oct 26, 2022 at 06:31:56PM +1100, Peter Smith wrote:\n> I re-checked all the GUC C vars which your patch flags as\n> GUC_DEFAULT_COMPILE. For some of them, where it was not any trouble, I\n> made the C var assignment use the same preprocessor rules as used by\n> guc_tables. For others (mostly the string ones) I left the GUC C var\n> untouched because the sanity checker function already has a rule not\n> to complain about int GUC C vars which are 0 or string GUC vars which\n> are NULL.\n\nI see. 
So you have on this thread an independent patch to make the CF\nbot happy, still depend on the patch posted on [1] to bypass the\nchanges with variables whose boot values are compilation-dependent.\n\nIs it right to believe that the only requirement here is\nGUC_DEFAULT_COMPILE but not GUC_DEFAULT_INITDB? The former is much\nmore intuitive than the latter. Still, I see an inconsistency here in\nwhat you are doing here.\n\nsanity_check_GUC_C_var() would need to skip all the GUCs marked as\nGUC_DEFAULT_COMPILE, meaning that one could still be \"fooled by a\nmismatched value\" in these cases. We are talking about a limited set\nof them, but it seems to me that we have no need for this flag at all\nonce the default GUC values are set with a #defined'd value, no?\ncheckpoint_flush_after, bgwriter_flush_after, port and\neffective_io_concurrency do that, which is why\nv5-0001-GUC-C-variable-sanity-check.patch does its stuff only for\nmaintenance_io_concurrency, update_process_title, assert_enabled and\nsyslog_facility. I think that it would be simpler to have a default\nfor these last four with a centralized definition, meaning that we\nwould not need a GUC_DEFAULT_COMPILE at all, while the validation\ncould be done for basically all the GUCs with default values\nassigned. In short, this patch has no need to depend on what's posted\nin [1]. \n\n[1]: https://www.postgresql.org/message-id/20221024220544.GJ16921@telsasoft.com\n--\nMichael", "msg_date": "Thu, 27 Oct 2022 11:06:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 27, 2022 at 11:06:56AM +0900, Michael Paquier wrote:\n> On Wed, Oct 26, 2022 at 06:31:56PM +1100, Peter Smith wrote:\n> > I re-checked all the GUC C vars which your patch flags as\n> > GUC_DEFAULT_COMPILE. 
For some of them, where it was not any trouble, I\n> > made the C var assignment use the same preprocessor rules as used by\n> > guc_tables. For others (mostly the string ones) I left the GUC C var\n> > untouched because the sanity checker function already has a rule not\n> > to complain about int GUC C vars which are 0 or string GUC vars which\n> > are NULL.\n> \n> I see. So you have on this thread an independent patch to make the CF\n> bot happy, still depend on the patch posted on [1] to bypass the\n> changes with variables whose boot values are compilation-dependent.\n\nIt seems like you're reviewing the previous version of the patch, rather\nthan the one attached to the message you responded to (which doesn't\nhave anything to do with GUC_DEFAULT_COMPILE).\n\nI don't know what you meant by \"make the CF bot happy\" (?)\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 26 Oct 2022 21:14:37 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Wed, Oct 26, 2022 at 09:14:37PM -0500, Justin Pryzby wrote:\n> It seems like you're reviewing the previous version of the patch, rather\n> than the one attached to the message you responded to (which doesn't\n> have anything to do with GUC_DEFAULT_COMPILE).\n\nIt does not seem so as things stand, I have been looking at\nv5-0001-GUC-C-variable-sanity-check.patch as posted by Peter here:\nhttps://www.postgresql.org/message-id/CAHut+PuCHjYXiTGdTOvHvDnjpbivLLr49gWVS+8VwnfoM4hJTw@mail.gmail.com\n\nIn combination with a two-patch set as posted by you here:\n0001-add-DYNAMIC_DEFAULT-for-settings-which-vary-by-.-con.patch\n0002-WIP-test-guc-default-values.patch\nhttps://www.postgresql.org/message-id/20221024220544.GJ16921@telsasoft.com\n\nThese are the latest patch versions posted on their respective thread\nI am aware of, and based on the latest updates of each thread it still\nlooked like there was a dependency between both. So, is that the case\nor not? If not, sorry if I misunderstood things.\n\n> I don't know what you meant by \"make the CF bot happy\" (?)\n\nIt is in my opinion confusing to see that the v5 posted on this\nthread, which was marked as ready for committer as of\nhttps://commitfest.postgresql.org/40/3934/, seem to rely on a facility\nthat it makes no use of. 
Hence it looks to me that this patch has\nbeen posted as-is to allow the CF bot to pass (I would have posted\nthat as an isolated two-patch set with the first patch introducing the\nflag if need be).\n\nAnyway, per my previous comments in my last message of this thread as\nof https://www.postgresql.org/message-id/Y1nnwFTrnL3ItleP@paquier.xyz,\nI don't see a need for DYNAMIC_DEFAULT from the other thread, nor do I\nsee a need to a style like that:\n+/* GUC variable */\n+bool update_process_title =\n+#ifdef WIN32\n+ false;\n+#else\n+ true;\n+#endif \n\nI think that it would be cleaner to use the same approach as\nchecking_after_flush and similar GUCs with a centralized definition,\nrather than spreading such style in two places for each GUC that this\npatch touches (aka its declaration and its default value in\nguc_tables.c). In any case, the patch of this thread still needs some\nadjustments IMO.\n--\nMichael", "msg_date": "Thu, 27 Oct 2022 11:33:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Thu, Oct 27, 2022 at 11:33:48AM +0900, Michael Paquier wrote:\n> On Wed, Oct 26, 2022 at 09:14:37PM -0500, Justin Pryzby wrote:\n> > It seems like you're reviewing the previous version of the patch, rather\n> > than the one attached to the message you responded to (which doesn't\n> > have anything to do with GUC_DEFAULT_COMPILE).\n> \n> It does not seem so as things stand, I have been looking at\n> v5-0001-GUC-C-variable-sanity-check.patch as posted by Peter here:\n> https://www.postgresql.org/message-id/CAHut+PuCHjYXiTGdTOvHvDnjpbivLLr49gWVS+8VwnfoM4hJTw@mail.gmail.com\n\nThis thread is about consistency of the global variables with what's set\nby the GUC infrastructure.\n\nIn v4, Peter posted a 2-patch series with my patch as 001.\nBut I pointed out that it's better to fix the initialization of the\ncompile-time GUCs rather than exclude them from the check.\nThen Peter submitted v5 which does that, and isn't built on top of my\npatch.\n\n> In combination with a two-patch set as posted by you here:\n> 0001-add-DYNAMIC_DEFAULT-for-settings-which-vary-by-.-con.patch\n> 0002-WIP-test-guc-default-values.patch\n> https://www.postgresql.org/message-id/20221024220544.GJ16921@telsasoft.com\n\nThat's a separate thread regarding consistency of the default values\n(annotations) shown in postgresql.conf. (I'm not sure whether or not my\npatch adding GUC flags is an agreed way forward, although they might\nturn out to be useful for other purposes).\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 26 Oct 2022 21:49:34 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Thu, Oct 27, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 26, 2022 at 09:14:37PM -0500, Justin Pryzby wrote:\n> > It seems like you're reviewing the previous version of the patch, rather\n> > than the one attached to the message you responded to (which doesn't\n> > have anything to do with GUC_DEFAULT_COMPILE).\n>\n> It does not seem so as things stand, I have been looking at\n> v5-0001-GUC-C-variable-sanity-check.patch as posted by Peter here:\n> https://www.postgresql.org/message-id/CAHut+PuCHjYXiTGdTOvHvDnjpbivLLr49gWVS+8VwnfoM4hJTw@mail.gmail.com\n>\n> In combination with a two-patch set as posted by you here:\n> 0001-add-DYNAMIC_DEFAULT-for-settings-which-vary-by-.-con.patch\n> 0002-WIP-test-guc-default-values.patch\n> https://www.postgresql.org/message-id/20221024220544.GJ16921@telsasoft.com\n>\n> These are the latest patch versions posted on their respective thread\n> I am aware of, and based on the latest updates of each thread it still\n> looked like there was a dependency between both. So, is that the case\n> or not? If not, sorry if I misunderstood things.\n\nNo. My v5 is no longer dependent on the other patch.\n\n>\n> > I don't know what you meant by \"make the CF bot happy\" (?)\n>\n> It is in my opinion confusing to see that the v5 posted on this\n> thread, which was marked as ready for committer as of\n> https://commitfest.postgresql.org/40/3934/, seem to rely on a facility\n> that it makes no use of. Hence it looks to me that this patch has\n> been posted as-is to allow the CF bot to pass (I would have posted\n> that as an isolated two-patch set with the first patch introducing the\n> flag if need be).\n\nYeah, my v4 was posted along with the other GUC flag patch as a\nprerequisite to make the cfbot happy. This is no longer the case - v5\nis a single independent patch. Sorry for the \"ready for the committer\"\nstatus being confusing. 
At that time I thought it was.\n\n>\n> Anyway, per my previous comments in my last message of this thread as\n> of https://www.postgresql.org/message-id/Y1nnwFTrnL3ItleP@paquier.xyz,\n> I don't see a need for DYNAMIC_DEFAULT from the other thread, nor do I\n> see a need to a style like that:\n> +/* GUC variable */\n> +bool update_process_title =\n> +#ifdef WIN32\n> + false;\n> +#else\n> + true;\n> +#endif\n>\n> I think that it would be cleaner to use the same approach as\n> checking_after_flush and similar GUCs with a centralized definition,\n> rather than spreading such style in two places for each GUC that this\n> patch touches (aka its declaration and its default value in\n> guc_tables.c). In any case, the patch of this thread still needs some\n> adjustments IMO.\n\nOK, I can make that adjustment if it is preferred. I think it is the\nsame as what I already suggested a while ago [1] (\"But probably it is\nno problem to just add #defines...\")\n\n------\n[1] https://www.postgresql.org/message-id/1113448.1665717297%40sss.pgh.pa.us\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Thu, 27 Oct 2022 13:49:53 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Wed, Oct 26, 2022 at 09:49:34PM -0500, Justin Pryzby wrote:\n> In v4, Peter posted a 2-patch series with my patch as 001.\n> But I pointed out that it's better to fix the initialization of the\n> compile-time GUCs rather than exclude them from the check.\n> Then Peter submitted v5 which does that, and isn't built on top of my\n> patch.\n\nOkidoki, thanks for the clarification.\n--\nMichael", "msg_date": "Thu, 27 Oct 2022 11:53:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" 
}, { "msg_contents": "On Thu, Oct 27, 2022 at 1:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n>...\n>\n> Anyway, per my previous comments in my last message of this thread as\n> of https://www.postgresql.org/message-id/Y1nnwFTrnL3ItleP@paquier.xyz,\n> I don't see a need for DYNAMIC_DEFAULT from the other thread, nor do I\n> see a need to a style like that:\n> +/* GUC variable */\n> +bool update_process_title =\n> +#ifdef WIN32\n> + false;\n> +#else\n> + true;\n> +#endif\n>\n> I think that it would be cleaner to use the same approach as\n> checking_after_flush and similar GUCs with a centralized definition,\n> rather than spreading such style in two places for each GUC that this\n> patch touches (aka its declaration and its default value in\n> guc_tables.c). In any case, the patch of this thread still needs some\n> adjustments IMO.\n\nPSA patch v6.\n\nThe GUC defaults of guc_tables.c, and the modified GUC C var\ndeclarations now share the same common #define'd value (instead of\ncut/paste preprocessor code).\n\nPer Michael's suggestion [1] to use centralized definitions.\n\n------\n[1] https://www.postgresql.org/message-id/Y1nuDNZDncx7%2BA1j%40paquier.xyz\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 27 Oct 2022 19:00:26 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Thu, Oct 27, 2022 at 07:00:26PM +1100, Peter Smith wrote:\n> The GUC defaults of guc_tables.c, and the modified GUC C var\n> declarations now share the same common #define'd value (instead of\n> cut/paste preprocessor code).\n\nThanks. 
I have not looked at the checkup logic yet, but the central\ndeclarations seem rather sane, and I have a few comments about the\nlatter.\n\n+#ifdef WIN32\n+#define DEFAULT_UPDATE_PROCESS_TITLE false\n+#else\n+#define DEFAULT_UPDATE_PROCESS_TITLE true\n+#endif\nThis is the kind of things I would document as a comment, say\n\"Disabled on Windows as the performance overhead can be significant\".\n\nActually, pg_iovec.h uses WIN32 without any previous header declared,\nbut win32.h tells a different story as of ed9b3606, where we would\ndefine WIN32 if it does not exist yet. That may impact the default\ndepending on the environment used? I am wondering whether the top of\nwin32.h could be removed, these days..\n\n+#ifdef USE_PREFETCH\n+#define DEFAULT_EFFECTIVE_IO_CONCURRENCY 1\n+#define DEFAULT_MAINTENANCE_IO_CONCURRENCY 10\n+#else\n+#define DEFAULT_EFFECTIVE_IO_CONCURRENCY 0\n+#define DEFAULT_MAINTENANCE_IO_CONCURRENCY 0\n+#endif\nThese don't make sense without prefetching available. Perhaps that's\nobvious enough when reading the code still I would add a small note.\n--\nMichael", "msg_date": "Fri, 28 Oct 2022 11:48:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Fri, Oct 28, 2022 at 11:48:13AM +0900, Michael Paquier wrote:\n> Actually, pg_iovec.h uses WIN32 without any previous header declared,\n> but win32.h tells a different story as of ed9b3606, where we would\n> define WIN32 if it does not exist yet.\n\nSeeing all the places where pg_status.h is included, that should be\nfine, so please just ignore this part.\n--\nMichael", "msg_date": "Fri, 28 Oct 2022 12:05:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Fri, Oct 28, 2022 at 11:48:13AM +0900, Michael Paquier wrote:\n> Thanks. 
I have not looked at the checkup logic yet, but the central\n> declarations seem rather sane, and I have a few comments about the\n> latter.\n\nSo, I've had the energy to look at the check logic today, and noticed\nthat, while the proposed patch is doing the job when loading the\nin-core GUCs, nothing is happening for the custom GUCs that could be\nloaded through shared_preload_libraries or just from a LOAD command.\n\nAfter adding an extra check in define_custom_variable() (reworking a\nbit the interface proposed while on it), I have found a few more\nissues than what's been already found on this thread:\n- 5 missing spots in pg_stat_statements.\n- 3 float rounding issues in pg_trgm.\n- 1 spot in pg_prewarm.\n- A few more that had no initialization, but these had a default of\nfalse/0/0.0 so it does not influence the end result but I have added\nsome initializations anyway.\n\nWith all that addressed, I am finishing with the attached. I have\nadded some comments for the default definitions depending on the\nCFLAGS, explaining the reasons behind the choices made. The CI has\nproduced a green run, which is not the same as the buildfarm, still\ngives some confidence.\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 28 Oct 2022 16:05:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Fri, Oct 28, 2022 at 6:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Oct 28, 2022 at 11:48:13AM +0900, Michael Paquier wrote:\n> > Thanks. 
I have not looked at the checkup logic yet, but the central\n> > declarations seem rather sane, and I have a few comments about the\n> > latter.\n>\n> So, I've had the energy to look at the check logic today, and noticed\n> that, while the proposed patch is doing the job when loading the\n> in-core GUCs, nothing is happening for the custom GUCs that could be\n> loaded through shared_preload_libraries or just from a LOAD command.\n>\n> After adding an extra check in define_custom_variable() (reworking a\n> bit the interface proposed while on it), I have found a few more\n> issues than what's been already found on this thread:\n> - 5 missing spots in pg_stat_statements.\n> - 3 float rounding issues in pg_trgm.\n> - 1 spot in pg_prewarm.\n> - A few more that had no initialization, but these had a default of\n> false/0/0.0 so it does not influence the end result but I have added\n> some initializations anyway.\n>\n> With all that addressed, I am finishing with the attached. I have\n> added some comments for the default definitions depending on the\n> CFLAGS, explaining the reasons behind the choices made. The CI has\n> produced a green run, which is not the same as the buildfarm, still\n> gives some confidence.\n>\n> Thoughts?\n\nLGTM.\n\nThe patch was intended to expose mismatches, and it seems to be doing\nthat job already...\n\nI only had some nitpicks for a couple of the new comments, below:\n\n======\n\n1. src/include/storage/bufmgr.h\n\n+\n+/* effective when prefetching is available */\n+#ifdef USE_PREFETCH\n+#define DEFAULT_EFFECTIVE_IO_CONCURRENCY 1\n+#define DEFAULT_MAINTENANCE_IO_CONCURRENCY 10\n+#else\n+#define DEFAULT_EFFECTIVE_IO_CONCURRENCY 0\n+#define DEFAULT_MAINTENANCE_IO_CONCURRENCY 0\n+#endif\n\nMaybe avoid the word \"effective\" since that is also one of the GUC names.\n\nUse uppercase.\n\nSUGGESTION\n/* Only applicable when prefetching is available */\n\n======\n\n2. 
src/include/utils/ps_status.h\n\n+/* Disabled on Windows as the performance overhead can be significant */\n+#ifdef WIN32\n+#define DEFAULT_UPDATE_PROCESS_TITLE false\n+#else\n+#define DEFAULT_UPDATE_PROCESS_TITLE true\n+#endif\n extern PGDLLIMPORT bool update_process_title;\n\n\nPerhaps put that comment inside the #ifdef WIN32\n\nSUGGESTION\n#ifdef WIN32\n/* Disabled on Windows because the performance overhead can be significant */\n#define DEFAULT_UPDATE_PROCESS_TITLE false\n#else\n...\n\n======\n\nsrc/backend/utils/misc/guc.c\n\n3. InitializeGUCOptions\n\n@@ -1413,6 +1496,9 @@ InitializeGUCOptions(void)\n hash_seq_init(&status, guc_hashtab);\n while ((hentry = (GUCHashEntry *) hash_seq_search(&status)) != NULL)\n {\n+ /* check mapping between initial and default value */\n+ Assert(check_GUC_init(hentry->gucvar));\n+\n\nUse uppercase.\n\nMinor re-wording.\n\nSUGGESTION\n/* Check the GUC default and declared initial value for consistency */\n\n~~~\n\n4. define_custom_variable\n\nSame as #3.\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n", "msg_date": "Mon, 31 Oct 2022 12:01:33 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Mon, Oct 31, 2022 at 12:01:33PM +1100, Peter Smith wrote:\n> SUGGESTION\n> /* Only applicable when prefetching is available */\n\nThanks for the suggestion. 
Done this way, then.\n\n> +/* Disabled on Windows as the performance overhead can be significant */\n> +#ifdef WIN32\n> +#define DEFAULT_UPDATE_PROCESS_TITLE false\n> +#else\n> +#define DEFAULT_UPDATE_PROCESS_TITLE true\n> +#endif\n> extern PGDLLIMPORT bool update_process_title;\n> \n> Perhaps put that comment inside the #ifdef WIN32\n\nI'd keep that externally, as ps_status.h does so.\n\n> [...]\n> SUGGESTION\n> /* Check the GUC default and declared initial value for consistency */\n\nOkay, fine by me.\n\nI have split the change into two parts at the end: one to refactor and\nfix the C declarations, and a second to introduce the check routine\nwith all the correct declarations in place.\n\nFWIW, I have been testing that with my own in-house modules and it has\ncaught a few stupid inconsistencies. Let's see how it goes.\n--\nMichael", "msg_date": "Mon, 31 Oct 2022 14:02:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" }, { "msg_contents": "On Mon, Oct 31, 2022 at 4:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 31, 2022 at 12:01:33PM +1100, Peter Smith wrote:\n> > SUGGESTION\n> > /* Only applicable when prefetching is available */\n>\n> Thanks for the suggestion. 
Done this way, then.\n>\n> > +/* Disabled on Windows as the performance overhead can be significant */\n> > +#ifdef WIN32\n> > +#define DEFAULT_UPDATE_PROCESS_TITLE false\n> > +#else\n> > +#define DEFAULT_UPDATE_PROCESS_TITLE true\n> > +#endif\n> > extern PGDLLIMPORT bool update_process_title;\n> >\n> > Perhaps put that comment inside the #ifdef WIN32\n>\n> I'd keep that externally, as ps_status.h does so.\n>\n> > [...]\n> > SUGGESTION\n> > /* Check the GUC default and declared initial value for consistency */\n>\n> Okay, fine by me.\n>\n> I have split the change into two parts at the end: one to refactor and\n> fix the C declarations, and a second to introduce the check routine\n> with all the correct declarations in place.\n>\n> FWIW, I have been testing that with my own in-house modules and it has\n> caught a few stupid inconsistencies. Let's see how it goes.\n\nThanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 31 Oct 2022 17:21:48 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GUC values - recommended way to declare the C variables?" } ]
[ { "msg_contents": "I was wondering about how to debug failed builds on Cirrus CI, and\nafter poking at the interface I realized we helpfully upload the logs\nfrom CI runs for user download.\n\nIn an effort to save the next person a few minutes I thought the\nattached minor patch would help.\n\nThanks,\nJames Coleman", "msg_date": "Mon, 26 Sep 2022 22:08:12 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Add hint about downloadable logs to CI README" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 22:08:12 -0400, James Coleman wrote:\n> I was wondering about how to debug failed builds on Cirrus CI, and\n> after poking at the interface I realized we helpfully upload the logs\n> from CI runs for user download.\n> \n> In an effort to save the next person a few minutes I thought the\n> attached minor patch would help.\n\nI'm not quite sure how likely it is to help, but it can't hurt... Pushed.\n\nThanks for the patch!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 20:04:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Add hint about downloadable logs to CI README" } ]
[ { "msg_contents": "I had a build on Cirrus CI fail tonight in what I have to assume was\neither a problem with caching across builds or some such similar\nflakiness. In the Debian task [1] I received this error:\n\nsu postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\nIn file included from parser.c:25:\n./gramparse.h:29:10: fatal error: 'gram.h' file not found\n#include \"gram.h\"\n^~~~~~~~\n1 error generated.\nmake[3]: *** [../../../src/Makefile.global:1078: parser.bc] Error 1\nmake[3]: *** Waiting for unfinished jobs....\nmake[2]: *** [common.mk:36: parser-recursive] Error 2\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\nThere were no changes in the commits I'd made to either parser.c or\ngramparse.h or gram.h. After running \"git commit --amend --no-edit\"\n(with zero changes) to rewrite the commit and forcing pushing the\nbuild [2] seems to be fine. I've double-checked there are no\ndifferences between the commits on the two builds (git diff shows no\noutput).\n\nIs it possible we're missing some kind of necessary build isolation in\nthe Cirrus CI scripting?\n\nThanks,\nJames Coleman\n\n1: https://cirrus-ci.com/task/6141559258218496\n2: https://cirrus-ci.com/build/6309235720978432\n\n\n", "msg_date": "Mon, 26 Sep 2022 22:36:24 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "cirrus-ci cross-build interactions?" }, { "msg_contents": "On Mon, Sep 26, 2022 at 10:36 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> I had a build on Cirrus CI fail tonight in what I have to assume was\n> either a problem with caching across builds or some such similar\n> flakiness. 
In the Debian task [1] I received this error:\n>\n> su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n> In file included from parser.c:25:\n> ./gramparse.h:29:10: fatal error: 'gram.h' file not found\n> #include \"gram.h\"\n> ^~~~~~~~\n> 1 error generated.\n> make[3]: *** [../../../src/Makefile.global:1078: parser.bc] Error 1\n> make[3]: *** Waiting for unfinished jobs....\n> make[2]: *** [common.mk:36: parser-recursive] Error 2\n> make[1]: *** [Makefile:42: all-backend-recurse] Error 2\n> make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n>\n> There were no changes in the commits I'd made to either parser.c or\n> gramparse.h or gram.h. After running \"git commit --amend --no-edit\"\n> (with zero changes) to rewrite the commit and forcing pushing the\n> build [2] seems to be fine. I've double-checked there are no\n> differences between the commits on the two builds (git diff shows no\n> output).\n>\n> Is it possible we're missing some kind of necessary build isolation in\n> the Cirrus CI scripting?\n>\n> Thanks,\n> James Coleman\n>\n> 1: https://cirrus-ci.com/task/6141559258218496\n> 2: https://cirrus-ci.com/build/6309235720978432\n\nHmm, it looks like I don't have the commit that came out of this\nthread [1] about gram.h issues; perhaps that's the issue.\n\nI'm not sure why it fails sometimes and not others, however, though I\nnoticed that on the second build from my original email the Debian\nstep passed while the compiler warnings step failed with the same\nerror.\n\nJames Coleman\n\n1: https://www.postgresql.org/message-id/20220914210427.y26tkagmxo5wwbvp%40awork3.anarazel.de\n\n\n", "msg_date": "Mon, 26 Sep 2022 22:43:17 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: cirrus-ci cross-build interactions?" 
}, { "msg_contents": "Hi,\n\nOn 2022-09-26 22:36:24 -0400, James Coleman wrote:\n> I had a build on Cirrus CI fail tonight in what I have to assume was\n> either a problem with caching across builds or some such similar\n> flakiness. In the Debian task [1] I received this error:\n> \n> su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n> In file included from parser.c:25:\n> ./gramparse.h:29:10: fatal error: 'gram.h' file not found\n> #include \"gram.h\"\n> ^~~~~~~~\n> 1 error generated.\n> make[3]: *** [../../../src/Makefile.global:1078: parser.bc] Error 1\n> make[3]: *** Waiting for unfinished jobs....\n> make[2]: *** [common.mk:36: parser-recursive] Error 2\n> make[1]: *** [Makefile:42: all-backend-recurse] Error 2\n> make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n> \n> There were no changes in the commits I'd made to either parser.c or\n> gramparse.h or gram.h. After running \"git commit --amend --no-edit\"\n> (with zero changes) to rewrite the commit and forcing pushing the\n> build [2] seems to be fine. I've double-checked there are no\n> differences between the commits on the two builds (git diff shows no\n> output).\n> \n> Is it possible we're missing some kind of necessary build isolation in\n> the Cirrus CI scripting?\n\nVery unlikely - most of the tasks, including debian, use VMs that are thrown\naway after a single use.\n\nThe explanation is likely that you're missing\n\ncommit 16492df70bb25bc99ca3c340a75ba84ca64171b8\nAuthor: John Naylor <john.naylor@postgresql.org>\nDate: 2022-09-15 10:24:55 +0700\n \n Blind attempt to fix LLVM dependency in the backend\n\nand that the reason you noticed this in one build but not another is purely\ndue to scheduling variances.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 26 Sep 2022 19:48:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: cirrus-ci cross-build interactions?" 
}, { "msg_contents": "On Mon, Sep 26, 2022 at 10:48 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-26 22:36:24 -0400, James Coleman wrote:\n> > I had a build on Cirrus CI fail tonight in what I have to assume was\n> > either a problem with caching across builds or some such similar\n> > flakiness. In the Debian task [1] I received this error:\n> >\n> > su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n> > In file included from parser.c:25:\n> > ./gramparse.h:29:10: fatal error: 'gram.h' file not found\n> > #include \"gram.h\"\n> > ^~~~~~~~\n> > 1 error generated.\n> > make[3]: *** [../../../src/Makefile.global:1078: parser.bc] Error 1\n> > make[3]: *** Waiting for unfinished jobs....\n> > make[2]: *** [common.mk:36: parser-recursive] Error 2\n> > make[1]: *** [Makefile:42: all-backend-recurse] Error 2\n> > make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n> >\n> > There were no changes in the commits I'd made to either parser.c or\n> > gramparse.h or gram.h. After running \"git commit --amend --no-edit\"\n> > (with zero changes) to rewrite the commit and forcing pushing the\n> > build [2] seems to be fine. 
I've double-checked there are no\n> > differences between the commits on the two builds (git diff shows no\n> > output).\n> >\n> > Is it possible we're missing some kind of necessary build isolation in\n> > the Cirrus CI scripting?\n>\n> Very unlikely - most of the tasks, including debian, use VMs that are thrown\n> away after a single use.\n>\n> The explanation is likely that you're missing\n>\n> commit 16492df70bb25bc99ca3c340a75ba84ca64171b8\n> Author: John Naylor <john.naylor@postgresql.org>\n> Date: 2022-09-15 10:24:55 +0700\n>\n> Blind attempt to fix LLVM dependency in the backend\n>\n> and that the reason you noticed this in one build but not another is purely\n> due to scheduling variances.\n\nYes, as noted in my child reply to yours the egg is on my face -- I\nhadn't rebased on the latest commits for a little too long.\n\nThanks for the troubleshooting and relevant fix.\n\nJames Coleman\n\n\n", "msg_date": "Mon, 26 Sep 2022 22:58:16 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": true, "msg_subject": "Re: cirrus-ci cross-build interactions?" } ]
[ { "msg_contents": "Like other Postgres hackers [1], I have a custom .clang-format file\nthat I use for my work on Postgres. It's a useful tool, despite some\nnotable problems.\n\nFirst, I should mention the problems. The config that I use makes an\nawkward trade-off that results in function declarations getting\nmangled. This trade-off seems unavoidable, perhaps owing to a design\nproblem with the tool. I also generally prefer the way that pgindent\nindents blocks of variables at the start of each scope; clang-format\naligns them in a column-perfect way, which is less aesthetically\npleasing and more distracting than what pgindent will do.\n\nclang-format also has some notable advantages over pgindent when used\nas a tool, day to day. I find that clang-format can reliably fix some\nthings that pgindent just won't fix. This includes misformatted\nfunction parameters with a line break that puts the name on a separate\nline to the type. As a general rule, it tends to do better with code\nthat is *very* poorly formatted. It also has the advantage of being\neasy to run from my text editor. It can reformat even a range of lines\nin a way that is passably close to Postgres style, without any of the\nhassles of setting up pgindent.\n\nSince many of us are using clang-format anyway, it occurs to me that\nwe should perhaps commit a clang-format dot file, so that new\ncontributors have a reasonable way of formatting code that \"just\nworks\". Using pgindent is easy enough when you get used to it, but\nit's not easy to set up for the first time. I think that some editors\ncan use a project's clang-format file automatically, even. If a new\ncontributor can use the existing clang-format file it's likely to be\nsignificantly better than using nothing.\n\nI really don't see any real problem with making something available,\nwithout changing any official project guidelines. 
It's commonplace to\nprovide a clang-format file these days.\n\n[1] https://www.postgresql.org/message-id/flat/55665327.2060508%40gmx.net#080983bfcee12d46a33854e1064fdcca\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 26 Sep 2022 19:56:23 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Adding a clang-format file" }, { "msg_contents": "Hi Peter,\n\n> I really don't see any real problem with making something available,\n> without changing any official project guidelines. It's commonplace to\n> provide a clang-format file these days.\n\nPersonally I don't have anything against the idea. TimescaleDB uses\nclang-format to mimic pgindent and it works quite well. One problem\nworth mentioning though is that the clang-format file is dependent on\nthe particular version of clang-format. TSDB requires version 7 or 8\nand the reason why the project can't easily switch to a newer version\nis that the format has changed. So perhaps a directory would be more\nappropriate than a single file.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 27 Sep 2022 15:34:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Adding a clang-format file" }, { "msg_contents": "On Tue, Sep 27, 2022 at 5:35 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Personally I don't have anything against the idea. TimescaleDB uses\n> clang-format to mimic pgindent and it works quite well. One problem\n> worth mentioning though is that the clang-format file is dependent on\n> the particular version of clang-format.\n\nI was hoping that something generic could work here. Something that we\ncould provide that didn't claim to be authoritative, that has a\nreasonable degree of compatibility that allows most people to use the\nfile without much fuss. Kind of like our .editorconfig file. 
That\nmight not be a realistic goal, though, since the clang-format settings\nare all quite complicated.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:43:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Adding a clang-format file" }, { "msg_contents": "Hi, Just wondering would you mind sharing your .clang-format file? I find\r\nthe attachment you pointed to is only a demo with a “…” line and it doesn’t\r\nformat PG code very well.\r\n\r\nbtw I’m also greatly in favor of this idea. clang-format is tightly integrated into\r\nCLion and it’s very convenient. right now I’m using a CLion preset by Greenplum [1]\r\nbut it doesn’t handle struct fields and variables column alignment very well.\r\n\r\n[1] https://groups.google.com/a/greenplum.org/g/gpdb-dev/c/rDYSYotssbE/m/7HgsWuj7AwAJ\r\n\r\n\r\n> On Sep 28, 2022, at 08:43, Peter Geoghegan <pg@bowt.ie> wrote:\r\n> \r\n> On Tue, Sep 27, 2022 at 5:35 AM Aleksander Alekseev\r\n> <aleksander@timescale.com> wrote:\r\n>> Personally I don't have anything against the idea. TimescaleDB uses\r\n>> clang-format to mimic pgindent and it works quite well. One problem\r\n>> worth mentioning though is that the clang-format file is dependent on\r\n>> the particular version of clang-format.\r\n> \r\n> I was hoping that something generic could work here. Something that we\r\n> could provide that didn't claim to be authoritative, that has a\r\n> reasonable degree of compatibility that allows most people to use the\r\n> file without much fuss. Kind of like our .editorconfig file. That\r\n> might not be a realistic goal, though, since the clang-format settings\r\n> are all quite complicated.\r\n> \r\n> -- \r\n> Peter Geoghegan\r\n> \r\n> \r\n> \r\n\r\n", "msg_date": "Thu, 23 Nov 2023 04:00:26 +0000", "msg_from": "Ray Eldath <ray.eldath@outlook.com>", "msg_from_op": false, "msg_subject": "Re: Adding a clang-format file" } ]
[ { "msg_contents": "Hi,\n\nThe x86 mac VMs from cirrus-ci claim to have 12 CPUs, but when working on\ninitdb caching for tests I noticed that using all those CPUs for tests hurts\nthe test times noticeably.\n\nSee [1] (note that the overall time is influenced by different degrees of\ncache hit ratios):\n\nconcurrency test time:\n4 05:58\n6 05:09\n8 04:58\n10 05:58\n12 (current) 06:58\n\nThere's a fair bit of run-to-run variance, but the rough shape of these\nlooks repeatable.\n\nI suspect the VMs might be overcommitted a fair bit - or macos just scales\npoorly. Cirrus-ci apparently is switching to M1 based macs, which could be\nrelated. It'd be good for us to do that switch, as it'd give us ARM coverage for\nCI / cfbot. See also [3].\n\n\nIn 15 (and thus autoconf) the timings differ a bit less [2]:\n\nconcurrency test time:\n4 06:54\n6 05:43\n8 06:09\n10 06:01\n12 (current) 06:38\n\nLooks like changing TEST_JOBS=6 or 8 would be a good idea.\n\nGreetings,\n\nAndres Freund\n\n[1] https://cirrus-ci.com/build/5254074546257920\n[2] https://cirrus-ci.com/task/4888800445857792\n[3] https://postgr.es/m/CAN55FZ2R%2BXufuVgJ8ew_yDBk48PgXEBvyKNvnNdTTVyczbQj0g%40mail.gmail.com\n\n\n", "msg_date": "Mon, 26 Sep 2022 21:02:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ci: reduce macos test concurrency" }, { "msg_contents": "Hi,\n\nOn 2022-09-26 21:02:08 -0700, Andres Freund wrote:\n> The x86 mac VMs from cirrus-ci claim to have 12 CPUs, but when working on\n> initdb caching for tests I noticed that using all those CPUs for tests hurts\n> the test times noticeably.\n> \n> See [1] (note that the overall time is influenced by different degrees of\n> cache hit ratios):\n> \n> concurrency test time:\n> 4 05:58\n> 6 05:09\n> 8 04:58\n> 10 05:58\n> 12 (current) 06:58\n> \n> There's a fair bit of run-to-run variance, but the rough shape of these\n> looks repeatable.\n> \n> I suspect the VMs might be overcommitted a fair bit - or macos just 
scales\n> poorly. Cirrus-ci apparently is switching to M1 based macs, which could be\n> related. It'd be good for us to do that switch, as it'd give us ARM coverage for\n> CI / cfbot. See also [3].\n> \n> \n> In 15 (and thus autoconf) the timings differ a bit less [2]:\n> \n> concurrency test time:\n> 4 06:54\n> 6 05:43\n> 8 06:09\n> 10 06:01\n> 12 (current) 06:38\n> \n> Looks like changing TEST_JOBS=6 or 8 would be a good idea.\n\nSet it to 8 now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 17:00:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ci: reduce macos test concurrency" } ]
[ { "msg_contents": "Hi,\n\nI recently faced an issue on windows where one of the tests was\nfailing with 'unrecognized win32 error code: 123', see [1]. I figured\nout that it was due to a wrong file name being sent to the open() system\ncall (this error is of my own making and I fixed it for that thread).\nThe failure occurs in dosmaperr() in win32error.c due to an unmapped\nerrno for a win32 error code. The error code 123 i.e. ERROR_INVALID_NAME\nsays \"The file name, directory name, or volume label syntax is\nincorrect.\" [2]; the closest errno mapping would be ENOENT. I quickly\nlooked around for the other win32 error codes [2] that don't have a\nmapping. I filtered out some common error codes such as\nERROR_OUTOFMEMORY, ERROR_HANDLE_DISK_FULL, ERROR_INSUFFICIENT_BUFFER,\nERROR_NOACCESS. There may be many more, but these seemed common IMO.\n\nHaving the right errno mapping always helps recognize the errors correctly.\n\nI'm attaching a patch that maps the above win32 error codes to errno\nin win32error.c. 
I also think that we can add a note in win32error.c\nby mentioning the link [2] to revisit the mapping whenever an\n\"unrecognized win32 error code: XXX\" error occurs.\n\nThoughts?\n\nThanks Michael Paquier for the off-list chat.\n\n[1] https://www.postgresql.org/message-id/CALj2ACWKvjOO-JzYpMBpk-o_o9CeKGEqMcS=yXf-pC6M+jOkuQ@mail.gmail.com\n[2] https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-erref/18d8fbe8-a967-4f1c-ae50-99ca8e491d2d\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 27 Sep 2022 15:23:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Extend win32 error codes to errno mapping in win32error.c" }, { "msg_contents": "On Tue, Sep 27, 2022 at 03:23:04PM +0530, Bharath Rupireddy wrote:\n> The failure occurs in dosmaperr() in win32error.c due to an unmapped\n> errno for a win32 error code. The error code 123 i.e. ERROR_INVALID_NAME\n> says \"The file name, directory name, or volume label syntax is\n> incorrect.\" [2]; the closest errno mapping would be ENOENT. I quickly\n> looked around for the other win32 error codes [2] that don't have a\n> mapping.\n\n> I filtered out some common error codes such as\n> ERROR_OUTOFMEMORY, ERROR_HANDLE_DISK_FULL, ERROR_INSUFFICIENT_BUFFER,\n> ERROR_NOACCESS. There may be many more, but these seemed common IMO.\n> \n> Having the right errno mapping always helps recognize the errors correctly.\n\nOne important thing, in my opinion, when it comes to updating this\ntable, is that it could be better to report the original error number\nif errno can be somewhat confusing for the mapping. 
It is also less\ninteresting to increase the size of the table for errors that cannot\nbe reached, or that are related to system calls we don't use.\n\nERROR_INVALID_NAME => ENOENT\nYeah, this mapping looks fine.\n\nERROR_HANDLE_DISK_FULL => ENOSPC\nThis one maps to various Setup*Error(), as well as\nGetDiskFreeSpaceEx(). The former is not interesting, but I can buy\nthe case of the latter for extension code (I've played with that in\nthe past on WIN32, actually).\n\nERROR_OUTOFMEMORY => ENOMEM\nERROR_NOACCESS => EACCES\nERROR_INSUFFICIENT_BUFFER => EINVAL\nHmm. I have looked at our WIN32 system calls and the upstream docs,\nbut these do not seem to be reachable in our code.\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 13:40:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extend win32 error codes to errno mapping in win32error.c" }, { "msg_contents": "On Wed, Sep 28, 2022 at 10:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> One important thing, in my opinion, when it comes to updating this\n> table, is that it could be better to report the original error number\n> if errno can be somewhat confusing for the mapping.\n\nReturning errno = e instead of EINVAL in _dosmaperr() may have an\nimpact on the callers that do special handling for errno EINVAL. I\ndon't think it's a good idea.\n\n> ERROR_INVALID_NAME => ENOENT\n> Yeah, this mapping looks fine.\n\nHm.\n\n> ERROR_HANDLE_DISK_FULL => ENOSPC\n> This one maps to various Setup*Error(), as well as\n> GetDiskFreeSpaceEx(). The former is not interesting, but I can buy\n> the case of the latter for extension code (I've played with that in\n> the past on WIN32, actually).\n>\n> ERROR_OUTOFMEMORY => ENOMEM\n> ERROR_NOACCESS => EACCES\n> ERROR_INSUFFICIENT_BUFFER => EINVAL\n> Hmm. 
I have looked at our WIN32 system calls and the upstream docs,\n> but these do not seem to be reachable in our code.\n\nIMO, we can add a mapping for just ERROR_INVALID_NAME, which is an\nobvious error code and easy to hit, leaving the others. There are\nmany win32 error codes that may get hit in our code base, and actually\nmapping everything isn't possible.\n\nPlease see the v2 patch. I've also added a CF entry -\nhttps://commitfest.postgresql.org/40/3914/ so that the patch gets\ntested across.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 28 Sep 2022 11:14:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Extend win32 error codes to errno mapping in win32error.c" }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:14:53AM +0530, Bharath Rupireddy wrote:\n> IMO, we can add a mapping for just ERROR_INVALID_NAME, which is an\n> obvious error code and easy to hit, leaving the others. There are\n> many win32 error codes that may get hit in our code base, and actually\n> mapping everything isn't possible.\n\nYes. I am fine to do just that as you have managed to hit it during\ndevelopment. The others may have more opinions to offer.\n--\nMichael", "msg_date": "Wed, 28 Sep 2022 14:56:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extend win32 error codes to errno mapping in win32error.c" }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:14:53AM +0530, Bharath Rupireddy wrote:\n> IMO, we can add a mapping for just ERROR_INVALID_NAME, which is an\n> obvious error code and easy to hit, leaving the others.\n\nOkidoki. 
Applied the minimalistic version, then.\n--\nMichael", "msg_date": "Thu, 29 Sep 2022 15:18:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extend win32 error codes to errno mapping in win32error.c" } ]
[ { "msg_contents": "Hi Wolfgang Walther, I saw your patch and thought it would be very handy in some cases!\r\n\r\nAfter I applied this patch, I felt something was wrong: the description in the sgml documentation is different from the actual code. I don't know if there is something wrong with my understanding, or if there is a mistake here.\r\n\r\nI'm so sorry to bother you, but still thank you for the confirmation!", "msg_date": "Tue, 27 Sep 2022 18:23:25 +0800", "msg_from": "\"=?gb18030?B?ucKwwdChtv6hq7Ci4+U=?=\" <2903807914@qq.com>", "msg_from_op": true, "msg_subject": "Questions about the patch of Add ON CONFLICT DO RETURN clause" } ]
[ { "msg_contents": "Hi hackers,\n\nI saw a problem when using tab-complete for \"GRANT\", \"TABLES IN SCHEMA\" should\nbe \"ALL TABLES IN SCHEMA\" in the following case.\n\npostgres=# grant all on\nALL FUNCTIONS IN SCHEMA DATABASE FUNCTION PARAMETER SCHEMA TABLESPACE\nALL PROCEDURES IN SCHEMA DOMAIN information_schema. PROCEDURE SEQUENCE tbl\nALL ROUTINES IN SCHEMA FOREIGN DATA WRAPPER LANGUAGE public. TABLE TYPE\nALL SEQUENCES IN SCHEMA FOREIGN SERVER LARGE OBJECT ROUTINE TABLES IN SCHEMA\n\nI found that it is related to the recent commit 790bf615dd, and maybe it's\nbetter to fix it. I also noticed that some comments should be modified according\nto this new syntax. Attach a patch to fix them.\n\nRegards,\nShi yu", "msg_date": "Tue, 27 Sep 2022 10:28:27 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix some newly modified tab-complete changes" }, { "msg_contents": "On Tue, Sep 27, 2022 at 8:28 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> I saw a problem when using tab-complete for \"GRANT\", \"TABLES IN SCHEMA\" should\n> be \"ALL TABLES IN SCHEMA\" in the following case.\n>\n> postgres=# grant all on\n> ALL FUNCTIONS IN SCHEMA DATABASE FUNCTION PARAMETER SCHEMA TABLESPACE\n> ALL PROCEDURES IN SCHEMA DOMAIN information_schema. PROCEDURE SEQUENCE tbl\n> ALL ROUTINES IN SCHEMA FOREIGN DATA WRAPPER LANGUAGE public. TABLE TYPE\n> ALL SEQUENCES IN SCHEMA FOREIGN SERVER LARGE OBJECT ROUTINE TABLES IN SCHEMA\n>\n> I found that it is related to the recent commit 790bf615dd, and maybe it's\n> better to fix it. I also noticed that some comments should be modified according\n> to this new syntax. Attach a patch to fix them.\n>\n\nThanks for the patch! Below are my review comments.\n\nThe patch looks good to me but I did find some other tab-completion\nanomalies. 
IIUC these are unrelated to your work, but since I found\nthem while testing your patch I am reporting them here.\n\nPerhaps you want to fix them in the same patch, or just raise them\nagain separately?\n\n======\n\n1. tab complete for CREATE PUBLICATION\n\nI don’t think this is any new bug, but I found that it is possible to do this...\n\ntest_pub=# create publication p for ALL TABLES IN SCHEMA <tab>\ninformation_schema pg_catalog pg_toast public\n\nor, even this...\n\ntest_pub=# create publication p for XXX TABLES IN SCHEMA <tab>\ninformation_schema pg_catalog pg_toast public\n\n======\n\n2. tab complete for GRANT\n\ntest_pub=# grant <tab>\nALL EXECUTE\npg_execute_server_program pg_read_server_files postgres\n TRIGGER\nALTER SYSTEM GRANT pg_monitor\n pg_signal_backend REFERENCES\nTRUNCATE\nCONNECT INSERT pg_read_all_data\n pg_stat_scan_tables SELECT UPDATE\nCREATE pg_checkpoint\npg_read_all_settings pg_write_all_data SET\n USAGE\nDELETE pg_database_owner\npg_read_all_stats pg_write_server_files TEMPORARY\n\n2a.\ngrant \"GRANT\" ??\n\n~\n\n2b.\ngrant \"TEMPORARY\" but not \"TEMP\" ??\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 28 Sep 2022 14:14:01 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "At Wed, 28 Sep 2022 14:14:01 +1000, Peter Smith <smithpb2250@gmail.com> wrote in \r\n> On Tue, Sep 27, 2022 at 8:28 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Hi hackers,\r\n> >\r\n> > I saw a problem when using tab-complete for \"GRANT\", \"TABLES IN SCHEMA\" should\r\n> > be \"ALL TABLES IN SCHEMA\" in the following case.\r\n> >\r\n> > postgres=# grant all on\r\n> > ALL FUNCTIONS IN SCHEMA DATABASE FUNCTION PARAMETER SCHEMA TABLESPACE\r\n> > ALL PROCEDURES IN SCHEMA DOMAIN information_schema. PROCEDURE SEQUENCE tbl\r\n> > ALL ROUTINES IN SCHEMA FOREIGN DATA WRAPPER LANGUAGE public. 
TABLE TYPE\r\n> > ALL SEQUENCES IN SCHEMA FOREIGN SERVER LARGE OBJECT ROUTINE TABLES IN SCHEMA\r\n> >\r\n> > I found that it is related to the recent commit 790bf615dd, and maybe it's\r\n> > better to fix it. I also noticed that some comments should be modified according\r\n> > to this new syntax. Attach a patch to fix them.\r\n> >\r\n> \r\n> Thanks for the patch! Below are my review comments.\r\n> \r\n> The patch looks good to me but I did find some other tab-completion\r\n> anomalies. IIUC these are unrelated to your work, but since I found\r\n> them while testing your patch I am reporting them here.\r\n\r\nLooks fine to me, too.\r\n\r\n> Perhaps you want to fix them in the same patch, or just raise them\r\n> again separately?\r\n> \r\n> ======\r\n> \r\n> 1. tab complete for CREATE PUBLICATION\r\n> \r\n> I don’t think this is any new bug, but I found that it is possible to do this...\r\n> \r\n> test_pub=# create publication p for ALL TABLES IN SCHEMA <tab>\r\n> information_schema pg_catalog pg_toast public\r\n>\r\n> or, even this...\r\n> \r\n> test_pub=# create publication p for XXX TABLES IN SCHEMA <tab>\r\n> information_schema pg_catalog pg_toast public\r\n\r\nCompletion is responding to \"IN SCHEMA\" in these cases. However, I\r\ndon't reach this state only by completion becuase it doesn't suggest\r\n\"IN SCHEMA\" after \"TABLES\" nor \"ALL TABLES\". I don't see a reason to\r\nchange that behavior unless that fix doesn't cause any additional\r\ncomplexity.\r\n\r\n> ======\r\n> \r\n> 2. 
tab complete for GRANT\r\n> \r\n> test_pub=# grant <tab>\r\n> ALL EXECUTE\r\n> pg_execute_server_program pg_read_server_files postgres\r\n> TRIGGER\r\n> ALTER SYSTEM GRANT pg_monitor\r\n> pg_signal_backend REFERENCES\r\n> TRUNCATE\r\n> CONNECT INSERT pg_read_all_data\r\n> pg_stat_scan_tables SELECT UPDATE\r\n> CREATE pg_checkpoint\r\n> pg_read_all_settings pg_write_all_data SET\r\n> USAGE\r\n> DELETE pg_database_owner\r\n> pg_read_all_stats pg_write_server_files TEMPORARY\r\n> \r\n> 2a.\r\n> grant \"GRANT\" ??\r\n\r\nYeah, for the mement I thought that might a kind of admin option but\r\nthere's no such a privilege. REVOKE gets the same suggestion.\r\n\r\n> 2b.\r\n> grant \"TEMPORARY\" but not \"TEMP\" ??\r\n\r\nTEMP is an alternative spelling so that's fine.\r\n\r\n\r\nI found the following suggestion.\r\n\r\nCREATE PUBLICATION p FOR TABLES <tab> -> [\"IN SCHEMA\", \"WITH (\"]\r\n\r\nI believe \"WITH (\" doesn't come there.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Wed, 28 Sep 2022 14:49:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Wed, Sep 28, 2022 1:49 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\r\n> \r\n> At Wed, 28 Sep 2022 14:14:01 +1000, Peter Smith\r\n> <smithpb2250@gmail.com> wrote in\r\n> > On Tue, Sep 27, 2022 at 8:28 PM shiy.fnst@fujitsu.com\r\n> > <shiy.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Hi hackers,\r\n> > >\r\n> > > I saw a problem when using tab-complete for \"GRANT\", \"TABLES IN\r\n> SCHEMA\" should\r\n> > > be \"ALL TABLES IN SCHEMA\" in the following case.\r\n> > >\r\n> > > postgres=# grant all on\r\n> > > ALL FUNCTIONS IN SCHEMA DATABASE FUNCTION\r\n> PARAMETER SCHEMA TABLESPACE\r\n> > > ALL PROCEDURES IN SCHEMA DOMAIN information_schema.\r\n> PROCEDURE SEQUENCE tbl\r\n> > > ALL ROUTINES IN SCHEMA FOREIGN DATA WRAPPER 
LANGUAGE\r\n> public. TABLE TYPE\r\n> > > ALL SEQUENCES IN SCHEMA FOREIGN SERVER LARGE OBJECT\r\n> ROUTINE TABLES IN SCHEMA\r\n> > >\r\n> > > I found that it is related to the recent commit 790bf615dd, and maybe it's\r\n> > > better to fix it. I also noticed that some comments should be modified\r\n> according\r\n> > > to this new syntax. Attach a patch to fix them.\r\n> > >\r\n> >\r\n> > Thanks for the patch! Below are my review comments.\r\n> >\r\n> > The patch looks good to me but I did find some other tab-completion\r\n> > anomalies. IIUC these are unrelated to your work, but since I found\r\n> > them while testing your patch I am reporting them here.\r\n> \r\n> Looks fine to me, too.\r\n> \r\n\r\nThanks for reviewing it.\r\n\r\n> > Perhaps you want to fix them in the same patch, or just raise them\r\n> > again separately?\r\n> >\r\n> > ======\r\n> >\r\n> > 1. tab complete for CREATE PUBLICATION\r\n> >\r\n> > I donʼt think this is any new bug, but I found that it is possible to do this...\r\n> >\r\n> > test_pub=# create publication p for ALL TABLES IN SCHEMA <tab>\r\n> > information_schema pg_catalog pg_toast public\r\n> >\r\n> > or, even this...\r\n> >\r\n> > test_pub=# create publication p for XXX TABLES IN SCHEMA <tab>\r\n> > information_schema pg_catalog pg_toast public\r\n> \r\n> Completion is responding to \"IN SCHEMA\" in these cases. However, I\r\n> don't reach this state only by completion becuase it doesn't suggest\r\n> \"IN SCHEMA\" after \"TABLES\" nor \"ALL TABLES\". I don't see a reason to\r\n> change that behavior unless that fix doesn't cause any additional\r\n> complexity.\r\n> \r\n\r\n+1\r\n\r\n> > ======\r\n> >\r\n> > 2. 
tab complete for GRANT\r\n> >\r\n> > test_pub=# grant <tab>\r\n> > ALL EXECUTE\r\n> > pg_execute_server_program pg_read_server_files postgres\r\n> > TRIGGER\r\n> > ALTER SYSTEM GRANT pg_monitor\r\n> > pg_signal_backend REFERENCES\r\n> > TRUNCATE\r\n> > CONNECT INSERT pg_read_all_data\r\n> > pg_stat_scan_tables SELECT UPDATE\r\n> > CREATE pg_checkpoint\r\n> > pg_read_all_settings pg_write_all_data SET\r\n> > USAGE\r\n> > DELETE pg_database_owner\r\n> > pg_read_all_stats pg_write_server_files TEMPORARY\r\n> >\r\n> > 2a.\r\n> > grant \"GRANT\" ??\r\n> \r\n> Yeah, for the mement I thought that might a kind of admin option but\r\n> there's no such a privilege. REVOKE gets the same suggestion.\r\n> \r\n\r\nMaybe that's for \"REVOKE GRANT OPTION FOR\". But it is used by both GRANT and\r\nREVOKE. I think it's a separate problem, I have tried to fix it in the attached\r\n0002 patch.\r\n\r\n> > 2b.\r\n> > grant \"TEMPORARY\" but not \"TEMP\" ??\r\n> \r\n> TEMP is an alternative spelling so that's fine.\r\n> \r\n\r\nAgreed.\r\n\r\n> \r\n> I found the following suggestion.\r\n> \r\n> CREATE PUBLICATION p FOR TABLES <tab> -> [\"IN SCHEMA\", \"WITH (\"]\r\n> \r\n> I believe \"WITH (\" doesn't come there.\r\n> \r\n\r\nFixed.\r\n\r\nAttach the updated patch.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Thu, 29 Sep 2022 02:50:45 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix some newly modified tab-complete changes" }, { "msg_contents": "Thanks! 
I pushed 0001.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La rebeldía es la virtud original del hombre\" (Arthur Schopenhauer)\n\n\n", "msg_date": "Fri, 30 Sep 2022 12:59:55 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Thu, Sep 29, 2022 at 12:50 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Sep 28, 2022 1:49 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 28 Sep 2022 14:14:01 +1000, Peter Smith\n> > <smithpb2250@gmail.com> wrote in\n...\n> > >\n> > > 2. tab complete for GRANT\n> > >\n> > > test_pub=# grant <tab>\n> > > ALL EXECUTE\n> > > pg_execute_server_program pg_read_server_files postgres\n> > > TRIGGER\n> > > ALTER SYSTEM GRANT pg_monitor\n> > > pg_signal_backend REFERENCES\n> > > TRUNCATE\n> > > CONNECT INSERT pg_read_all_data\n> > > pg_stat_scan_tables SELECT UPDATE\n> > > CREATE pg_checkpoint\n> > > pg_read_all_settings pg_write_all_data SET\n> > > USAGE\n> > > DELETE pg_database_owner\n> > > pg_read_all_stats pg_write_server_files TEMPORARY\n> > >\n> > > 2a.\n> > > grant \"GRANT\" ??\n> >\n> > Yeah, for the mement I thought that might a kind of admin option but\n> > there's no such a privilege. REVOKE gets the same suggestion.\n> >\n>\n> Maybe that's for \"REVOKE GRANT OPTION FOR\". But it is used by both GRANT and\n> REVOKE. I think it's a separate problem, I have tried to fix it in the attached\n> 0002 patch.\n>\n\nI checked your v2-0002 patch and AFAICT it does fix properly the\npreviously reported GRANT/REVOKE problem.\n\n~\n\nBut, while testing I noticed another different quirk\n\nIt seems that neither the GRANT nor the REVOKE auto-complete\nrecognises the optional PRIVILEGE keyword\n\ne.g. GRANT ALL <tab> --> ON (but not PRIVILEGE)\ne.g. GRANT ALL PRIV<tab> --> ???\n\ne.g. REVOKE ALL <tab> --> ON (but not PRIVILEGE)..\ne.g. 
REVOKE ALL PRIV<tab> --> ???\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 4 Oct 2022 19:16:36 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Tue, Oct 4, 2022 4:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> On Thu, Sep 29, 2022 at 12:50 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wed, Sep 28, 2022 1:49 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> > >\r\n> > > At Wed, 28 Sep 2022 14:14:01 +1000, Peter Smith\r\n> > > <smithpb2250@gmail.com> wrote in\r\n> ...\r\n> > > >\r\n> > > > 2. tab complete for GRANT\r\n> > > >\r\n> > > > test_pub=# grant <tab>\r\n> > > > ALL EXECUTE\r\n> > > > pg_execute_server_program pg_read_server_files postgres\r\n> > > > TRIGGER\r\n> > > > ALTER SYSTEM GRANT pg_monitor\r\n> > > > pg_signal_backend REFERENCES\r\n> > > > TRUNCATE\r\n> > > > CONNECT INSERT pg_read_all_data\r\n> > > > pg_stat_scan_tables SELECT UPDATE\r\n> > > > CREATE pg_checkpoint\r\n> > > > pg_read_all_settings pg_write_all_data SET\r\n> > > > USAGE\r\n> > > > DELETE pg_database_owner\r\n> > > > pg_read_all_stats pg_write_server_files TEMPORARY\r\n> > > >\r\n> > > > 2a.\r\n> > > > grant \"GRANT\" ??\r\n> > >\r\n> > > Yeah, for the mement I thought that might a kind of admin option but\r\n> > > there's no such a privilege. REVOKE gets the same suggestion.\r\n> > >\r\n> >\r\n> > Maybe that's for \"REVOKE GRANT OPTION FOR\". But it is used by both\r\n> GRANT and\r\n> > REVOKE. 
I think it's a separate problem, I have tried to fix it in the attached\r\n> > 0002 patch.\r\n> >\r\n> \r\n> I checked your v2-0002 patch and AFAICT it does fix properly the\r\n> previously reported GRANT/REVOKE problem.\r\n> \r\n\r\nThanks for reviewing and testing it.\r\n\r\n> ~\r\n> \r\n> But, while testing I noticed another different quirk\r\n> \r\n> It seems that neither the GRANT nor the REVOKE auto-complete\r\n> recognises the optional PRIVILEGE keyword\r\n> \r\n> e.g. GRANT ALL <tab> --> ON (but not PRIVILEGE)\r\n> e.g. GRANT ALL PRIV<tab> --> ???\r\n> \r\n> e.g. REVOKE ALL <tab> --> ON (but not PRIVILEGE)..\r\n> e.g. REVOKE ALL PRIV<tab> --> ???\r\n> \r\n\r\nI tried to add tab-completion for it. Pleases see attached updated patch.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Mon, 10 Oct 2022 06:12:09 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Mon, Oct 10, 2022 2:12 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\r\n> \r\n> On Tue, Oct 4, 2022 4:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > But, while testing I noticed another different quirk\r\n> >\r\n> > It seems that neither the GRANT nor the REVOKE auto-complete\r\n> > recognises the optional PRIVILEGE keyword\r\n> >\r\n> > e.g. GRANT ALL <tab> --> ON (but not PRIVILEGE)\r\n> > e.g. GRANT ALL PRIV<tab> --> ???\r\n> >\r\n> > e.g. REVOKE ALL <tab> --> ON (but not PRIVILEGE)..\r\n> > e.g. REVOKE ALL PRIV<tab> --> ???\r\n> >\r\n> \r\n> I tried to add tab-completion for it. Pleases see attached updated patch.\r\n> \r\n\r\nSorry for attaching a wrong patch. 
Here is the right one.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Mon, 10 Oct 2022 14:28:27 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Tue, Oct 11, 2022 at 1:28 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Oct 10, 2022 2:12 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Tue, Oct 4, 2022 4:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > But, while testing I noticed another different quirk\n> > >\n> > > It seems that neither the GRANT nor the REVOKE auto-complete\n> > > recognises the optional PRIVILEGE keyword\n> > >\n> > > e.g. GRANT ALL <tab> --> ON (but not PRIVILEGE)\n> > > e.g. GRANT ALL PRIV<tab> --> ???\n> > >\n> > > e.g. REVOKE ALL <tab> --> ON (but not PRIVILEGE)..\n> > > e.g. REVOKE ALL PRIV<tab> --> ???\n> > >\n> >\n> > I tried to add tab-completion for it. Pleases see attached updated patch.\n> >\n\nHi Shi-san,\n\nI re-tested and confirm that the patch does indeed fix the quirk I'd\npreviously reported.\n\nBut, looking at the patch code, I don't know if it is the best way to\nfix the problem or not. Someone with more experience of the\ntab-complete module can judge that.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 18 Oct 2022 17:17:32 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Tue, Oct 18, 2022 at 05:17:32PM +1100, Peter Smith wrote:\n> I re-tested and confirm that the patch does indeed fix the quirk I'd\n> previously reported.\n> \n> But, looking at the patch code, I don't know if it is the best way to\n> fix the problem or not. Someone with more experience of the\n> tab-complete module can judge that.\n\nIt seems to me that the patch as proposed has more problems than\nthat. 
I have spotted a few issues at quick glance, there may be\nmore.\n\nFor example, take this one:\n+ else if (TailMatches(\"GRANT\") ||\n+ TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\"))\n COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_roles,\n\n\"REVOKE GRANT OPTION FOR\" completes with a list of role names, which\nis incorrect.\n\nFWIW, I am not much a fan of the approach taken by the patch to\nduplicate the full list of keywords to append after REVOKE or GRANT,\nat the only difference of \"GRANT OPTION FOR\". This may be readable if\nunified with a single list, with extra items appended for GRANT and\nREVOKE?\n\nNote that REVOKE has a \"ADMIN OPTION FOR\" clause, which is not\ncompleted to.\n--\nMichael", "msg_date": "Thu, 10 Nov 2022 13:53:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Thu, Nov 10, 2022 12:54 PM Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Oct 18, 2022 at 05:17:32PM +1100, Peter Smith wrote:\n> > I re-tested and confirm that the patch does indeed fix the quirk I'd\n> > previously reported.\n> >\n> > But, looking at the patch code, I don't know if it is the best way to\n> > fix the problem or not. Someone with more experience of the\n> > tab-complete module can judge that.\n> \n> It seems to me that the patch as proposed has more problems than\n> that. I have spotted a few issues at quick glance, there may be\n> more.\n> \n> For example, take this one:\n> + else if (TailMatches(\"GRANT\") ||\n> + TailMatches(\"REVOKE\", \"GRANT\", \"OPTION\", \"FOR\"))\n> COMPLETE_WITH_QUERY_PLUS(Query_for_list_of_roles,\n> \n> \"REVOKE GRANT OPTION FOR\" completes with a list of role names, which\n> is incorrect.\n> \n> FWIW, I am not much a fan of the approach taken by the patch to\n> duplicate the full list of keywords to append after REVOKE or GRANT,\n> at the only difference of \"GRANT OPTION FOR\". 
This may be readable if\n> unified with a single list, with extra items appended for GRANT and\n> REVOKE?\n> \n> Note that REVOKE has a \"ADMIN OPTION FOR\" clause, which is not\n> completed to.\n\nThanks a lot for looking into this patch.\n\nI have fixed the problems you saw, and improved the patch as you suggested.\n\nBesides, I noticed that the tab completion for \"ALTER DEFAULT PRIVILEGES ...\nGRANT/REVOKE ...\" missed \"CREATE\". Fix it in 0001 patch.\n\nAnd commit e3ce2de09 supported GRANT ... WITH INHERIT ..., but there's no tab\ncompletion for it. Add this in 0002 patch.\n\nPlease see the attached patches.\n\nRegards,\nShi yu", "msg_date": "Wed, 16 Nov 2022 08:29:24 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix some newly modified tab-complete changes" }, { "msg_contents": "On Wed, Nov 16, 2022 at 08:29:24AM +0000, shiy.fnst@fujitsu.com wrote:\n> I have fixed the problems you saw, and improved the patch as you suggested.\n> \n> Besides, I noticed that the tab completion for \"ALTER DEFAULT PRIVILEGES ...\n> GRANT/REVOKE ...\" missed \"CREATE\". Fix it in 0001 patch.\n> \n> And commit e3ce2de09 supported GRANT ... WITH INHERIT ..., but there's no tab\n> completion for it. Add this in 0002 patch.\n\nThanks, I have been looking at the patch, and pondered about all the\nbloat added by the handling of PRIVILEGES, to note at the end that ALL\nPRIVILEGES is parsed the same way as ALL. So we don't actually need\nany of the complications related to it and the result would be the\nsame.\n\nI have merged 0001 and 0002 together, and applied the rest, which\nlooked rather fine. 
I have also simplified a bit the parts where\n\"REVOKE GRANT\" are specified in a row, to avoid fancy results in some\nbranches when we apply Privilege_options_of_grant_and_revoke.\n--\nMichael", "msg_date": "Fri, 18 Nov 2022 11:27:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix some newly modified tab-complete changes" } ]
[ { "msg_contents": "Increase width of RelFileNumbers from 32 bits to 56 bits.\n\nRelFileNumbers are now assigned using a separate counter, instead of\nbeing assigned from the OID counter. This counter never wraps around:\nif all 2^56 possible RelFileNumbers are used, an internal error\noccurs. As the cluster is limited to 2^64 total bytes of WAL, this\nlimitation should not cause a problem in practice.\n\nIf the counter were 64 bits wide rather than 56 bits wide, we would\nneed to increase the width of the BufferTag, which might adversely\nimpact buffer lookup performance. Also, this lets us use bigint for\npg_class.relfilenode and other places where these values are exposed\nat the SQL level without worrying about overflow.\n\nThis should remove the need to keep \"tombstone\" files around until\nthe next checkpoint when relations are removed. We do that to keep\nRelFileNumbers from being recycled, but now that won't happen\nanyway. However, this patch doesn't actually change anything in\nthis area; it just makes it possible for a future patch to do so.\n\nDilip Kumar, based on an idea from Andres Freund, who also reviewed\nsome earlier versions of the patch. Further review and some\nwordsmithing by me. 
Also reviewed at various points by Ashutosh\nSharma, Vignesh C, Amul Sul, Álvaro Herrera, and Tom Lane.\n\nDiscussion: http://postgr.es/m/CA+Tgmobp7+7kmi4gkq7Y+4AM9fTvL+O1oQ4-5gFTT+6Ng-dQ=g@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/05d4cbf9b6ba708858984b01ca0fc56d59d4ec7c\n\nModified Files\n--------------\ncontrib/pg_buffercache/Makefile | 4 +-\n.../pg_buffercache/pg_buffercache--1.3--1.4.sql | 30 +++\ncontrib/pg_buffercache/pg_buffercache.control | 2 +-\ncontrib/pg_buffercache/pg_buffercache_pages.c | 39 +++-\ncontrib/pg_prewarm/autoprewarm.c | 4 +-\ncontrib/pg_walinspect/expected/pg_walinspect.out | 4 +-\ncontrib/pg_walinspect/sql/pg_walinspect.sql | 4 +-\ndoc/src/sgml/catalogs.sgml | 2 +-\ndoc/src/sgml/func.sgml | 5 +\ndoc/src/sgml/pgbuffercache.sgml | 2 +-\ndoc/src/sgml/storage.sgml | 11 +-\nsrc/backend/access/gin/ginxlog.c | 2 +-\nsrc/backend/access/rmgrdesc/gistdesc.c | 2 +-\nsrc/backend/access/rmgrdesc/heapdesc.c | 2 +-\nsrc/backend/access/rmgrdesc/nbtdesc.c | 2 +-\nsrc/backend/access/rmgrdesc/seqdesc.c | 2 +-\nsrc/backend/access/rmgrdesc/xlogdesc.c | 21 ++-\nsrc/backend/access/transam/README | 5 +-\nsrc/backend/access/transam/varsup.c | 209 ++++++++++++++++++++-\nsrc/backend/access/transam/xlog.c | 60 ++++++\nsrc/backend/access/transam/xlogprefetcher.c | 14 +-\nsrc/backend/access/transam/xlogrecovery.c | 6 +-\nsrc/backend/access/transam/xlogutils.c | 6 +-\nsrc/backend/backup/basebackup.c | 2 +-\nsrc/backend/catalog/catalog.c | 95 ----------\nsrc/backend/catalog/heap.c | 27 +--\nsrc/backend/catalog/index.c | 11 +-\nsrc/backend/catalog/storage.c | 8 +\nsrc/backend/commands/tablecmds.c | 12 +-\nsrc/backend/commands/tablespace.c | 2 +-\nsrc/backend/nodes/gen_node_support.pl | 4 +-\nsrc/backend/replication/logical/decode.c | 1 +\nsrc/backend/replication/logical/reorderbuffer.c | 2 +-\nsrc/backend/storage/file/reinit.c | 28 +--\nsrc/backend/storage/freespace/fsmpage.c | 2 
+-\nsrc/backend/storage/lmgr/lwlocknames.txt | 1 +\nsrc/backend/storage/smgr/md.c | 7 +\nsrc/backend/storage/smgr/smgr.c | 2 +-\nsrc/backend/utils/adt/dbsize.c | 7 +-\nsrc/backend/utils/adt/pg_upgrade_support.c | 13 +-\nsrc/backend/utils/cache/relcache.c | 2 +-\nsrc/backend/utils/cache/relfilenumbermap.c | 4 +-\nsrc/backend/utils/misc/pg_controldata.c | 9 +-\nsrc/bin/pg_checksums/pg_checksums.c | 4 +-\nsrc/bin/pg_controldata/pg_controldata.c | 2 +\nsrc/bin/pg_dump/pg_dump.c | 26 +--\nsrc/bin/pg_rewind/filemap.c | 6 +-\nsrc/bin/pg_upgrade/info.c | 3 +-\nsrc/bin/pg_upgrade/pg_upgrade.c | 6 +-\nsrc/bin/pg_upgrade/relfilenumber.c | 4 +-\nsrc/bin/pg_waldump/pg_waldump.c | 2 +-\nsrc/bin/scripts/t/090_reindexdb.pl | 2 +-\nsrc/common/relpath.c | 20 +-\nsrc/fe_utils/option_utils.c | 40 ++++\nsrc/include/access/transam.h | 40 ++++\nsrc/include/access/xlog.h | 1 +\nsrc/include/catalog/catalog.h | 3 -\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_class.h | 16 +-\nsrc/include/catalog/pg_control.h | 2 +\nsrc/include/catalog/pg_proc.dat | 10 +-\nsrc/include/common/relpath.h | 7 +-\nsrc/include/fe_utils/option_utils.h | 2 +\nsrc/include/storage/buf_internals.h | 55 +++++-\nsrc/include/storage/relfilelocator.h | 12 +-\nsrc/test/regress/expected/alter_table.out | 24 ++-\nsrc/test/regress/expected/fast_default.out | 4 +-\nsrc/test/regress/expected/oidjoins.out | 2 +-\nsrc/test/regress/sql/alter_table.sql | 8 +-\nsrc/test/regress/sql/fast_default.sql | 4 +-\n70 files changed, 694 insertions(+), 290 deletions(-)", "msg_date": "Tue, 27 Sep 2022 17:32:35 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Increase width of RelFileNumbers from 32 bits to 56 bits." 
}, { "msg_contents": "This seems to be breaking cfbot:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nFor example:\nhttps://cirrus-ci.com/task/6720256776339456\n\nSome other minor issues:\n\nthais is only used during\n\n=> this\n\n+ elog(ERROR, \"unexpected relnumber \" UINT64_FORMAT \"that is bigger than nextRelFileNumber \" UINT64_FORMAT,\n\n=> there should be a space before \"that\".\n\n+ \"tli %u; prev tli %u; fpw %s; xid %u:%u; relfilenumber \" UINT64_FORMAT \";oid %u; \"\n\n=> and a space before \"oid\"\n\n+ * Parse relfilenumber value for an option. If the parsing is successful,\n+ * returns; if parsing fails, returns false.\n\nreturns *true;\n\n\n", "msg_date": "Tue, 27 Sep 2022 13:51:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Increase width of RelFileNumbers from 32 bits to 56 bits." }, { "msg_contents": "On Tue, Sep 27, 2022 at 2:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This seems to be breaking cfbot:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n>\n> For example:\n> https://cirrus-ci.com/task/6720256776339456\n\nOK, so it looks like the pg_buffercache test is failing there. But it\ndoesn't fail for me, and I don't see a regression.diffs file in the\noutput that would enable me to see what is failing. If it's there, can\nyou tell me how to find it?\n\n> Some other minor issues:\n\nWill push fixes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Sep 2022 15:12:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Increase width of RelFileNumbers from 32 bits to 56 bits." 
}, { "msg_contents": "On Tue, Sep 27, 2022 at 03:12:56PM -0400, Robert Haas wrote:\n> On Tue, Sep 27, 2022 at 2:51 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This seems to be breaking cfbot:\n> > https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n> >\n> > For example:\n> > https://cirrus-ci.com/task/6720256776339456\n> \n> OK, so it looks like the pg_buffercache test is failing there. But it\n> doesn't fail for me, and I don't see a regression.diffs file in the\n> output that would enable me to see what is failing. If it's there, can\n> you tell me how to find it?\n\nIt's here in the artifacts.\nhttps://api.cirrus-ci.com/v1/artifact/task/5647133427630080/testrun/build/testrun/pg_buffercache/regress/regression.diffs\n\nActually, this worked under autoconf but failed under meson.\n\nI think you just need to make the corresponding change in\ncontrib/pg_buffercache/meson.build that's in ./Makefile.\n\n\n", "msg_date": "Tue, 27 Sep 2022 14:17:10 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Increase width of RelFileNumbers from 32 bits to 56 bits." } ]
[ { "msg_contents": "Hi,\n\nlongfin and tamandua recently began failing like this, quite possibly\nas a result of 05d4cbf9b6ba708858984b01ca0fc56d59d4ec7c:\n\n+++ regress check in contrib/test_decoding +++\ntest ddl ... FAILED (test process exited with\nexit code 2) 3276 ms\n(all other tests in this suite also fail, probably because the server\ncrashed here)\n\nThe server logs look like this:\n\n2022-09-27 13:51:08.652 EDT [37090:4] LOG: server process (PID 37105)\nwas terminated by signal 4: Illegal instruction: 4\n2022-09-27 13:51:08.652 EDT [37090:5] DETAIL: Failed process was\nrunning: SELECT data FROM\npg_logical_slot_get_changes('regression_slot', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1');\n\nBoth animals are running with -fsanitize=alignment and it's not\ndifficult to believe that the commit mentioned above could have\nintroduced an alignment problem where we didn't have one before, but\nwithout a stack backtrace I don't know how to track it down. I tried\nrunning those tests locally with -fsanitize=alignment and they passed.\n\nAny ideas on how to track this down?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 27 Sep 2022 14:55:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Tue, Sep 27, 2022 at 02:55:18PM -0400, Robert Haas wrote:\n> Both animals are running with -fsanitize=alignment and it's not\n> difficult to believe that the commit mentioned above could have\n> introduced an alignment problem where we didn't have one before, but\n> without a stack backtrace I don't know how to track it down. 
I tried\n> running those tests locally with -fsanitize=alignment and they passed.\n\nThere's one here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-09-27%2018%3A43%3A06\n\n/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/access/rmgrdesc/xactdesc.c:102:30: runtime error: member access within misaligned address 0x000004125074 for type 'xl_xact_invals' (aka 'struct xl_xact_invals'), which requires 8 byte alignment\n\n #0 0x5b6702 in ParseCommitRecord /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/access/rmgrdesc/xactdesc.c:102:30\n #1 0xb5264d in xact_decode /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/decode.c:201:5\n #2 0xb521ac in LogicalDecodingProcessRecord /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/decode.c:119:3\n #3 0xb5e868 in pg_logical_slot_get_changes_guts /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/logicalfuncs.c:271:5\n #4 0xb5e25f in pg_logical_slot_get_changes /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/replication/logical/logicalfuncs.c:338:9\n #5 0x896bba in ExecMakeTableFunctionResult /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execSRF.c:234:13\n #6 0x8c7660 in FunctionNext /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/nodeFunctionscan.c:95:5\n #7 0x899048 in ExecScanFetch /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execScan.c:133:9\n #8 0x89896b in ExecScan /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execScan.c:199:10\n #9 0x8c6892 in ExecFunctionScan /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/nodeFunctionscan.c:270:9\n #10 0x892f42 in ExecProcNodeFirst /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execProcnode.c:464:9\n #11 
0x8802dd in ExecProcNode /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/include/executor/executor.h:259:9\n #12 0x8802dd in ExecutePlan /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:1636:10\n #13 0x8802dd in standard_ExecutorRun /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:363:3\n #14 0x87ffbb in ExecutorRun /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/executor/execMain.c:307:3\n #15 0xc36c07 in PortalRunSelect /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/tcop/pquery.c:924:4\n #16 0xc364ca in PortalRun /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/tcop/pquery.c:768:18\n #17 0xc34138 in exec_simple_query /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/tcop/postgres.c:1238:10\n #18 0xc30953 in PostgresMain /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/tcop/postgres.c\n #19 0xb27e3f in BackendRun /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4482:2\n #20 0xb2738d in BackendStartup /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:4210:3\n #21 0xb2738d in ServerLoop /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1804:7\n #22 0xb24312 in PostmasterMain /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/postmaster/postmaster.c:1476:11\n #23 0x953694 in main /mnt/resource/bf/build/kestrel/HEAD/pgsql.build/../pgsql/src/backend/main/main.c:197:3\n #24 0x7f834e39a209 in __libc_start_call_main csu/../sysdeps/nptl/libc_start_call_main.h:58:16\n #25 0x7f834e39a2bb in __libc_start_main csu/../csu/libc-start.c:389:3\n #26 0x4a40a0 in _start (/mnt/resource/bf/build/kestrel/HEAD/pgsql.build/tmp_install/mnt/resource/bf/build/kestrel/HEAD/inst/bin/postgres+0x4a40a0)\n\nNote that cfbot is warning for a different reason 
now:\nhttps://cirrus-ci.com/task/5794615155490816\n\n\n", "msg_date": "Tue, 27 Sep 2022 15:07:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Sep 27, 2022 at 02:55:18PM -0400, Robert Haas wrote:\n>> Both animals are running with -fsanitize=alignment and it's not\n>> difficult to believe that the commit mentioned above could have\n>> introduced an alignment problem where we didn't have one before, but\n>> without a stack backtrace I don't know how to track it down. I tried\n>> running those tests locally with -fsanitize=alignment and they passed.\n\n> There's one here:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2022-09-27%2018%3A43%3A06\n\nOn longfin's host, the test_decoding run produces two core files.\nOne has a backtrace like this:\n\n * frame #0: 0x000000010a36af8c postgres`ParseCommitRecord(info='\\x80', xlrec=0x00007fa0678a8090, parsed=0x00007ff7b5c50e78) at xactdesc.c:102:30\n frame #1: 0x000000010a765f9e postgres`xact_decode(ctx=0x00007fa0680d9118, buf=0x00007ff7b5c51000) at decode.c:201:5 [opt]\n frame #2: 0x000000010a765d17 postgres`LogicalDecodingProcessRecord(ctx=0x00007fa0680d9118, record=<unavailable>) at decode.c:119:3 [opt]\n frame #3: 0x000000010a76d890 postgres`pg_logical_slot_get_changes_guts(fcinfo=<unavailable>, confirm=true, binary=false) at logicalfuncs.c:271:5 [opt]\n frame #4: 0x000000010a76d320 postgres`pg_logical_slot_get_changes(fcinfo=<unavailable>) at logicalfuncs.c:338:9 [opt]\n frame #5: 0x000000010a5a521d postgres`ExecMakeTableFunctionResult(setexpr=<unavailable>, econtext=0x00007fa068098f50, argContext=<unavailable>, expectedDesc=0x00007fa06701ba38, randomAccess=<unavailable>) at execSRF.c:234:13 [opt]\n frame #6: 0x000000010a5c405b postgres`FunctionNext(node=0x00007fa068098d40) at nodeFunctionscan.c:95:5 [opt]\n 
frame #7: 0x000000010a5a61b9 postgres`ExecScan(node=0x00007fa068098d40, accessMtd=(postgres`FunctionNext at nodeFunctionscan.c:61), recheckMtd=(postgres`FunctionRecheck at nodeFunctionscan.c:251)) at execScan.c:199:10 [opt]\n frame #8: 0x000000010a596ee0 postgres`standard_ExecutorRun [inlined] ExecProcNode(node=0x00007fa068098d40) at executor.h:259:9 [opt]\n frame #9: 0x000000010a596eb8 postgres`standard_ExecutorRun [inlined] ExecutePlan(estate=<unavailable>, planstate=0x00007fa068098d40, use_parallel_mode=<unavailable>, operation=CMD_SELECT, sendTuples=<unavailable>, numberTuples=0, direction=1745456112, dest=0x00007fa067023848, execute_once=<unavailable>) at execMain.c:1636:10 [opt]\n frame #10: 0x000000010a596e2a postgres`standard_ExecutorRun(queryDesc=<unavailable>, direction=1745456112, count=0, execute_once=<unavailable>) at execMain.c:363:3 [opt]\n\nand the other\n\n * frame #0: 0x000000010a36af8c postgres`ParseCommitRecord(info='\\x80', xlrec=0x00007fa06783a090, parsed=0x00007ff7b5c50040) at xactdesc.c:102:30\n frame #1: 0x000000010a3cd24d postgres`xact_redo(record=0x00007fa0670096c8) at xact.c:6161:3\n frame #2: 0x000000010a41770d postgres`ApplyWalRecord(xlogreader=0x00007fa0670096c8, record=0x00007fa06783a060, replayTLI=0x00007ff7b5c507f0) at xlogrecovery.c:1897:2\n frame #3: 0x000000010a4154be postgres`PerformWalRecovery at xlogrecovery.c:1728:4\n frame #4: 0x000000010a3e0dc7 postgres`StartupXLOG at xlog.c:5473:3\n frame #5: 0x000000010a7498a0 postgres`StartupProcessMain at startup.c:267:2 [opt]\n frame #6: 0x000000010a73e2cb postgres`AuxiliaryProcessMain(auxtype=StartupProcess) at auxprocess.c:141:4 [opt]\n frame #7: 0x000000010a745b97 postgres`StartChildProcess(type=StartupProcess) at postmaster.c:5408:3 [opt]\n frame #8: 0x000000010a7487e2 postgres`PostmasterStateMachine at postmaster.c:4006:16 [opt]\n frame #9: 0x000000010a745804 postgres`reaper(postgres_signal_arg=<unavailable>) at postmaster.c:3256:2 [opt]\n frame #10: 0x00007ff815b16dfd 
libsystem_platform.dylib`_sigtramp + 29\n frame #11: 0x00007ff815accd5b libsystem_kernel.dylib`__select + 11\n frame #12: 0x000000010a74689c postgres`ServerLoop at postmaster.c:1768:13 [opt]\n frame #13: 0x000000010a743fbb postgres`PostmasterMain(argc=<unavailable>, argv=0x00006000006480a0) at postmaster.c:1476:11 [opt]\n frame #14: 0x000000010a61c775 postgres`main(argc=8, argv=<unavailable>) at main.c:197:3 [opt]\n\nLooks like it might be the same bug, but perhaps not.\n\nI recompiled access/transam and access/rmgrdesc at -O0 to get the accurate\nline numbers shown for those files. Let me know if you need any more\ninfo; I can add -O0 in more places, or poke around in the cores.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Sep 2022 16:35:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "I wrote:\n> * frame #0: 0x000000010a36af8c postgres`ParseCommitRecord(info='\\x80', xlrec=0x00007fa0678a8090, parsed=0x00007ff7b5c50e78) at xactdesc.c:102:30\n\nOkay, so the problem is this: by widening RelFileNumber to 64 bits,\nyou have increased the alignment requirement of struct RelFileLocator,\nand thereby also SharedInvalidationMessage, to 8 bytes where it had\nbeen 4. longfin's alignment check is therefore expecting that\nxl_xact_twophase will likewise be 8-byte-aligned, but it isn't:\n\n(lldb) p data\n(char *) $0 = 0x00007fa06783a0a4 \"\\U00000001\"\n\nI'm not sure whether the code that generates commit WAL records is\nbreaking a contract it should maintain, or xactdesc.c needs to be\ntaught to not assume that this data is adequately aligned.\n\nThere is a second problem that I am going to hold your feet to the\nfire about:\n\n(lldb) p sizeof(SharedInvalidationMessage)\n(unsigned long) $1 = 24\n\nWe have sweated a good deal for a long time to keep that struct\nto 16 bytes. 
I do not think 50% bloat is acceptable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Sep 2022 16:50:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Tue, Sep 27, 2022 at 4:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > * frame #0: 0x000000010a36af8c postgres`ParseCommitRecord(info='\\x80', xlrec=0x00007fa0678a8090, parsed=0x00007ff7b5c50e78) at xactdesc.c:102:30\n>\n> Okay, so the problem is this: by widening RelFileNumber to 64 bits,\n> you have increased the alignment requirement of struct RelFileLocator,\n> and thereby also SharedInvalidationMessage, to 8 bytes where it had\n> been 4. longfin's alignment check is therefore expecting that\n> xl_xact_twophase will likewise be 8-byte-aligned, but it isn't:\n\nYeah, I reached the same conclusion.\n\n> There is a second problem that I am going to hold your feet to the\n> fire about:\n>\n> (lldb) p sizeof(SharedInvalidationMessage)\n> (unsigned long) $1 = 24\n>\n> We have sweated a good deal for a long time to keep that struct\n> to 16 bytes. I do not think 50% bloat is acceptable.\n\nI noticed that problem, too.\n\nThe attached patch, which perhaps you can try out, fixes the alignment\nissue and also reduces the size of SharedInvalidationMessage from 24\nbytes back to 20 bytes. I do not really see a way to do better than\nthat. We use 1 type byte, 3 bytes for the backend ID, 4 bytes for the\ndatabase OID, and 4 bytes for the tablespace OID. Previously, we then\nused 4 bytes for the relfilenode, but now we need 7, and there's no\nplace from which we can plausibly steal those bits, at least not as\nfar as I can see. If you have ideas, I'm all ears.\n\nAlso, I don't really know what problem you think it's going to cause\nif that structure gets bigger. 
If we increased the size from 16 bytes\neven all the way to 32 or 64 bytes, what negative impact do you think\nthat would have? It would use a little bit more shared memory, but on\nmodern systems I doubt it would be enough to get excited about. The\nbigger impact would probably be that it would make commit records a\nbit bigger since those carry invalidations as payload. That is not\ngreat, but I think it only affects commit records for transactions\nthat do DDL, so I'm struggling to see that as a big performance\nproblem.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 27 Sep 2022 17:21:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "... also, lapwing's not too happy [1]. The alter_table test\nexpects this to yield zero rows, but it doesn't:\n\n SELECT m.* FROM filenode_mapping m LEFT JOIN pg_class c ON c.oid = m.oid\n WHERE c.oid IS NOT NULL OR m.mapped_oid IS NOT NULL;\n\nI've reproduced that symptom in a 32-bit FreeBSD VM building with clang,\nso I suspect that it'll occur on any 32-bit build. mamba is a couple\nhours away from offering a confirmatory data point, though.\n\n(BTW, is that test case sane at all? I'm bemused by the symmetrical\nNOT NULL tests on a fundamentally not-symmetrical left join; what\nare those supposed to accomplish? 
Also, the fact that it doesn't\ndeign to show any fields from \"c\" is sure making it hard to tell\nwhat's wrong.)\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-27%2018%3A40%3A18\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:29:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Sep 27, 2022 at 4:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> There is a second problem that I am going to hold your feet to the\n>> fire about:\n>> (lldb) p sizeof(SharedInvalidationMessage)\n>> (unsigned long) $1 = 24\n\n> Also, I don't really know what problem you think it's going to cause\n> if that structure gets bigger. If we increased the size from 16 bytes\n> even all the way to 32 or 64 bytes, what negative impact do you think\n> that would have?\n\nMaybe it wouldn't have any great impact. I don't know, but I don't\nthink it's incumbent on me to measure that. You or the patch author\nshould have had a handle on that question *before* committing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Sep 2022 17:50:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 2:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> ... also, lapwing's not too happy [1]. 
The alter_table test\n> expects this to yield zero rows, but it doesn't:\n\nBy looking at regression diff as shown below, it seems that we are\nable to get the relfilenode from the Oid using\npg_relation_filenode(oid) but the reverse mapping\npg_filenode_relation(reltablespace, relfilenode) returned NULL.\n\nI am not sure but by looking at the code it is somehow related to\nalignment padding while computing the hash key size in the 32-bit\nmachine in the function InitializeRelfilenumberMap(). I am still\nlooking into this and will provide updates on this.\n\n+ oid | mapped_oid | reltablespace | relfilenode |\n relname\n+-------+------------+---------------+-------------+------------------------------------------------\n+ 16385 | | 0 | 100000 | char_tbl\n+ 16388 | | 0 | 100001 | float8_tbl\n+ 16391 | | 0 | 100002 | int2_tbl\n+ 16394 | | 0 | 100003 | int4_tbl\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 09:40:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "wrasse is also failing with a bus error, but I cannot get the stack\ntrace. So it seems it is hitting some alignment issues during startup\n[1]. 
Is it possible to get the backtrace or lineno?\n\n[1]\n2022-09-28 03:19:26.228 CEST [180:4] LOG: redo starts at 0/30FE9D8\n2022-09-28 03:19:27.674 CEST [177:3] LOG: startup process (PID 180)\nwas terminated by signal 10: Bus Error\n2022-09-28 03:19:27.674 CEST [177:4] LOG: terminating any other\nactive server processes\n2022-09-28 03:19:27.677 CEST [177:5] LOG: shutting down due to\nstartup process failure\n2022-09-28 03:19:27.681 CEST [177:6] LOG: database system is shut down\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 10:34:15 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> wrasse is also failing with a bus error,\n\nYeah. At this point I think it's time to call for this patch\nto get reverted. It should get tested *off line* on some\nnon-Intel, non-64-bit, alignment-picky architectures before\nthe rest of us have to deal with it any more.\n\nThere may be a larger conversation to be had here about how\nmuch our CI infrastructure should be detecting. There seems\nto be a depressingly large gap between what that found and\nwhat the buildfarm is finding --- not only in portability\nissues, but in things like cpluspluscheck failures, which\nI had supposed CI would find.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 01:14:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 6:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There may be a larger conversation to be had here about how\n> much our CI infrastructure should be detecting. 
There seems\n> to be a depressingly large gap between what that found and\n> what the buildfarm is finding --- not only in portability\n> issues, but in things like cpluspluscheck failures, which\n> I had supposed CI would find.\n\n+1, Andres had some sanitizer flags in the works (stopped by a weird\nproblem to be resolved first), and 32 bit CI would clearly be good.\nIt also seems that ARM is now available to us via CI (either Amazon's\nor a Mac), which IIRC is more SIGBUS-y about alignment than x86?\n\nFTR CI reported that cpluspluscheck failure and more[1], so perhaps we\njust need to get clearer agreement on the status of CI, ie a policy\nthat CI had better be passing before you get to the next stage. It's\nstill pretty new...\n\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3711\n\n\n", "msg_date": "Wed, 28 Sep 2022 18:48:17 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 10:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > wrasse is also failing with a bus error,\n>\n> Yeah. At this point I think it's time to call for this patch\n> to get reverted. It should get tested *off line* on some\n> non-Intel, non-64-bit, alignment-picky architectures before\n> the rest of us have to deal with it any more.\n>\n> There may be a larger conversation to be had here about how\n> much our CI infrastructure should be detecting. 
There seems\n> to be a depressingly large gap between what that found and\n> what the buildfarm is finding --- not only in portability\n> issues, but in things like cpluspluscheck failures, which\n> I had supposed CI would find.\n\nOkay.\n\nBtw, I think the reason for the bus error on wrasse is the same as\nwhat is creating failure on longfin[1], I mean this unaligned access\nis causing Bus error during startup, IMHO.\n\n frame #0: 0x000000010a36af8c postgres`ParseCommitRecord(info='\\x80',\nxlrec=0x00007fa06783a090, parsed=0x00007ff7b5c50040) at\nxactdesc.c:102:30\n frame #1: 0x000000010a3cd24d\npostgres`xact_redo(record=0x00007fa0670096c8) at xact.c:6161:3\n frame #2: 0x000000010a41770d\npostgres`ApplyWalRecord(xlogreader=0x00007fa0670096c8,\nrecord=0x00007fa06783a060, replayTLI=0x00007ff7b5c507f0) at\nxlogrecovery.c:1897:2\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:48:36 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Dilip Kumar <dilipbalaut@gmail.com> writes:\n> Btw, I think the reason for the bus error on wrasse is the same as\n> what is creating failure on longfin[1], I mean this unaligned access\n> is causing Bus error during startup, IMHO.\n\nMaybe, but there's not a lot of evidence for that. wrasse got\nthrough the test_decoding check where longfin, tamandua, kestrel,\nand now skink are failing. It's evidently not the same issue\nthat the 32-bit animals are choking on, either. 
Looks like yet\na third bug to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 02:27:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Sep 28, 2022 at 2:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > ... also, lapwing's not too happy [1]. The alter_table test\n> > expects this to yield zero rows, but it doesn't:\n>\n> By looking at regression diff as shown below, it seems that we are\n> able to get the relfilenode from the Oid using\n> pg_relation_filenode(oid) but the reverse mapping\n> pg_filenode_relation(reltablespace, relfilenode) returned NULL.\n>\n\nIt was a silly mistake, I used the F_OIDEQ function instead of\nF_INT8EQ. Although this was correct on the 0003 patch where we have\nremoved the tablespace from key, but got missed in this :(\n\nI have locally reproduced this in a 32 bit machine consistently and\nthe attached patch is fixing the issue for me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 28 Sep 2022 13:55:35 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 9:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> It was a silly mistake, I used the F_OIDEQ function instead of\n> F_INT8EQ. Although this was correct on the 0003 patch where we have\n> removed the tablespace from key, but got missed in this :(\n>\n> I have locally reproduced this in a 32 bit machine consistently and\n> the attached patch is fixing the issue for me.\n\nI tested this with an armhf (32 bit) toolchain, and it passes\ncheck-world, and was failing before.\n\nRobert's patch isn't needed on this system. 
I didn't look into this\nsubject for long but it seems that SIGBUS on misaligned access (as\ntypically seen on eg SPARC) requires a 32 bit Linux/ARM kernel, but I\nwas testing with 32 bit processes and a 64 bit kernel. Apparently 32\nbit Linux/ARM has a control /proc/cpu/alignment to select behaviour\n(options include emulation, SIGBUS) but 64 bit kernels don't have it\nand are happy with misaligned access.\n\n\n", "msg_date": "Wed, 28 Sep 2022 23:35:16 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > Btw, I think the reason for the bus error on wrasse is the same as\n> > what is creating failure on longfin[1], I mean this unaligned access\n> > is causing Bus error during startup, IMHO.\n>\n> Maybe, but there's not a lot of evidence for that. wrasse got\n> through the test_decoding check where longfin, tamandua, kestrel,\n> and now skink are failing. It's evidently not the same issue\n> that the 32-bit animals are choking on, either. Looks like yet\n> a third bug to me.\n\nI think the reason is that \"longfin\" is configured with the\n-fsanitize=alignment option so it will report the failure for any\nunaligned access. Whereas \"wrasse\" actually generates the \"Bus error\"\ndue to architecture. 
So the difference is that with\n-fsanitize=alignment it will always complain about any unaligned access,\nbut not every unaligned access ends up in a \"Bus error\", and I\nthink that could be the reason \"wrasse\" is not failing in the test\ndecoding.\n\nBut anyway, this is just a theory about why the failures appear in\ndifferent places; we still do not have evidence (a call stack) to prove it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 17:22:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 1:48 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> FTR CI reported that cpluspluscheck failure and more[1], so perhaps we\n> just need to get clearer agreement on the status of CI, ie a policy\n> that CI had better be passing before you get to the next stage. It's\n> still pretty new...\n\nYeah, I suppose I have to get in the habit of looking at CI before\ncommitting anything. It's sort of annoying to me, though. Here's a\nlist of the follow-up fixes I've so far committed:\n\n1. headerscheck\n2. typos\n3. pg_buffercache's meson.build\n4. compiler warning\n5. alignment problem\n6. F_INTEQ/F_OIDEQ problem\n\nCI caught (1), (3), and (4). The buildfarm caught (1), (5), and (6).\nThe number of buildfarm failures that I would have avoided by checking\nCI is less than the number of extra things I had to fix to keep CI\nhappy, and the serious problems were caught by the buildfarm, not by\nCI. It's not even clear to me how I was supposed to know that every\nfuture Makefile change is going to require adjusting a meson.build\nfile as well. It's not like that was mentioned in the commit message\nfor the meson build system, which also has no README anywhere in the\nsource tree. 
I found the wiki page by looking up the commit and\nfinding the URL in the commit message, but, you know, that wiki page\nALSO doesn't mention the need to now update meson.build files going\nforward. So I guess the way you're supposed to know that you need to\nupdate meson.build is by looking at CI, but CI is also the only\nreason it's necessary to care about meson.build in the first place. I\nfeel like CI has not really made it any easier to not break the\nbuildfarm -- it's just provided a second buildfarm that you can break\nindependently of the first one.\n\nAnd like the existing buildfarm, it's severely under-documented.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 08:26:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 1:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Dilip Kumar <dilipbalaut@gmail.com> writes:\n> > wrasse is also failing with a bus error,\n>\n> Yeah. At this point I think it's time to call for this patch\n> to get reverted. It should get tested *off line* on some\n> non-Intel, non-64-bit, alignment-picky architectures before\n> the rest of us have to deal with it any more.\n\nI don't really understand how you expect me or Dilip to do this. Is\nevery PostgreSQL hacker supposed to have a bunch of antiquated servers\nin their basement so that they can test this stuff? I don't think I\nhave had easy access to non-Intel, non-64-bit, alignment-picky\nhardware in probably 25 years, unless my old Raspberry Pi counts.\n\nI admit that I should have checked the CI results before pushing this\ncommit, but as you say yourself, that missed a bunch of stuff, and I'd\nsay it was the important stuff. 
Unless and until CI is able to check\nall the same configurations that the buildfarm can check, it's not\ngoing to be possible to get test results on some of these platforms\nexcept by checking the code in and seeing what happens. If I revert\nthis, I'm just going to be sitting here not knowing where any of the\nproblems are and having no way to find them.\n\nMaybe I'm missing something here. Apart from visual inspection of the\ncode and missing fewer mistakes than I did, how would you have avoided\nthese problems in one of your commits?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 08:57:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Tue, Sep 27, 2022 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> ... also, lapwing's not too happy [1]. The alter_table test\n> expects this to yield zero rows, but it doesn't:\n>\n> SELECT m.* FROM filenode_mapping m LEFT JOIN pg_class c ON c.oid = m.oid\n> WHERE c.oid IS NOT NULL OR m.mapped_oid IS NOT NULL;\n>\n> I've reproduced that symptom in a 32-bit FreeBSD VM building with clang,\n> so I suspect that it'll occur on any 32-bit build. mamba is a couple\n> hours away from offering a confirmatory data point, though.\n>\n> (BTW, is that test case sane at all? I'm bemused by the symmetrical\n> NOT NULL tests on a fundamentally not-symmetrical left join; what\n> are those supposed to accomplish? Also, the fact that it doesn't\n> deign to show any fields from \"c\" is sure making it hard to tell\n> what's wrong.)\n\nThis was added by:\n\ncommit f3fdd257a430ff581090740570af9f266bb893e3\nAuthor: Noah Misch <noah@leadboat.com>\nDate: Fri Jun 13 19:57:59 2014 -0400\n\n Harden pg_filenode_relation test against concurrent DROP TABLE.\n\n Per buildfarm member prairiedog. 
Back-patch to 9.4, where the test was\n introduced.\n\n Reviewed by Tom Lane.\n\nThere seems to be a comment in that commit which explains the intent\nof those funny-looking NULL tests.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 09:07:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Tue, Sep 27, 2022 at 5:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Maybe it wouldn't have any great impact. I don't know, but I don't\n> think it's incumbent on me to measure that. You or the patch author\n> should have had a handle on that question *before* committing.\n\nI agree. I should have gone through and checked that every place where\nRelFileLocator got embedded in some larger struct, there was no\nproblem with making it bigger and increasing the alignment\nrequirement. I'll go back and do that as soon as the immediate\nproblems are fixed. This case would have stood out as something\nneeding attention.\n\nSome of the cases are pretty subtle, though. tamandua is still unhappy\neven after pushing that fix, because xl_xact_relfilelocators embeds a\nRelFileLocator which now requires 8-byte alignment, and\nParseCommitRecord has an undocumented assumption that none of the\nthings embedded in a commit record require more than 4-byte alignment.\nReally, if it's critical for a struct to never require more than\n4-byte alignment, there ought to be a comment about that *on the\nstruct itself*. 
Having a comment on a function that does something\nwith that struct is probably not really good enough, and we don't even\nhave that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 09:16:19 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 9:16 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I agree. I should have gone through and checked that every place where\n> RelFileLocator got embedded in some larger struct, there was no\n> problem with making it bigger and increasing the alignment\n> requirement. I'll go back and do that as soon as the immediate\n> problems are fixed. This case would have stood out as something\n> needing attention.\n\nOn second thought, I'm going to revert the whole thing. There's a\nbigger mess here than can be cleaned up on the fly. The\nalignment-related mess in ParseCommitRecord is maybe something for\nwhich I could just hack a quick fix, but what I've also just now\nrealized is that this makes a huge number of WAL records larger by 4\nbytes, since most WAL records will contain a block reference. I don't\nknow whether that's OK or not, but I do know that it hasn't been\nthought about, and after commit is not the time to begin experimenting\nwith such things.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 09:48:37 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 6:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On second thought, I'm going to revert the whole thing. There's a\n> bigger mess here than can be cleaned up on the fly. 
The\n> alignment-related mess in ParseCommitRecord is maybe something for\n> which I could just hack a quick fix, but what I've also just now\n> realized is that this makes a huge number of WAL records larger by 4\n> bytes, since most WAL records will contain a block reference.\n\nIt would be useful if there were generic tests that caught issues like\nthis. There are various subtle effects related to how struct layout\ncan impact WAL record size that might easily be missed. It's not like\nthere are a huge number of truly critical WAL records to have tests\nfor.\n\nThe example that comes to mind is the XLOG_BTREE_INSERT_POST record\ntype, which is used for B-Tree index tuple inserts with a posting list\nsplit. There is only an extra 2 bytes of payload for these record\ntypes compared to conventional XLOG_BTREE_INSERT_LEAF records, but we\nnevertheless tend to see a final record size that is consistently a\nfull 8 bytes larger in many important cases, despite not needing to\nstored the IndexTuple with alignment padding. I believe that this is a\nconsequence of the record header itself needing to be MAXALIGN()'d.\n\nAnother important factor in this scenario is the general tendency for\nindex tuple sizes to leave the final XLOG_BTREE_INSERT_LEAF record\nsize at 64 bytes. It wouldn't have been okay if the deduplication work\nmade that size jump up to 72 bytes for many kinds of indexes across\nthe board, even when there was no accompanying posting list split\n(i.e. the vast majority of the time). Maybe it would have been okay if\nnbtree leaf page insert records were naturally rare, but that isn't\nthe case at all, obviously.\n\nThat's why we have two different record types here in the first place.\nEarlier versions of the deduplication patch just added an OffsetNumber\nfield to XLOG_BTREE_INSERT_LEAF which could be set to\nInvalidOffsetNumber, resulting in a surprisingly large amount of waste\nin terms of WAL size. 
That was due to the presence of three different factors.\nWe don't bother doing this with the split records, which can also have\naccompanying posting list splits, because it makes hardly any\ndifference at all (split records are much rarer than any kind of leaf\ninsert record, and are far larger when considered individually).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 09:49:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On 2022-Sep-28, Peter Geoghegan wrote:\n\n> It would be useful if there were generic tests that caught issues like\n> this. There are various subtle effects related to how struct layout\n> can impact WAL record size that might easily be missed. It's not like\n> there are a huge number of truly critical WAL records to have tests\n> for.\n\nWhat do you think would constitute a test here?\n\nSay: insert N records to a heapam table with one index of each kind\n(under controlled conditions: no checkpoint, no autovacuum, no FPIs),\nthen measure the total number of bytes used by WAL records of each rmgr.\nHave a baseline and see how that changes over time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 28 Sep 2022 21:20:15 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On 2022-Sep-28, Robert Haas wrote:\n\n> The number of buildfarm failures that I would have avoided by checking\n> CI is less than the number of extra things I had to fix to keep CI\n> happy, and the serious problems were caught by the buildfarm, not by\n> CI. [...] 
So I guess the way you're supposed to know that you need to\n> update meson.build that is by looking at CI, but CI is also the only\n> reason it's necessary to carry about meson.build in the first place. I\n> feel like CI has not really made it in any easier to not break the\n> buildfarm -- it's just provided a second buildfarm that you can break\n> independently of the first one.\n\nI have an additional, unrelated complaint about CI, which is that we\ndon't have anything for past branches. I have a partial hack(*), but\nI wish we had something we could readily use.\n\n(*) I just backpatched the commit that added the .cirrus.yml file, plus\nsome later fixes to it, and I keep that as a separate branch which I\nmerge with whatever other changes I want to test. I then push that to\ngithub, and ignore the windows results when looking at cirrus-ci.com.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I am amazed at [the pgsql-sql] mailing list for the wonderful support, and\nlack of hesitasion in answering a lost soul's question, I just wished the rest\nof the mailing list could be like this.\" (Fotis)\n (http://archives.postgresql.org/pgsql-sql/2006-06/msg00265.php)\n\n\n", "msg_date": "Wed, 28 Sep 2022 21:22:26 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Thu, Sep 29, 2022 at 1:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> ... Here's a\n> list of the follow-up fixes I've so far committed:\n>\n> 1. headerscheck\n> 2. typos\n> 3. pg_buffercache's meson.build\n> 4. compiler warning\n> 5. alignment problem\n> 6. F_INTEQ/F_OIDEQ problem\n>\n> CI caught (1), (3), and (4). 
The buildfarm caught (1), (5), and (6).\n\nI think at least some of 5 and all of 6 would be covered by sanitizer\nflags and a 32 bit test respectively, and I think we should add those.\nWe're feeling our way here, working out what's worth including at\nnon-zero cost for each thing we could check. In other cases you and I\nhave fought with, it's been Windows problems (mingw differences, or\nfile handle semantics), which are frustrating to us all, but I see\nMeson as part of the solution to that: uniform testing on Windows\n(whereas the crusty perl would not run all the tests), and CI support\nfor mingw is in the pipeline.\n\n> ... I\n> feel like CI has not really made it in any easier to not break the\n> buildfarm -- it's just provided a second buildfarm that you can break\n> independently of the first one.\n\nI don't agree with this. The build farm clearly has more ways to\nbreak than CI, because it has more CPUs, compilers, operating systems,\ncombinations of configure options and rolls of the timing dice, but CI\nnow catches a lot and, importantly, *before* it reaches the 'farm and\neveryone starts shouting a lot of stuff at you that you already knew,\nbecause it's impacting their work. Unless you don't look, and then\nit's just something that breaks with the build farm, and then you\nbreak CI on master for everyone else too and more people shout.\n\nI'm not personally aware of any significant project that isn't using\nCI, and although we're late to the party I happen to think that ours\nis getting pretty good considering the complexities. And it's going\nto keep improving.\n\n\n", "msg_date": "Thu, 29 Sep 2022 08:31:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 12:32 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I don't agree with this. 
The build farm clearly has more ways to\n> break than CI, because it has more CPUs, compilers, operating systems,\n> combinations of configure options and rolls of the timing dice, but CI\n> now catches a lot and, importantly, *before* it reaches the 'farm and\n> everyone starts shouting a lot of stuff at you that you already knew,\n> because it's impacting their work.\n\nRight. I really can't imagine how CI could be seen as anything\nless than a very significant improvement. It wasn't that long ago that\ncommits doing certain kinds of work that used OS facilities would\nroutinely break Windows in some completely predictable way. Just\nbreaking every single Windows buildfarm animal was almost a routine\noccurrence. It was normal. Remember that?\n\nOf course it is also true that anything that breaks the buildfarm\ntoday will be disproportionately difficult and subtle. You really do\nhave 2 buildfarms to break -- it's just that one of those buildfarms\ncan be broken and fixed without it bothering anybody else, which is\ntypically enough to prevent breaking the real buildfarm. But only if\nyou actually check both!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 12:49:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Yeah, I suppose I have to get in the habit of looking at CI before\n> committing anything. It's sort of annoying to me, though. Here's a\n> list of the follow-up fixes I've so far committed:\n\n> 1. headerscheck\n> 2. typos\n> 3. pg_buffercache's meson.build\n> 4. compiler warning\n> 5. alignment problem\n> 6. F_INTEQ/F_OIDEQ problem\n\n> CI caught (1), (3), and (4). 
The buildfarm caught (1), (5), and (6).\n> The number of buildfarm failures that I would have avoided by checking\n> CI is less than the number of extra things I had to fix to keep CI\n> happy, and the serious problems were caught by the buildfarm, not by\n> CI.\n\nThat seems like an unfounded complaint. You would have had to fix\n(3) and (4) in any case, on some time schedule or other. I agree\nthat it'd be good if CI did some 32-bit testing so it could have\ncaught (5) and (6), but that's being worked on.\n\n> So I guess the way you're supposed to know that you need to\n> update meson.build that is by looking at CI, but CI is also the only\n> reason it's necessary to carry about meson.build in the first place.\n\nNot so. People are already using meson in preference to the makefiles\nfor some things, I believe. And we're expecting that meson will\nsupplant the MSVC scripts pretty soon and the makefiles eventually.\n\n> And like the existing buildfarm, it's severely under-documented.\n\nThat complaint I agree with. A wiki page is a pretty poor substitute\nfor in-tree docs.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 16:07:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 12:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> What do you think would constitute a test here?\n\nI would start with something simple. Focus on the record types that we\nknow are the most common. 
It's very skewed towards heap and nbtree\nrecord types, plus some transaction rmgr types.\n\n> Say: insert N records to a heapam table with one index of each kind\n> (under controlled conditions: no checkpoint, no autovacuum, no FPIs),\n> then measure the total number of bytes used by WAL records of each rmgr.\n> Have a baseline and see how that changes over time.\n\nThere are multiple flavors of alignment involved here, which makes it\ntricky. For example, in index AMs the lp_len field from each line\npointer is always MAXALIGN()'d. It is only aligned as required for the\nunderlying heap tuple attributes in the case of heap tuples, though.\nOf course you also have alignment considerations for the record itself\n-- buffer data can usually be stored without being aligned at all. But\nyou can still have an impact from WAL header alignment, especially for\nrecord types that tend to be relatively small -- like nbtree index\ntuple inserts on leaf pages.\n\nI think that the most interesting variation is among boundary cases\nfor those records that affect a variable number of page items. These\nrecord types may be impacted by alignment considerations in subtle\nthough important ways. Things like PRUNE records often don't have that\nmany items. So having coverage of the overhead of every variation of a\nsmall PRUNE record could be important as a way of catching regressions\nthat would otherwise be hard to catch.\n\nContinuing with that example, we could probably cover every possible\npermutation of PRUNE records that affect 5 or so items. Let's say that\nwe have a regression where PRUNE records that happen to have 3 items\nthat must all be set LP_DEAD increase in size by one MAXALIGN()\nquantum. This will probably make a huge difference in many workloads,\nbut it's difficult to spot after the fact when it only affects those\nrecords that happen to have a number of items that happen to fall in\nsome narrow but critical range. 
It might not affect PRUNE records\nwith (say) 5 items at all. So if we're looking at the macro picture\nwith (say) pgbench and pg_waldump we'll tend to miss the regression\nright now; it'll be obscured by the fact that the regression only\naffects a minority of all PRUNE records, for whatever reason.\n\nThis is just a made up example, so the specifics might be off\nsignificantly -- I'd have to work on it to be sure. Hopefully the\nexample still gets the general idea across.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 13:20:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-28 16:07:13 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > And like the existing buildfarm, it's severely under-documented.\n> \n> That complaint I agree with. A wiki page is a pretty poor substitute\n> for in-tree docs.\n\nI assume we're talking about CI?\n\nWhat would you like to see documented? There is some content in\nsrc/tools/ci/README and the wiki page links to that too. Should we lift it\ninto the sgml docs?\n\nIf we're talking about meson - there's a pending documentation commit. It'd be\ngood to get some review for it!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Sep 2022 19:10:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-28 16:07:13 -0400, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> And like the existing buildfarm, it's severely under-documented.\n\n>> That complaint I agree with. A wiki page is a pretty poor substitute\n>> for in-tree docs.\n\n> I assume we're talking about CI?\n\nI was thinking of meson when I wrote that ... 
but re-reading it,\nI think Robert meant CI.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 22:14:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-28 22:14:11 -0400, Tom Lane wrote:\n> I was thinking of meson when I wrote that ... but re-reading it,\n> I think Robert meant CI.\n\nFWIW, I had planned to put the \"translation table\" between autoconf and meson\ninto the docs, but Peter E. argued that the wiki is better for that. Happy to\nchange course on that aspect if there's agreement on it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Sep 2022 19:56:58 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-28 21:22:26 +0200, Alvaro Herrera wrote:\n> I have an additional, unrelated complaint about CI, which is that we\n> don't have anything for past branches. I have a partial hack(*), but\n> I wish we had something we could readily use.\n> \n> (*) I just backpatched the commit that added the .cirrus.yml file, plus\n> some later fixes to it, and I keep that as a separate branch which I\n> merge with whatever other changes I want to test. I then push that to\n> github, and ignore the windows results when looking at cirrus-ci.com.\n\nI'd not be against backpatching the ci stuff if there were sufficient demand\nfor it. 
It'd probably be a decent bit of initial work, but after that it\nshouldn't be too bad.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 28 Sep 2022 20:45:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-28 16:07:13 -0400, Tom Lane wrote:\n> I agree that it'd be good if CI did some 32-bit testing so it could have\n> caught (5) and (6), but that's being worked on.\n\nI wasn't aware of anybody doing so, thus here's a patch for that.\n\nI already added the necessary packages to the image. I didn't install llvm for\n32bit because that'd have a) bloated the image unduly b) they can't currently\nbe installed in parallel afaics.\n\nAttached is the patch adding it to CI. To avoid paying the task startup\noverhead twice, it seemed a tad better to build and test 32bit as part of an\nexisting task. We could instead give each job fewer CPUs and run them\nconcurrently.\n\nIt might be worth changing one of the builds to use -Dwal_blocksize=4 and a\nfew other flags, to increase our coverage.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 29 Sep 2022 17:31:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 17:31:35 -0700, Andres Freund wrote:\n> I already added the necessary packages to the image. I didn't install llvm for\n> 32bit because that'd have a) bloated the image unduly b) they can't currently\n> be installed in parallel afaics.\n\n> Attached is the patch adding it to CI. To avoid paying the task startup\n> overhead twice, it seemed a tad better to build and test 32bit as part of an\n> existing task. 
We could instead give each job fewer CPUs and run them\n> concurrently.\n\nAh, one thing I forgot to mention: The 32bit perl currently can't have a\npackaged IO:Pty installed. We could install it via cpan, but it doesn't seem\nworth the bother. Hence one skipped test in the 32bit build.\n\nExample run:\nhttps://cirrus-ci.com/task/4632556472631296?logs=test_world_32#L249 (scroll to\nthe bottom)\n\n\nOnder if we should vary some build options like ICU or the system collation?\nTom, wasn't there something recently that made you complain about not having\ncoverage around collations due to system settings?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 17:40:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Thu, Sep 29, 2022 at 5:40 PM Andres Freund <andres@anarazel.de> wrote:\n> Onder if we should vary some build options like ICU or the system collation?\n> Tom, wasn't there something recently that made you complain about not having\n> coverage around collations due to system settings?\n\nThat was related to TRUST_STRXFRM:\n\nhttps://postgr.es/m/CAH2-Wzmqrjqv9pgyzebgnqmcac1Ct+UxG3VQU7kSVUNDf_yF2A@mail.gmail.com\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Sep 2022 18:16:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 18:16:51 -0700, Peter Geoghegan wrote:\n> On Thu, Sep 29, 2022 at 5:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > Onder if we should vary some build options like ICU or the system collation?\n> > Tom, wasn't there something recently that made you complain about not having\n> > coverage around collations due to system settings?\n> \n> That was related to TRUST_STRXFRM:\n> \n> 
https://postgr.es/m/CAH2-Wzmqrjqv9pgyzebgnqmcac1Ct+UxG3VQU7kSVUNDf_yF2A@mail.gmail.com\n\nIt wasn't even that one, although I do recall it now that I reread the thread.\nBut it did successfully jog my memory:\nhttps://www.postgresql.org/message-id/69170.1663425842%40sss.pgh.pa.us\n\nSo possibly it could be worth running one of them with LANG=C?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 18:23:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Tom, wasn't there something recently that made you complain about not having\n> coverage around collations due to system settings?\n\nWe found there was a gap for ICU plus LANG=C, IIRC.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 21:24:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 21:24:44 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Tom, wasn't there something recently that made you complain about not having\n> > coverage around collations due to system settings?\n> \n> We found there was a gap for ICU plus LANG=C, IIRC.\n\nUsing that then.\n\n\nAny opinions about whether to do this only in head or backpatch to 15?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 19:09:33 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Any opinions about whether to do this only in head or backpatch to 15?\n\nHEAD should be sufficient, IMO.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 22:16:10 -0400", "msg_from": "Tom Lane 
<tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 22:16:10 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Any opinions about whether to do this only in head or backpatch to 15?\n> \n> HEAD should be sufficient, IMO.\n\nPushed. I think we should add some more divergent options to increase the\ncoverage. E.g. a different xlog pagesize, a smaller segment size so we can\ntest the \"segment boundary\" code (although we don't currently allow < 1GB via\nnormal means right now) etc. But that's for later.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 21:17:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Wed, Sep 28, 2022 at 08:45:31PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-09-28 21:22:26 +0200, Alvaro Herrera wrote:\n> > I have an additional, unrelated complaint about CI, which is that we\n> > don't have anything for past branches. I have a partial hack(*), but\n> > I wish we had something we could readily use.\n> > \n> > (*) I just backpatched the commit that added the .cirrus.yml file, plus\n> > some later fixes to it, and I keep that as a separate branch which I\n> > merge with whatever other changes I want to test. I then push that to\n> > github, and ignore the windows results when looking at cirrus-ci.com.\n\nYou wouldn't need to ignore Windows tap test failures if you also \nbackpatch 76e38b37a, and either disable PG_TEST_USE_UNIX_SOCKETS, or\ninclude 45f52709d.\n\n> I'd not be against backpatching the ci stuff if there were sufficient demand\n> for it. 
It'd probably be a decent bit of initial work, but after that it\n> shouldn't be too bad.\n\nI just tried this, which works fine at least for v11-v14:\n| git checkout origin/REL_15_STABLE .cirrus.yml src/tools/ci\n\nhttps://cirrus-ci.com/task/5742859943936000 v15a\nhttps://cirrus-ci.com/task/6725412431593472 v15b\nhttps://cirrus-ci.com/task/5105320283340800 v13\nhttps://cirrus-ci.com/task/4809469463887872 v12\nhttps://cirrus-ci.com/task/6659971021537280 v11\n\n(I still suggest my patches to run all tests using vcregress. The number of\npeople who remember that, for v15, cirrusci runs incomplete tests is probably\nfewer than five.)\nhttps://www.postgresql.org/message-id/20220623193125.GB22452%40telsasoft.com\nhttps://www.postgresql.org/message-id/20220828144447.GA21897%40telsasoft.com\n\nIf cirrusci were backpatched, it'd be kind of nice to use a ccache key\nthat includes the branch name (but maybe the overhead of compilation is\nunimportant compared to the workload induced by cfbot).\n\nA gripe from me: the regression.diffs and other logs from the SQL regression\ntests are in a directory called \"main\" (same for \"isolation\"). 
I imagine I\nwon't be the last person to spend minutes looking through the list of test dirs\nfor the entry called \"regress\", conclude that it's inexplicably absent, and\nlocate it only after reading src/test/regress/meson.build.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Oct 2022 11:14:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "Hi,\n\nOn 2022-10-01 11:14:20 -0500, Justin Pryzby wrote:\n> I just tried this, which works fine at least for v11-v14:\n> | git checkout origin/REL_15_STABLE .cirrus.yml src/tools/ci\n> \n> https://cirrus-ci.com/task/5742859943936000 v15a\n> https://cirrus-ci.com/task/6725412431593472 v15b\n> https://cirrus-ci.com/task/5105320283340800 v13\n> https://cirrus-ci.com/task/4809469463887872 v12\n> https://cirrus-ci.com/task/6659971021537280 v11\n\nCool, thanks for trying that! I wonder if there's any problems on other\nbranches...\n\n\n> (I still suggest my patches to run all tests using vcregress. The number of\n> people who remember that, for v15, cirrusci runs incomplete tests is probably\n> fewer than five.)\n> https://www.postgresql.org/message-id/20220623193125.GB22452%40telsasoft.com\n> https://www.postgresql.org/message-id/20220828144447.GA21897%40telsasoft.com\n\nAndrew, the defacto maintainer of src/tools/msvc, kind of NACKed those. But\nthe reasoning might not hold with vcregress being on life support.\n\nOTOH, to me the basic advantage is to have *any* CI coverage. We don't need to\nput the bar for the backbranches higher than where we were at ~2 weeks ago.\n\n\n> If cirrusci were backpatched, it'd be kind of nice to use a ccache key\n> that includes the branch name (but maybe the overhead of compilation is\n> unimportant compared to the workload induced by cfbot).\n\nHm. The branch name in general sounds like it might be too broad, particularly\nfor cfbot. 
I think we possibly should just put the major version into\n.cirrus.yml and use that as the cache key. I think that'd also solve some of\nthe \"diff against what\" arguments we've had around your CI improvements.\n\n\n> A gripe from me: the regression.diffs and other logs from the SQL regression\n> tests are in a directory called \"main\" (same for \"isolation\"). I imagine I\n> won't be the last person to spend minutes looking through the list of test dirs\n> for the entry called \"regress\", conclude that it's inexplicably absent, and\n> locate it only after reading src/test/regress/meson.build.\n\nI'd have no problem renaming main/isolation to isolation/isolation and\nmain/regress to pg_regress/regress or such.\n\nFWIW, if you add --print-errorlogs meson test will show you the output of just\nfailed tests, which for pg_regress style tests will include the path to\nregression.diffs:\n\n...\nThe differences that caused some tests to fail can be viewed in the\nfile \"/srv/dev/build/m/testrun/cube/regress/regression.diffs\". A copy of the test summary that you see\nabove is saved in the file \"/srv/dev/build/m/testrun/cube/regress/regression.out\".\n\n\nIt's too bad the default of --print-errorlogs can't be changed.\n\n\nUnfortunately we don't print something as useful in the case of tap tests. I\nwonder if we should do something like\n\ndiff --git i/src/test/perl/PostgreSQL/Test/Utils.pm w/src/test/perl/PostgreSQL/Test/Utils.pm\nindex 99d33451064..acc18ca7c85 100644\n--- i/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ w/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -239,6 +239,8 @@ END\n #\n # Preserve temporary directories after (1) and after (2).\n $File::Temp::KEEP_ALL = 1 unless $? == 0 && all_tests_passing();\n+\n+ diag(\"test logfile: $test_logfile\");\n }\n \n =pod\n\nPotentially doing so only if $? 
!= 0.\n\nThis would make the output for a failing test end like this:\n―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― ✀ ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\nstderr:\n# Failed test at /home/andres/src/postgresql/contrib/amcheck/t/001_verify_heapam.pl line 20.\n# Failed test at /home/andres/src/postgresql/contrib/amcheck/t/001_verify_heapam.pl line 22.\n# test logfile: /srv/dev/build/m/testrun/amcheck/001_verify_heapam/log/regress_log_001_verify_heapam\n# Looks like you failed 2 tests of 275.\n\n(test program exited with status code 2)\n―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n\nwhich should make it a lot easier to find the log?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 1 Oct 2022 15:15:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" }, { "msg_contents": "On Sat, Oct 01, 2022 at 03:15:14PM -0700, Andres Freund wrote:\n> On 2022-10-01 11:14:20 -0500, Justin Pryzby wrote:\n> > (I still suggest my patches to run all tests using vcregress. The number of\n> > people who remember that, for v15, cirrusci runs incomplete tests is probably\n> > fewer than five.)\n> > https://www.postgresql.org/message-id/20220623193125.GB22452%40telsasoft.com\n> > https://www.postgresql.org/message-id/20220828144447.GA21897%40telsasoft.com\n> \n> Andrew, the defacto maintainer of src/tools/msvc, kind of NACKed those. But\n> the reasoning might not hold with vcregress being on life support.\n\nI think you're referring to comment here:\n87a81b91-87bf-c0bc-7e4f-06dffadcf737@dunslane.net\n\n..which I tried to discuss here:\n20220528153741.GK19626@telsasoft.com\n| I think there was some confusion about the vcregress \"alltaptests\"\n| target. 
I said that it's okay to add it and make cirrus use it (and\n| that the buildfarm could use it too). Andrew responded that the\n| buildfarm wants to run different tests separately. But Andres seems\n| to have interpreted that as an objection to the addition of an\n| \"alltaptests\" target, which I think isn't what's intended - it's fine\n| if the buildfarm prefers not to use it. \n\n> OTOH, to me the basic advantage is to have *any* CI coverage. We don't need to\n> put the bar for the backbranches higher than where we were at ~2 weeks ago.\n\nI agree that something is frequently better than nothing. But it could\nbe worse if it gives the impression that \"CI showed that everything was\ngreen\", when in fact it hadn't run 10% of the tests:\nhttps://www.postgresql.org/message-id/CA%2BhUKGLneD%2Bq%2BE7upHGwn41KGvbxhsKbJ%2BM-y9nvv7_Xjv8Qog%40mail.gmail.com\n\n> I'd have no problem renaming main/isolation to isolation/isolation and\n> main/regress to pg_regress/regress or such.\n\n+1\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Oct 2022 17:58:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: longfin and tamandua aren't too happy but I'm not sure why" } ]
[ { "msg_contents": "Hi,\n\nMelih mentioned on IM that while he could build postgres with meson on windows\nw/ mingw, the tests didn't run.\n\nIssues:\n\n- The bit computing PATH to the temporary installation for running tests only\n dealt with backward slashes in paths on windows, because that's what\n python.org python uses by default. But a msys ucrt python returns forward\n slashes. Trivial fix.\n\n I didn't encounter this because I'd used a meson from git, which thus didn't\n have msys's patch to return a different prefix.\n\n This made pg_regress/isolationtester tests other than the main regression\n tests pass.\n\n\n- I'd only passed in a fake HOST_TUPLE when building pg_regress, oops.\n\n I don't think it makes sense to come up with a config.guess compatible name\n - they're quite random. And we can't rely on shell to work when targeting\n msvc. The attached patch does:\n\n# Need make up something roughly like x86_64-pc-mingw64. resultmap matches on\n# patterns like \".*-.*-mingw.*\". We probably can do better, but for now just\n# replace 'gcc' with 'mingw' on windows.\nhost_tuple_cc = cc.get_id()\nif host_system == 'windows' and host_tuple_cc == 'gcc'\n host_tuple_cc = 'mingw'\nendif\nhost_tuple = '@0@-@1@-@2@'.format(host_cpu, host_system, host_tuple_cc)\n\n which I don't perfectly like (e.g. clang also kind of works on windows), but\n it seems ok enough for now. I suspect we'd need a bunch of other changes to\n make clang on windows work.\n\n This made the main pg_regress tests pass.\n\n\n- I had not added the logic to not use existing getopt on mingw, causing tap\n tests to fail. Fixing that didn't immediately work because of duplicate\n symbols - because I hadn't copied over -Wl,--allow-multiple-definition.\n\n \"Contrary\" to the comment in src/template/win32 it doesn't appear to be\n needed for libpq and pgport - likely because for the meson build an export\n file is generated (needed for msvc anyway, I didn't think to disable it for\n mingw). 
But since we need to be able to override getopt(), we obviously\n need --allow-multiple-definition anyway.\n\n\n- This led me to try to also add -Wl,--disable-auto-import. However, that\n caused two problems.\n\n 1) plpython tests started to fail, due to not finding Pg_magic_func in\n plpython3.dll. This confounded me for quite a while. It worked for every\n other extension .dll? A lot of looking around led me to define\n#define PGDLLEXPORT __declspec (dllexport)\n for mingw as well. For mingw we otherwise end up with\n#define PGDLLEXPORT __attribute__((visibility(\"default\")))\n which works.\n\n As far as I can tell __attribute__((visibility(\"default\"))) works as long as\n there's no declspec(dllexport) symbol in the same dll. If there is, it\n stops working. Ugh.\n\n I don't see a reason not to define PGDLLEXPORT as __declspec(dllexport)\n for mingw as well?\n\n I suspect this is an issue for autoconf mingw build as well, but that\n fails to even configure - there's no coverage on the BF for it I think.\n\n This made plpython's test pass (again).\n\n\n 2) psql failed to build due to readline. I hadn't implemented disabling it\n automatically. Somewhat luckily - turns out it actually works (as long as\n --disable-auto-import isn't used), including autocomplete!\n\n The issue afaict is that while readline has an import library, functions\n aren't \"annotated\" with __declspec(dllimport), thus without\n --enable-auto-import the references are assumed to be local, and thus\n linking fails.\n\n It's possible we could \"fix\" this by declaring the relevant symbols\n ourselves or such. 
But for now just adding --enable-auto-import to the\n flags used to link to readline seems saner?\n\n I think it'd be very cool to finally have a working readline on windows.\n\n Unfortunately IO::Pty isn't installable on windows, it'd have been\n interesting to see how well that readline works.\n\n\n- Before I updated mingw, interactive psql didn't show a prompt, making me\n think something was broken. That turned out to be the same in an autoconf\n build. When inside the msys terminal (mintty) isatty returned 0, because of\n some detail of how it emulates ttys. After updating mingw that problem is\n gone.\n\n I included this partially so I have something to find in email next time I\n search for mintty and isatty :)\n\n\nWith these things fixed, postgres built and ran tests successfully! With\nnearly all \"relevant\" dependencies:\nicu, libxml, libslt, lz4, nls, plperl, plpython, pltcl, readline, ssl, zlib,\nzstd\n\nMissing are gss and uuid. Both apparently work on windows, but they're not in\nthe msys repository, and I don't feel like compiling them myself.\n\nOnly 5 tests skipped:\n- recovery/017_shm - probably not applicable\n- recovery/022_crash_temp_files - I don't think it's ok that this test skips,\n but that's for another thread\n- psql/010_tab_completion - IO::Pty can't be installed\n- psql/010_cancel - IO::Pty can't be installed\n- ldap/001_auth - doesn't know how to find slapd on windows\n\n\nStared too long at the screen to figure all of this out. Food next. I'll clean\nthe patches up later tonight or tomorrow morning.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 27 Sep 2022 19:27:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "meson vs mingw, plpython, readline and other fun" }, { "msg_contents": "Hi,\n\nOn 2022-09-27 19:27:24 -0700, Andres Freund wrote:\n> Stared too long at the screen to figure all of this out. Food next. 
I'll clean\n> the patches up later tonight or tomorrow morning.\n\nAttached:\n\n0001 - meson: windows: Normalize slashes in prefix\n0002 - meson: pg_regress: Define a HOST_TUPLE sufficient to make resultmap work\n0003 - meson: mingw: Allow multiple definitions\n0004 - meson: Implement getopt logic from autoconf\n0005 - mingw: Define PGDLLEXPORT as __declspec (dllexport) as done for msvc\n0006 - meson: mingw: Add -Wl,--disable-auto-import, enable when linking with readline\n\n0005 is the one that I'd most like review for. The rest just affect meson, so\nI'm planning to push them fairly soon - review would nevertheless be nice.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 28 Sep 2022 12:33:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson vs mingw, plpython, readline and other fun" }, { "msg_contents": "Hi,\n\nOn 2022-09-28 12:33:17 -0700, Andres Freund wrote:\n> 0001 - meson: windows: Normalize slashes in prefix\n> 0002 - meson: pg_regress: Define a HOST_TUPLE sufficient to make resultmap work\n> 0003 - meson: mingw: Allow multiple definitions\n> 0004 - meson: Implement getopt logic from autoconf\n> 0005 - mingw: Define PGDLLEXPORT as __declspec (dllexport) as done for msvc\n> 0006 - meson: mingw: Add -Wl,--disable-auto-import, enable when linking with readline\n> \n> 0005 is the one that I'd most like review for. The rest just affect meson, so\n> I'm planning to push them fairly soon - review would nevertheless be nice.\n\nI have pushed 1-4, was holding out for opinions on 5.\n\nI'm planning to push 0005 (and 0006) soon, to allow the mingw CI patch to\nprogress. So if somebody has concerns defining PGDLLEXPORT to __declspec\n(dllexport) for mingw (already the case for msvc)...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 21:21:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: meson vs mingw, plpython, readline and other fun" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing the “Fast COPY FROM based on batch insert\" patch, I\nnoticed this comment introduced in commit b663a4136:\n\n /*\n * If a certain number of tuples have already been accumulated, or\n * a tuple has come for a different relation than that for the\n * accumulated tuples, perform the batch insert\n */\n if (resultRelInfo->ri_NumSlots == resultRelInfo->ri_BatchSize)\n {\n ExecBatchInsert(mtstate, resultRelInfo,\n resultRelInfo->ri_Slots,\n resultRelInfo->ri_PlanSlots,\n resultRelInfo->ri_NumSlots,\n estate, canSetTag);\n resultRelInfo->ri_NumSlots = 0;\n }\n\nI think the “or a tuple has come for a different relation than that\nfor the accumulated tuples\" part in the comment is a leftover from an\nearlier version of the patch [1]. As the code shows, we do not handle\nthat case anymore, so I think we should remove that part from the\ncomment. Attached is a patch for that.\n\nBest regards,\nEtsuro Fujita\n\n[1] https://www.postgresql.org/message-id/TYAPR01MB2990ECD1C68EA694DD0667E4FEE90%40TYAPR01MB2990.jpnprd01.prod.outlook.com", "msg_date": "Wed, 28 Sep 2022 19:25:12 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Obsolete comment in ExecInsert()" }, { "msg_contents": "Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> I think the “or a tuple has come for a different relation than that\n> for the accumulated tuples\" part in the comment is a leftover from an\n> earlier version of the patch [1]. As the code shows, we do not handle\n> that case anymore, so I think we should remove that part from the\n> comment. Attached is a patch for that.\n\n+1, but what remains still seems awkwardly worded. 
How about something\nlike \"When we've reached the desired batch size, perform the insertion\"?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 10:42:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in ExecInsert()" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 28, 2022 at 11:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Etsuro Fujita <etsuro.fujita@gmail.com> writes:\n> > I think the “or a tuple has come for a different relation than that\n> > for the accumulated tuples\" part in the comment is a leftover from an\n> > earlier version of the patch [1]. As the code shows, we do not handle\n> > that case anymore, so I think we should remove that part from the\n> > comment. Attached is a patch for that.\n>\n> +1, but what remains still seems awkwardly worded. How about something\n> like \"When we've reached the desired batch size, perform the insertion\"?\n\n+1 for that change. Pushed that way.\n\nThanks for reviewing!\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 29 Sep 2022 17:10:09 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in ExecInsert()" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft of the PostgreSQL 15 RC 1 release announcement. \r\nPlease provide feedback no later than 2022-09-29 0:00 AoE.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 28 Sep 2022 09:42:39 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 15 RC1 release announcement draft" } ]
[ { "msg_contents": "Hi,\n\nxlog.c currently has ~9000 LOC, out of which ~700 LOC is backup\nrelated, making the file really unmanageable. The commit\n7d708093b7400327658a30d1aa1d5e284d37622c added new files\nxlogbackup.c/.h for hosting all backup related code eventually. I\npropose to move all the backup related code from xlog.c and xlogfuncs.c\nto xlogbackup.c/.h. In doing so, I had to add a few Get/Set functions\nfor XLogCtl variables so that xlogbackup.c can use them.\n\nI'm attaching a patch set where 0001 and 0002 move backup code from\nxlogfuncs.c and xlog.c to xlogbackup.c/.h respectively. The advantage\nis that all the core's backup code is in one single file making it\nmore readable and manageable while reducing the xlog.c's file size.\n\nThoughts?\n\nThanks Michael Paquier for suggesting to have new files for backup related code.\n\n[1] https://www.postgresql.org/message-id/CALj2ACX0wjo%2B49hbUmvc_zT1zwdqFOQyhorN0Ox-Rk6v97Nejw%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 28 Sep 2022 20:16:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Sep 28, 2022 at 08:16:08PM +0530, Bharath Rupireddy wrote:\n> In doing so, I had to add a few Get/Set functions\n> for XLogCtl variables so that xlogbackup.c can use them.\n\nI would suggest moving this to a separate prerequisite patch that can be\nreviewed independently from the patches that simply move code to a\ndifferent file.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 15:54:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Tue, Oct 04, 2022 at 03:54:20PM -0700, Nathan 
Bossart wrote:\n> I would suggest moving this to a separate prerequisite patch that can be\n> reviewed independently from the patches that simply move code to a\n> different file.\n\nAnd FWIW, the SQL interfaces for pg_backup_start() and\npg_backup_stop() could stay in xlogfuncs.c. This has the advantage to\ncentralize in the same file all the SQL-function-specific checks.\n--\nMichael", "msg_date": "Wed, 5 Oct 2022 16:50:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 5, 2022 at 1:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 04, 2022 at 03:54:20PM -0700, Nathan Bossart wrote:\n> > I would suggest moving this to a separate prerequisite patch that can be\n> > reviewed independently from the patches that simply move code to a\n> > different file.\n\nI added the new functions in 0001 patch for ease of review.\n\n> And FWIW, the SQL interfaces for pg_backup_start() and\n> pg_backup_stop() could stay in xlogfuncs.c. This has the advantage to\n> centralize in the same file all the SQL-function-specific checks.\n\nAgreed.\n\n+extern void WALInsertLockAcquire(void);\n+extern void WALInsertLockAcquireExclusive(void);\n+extern void WALInsertLockRelease(void);\n+extern void WALInsertLockUpdateInsertingAt(XLogRecPtr insertingAt);\n\nNote that I had moved all WAL insert lock related functions to xlog.h\ndespite xlogbackup.c using 2 of them. 
This is done to keep all the\nfunctions together.\n\nPlease review the attached v2 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 5 Oct 2022 15:22:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-05, Michael Paquier wrote:\n\n> And FWIW, the SQL interfaces for pg_backup_start() and\n> pg_backup_stop() could stay in xlogfuncs.c. This has the advantage to\n> centralize in the same file all the SQL-function-specific checks.\n\nAs I recall, that has the disadvantage that the API exposure is a bit\nhigher -- I mean, with the patch as originally posted, there was less\ncross-inclusion of header files, but that is gone in the version Bharat\nposted as reply to this. I'm not sure if that's caused by *this*\ncomment, or even that it's terribly significant, but it seems worth\nconsidering at least.\n\nxlog.h is included by a lot of stuff, so it would be great if it\nitself included the smallest set of other files possible.\n\n... that said, looking at the chart in\nhttps://doxygen.postgresql.org//xlog_8h.html looks like the only file\nwe'd avoid indirectly including is pgtime.h (in addition to xlogbackup.h\nitself).\n\n\n(It's strange that xlog.h seems to have become included into rel.h by\ncommit 848ef42bb8c7 that did not otherwise touch either rel.h nor xlog.h.)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. 
Alexandre)\n\n\n", "msg_date": "Wed, 5 Oct 2022 19:58:21 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 05, 2022 at 03:22:01PM +0530, Bharath Rupireddy wrote:\n>> On Tue, Oct 04, 2022 at 03:54:20PM -0700, Nathan Bossart wrote:\n>> > I would suggest moving this to a separate prerequisite patch that can be\n>> > reviewed independently from the patches that simply move code to a\n>> > different file.\n> \n> I added the new functions in 0001 patch for ease of review.\n\nCan we also replace the relevant code with calls to these functions in\n0001? That way, we can more easily review the changes you are making to\nthis code separately from the large patch that just moves the code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 5 Oct 2022 15:52:24 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "Hi,\n\nOn 2022-10-05 15:22:01 +0530, Bharath Rupireddy wrote:\n> +extern void WALInsertLockAcquire(void);\n> +extern void WALInsertLockAcquireExclusive(void);\n> +extern void WALInsertLockRelease(void);\n> +extern void WALInsertLockUpdateInsertingAt(XLogRecPtr insertingAt);\n> \n> Note that I had moved all WAL insert lock related functions to xlog.h\n> despite xlogbackup.c using 2 of them. This is done to keep all the\n> functions together.\n> \n> Please review the attached v2 patch set.\n\nI'm doubtful it's a good idea to expose these to outside of xlog.c - they are\nvery low level, and it's very easy to break stuff by using them wrongly. 
IMO,\nif that's necessary, the split isn't right.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Oct 2022 16:20:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Thu, Oct 6, 2022 at 4:50 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> I'm doubtful it's a good idea to expose these to outside of xlog.c - they are\n> very low level, and it's very easy to break stuff by using them wrongly.\n\nHm. Here's the v3 patch set without exposing WAL insert lock related\nfunctions. Please have a look.\n\nOn Thu, Oct 6, 2022 at 4:22 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Can we also replace the relevant code with calls to these functions in\n> 0001? That way, we can more easily review the changes you are making to\n> this code separately from the large patch that just moves the code.\n\nDone. Please have a look at 0001.\n\nOn Wed, Oct 5, 2022 at 11:28 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-05, Michael Paquier wrote:\n>\n> > And FWIW, the SQL interfaces for pg_backup_start() and\n> > pg_backup_stop() could stay in xlogfuncs.c. This has the advantage to\n> > centralize in the same file all the SQL-function-specific checks.\n>\n> As I recall, that has the disadvantage that the API exposure is a bit\n> higher -- I mean, with the patch as originally posted, there was less\n> cross-inclusion of header files, but that is gone in the version Bharat\n> posted as reply to this. I'm not sure if that's caused by *this*\n> comment, or even that it's terribly significant, but it seems worth\n> considering at least.\n\nFWIW, I'm attaching 0003 patch for moving backup functions from\nxlogfuncs.c to xlogbackup.c. It's natural to have them there when\nwe're moving backup related things, this also reduces backup code\nfootprint. 
We can leave xlogfuncs.c for WAL related SQL-callable\nfunctions.\n\nPlease review the attached v3 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 6 Oct 2022 17:54:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-06, Bharath Rupireddy wrote:\n\n> On Thu, Oct 6, 2022 at 4:50 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > I'm doubtful it's a good idea to expose these to outside of xlog.c - they are\n> > very low level, and it's very easy to break stuff by using them wrongly.\n> \n> Hm. Here's the v3 patch set without exposing WAL insert lock related\n> functions. Please have a look.\n\nHmm, I don't like your 0001 very much. This sort of thing:\n\n+/*\n+ * Get the ControlFile.\n+ */\n+ControlFileData *\n+GetControlFile(void)\n+{\n+ return ControlFile;\n+}\n\nlooks too easy to misuse; what about locking? Also, isn't the addition\nof ControlFile as a variable in do_pg_backup_start going to cause shadow\nvariable warnings? Given the locking requirements, I think it would be\nfeasible to copy stuff out of ControlFile under lock, then return the\ncopies.\n\n\n+/*\n+ * Increment runningBackups and forcePageWrites.\n+ *\n+ * NOTE: This function is tailor-made for use in xlogbackup.c. It doesn't set\n+ * the respective XLogCtl members directly, and acquires and releases locks.\n+ * Hence be careful when using it elsewhere.\n+ */\n+void\n+SetXLogBackupRelatedInfo(void)\n\nI understand that naming is difficult, but I think \"Set foo Related\nInfo\" seems way too vague. And the comment says \"it doesn't set stuff\ndirectly\", and then it goes and sets stuff directly. 
What gives?\n\nYou added some commentary that these functions are tailor-made for\ninternal operations, and then declared them in the most public header\nfunction that xlog has? I think at the bare minimum, these prototypes\nshould be in xlog_internal.h, not xlog.h.\n\n\nI didn't look at 0002 and 0003 other than to notice that xlogbackup.h is\nno longer removed from xlog.h. So what is the point of all this?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 12 Oct 2022 09:34:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 12, 2022 at 1:04 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > Hm. Here's the v3 patch set without exposing WAL insert lock related\n> > functions. Please have a look.\n>\n> Hmm, I don't like your 0001 very much. This sort of thing:\n\nThanks for reviewing.\n\n> +ControlFileData *\n> +GetControlFile(void)\n>\n> looks too easy to misuse; what about locking? Also, isn't the addition\n> of ControlFile as a variable in do_pg_backup_start going to cause shadow\n> variable warnings? Given the locking requirements, I think it would be\n> feasible to copy stuff out of ControlFile under lock, then return the\n> copies.\n\n+1. Done that way.\n\n> +/*\n> + * Increment runningBackups and forcePageWrites.\n> + *\n> + * NOTE: This function is tailor-made for use in xlogbackup.c. 
It doesn't set\n> + * the respective XLogCtl members directly, and acquires and releases locks.\n> + * Hence be careful when using it elsewhere.\n> + */\n> +void\n> +SetXLogBackupRelatedInfo(void)\n>\n> I understand that naming is difficult, but I think \"Set foo Related\n> Info\" seems way too vague.\n\nI've used SetXLogBackupActivity() and ResetXLogBackupActivity()\nbecause they match with the members that these functions deal with.\n\n> And the comment says \"it doesn't set stuff\n> directly\", and then it goes and sets stuff directly. What gives?\n\nMy bad. That comment was meant for the reset function above. However,\nI've removed it entirely now because one can look at the function and\ninfer that the forcePageWrites isn't set directly but only when\nrunningBackups is 0.\n\n> You added some commentary that these functions are tailor-made for\n> internal operations, and then declared them in the most public header\n> function that xlog has? I think at the bare minimum, these prototypes\n> should be in xlog_internal.h, not xlog.h.\n\nI removed such comments. These are functions used by xlogbackup.c to\ncall back into xlog.c similar to the call back functions defined in\nxlog.h for xlogrecovery.c. And, most of the XLogCtl set/get sort of\nfunction declarations are in xlog.h. So, I'm retaining them in xlog.h.\n\n> I didn't look at 0002 and 0003 other than to notice that xlogbackup.h is\n> no longer removed from xlog.h. So what is the point of all this?\n\nThe whole idea is to move as much as possible backup related code to\nxlogbackup.c/.h because xlog.c has already grown.\n\nI've earlier moved macros BACKUP_LABEL_FILE, TABLESPACE_MAP to\nxlogbackup.h, but I think they're good to stay in xlog.h as they're\nbeing used in xlog.c and xlogrecovery.c. 
This reduces the xlogbackup.h\nfootprint a bit - we don't need xlogbackup.h in xlogrecovery.c.\n\nAnother reason we need xlogbackup.h in xlog.h is for\nSessionBackupState and it needs to be set before we release WAL insert\nlocks, see the comment [1]. Well, for this reason, should we move all\nxlogbackup.c callbacks for xlog.c to xlog_internal.h? Or should we\njust remove the SessionBackupState enum and convert\nSESSION_BACKUP_NONE and SESSION_BACKUP_RUNNING to just macros in\nxlogbackup.h and use integer type to pass the state across? I don't\nknow what's better here. Thoughts?\n\nI'm attaching the v4 patch set, please review it further.\n\n[1]\n * You might think that WALInsertLockRelease() can be called before\n * cleaning up session-level lock because session-level lock doesn't need\n * to be protected with WAL insertion lock. But since\n * CHECK_FOR_INTERRUPTS() can occur in it, session-level lock must be\n * cleaned up before it.\n */\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 13 Oct 2022 11:42:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-13, Bharath Rupireddy wrote:\n\n> On Wed, Oct 12, 2022 at 1:04 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > You added some commentary that these functions are tailor-made for\n> > internal operations, and then declared them in the most public header\n> > function that xlog has? I think at the bare minimum, these prototypes\n> > should be in xlog_internal.h, not xlog.h.\n> \n> I removed such comments. These are functions used by xlogbackup.c to\n> call back into xlog.c similar to the call back functions defined in\n> xlog.h for xlogrecovery.c. And, most of the XLogCtl set/get sort of\n> function declarations are in xlog.h. 
So, I'm retaining them in xlog.h.\n\nAs I see it, xlog.h is a header that exports XLOG manipulations to the\noutside world (everything that produces WAL, as well as stuff that\ncontrols operation); xlog_internal is the header that exports xlog*.c\ninternal stuff for other xlog*.c files and specialized frontends to use.\nThese new functions are internal to xlogbackup.c and xlog.c, so IMO they\nbelong in xlog_internal.h.\n\nStuff that is used from xlog.c only by xlogrecovery.c should also appear\nin xlog_internal.h only, not xlog.h, so I suggest not to take that as\nprecedent. Also, that file (xlogrecovery.c) is pretty new so we haven't\nhad time to nail down the .h layout yet.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 13 Oct 2022 11:42:04 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Thu, Oct 13, 2022 at 3:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> As I see it, xlog.h is a header that exports XLOG manipulations to the\n> outside world (everything that produces WAL, as well as stuff that\n> controls operation); xlog_internal is the header that exports xlog*.c\n> internal stuff for other xlog*.c files and specialized frontends to use.\n> These new functions are internal to xlogbackup.c and xlog.c, so IMO they\n> belong in xlog_internal.h.\n>\n> Stuff that is used from xlog.c only by xlogrecovery.c should also appear\n> in xlog_internal.h only, not xlog.h, so I suggest not to take that as\n> precedent. Also, that file (xlogrecovery.c) is pretty new so we haven't\n> had time to nail down the .h layout yet.\n\nHm. Agree. But, that requires us to include xlogbackup.h in\nxlog_internal.h for SessionBackupState enum in\nResetXLogBackupActivity(). Is that okay?\n\nSessionBackupState and it needs to be set before we release WAL insert\nlocks, see the comment [1]. 
Should we just remove the\nSessionBackupState enum and convert SESSION_BACKUP_NONE and\nSESSION_BACKUP_RUNNING to just macros in xlogbackup.h and use integer\ntype to pass the state across? I don't know what's better here. Do you\nhave any thoughts on this?\n\n[1]\n * You might think that WALInsertLockRelease() can be called before\n * cleaning up session-level lock because session-level lock doesn't need\n * to be protected with WAL insertion lock. But since\n * CHECK_FOR_INTERRUPTS() can occur in it, session-level lock must be\n * cleaned up before it.\n */\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:55:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-13, Bharath Rupireddy wrote:\n\n> Hm. Agree. But, that requires us to include xlogbackup.h in\n> xlog_internal.h for SessionBackupState enum in\n> ResetXLogBackupActivity(). Is that okay?\n\nIt's not great, but it's not *that* bad, ISTM, mainly because\nxlog_internal.h will affect less stuff than xlog.h.\n\n> SessionBackupState and it needs to be set before we release WAL insert\n> locks, see the comment [1].\n\nI see. Maybe we could keep that enum in xlog.h, instead.\n\nWhile looking at how that works: I think calling a local variable\n\"session_backup_state\" is super confusing, seeing that we have a\nfile-global variable called sessionBackupState. I recommend naming the\nlocal \"newstate\" or something along those lines instead.\n\nI wonder why does pg_backup_start_callback() not change the backup state\nbefore your patch. This seems a gratuitous difference, or is it? 
If\nyou change that code so that it also sets the status to BACKUP_NONE,\nthen you can pass a bare SessionBackupState to ResetXLogBackupActivity\nrather than a pointer to one, which is a very strange arrangement that\nexists only so that you can have a third state (NULL) meaning \"don't\nchange state\" -- that looks quite weird.\n\nAlternatively, if you don't want or can't change\npg_backup_start_callback to pass a valid state value, another solution\nmight be to pass a separate boolean \"change state\".\n\nBut I would look at having another patch before your series that changes\npg_backup_start_callback to make the code identical for the three\ncallers, then you can simplify the patched code.\n\n> Should we just remove the\n> SessionBackupState enum and convert SESSION_BACKUP_NONE and\n> SESSION_BACKUP_RUNNING to just macros in xlogbackup.h and use integer\n> type to pass the state across? I don't know what's better here. Do you\n> have any thoughts on this?\n\nNo, please, no passing of unadorned magic numbers.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 13 Oct 2022 13:13:30 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Thu, Oct 13, 2022 at 7:13 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Oct-13, Bharath Rupireddy wrote:\n> > Hm. Agree. But, that requires us to include xlogbackup.h in\n> > xlog_internal.h for SessionBackupState enum in\n> > ResetXLogBackupActivity(). Is that okay?\n>\n> It's not great, but it's not *that* bad, ISTM, mainly because\n> xlog_internal.h will affect less stuff than xlog.h.\n\nThis is unfortunately a lot less true than I would like. I count 75\nplaces where we #include \"access/xlog.h\" and 53 where we #include\n\"access/xlog_internal.h\". And many of those are in frontend code. 
I\nfeel like the contents of xlog_internal.h are a bit too eclectic.\nMaybe stuff that has to do with the on-disk directory structure, like\nXLOGDIR and XLOG_FNAME_LEN, as well as stuff that has to do with where\nbytes are located, like XLByteToSeg, should move to another file.\nBesides that, which is the biggest part of the file, there's also\nstuff that has to do with the page and record format generally (like\nXLOG_PAGE_MAGIC and SizeOfXLogShortPHD) and stuff that is used for\ncertain specific WAL record types (like xl_parameter_change and\nxl_restore_point) and some random rmgr-related things (like RmgrData\nand the stuff that follows) and the usual assortment of random GUCs\nand global variables (like RecoveryTargetAction and\nArchiveRecoveryRequested). Maybe it doesn't make sense to split this\nup into a thousand tiny little header files, but I think some\nrethinking would be a good idea, because it really doesn't make much\nsense to me to mix stuff that has to do with file-naming conventions,\nwhich a bunch of frontend code needs to know about, together with a\nbunch of backend-only things.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:05:06 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Thu, Oct 13, 2022 at 4:43 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n\nThanks for reviewing.\n\n> > Hm. Agree. But, that requires us to include xlogbackup.h in\n> > xlog_internal.h for SessionBackupState enum in\n> > ResetXLogBackupActivity(). Is that okay?\n>\n> It's not great, but it's not *that* bad, ISTM, mainly because\n> xlog_internal.h will affect less stuff than xlog.h.\n\nMoved them to xlog_internal.h without xlogbackup.h included, please see below.\n\n> > SessionBackupState and it needs to be set before we release WAL insert\n> > locks, see the comment [1].\n>\n> I see. 
Maybe we could keep that enum in xlog.h, instead.\n\nIt's not required now, please see below.\n\n> While looking at how that works: I think calling a local variable\n> \"session_backup_state\" is super confusing, seeing that we have a\n> file-global variable called sessionBackupState. I recommend naming the\n> local \"newstate\" or something along those lines instead.\n>\n> I wonder why does pg_backup_start_callback() not change the backup state\n> before your patch. This seems a gratuitous difference, or is it? If\n> you change that code so that it also sets the status to BACKUP_NONE,\n> then you can pass a bare SessionBackupState to ResetXLogBackupActivity\n> rather than a pointer to one, which is a very strange arrangement that\n> exists only so that you can have a third state (NULL) meaning \"don't\n> change state\" -- that looks quite weird.\n>\n> Alternatively, if you don't want or can't change\n> pg_backup_start_callback to pass a valid state value, another solution\n> might be to pass a separate boolean \"change state\".\n>\n> But I would look at having another patch before your series that changes\n> pg_backup_start_callback to make the code identical for the three\n> callers, then you can simplify the patched code.\n\nThe pg_backup_start_callback() can just go ahead and reset\nsessionBackupState. However, it leads us to the complete removal of\npg_backup_start_callback() itself and use do_pg_abort_backup()\nconsistently across, saving 20 LOC attached as v5-0001.\n\nWith this, the other patches would get simplified a bit too,\nxlogbackup.h footprint got reduced now.\n\nPlease find the v5 patch-set. 
0002-0004 moves the backup code to\nxlogbackup.c/.h.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 13 Oct 2022 19:38:30 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-13, Bharath Rupireddy wrote:\n\n> The pg_backup_start_callback() can just go ahead and reset\n> sessionBackupState. However, it leads us to the complete removal of\n> pg_backup_start_callback() itself and use do_pg_abort_backup()\n> consistently across, saving 20 LOC attached as v5-0001.\n\nOK, that's not bad -- but there is a fatal flaw here: do_pg_backup_start\nonly sets sessionBackupState *after* it has finished setting things up,\nso if you only change it like this, do_pg_abort_backup will indeed run,\nbut it'll do nothing because it hits the \"quick exit\" test. Therefore,\nif a backup aborts while setting up, you'll keep running with forced\npage writes until next postmaster crash or restart. Not good.\n\nISTM we need to give another flag to the callback function besides\nemit_warning: one that says whether to test sessionBackupState or not.\nI suppose the easiest way to do it with no other changes is to turn\n'arg' into a bitmask.\nBut alternatively, we could just remove emit_warning as a flag and have\nthe warning be emitted always; then we can use the boolean for the other\npurpose. I don't think the extra WARNING thrown during backup set-up is\ngoing to be a problem, since it will mostly never be seen anyway (and if\nyou do see it, it's not a lie.)\n\nHowever, what's most problematic about this patch is that it introduces\na pretty serious bug, yet that bug goes unnoticed if you just run the\nbuiltin test suites. 
I only noticed because I added an elog(ERROR,\n\"oops\") in the area protected by ENSURE_ERROR_CLEANUP and a debug\nelog(WARNING) in the cleanup area, then examined the server log after\nthe pg_basebackup test failed; but this is not very workable. I wonder\nwhat would be a good way to keep this in check. The naive way seems to\nbe to run a pg_basebackup, have it abort partway through (how?), then\ntest the server and see if forced page writes are enabled or not.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n", "msg_date": "Fri, 14 Oct 2022 10:24:41 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Fri, Oct 14, 2022 at 10:24:41AM +0200, Alvaro Herrera wrote:\n> However, what's most problematic about this patch is that it introduces\n> a pretty serious bug, yet that bug goes unnoticed if you just run the\n> builtin test suites. I only noticed because I added an elog(ERROR,\n> \"oops\") in the area protected by ENSURE_ERROR_CLEANUP and a debug\n> elog(WARNING) in the cleanup area, then examined the server log after\n> the pg_basebackup test failed; but this is not very workable. I wonder\n> what would be a good way to keep this in check. 
The naive way seems to\n> be to run a pg_basebackup, have it abort partway through (how?), then\n> test the server and see if forced page writes are enabled or not.\n\nSee around the bottom of 010_pg_basebackup.pl, where a combination of\nIPC::Run::start('pg_basebackup') with --max-rate and\npg_terminate_backend() is able to achieve that.\n--\nMichael", "msg_date": "Fri, 14 Oct 2022 17:33:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Fri, Oct 14, 2022 at 1:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-13, Bharath Rupireddy wrote:\n>\n> > The pg_backup_start_callback() can just go ahead and reset\n> > sessionBackupState. However, it leads us to the complete removal of\n> > pg_backup_start_callback() itself and use do_pg_abort_backup()\n> > consistently across, saving 20 LOC attached as v5-0001.\n>\n> OK, that's not bad -- but there is a fatal flaw here: do_pg_backup_start\n> only sets sessionBackupState *after* it has finished setting things up,\n> so if you only change it like this, do_pg_abort_backup will indeed run,\n> but it'll do nothing because it hits the \"quick exit\" test. Therefore,\n> if a backup aborts while setting up, you'll keep running with forced\n> page writes until next postmaster crash or restart. Not good.\n\nUgh.\n\n> ISTM we need to give another flag to the callback function besides\n> emit_warning: one that says whether to test sessionBackupState or not.\n\nI think this needs a new structure, something like below, which makes\nthings complex.\ntypedef struct pg_abort_backup_params\n{\n /* This tells whether or not the do_pg_abort_backup callback can\nquickly exit. */\n bool can_quick_exit;\n /* This tells whether or not the do_pg_abort_backup callback can\nemit a warning. 
*/\n bool emit_warning;\n} pg_abort_backup_params;\n\n> I suppose the easiest way to do it with no other changes is to turn\n> 'arg' into a bitmask.\n\nThis one too isn't good IMO.\n\n> But alternatively, we could just remove emit_warning as a flag and have\n> the warning be emitted always; then we can use the boolean for the other\n> purpose. I don't think the extra WARNING thrown during backup set-up is\n> going to be a problem, since it will mostly never be seen anyway (and if\n> you do see it, it's not a lie.)\n\n+1 for this.\n\nPlease review the v6 patch-set further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 14 Oct 2022 18:51:05 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-14, Bharath Rupireddy wrote:\n\n> On Fri, Oct 14, 2022 at 1:54 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > But alternatively, we could just remove emit_warning as a flag and have\n> > the warning be emitted always; then we can use the boolean for the other\n> > purpose. I don't think the extra WARNING thrown during backup set-up is\n> > going to be a problem, since it will mostly never be seen anyway (and if\n> > you do see it, it's not a lie.)\n> \n> +1 for this.\n\nOK, pushed 0001, but I modified it some more, because the flag is not\nreally a \"quick exit\" optimization but actually critical for\ncorrectness; so I reworked the function to have an if block around it\nrather than an early return, and I added an assert about the flag and\nsession backup state. 
CI was green for it and on manual testing it\nseems to work correctly.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n", "msg_date": "Wed, 19 Oct 2022 10:48:25 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "Another point before we move on with your 0002 is that forcePageWrites\nis no longer useful and we can remove it, as per the attached.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"No deja de ser humillante para una persona de ingenio saber\nque no hay tonto que no le pueda enseñar algo.\" (Jean B. Say)", "msg_date": "Wed, 19 Oct 2022 11:00:29 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 19, 2022 at 2:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Another point before we move on with your 0002 is that forcePageWrites\n> is no longer useful and we can remove it, as per the attached.\n\n+1. The following comment enables us to rely on runningBackups and get\nrid of forcePageWrites completely.\n\n * in progress. forcePageWrites is set to true when runningBackups is\n * non-zero. lastBackupStart is the latest checkpoint redo location used\n\nWhen the standby is in recovery, calls to XLogInsertRecord() or\nAdvanceXLInsertBuffer()) where forcePageWrites is being used, won't\nhappen, no?\n\n * Note that forcePageWrites has no effect during an online backup from\n- * the standby.\n+ * the standby. 
XXX what does this mean??\n\nI removed the 2 more instances of forcePageWrites left-out and tweaked\nthe comments a little and attached 0002.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Oct 2022 15:10:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On 2022-Oct-19, Bharath Rupireddy wrote:\n\n> When the standby is in recovery, calls to XLogInsertRecord() or\n> AdvanceXLInsertBuffer()) where forcePageWrites is being used, won't\n> happen, no?\n> \n> * Note that forcePageWrites has no effect during an online backup from\n> - * the standby.\n> + * the standby. XXX what does this mean??\n\nWell, yes, but when looking at this comment I wonder why do I *care*\nabout this point. I left this comment as you changed it, but I wonder\nif we shouldn't just remove it.\n\n> I removed the 2 more instances of forcePageWrites left-out and tweaked\n> the comments a little and attached 0002.\n\nThanks for looking. Pushed now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n", "msg_date": "Wed, 19 Oct 2022 12:53:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 19, 2022 at 4:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-19, Bharath Rupireddy wrote:\n>\n> > When the standby is in recovery, calls to XLogInsertRecord() or\n> > AdvanceXLInsertBuffer()) where forcePageWrites is being used, won't\n> > happen, no?\n> >\n> > * Note that forcePageWrites has no effect during an online backup from\n> > - * the standby.\n> > + * the standby. 
XXX what does this mean??\n>\n> Well, yes, but when looking at this comment I wonder why do I *care*\n> about this point. I left this comment as you changed it, but I wonder\n> if we shouldn't just remove it.\n\nWell, retaining that comment does no harm IMO.\n\n> > I removed the 2 more instances of forcePageWrites left-out and tweaked\n> > the comments a little and attached 0002.\n>\n> Thanks for looking. Pushed now.\n\nThanks. I will rebase and post the other patches soon.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 19 Oct 2022 16:26:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 19, 2022 at 4:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Oct 19, 2022 at 4:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > Thanks for looking. Pushed now.\n>\n> Thanks. I will rebase and post the other patches soon.\n\nPlease review the attached v7 patch-set.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Oct 2022 17:54:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "0001 seems mostly OK, but I don't like some of these new function names.\nI see you've named them so that they are case-consistent with the name\nof the struct member that they affect, but I don't think that's a good\ncriterion. 
I prefer the one above\nSetlastBackupStart -> XLogBackupSetLastStart()\n\nGetlastFpwDisableRecPtr -> XLogGetLastFPWDisableRecptr()\nGetminRecoveryPoint -> XLogGetMinRecoveryPoint()\n\nI wouldn't say in the xlog_internal.h comment that these new functions\nare for xlogbackup.c to use. The API definition doesn't have to concern\nitself with that. Maybe one day xlogrecovery.c or some other xlog*.c\nwould like to call those functions, and then the comment becomes a lie;\nand what for?\n\n\n0002 is where the interesting stuff happens. I have not reviewed that\npart with any care, but it appears that set_backup_state is pretty much\nuseless. Let's get rid of it instead of moving it. Which also means\nthat we shouldn't introduce reset_backup_status in 0001, I suppose.\nI think xlogfuncs.c is content with having just get_backup_status().\n\nSpeaking of which -- I'm not sure we really want to do 0003.\nxlogfuncs.c is not a big file, the functions are not complex, and there\nare no interesting interactions in those functions with the internals\n(other than get_backup_status). I see that Michael advised the same.\nI propose we keep those functions where they are.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n\n\n", "msg_date": "Wed, 19 Oct 2022 15:00:15 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 19, 2022 at 6:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> 0001 seems mostly OK, but I don't like some of these new function names.\n> I see you've named them so that they are case-consistent with the name\n> of the struct member that they affect, but I don't think that's a good\n> criterion. 
I propose\n>\n> SetrunningBackups -> XLogBackupSetRunning()\n> ResetXLogBackupActivity -> XLogBackupNotRunning()\n> // or maybe SetNotRunning, or ResetRunning? I prefer the one above\n> SetlastBackupStart -> XLogBackupSetLastStart()\n>\n> GetlastFpwDisableRecPtr -> XLogGetLastFPWDisableRecptr()\n> GetminRecoveryPoint -> XLogGetMinRecoveryPoint()\n\nXLogBackupResetRunning() seemed better. +1 for above function names.\n\n> I wouldn't say in the xlog_internal.h comment that these new functions\n> are for xlogbackup.c to use. The API definition doesn't have to concern\n> itself with that. Maybe one day xlogrecovery.c or some other xlog*.c\n> would like to call those functions, and then the comment becomes a lie;\n> and what for?\n\nRemoved.\n\n> 0002 is where the interesting stuff happens. I have not reviewed that\n> part with any care, but it appears that set_backup_state is pretty much\n> useless. Let's get rid of it instead of moving it. Which also means\n> that we shouldn't introduce reset_backup_status in 0001, I suppose.\n> I think xlogfuncs.c is content with having just get_backup_status().\n\nThere's no set_backup_state() at all. We need get_backup_status() for\nxlogfuncs.c and basebackup.c and we need reset_backup_status() for\nXLogBackupResetRunning() sitting in xlog.c.\n\n> Speaking of which -- I'm not sure we really want to do 0003.\n> xlogfuncs.c is not a big file, the functions are not complex, and there\n> are no interesting interactions in those functions with the internals\n> (other than get_backup_status). 
I see that Michael advised the same.\n> I propose we keep those functions where they are.\n\nI'm okay either way.\n\nPlease see the attached v8 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 19 Oct 2022 21:07:04 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, Oct 19, 2022 at 09:07:04PM +0530, Bharath Rupireddy wrote:\n> XLogBackupResetRunning() seemed better. +1 for above function names.\n\nI see what you are doing here. XLogCtl would still live in xlog.c,\nbut we want to have functions that are able to manipulate some of its\nfields. I am not sure to like that much because it introduces a\ncircling dependency between xlog.c and xlogbackup.c. As of HEAD,\nxlog.c calls build_backup_content() from xlogbackup.c, which is fine\nas xlog.c is kind of a central piece that feeds on the insert and\nrecovery pieces. However your patch makes some code paths of\nxlogbackup.c call routines from xlog.c, and I don't think that we\nshould do that.\n\n> I'm okay either way.\n> \n> Please see the attached v8 patch set.\n\nAmong all that, CleanupBackupHistory() is different, still it has a\ndependency with some of the archiving pieces..\n--\nMichael", "msg_date": "Mon, 24 Oct 2022 16:30:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Mon, Oct 24, 2022 at 1:00 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 19, 2022 at 09:07:04PM +0530, Bharath Rupireddy wrote:\n> > XLogBackupResetRunning() seemed better. +1 for above function names.\n>\n> I see what you are doing here. 
XLogCtl would still live in xlog.c,\n> but we want to have functions that are able to manipulate some of its\n> fields.\n\nRight.\n\n> I am not sure to like that much because it introduces a\n> circling dependency between xlog.c and xlogbackup.c. As of HEAD,\n> xlog.c calls build_backup_content() from xlogbackup.c, which is fine\n> as xlog.c is kind of a central piece that feeds on the insert and\n> recovery pieces. However your patch makes some code paths of\n> xlogbackup.c call routines from xlog.c, and I don't think that we\n> should do that.\n\nIf you're talking about header file dependency, there's already header\nfile dependency between them - xlog.c includes xlogbackup.h for\nbuild_backup_content() and xlogbackup.c includes xlog.h for\nwal_segment_size. And, I think the same kind of dependency exists\nbetween xlog.c and xlogrecovery.c.\n\nPlease note that we're trying to reduce xlog.c file size apart from\ncentralizing backup related code.\n\n> > I'm okay either way.\n> >\n> > Please see the attached v8 patch set.\n>\n> Among all that, CleanupBackupHistory() is different, still it has a\n> dependency with some of the archiving pieces..\n\nIs there a problem with that? This function is used solely by backup\nfunctions and it happens to use one of the archiving utility\nfunctions. Please see the other archiving utility functions being used\nelsewhere in the code, not only in xlog.c -\nfor instance, KeepFileRestoredFromArchive() and XLogArchiveNotify().\n\nI'm attaching the v9 patch set herewith after rebasing. 
Please review\nit further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 26 Oct 2022 11:36:17 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" }, { "msg_contents": "On Wed, 26 Oct 2022 at 02:08, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I'm attaching the v9 patch set herewith after rebasing. Please review\n> it further.\n\nIt looks like neither reviewer has been really convinced this is the\ndirection they want to go and I think that's why the thread has been\npretty dead since last October. I think people are pretty hesitant to\ngive bad news but I don't think we're doing you any favours having you\nrebasing and rebasing and trying to justify specific code changes when\nit looks like people are skeptical about the basic approach.\n\nSo I'm going to mark this Rejected for now. Perhaps a fresh approach\nnext release cycle starting with a discussion of the specific goals\nrather than starting with a patch would be better.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Thu, 23 Mar 2023 23:31:49 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move backup-related code to xlogbackup.c/.h" } ]
[ { "msg_contents": "We have discussed the problems caused by the use of pg_stat_reset() and \npg_stat_reset_shared(), specifically the removal of information needed\nby autovacuum. I don't see these risks documented anywhere. Should we\ndo that? Are there other risks?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 28 Sep 2022 11:44:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Sep 28, 2022 at 11:45 AM Bruce Momjian <bruce@momjian.us> wrote:\n> We have discussed the problems caused by the use of pg_stat_reset() and\n> pg_stat_reset_shared(), specifically the removal of information needed\n> by autovacuum. I don't see these risks documented anywhere. Should we\n> do that?\n\n+1.\n\n> Are there other risks?\n\nI don't know.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 12:02:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Thu, 29 Sept 2022 at 04:45, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> We have discussed the problems caused by the use of pg_stat_reset() and\n> pg_stat_reset_shared(), specifically the removal of information needed\n> by autovacuum. I don't see these risks documented anywhere. Should we\n> do that? Are there other risks?\n\nThere was some discussion in [1] a few years back. A few people were\nfor the warning. Nobody seemed to object to it. 
There's a patch in\n[2].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAKJS1f8DTbCHf9gedU0He6ARsd58E6qOhEHM1caomqj_r9MOiQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAKJS1f80o98hcfSk8j%3DfdN09S7Sjz%2BvuzhEwbyQqvHJb_sZw0g%40mail.gmail.com\n\n\n", "msg_date": "Wed, 5 Oct 2022 11:07:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Oct 5, 2022 at 11:07:49AM +1300, David Rowley wrote:\n> On Thu, 29 Sept 2022 at 04:45, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > We have discussed the problems caused by the use of pg_stat_reset() and\n> > pg_stat_reset_shared(), specifically the removal of information needed\n> > by autovacuum. I don't see these risks documented anywhere. Should we\n> > do that? Are there other risks?\n> \n> There was some discussion in [1] a few years back. A few people were\n> for the warning. Nobody seemed to object to it. There's a patch in\n> [2].\n> \n> David\n> \n> [1] https://www.postgresql.org/message-id/flat/CAKJS1f8DTbCHf9gedU0He6ARsd58E6qOhEHM1caomqj_r9MOiQ%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAKJS1f80o98hcfSk8j%3DfdN09S7Sjz%2BvuzhEwbyQqvHJb_sZw0g%40mail.gmail.com\n\nAh, good point. I have slightly reworded the doc patch, attached. \nHowever, the last line has me confused:\n\n\tA database-wide <command>ANALYZE</command> is recommended after\n\tthe statistics have been reset.\n\nAs far as I can tell, analyze updates pg_statistics values, but not\npg_stat_all_tables.n_dead_tup and n_live_tup, which are used by\nautovacuum to trigger vacuum operations. I am afraid we have to\nrecommend VACUUM ANALYZE after pg_stat_reset(), no?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action.
Mark Batterson", "msg_date": "Tue, 11 Oct 2022 11:11:39 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Wed, 12 Oct 2022 at 04:11, Bruce Momjian <bruce@momjian.us> wrote:\n> As far as I can tell, analyze updates pg_statistics values, but not\n> pg_stat_all_tables.n_dead_tup and n_live_tup, which are used by\n> autovacuum to trigger vacuum operations. I am afraid we have to\n> recommend VACUUM ANALYZE after pg_stat_reset(), no?\n\nAs far as I can see ANALYZE will update these fields. I'm looking at\npgstat_report_analyze() called from do_analyze_rel().\n\nIt does:\n\ntabentry->n_live_tuples = livetuples;\ntabentry->n_dead_tuples = deadtuples;\n\nI also see it working from testing:\n\ncreate table t as select x from generate_Series(1,100000)x;\ndelete from t where x > 90000;\nselect pg_sleep(1);\nselect n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\nselect pg_stat_reset();\nselect n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\nanalyze t;\nselect n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\n\nThe result of the final query is:\n\n n_live_tup | n_dead_tup\n------------+------------\n 90000 | 10000\n\nMaybe the random sample taken by ANALYZE for your case didn't happen\nto land on any pages with dead tuples?\n\nDavid\n\n\n", "msg_date": "Wed, 12 Oct 2022 08:50:19 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Oct 12, 2022 at 08:50:19AM +1300, David Rowley wrote:\n> On Wed, 12 Oct 2022 at 04:11, Bruce Momjian <bruce@momjian.us> wrote:\n> > As far as I can tell, analyze updates pg_statistics values, but not\n> > pg_stat_all_tables.n_dead_tup and n_live_tup, which are used by\n> > autovacuum to trigger vacuum operations.
I am afraid we have to\n> > recommend VACUUM ANALYZE after pg_stat_reset(), no?\n> \n> As far as I can see ANALYZE will update these fields. I'm looking at\n> pgstat_report_analyze() called from do_analyze_rel().\n> \n> It does:\n> \n> tabentry->n_live_tuples = livetuples;\n> tabentry->n_dead_tuples = deadtuples;\n> \n> I also see it working from testing:\n> \n> create table t as select x from generate_Series(1,100000)x;\n> delete from t where x > 90000;\n> select pg_sleep(1);\n> select n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\n> select pg_stat_reset();\n> select n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\n> analyze t;\n> select n_live_tup,n_dead_tup from pg_stat_user_tables where relname = 't';\n> \n> The result of the final query is:\n> \n> n_live_tup | n_dead_tup\n> ------------+------------\n> 90000 | 10000\n> \n> Maybe the random sample taken by ANALYZE for your case didn't happen\n> to land on any pages with dead tuples?\n\nAh, good point, I missed that in pgstat_report_analyze(). I will apply\nthe patch then in a few days, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action.
Mark Batterson\n\n\n\n", "msg_date": "Wed, 12 Oct 2022 12:04:08 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Wed, Oct 12, 2022 at 12:04:08PM -0400, Bruce Momjian wrote:\n> > Maybe the random sample taken by ANALYZE for your case didn't happen\n> > to land on any pages with dead tuples?\n> \n> Ah, good point, I missed that in pgstat_report_analyze(). I will apply\n> the patch then in a few days, thanks.\n\nPatch applied back to PG 10, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Mon, 17 Oct 2022 15:07:20 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" }, { "msg_contents": "On Tue, 18 Oct 2022 at 08:07, Bruce Momjian <bruce@momjian.us> wrote:\n> Patch applied back to PG 10, thanks.\n\nThanks.\n\nDavid\n\n\n", "msg_date": "Tue, 18 Oct 2022 13:00:19 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Warning about using pg_stat_reset() and pg_stat_reset_shared()" } ]
[ { "msg_contents": "Hi hackers,\n\nWith PG 15 (rc1 or beta4), I'm observing an interesting memory pattern. I\nhave not seen a similar discussion on the mailing list. If I missed that,\nplease refer me there. The problem that I'm going to explain does not\nhappen on PG 13/14.\n\nIt seems like there is a memory leak(?) with $title. Still, not sure about\nwhat is going on and, thought it'd be useful to share at least my initial\ninvestigation.\n\nAfter running the query and waiting a few minutes (see steps to repro\nbelow), use pg_log_backend_memory_contexts() to get the contexts of the\nbackend executing the command. See that it goes beyond 100GB. And depending\non vm.overcommit_memory, you get an OOM error or OOM crash eventually.\n\n```\n2022-09-28 17:33:38.155 CEST [32224] LOG: level: 2; PortalContext: 1024\ntotal in 1 blocks; 592 free (0 chunks); 432 used: <unnamed>\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 3; ExecutorState:\n*114923929600* total in 13710 blocks; 7783264 free (3 chunks); 114916146336\nused\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 4; TupleSort main: 8192\ntotal in 1 blocks; 3928 free (0 chunks); 4264 used\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 5; TupleSort sort: 295096\ntotal in 8 blocks; 256952 free (67 chunks); 38144 used\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 6; Caller tuples: 8192\ntotal in 1 blocks (0 chunks); 7992 free (0 chunks); 200 used\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 4; TupleSort main: 8192\ntotal in 1 blocks; 3928 free (0 chunks); 4264 used\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 5; TupleSort sort:\n4309736 total in 18 blocks; 263864 free (59 chunks); 4045872 used\n2022-09-28 17:33:38.159 CEST [32224] LOG: level: 6; Caller tuples: 8192\ntotal in 1 blocks (0 chunks); 7992 free (0 chunks); 200 used\n...\n2022-09-28 17:33:38.160 CEST [32224] LOG: Grand total: *114930446784*\nbytes in 13972 blocks; 8802248 free (275 chunks); 114921644536 used\n```\n\nI observed this with a 
merge join involving a table and set returning\nfunction. To simulate the problem with two tables, I have the following\nsteps:\n\n```\nCREATE TABLE t1 (a text);\nCREATE TABLE t2 (a text);\n\n-- make the text a little large by adding 100000000000\nINSERT INTO t1 SELECT (100000000000+i%1000)::text FROM\ngenerate_series(0,10000000) i;\n\n-- make the text a little large by adding 100000000000\nINSERT INTO t2 SELECT (100000000000+i%10000)::text FROM\ngenerate_series(0,10000000) i;\n\n-- to simplify the explain plan, not strictly necessary\nSET max_parallel_workers_per_gather TO 0;\n\n-- these two are necessary so that the problem is triggered\n-- these are helpful to use Merge join and avoid materialization\nSET enable_hashjoin TO false;\nSET enable_material TO false;\n\n-- the join is on a TEXT column\n-- when the join is on INT column with a similar setup, I do not observe\nthis problem\nSELECT count(*) FROM t1 JOIN t2 USING (a);\n```\n\n\nThe explain output for the query like the following:\n```\nexplain SELECT count(*) FROM t1 JOIN t2 USING (a);\n┌─────────────────────────────────────────────────────────────────────────────────┐\n│ QUERY PLAN\n │\n├─────────────────────────────────────────────────────────────────────────────────┤\n│ Aggregate (cost=177735283.36..177735283.37 rows=1 width=8)\n │\n│ -> Merge Join (cost=2556923.81..152703372.24 rows=10012764448\nwidth=0) │\n│ Merge Cond: (t1.a = t2.a)\n │\n│ -> Sort (cost=1658556.19..1683556.63 rows=10000175 width=13)\n │\n│ Sort Key: t1.a\n │\n│ -> Seq Scan on t1 (cost=0.00..154056.75 rows=10000175\nwidth=13) │\n│ -> Sort (cost=1658507.28..1683506.93 rows=9999861 width=13)\n │\n│ Sort Key: t2.a\n │\n│ -> Seq Scan on t2 (cost=0.00..154053.61 rows=9999861\nwidth=13) │\n└─────────────────────────────────────────────────────────────────────────────────┘\n(9 rows)\n```\n\nIn the end, my investigation mostly got me to the following palloc(), where\nwe seem to allocate memory over and over again as memory grows:\n```\n(gdb) 
bt\n#0 __GI___libc_malloc (bytes=bytes@entry=8388608) at malloc.c:3038\n#1 0x00005589f3c55444 in AllocSetAlloc (context=0x5589f4896300, size=14)\nat aset.c:920\n#2 0x00005589f3c5d763 in palloc (size=size@entry=14) at mcxt.c:1082\n#3 0x00005589f3b1f553 in datumCopy (value=94051002161216,\ntypByVal=typByVal@entry=false,\n typLen=<optimized out>) at datum.c:162\n#4 0x00005589f3c6ed0b in tuplesort_getdatum (state=state@entry\n=0x5589f49274e0,\n forward=forward@entry=true, val=0x5589f48d7860, isNull=0x5589f48d7868,\nabbrev=abbrev@entry=0x0)\n at tuplesort.c:2675\n#5 0x00005589f3947925 in ExecSort (pstate=0x5589f48d0a38) at nodeSort.c:200\n#6 0x00005589f393d74c in ExecProcNode (node=0x5589f48d0a38)\n at ../../../src/include/executor/executor.h:259\n#7 ExecMergeJoin (pstate=0x5589f4896cc8) at nodeMergejoin.c:871\n#8 0x00005589f391fbc8 in ExecProcNode (node=0x5589f4896cc8)\n at ../../../src/include/executor/executor.h:259\n#9 fetch_input_tuple (aggstate=aggstate@entry=0x5589f4896670) at\nnodeAgg.c:563\n#10 0x00005589f3923742 in agg_retrieve_direct (aggstate=aggstate@entry\n=0x5589f4896670)\n at nodeAgg.c:2441\n....\n```\n\nCould this be a bug, or am I missing anything?\n\nThanks,\nOnder KALACI", "msg_date": "Wed, 28 Sep 2022 18:08:41 +0200", "msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>", "msg_from_op": true, "msg_subject": "A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com> writes:\n> With PG 15 (rc1 or beta4), I'm observing an interesting memory pattern.\n\nYup, that is a leak.
valgrind'ing it blames this call chain:\n\n==00:00:16:12.228 4011013== 790,404,056 bytes in 60,800,312 blocks are definitely lost in loss record 1,108 of 1,108\n==00:00:16:12.228 4011013== at 0x9A5104: palloc (mcxt.c:1170)\n==00:00:16:12.228 4011013== by 0x89F8D9: datumCopy (datum.c:175)\n==00:00:16:12.228 4011013== by 0x9B5BEE: tuplesort_getdatum (tuplesortvariants.c:882)\n==00:00:16:12.228 4011013== by 0x6FA8B3: ExecSort (nodeSort.c:200)\n==00:00:16:12.228 4011013== by 0x6F1E87: ExecProcNode (executor.h:259)\n==00:00:16:12.228 4011013== by 0x6F1E87: ExecMergeJoin (nodeMergejoin.c:871)\n==00:00:16:12.228 4011013== by 0x6D7800: ExecProcNode (executor.h:259)\n==00:00:16:12.228 4011013== by 0x6D7800: fetch_input_tuple (nodeAgg.c:562)\n==00:00:16:12.228 4011013== by 0x6DAE2E: agg_retrieve_direct (nodeAgg.c:2454)\n==00:00:16:12.228 4011013== by 0x6DAE2E: ExecAgg (nodeAgg.c:2174)\n==00:00:16:12.228 4011013== by 0x6C6122: ExecProcNode (executor.h:259)\n==00:00:16:12.228 4011013== by 0x6C6122: ExecutePlan (execMain.c:1636)\n\nand bisecting fingers this commit as the guilty party:\n\ncommit 91e9e89dccdfdf4216953d3d8f5515dcdef177fb\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Thu Jul 22 14:03:19 2021 +1200\n\n Make nodeSort.c use Datum sorts for single column sorts\n\nLooks like that forgot that tuplesort_getdatum()'s result has to\nbe freed by the caller.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 13:35:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "I wrote:\n> and bisecting fingers this commit as the guilty party:\n\n> commit 91e9e89dccdfdf4216953d3d8f5515dcdef177fb\n> Author: David Rowley <drowley@postgresql.org>\n> Date: Thu Jul 22 14:03:19 2021 +1200\n\n> Make nodeSort.c use Datum sorts for single column sorts\n\nAfter looking at that for a little while, I wonder if we shouldn't\nfix 
this by restricting the Datum-sort path to be used only with\npass-by-value data types. That'd require only a minor addition\nto the new logic in ExecInitSort.\n\nThe alternative of inserting a pfree of the old value would complicate\nthe code nontrivially, I think, and really it would necessitate a\ncomplete performance re-test. I'm wondering if the claimed speedup\nfor pass-by-ref types wasn't fictional and based on skipping the\nrequired pfrees. Besides, if you think this code is hot enough that\nyou don't want to add a test-and-branch per tuple (a claim I also\ndoubt, BTW) then you probably don't want to add such overhead into\nthe pass-by-value case where the speedup is clear.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 14:34:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Thanks for investigating this and finding the guilty commit.\n\nOn Thu, 29 Sept 2022 at 07:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After looking at that for a little while, I wonder if we shouldn't\n> fix this by restricting the Datum-sort path to be used only with\n> pass-by-value data types. That'd require only a minor addition\n> to the new logic in ExecInitSort.\n\nI'm also wondering if that's the best fix given the timing of this discovery.\n\n> The alternative of inserting a pfree of the old value would complicate\n> the code nontrivially, I think, and really it would necessitate a\n> complete performance re-test. I'm wondering if the claimed speedup\n> for pass-by-ref types wasn't fictional and based on skipping the\n> required pfrees. 
Besides, if you think this code is hot enough that\n> you don't want to add a test-and-branch per tuple (a claim I also\n> doubt, BTW) then you probably don't want to add such overhead into\n> the pass-by-value case where the speedup is clear.\n\nI'm wondering if the best way to fix it if doing it that way would be\nto invent tuplesort_getdatum_nocopy() which would be the same as\ntuplesort_getdatum() except it wouldn't do the datumCopy for byref\ntypes. It looks like tuplesort_gettupleslot() when copy==false just\ndirectly stores the MinimalTuple that's in stup.tuple and shouldFree\nis set to false.\n\nGoing by [1], it looks like I saw gains in test 6, which was a byref\nDatum. Skipping the datumCopy() I imagine could only make the gains\nslightly higher on that. That puts me a bit more on the fence about\nthe best fix for PG15.\n\nI've attached a patch to restrict the optimisation to byval types in\nthe meantime.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvrWV%3Dv0qKsC9_BHqhCn9TusrNvCaZDz77StCO--fmgbKA%40mail.gmail.com", "msg_date": "Thu, 29 Sep 2022 08:47:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I'm wondering if the best way to fix it if doing it that way would be\n> to invent tuplesort_getdatum_nocopy() which would be the same as\n> tuplesort_getdatum() except it wouldn't do the datumCopy for byref\n> types.\n\nYeah, perhaps. We'd need a clear spec on how long the Datum could\nbe presumed good --- probably till the next tuplesort_getdatum_nocopy\ncall, but that'd need to be checked --- and then check if that is\nsatisfactory for nodeSort's purposes.\n\nIf we had such a thing, I wonder if any of the other existing\ntuplesort_getdatum callers would be happier with that. 
nodeAgg for\none is tediously freeing the result, but could we drop that logic?\n(I hasten to add that I'm not proposing we touch that for v15.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 15:57:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Thu, 29 Sept 2022 at 08:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I'm wondering if the best way to fix it if doing it that way would be\n> > to invent tuplesort_getdatum_nocopy() which would be the same as\n> > tuplesort_getdatum() except it wouldn't do the datumCopy for byref\n> > types.\n>\n> Yeah, perhaps. We'd need a clear spec on how long the Datum could\n> be presumed good --- probably till the next tuplesort_getdatum_nocopy\n> call, but that'd need to be checked --- and then check if that is\n> satisfactory for nodeSort's purposes.\n\nYeah, I think the same rules around scope apply as\ntuplesort_gettupleslot() with copy==false. We could do it by adding a\ncopy flag to the existing function, but I'd rather not add the\nbranching to that function. It's probably just better to duplicate it\nand adjust.\n\n> If we had such a thing, I wonder if any of the other existing\n> tuplesort_getdatum callers would be happier with that. nodeAgg for\n> one is tediously freeing the result, but could we drop that logic?\n\nLooking at process_ordered_aggregate_single(), it's likely more\nefficient to use the nocopy version and just perform a datumCopy()\nwhen we need to store the oldVal. At least, that would be more\nefficient when many values are being skipped due to being the same as\nthe last one.\n\nI've just pushed the disable byref Datums patch I posted earlier. 
I\nonly made a small adjustment to make use of the TupleDescAttr() macro.\nÖnder, thank you for the report.\n\nDavid\n\n\n", "msg_date": "Thu, 29 Sep 2022 11:58:17 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I'm wondering if the best way to fix it if doing it that way would be\n> > to invent tuplesort_getdatum_nocopy() which would be the same as\n> > tuplesort_getdatum() except it wouldn't do the datumCopy for byref\n> > types.\n>\n> Yeah, perhaps. We'd need a clear spec on how long the Datum could\n> be presumed good --- probably till the next tuplesort_getdatum_nocopy\n> call, but that'd need to be checked --- and then check if that is\n> satisfactory for nodeSort's purposes.\n\nI am reminded of the discussion that led to bugfix commit c2d4eb1b\nsome years back.\n\nAs the commit message of that old bugfix notes, tuplesort_getdatum()\nand tuplesort_gettupleslot() are \"the odd ones out\" among \"get tuple\"\nroutines (i.e. routines that get a tuple from a tuplesort by calling\ntuplesort_gettuple_common()). 
We used to sometimes do that with\ntuplesort_getindextuple() and possibly other such routines, but the\nneed for that capability was eliminated on the caller side around the\nsame time as the bugfix went in.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 16:00:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 4:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am reminded of the discussion that led to bugfix commit c2d4eb1b\n> some years back.\n\nAlso potentially relevant: the 2017 commit fa117ee4 anticipated adding\na \"copy\" argument to tuplesort_getdatum() (the same commit added such\na \"copy\" argument to tuplesort_gettupleslot()). I see that that still\nhasn't happened to tuplesort_getdatum() all these years later.
I\n> only made a small adjustment to make use of the TupleDescAttr() macro.\n> Önder, thank you for the report.\n\nWouldn't it be better to have 3a58176 reflect the non-optimization\npath in the EXPLAIN output of a new regression test if none of the\nexisting tests are able to show any difference?\n--\nMichael", "msg_date": "Thu, 29 Sep 2022 08:30:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not\n below Materialize node" }, { "msg_contents": "On Thu, 29 Sept 2022 at 12:30, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Sep 29, 2022 at 11:58:17AM +1300, David Rowley wrote:\n> > I've just pushed the disable byref Datums patch I posted earlier. I\n> > only made a small adjustment to make use of the TupleDescAttr() macro.\n> > Önder, thank you for the report.\n>\n> Wouldn't it be better to have 3a58176 reflect the non-optimization\n> path in the EXPLAIN output of a new regression test if none of the\n> existing tests are able to show any difference?\n\nThere's nothing in EXPLAIN that shows that this optimization occurs.\nOr, are you proposing that you think there should be something? 
and\nfor 15??\n\nDavid\n\n\n", "msg_date": "Thu, 29 Sep 2022 12:34:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Wouldn't it be better to have 3a58176 reflect the non-optimization\n> path in the EXPLAIN output of a new regression test if none of the\n> existing tests are able to show any difference?\n\nThis decision is not visible in EXPLAIN in any case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 19:35:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 07:35:07PM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Wouldn't it be better to have 3a58176 reflect the non-optimization\n>> path in the EXPLAIN output of a new regression test if none of the\n>> existing tests are able to show any difference?\n> \n> This decision is not visible in EXPLAIN in any case.\n\nOkay, thanks!\n--\nMichael", "msg_date": "Thu, 29 Sep 2022 08:57:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not\n below Materialize node" }, { "msg_contents": "On Thu, 29 Sept 2022 at 12:07, Peter Geoghegan <pg@bowt.ie> wrote:\n> Also potentially relevant: the 2017 commit fa117ee4 anticipated adding\n> a \"copy\" argument to tuplesort_getdatum() (the same commit added such\n> a \"copy\" argument to tuplesort_gettupleslot()). I see that that still\n> hasn't happened to tuplesort_getdatum() all these years later. 
Might\n> be a good idea to do it in the next year or two, though.\n>\n> If David is interested in pursuing this now then I certainly won't object.\n\nJust while this is fresh in my head, I wrote some code to make this\nhappen. My preference would be not to add the \"copy\" param to the\nexisting function and instead just add a new function to prevent\nadditional branching.\n\nThe attached puts back the datum sort in nodeSort.c for byref types\nand adjusts process_ordered_aggregate_single() to make use of this\nfunction.\n\nI did a quick benchmark to see if this help DISTINCT aggregate any:\n\ncreate table t1 (a varchar(32) not null, b varchar(32) not null);\ninsert into t1 select md5((x%10)::text),md5((x%10)::text) from\ngenerate_Series(1,1000000)x;\nvacuum freeze t1;\ncreate index on t1(a);\n\nWith a work_mem of 256MBs I get:\n\nquery = select max(distinct a), max(distinct b) from t1;\n\nMaster:\nlatency average = 313.197 ms\n\nPatched:\nlatency average = 304.335 ms\n\nSo not a very impressive speedup there (about 3%)\n\nSome excerpts from perf top show:\n\nMaster:\n 1.40% postgres [.] palloc\n 1.13% postgres [.] tuplesort_getdatum\n 0.77% postgres [.] datumCopy\n\nPatched:\n 0.91% postgres [.] tuplesort_getdatum_nocopy\n 0.65% postgres [.] palloc\n\nI stared for a while at the mode_final() function and thought maybe we\ncould use the nocopy variant there. I just didn't quite pluck up the\nmotivation to write any code to see if it could be made faster.\n\nDavid", "msg_date": "Thu, 29 Sep 2022 14:12:44 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 6:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Master:\n> latency average = 313.197 ms\n>\n> Patched:\n> latency average = 304.335 ms\n>\n> So not a very impressive speedup there (about 3%)\n\nWorth a try, at least. 
Having a more consistent interface is valuable\nin itself too.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 18:31:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Thu, 29 Sept 2022 at 14:32, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Sep 28, 2022 at 6:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Master:\n> > latency average = 313.197 ms\n> >\n> > Patched:\n> > latency average = 304.335 ms\n> >\n> > So not a very impressive speedup there (about 3%)\n>\n> Worth a try, at least. Having a more consistent interface is valuable\n> in itself too.\n\nJust testing the datum sort in nodeSort.c with the same table as\nbefore but using the query:\n\nselect b from t1 order by b offset 1000000;\n\nMaster:\nlatency average = 344.763 ms\n\nPatched:\nlatency average = 268.374 ms\n\nabout 28% faster.\n\nI'll take this to another thread and put it in the next CF\n\nDavid\n\n\n", "msg_date": "Thu, 29 Sep 2022 17:59:16 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 9:59 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> select b from t1 order by b offset 1000000;\n>\n> Master:\n> latency average = 344.763 ms\n>\n> Patched:\n> latency average = 268.374 ms\n>\n> about 28% faster.\n\nThat's more like it!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 22:13:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "> I've just pushed the disable byref Datums patch I posted earlier. 
I\n> only made a small adjustment to make use of the TupleDescAttr() macro.\n> Önder, thank you for the report.\n\nThank you David for taking care of this.\n\n> Yeah, I think the same rules around scope apply as\n> tuplesort_gettupleslot() with copy==false. We could do it by adding a\n> copy flag to the existing function, but I'd rather not add the\n> branching to that function. It's probably just better to duplicate it\n> and adjust.\n> \n\nFor the record, I tried to see if gcc would optimize the function by \ngenerating two different versions when copy is true or false, thus getting rid \nof the branching while still having only one function to deal with. Using the \n-fipa-cp-clone (or even the whole set of additional flags coming with -O3), it \ndoes generate a special-case version of the function, but it seems to then \nonly be used by heapam_index_validate_scan and \npercentile_cont_multi_final_common. This is from my investigation looking for \nreferences to the specialized version in the DWARF debug information.\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 15:52:59 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n>> Yeah, I think the same rules around scope apply as\n>> tuplesort_gettupleslot() with copy==false. We could do it by adding a\n>> copy flag to the existing function, but I'd rather not add the\n>> branching to that function. 
It's probably just better to duplicate it\n>> and adjust.\n\n> For the record, I tried to see if gcc would optimize the function by \n> generating two different versions when copy is true or false, thus getting rid \n> of the branching while still having only one function to deal with.\n\nTBH, I think this is completely ridiculous over-optimization.\nThere's exactly zero evidence that a second copy of the function\nwould improve performance, or do anything but contribute to code\nbloat (which does have a distributed performance cost).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 10:10:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Le jeudi 29 septembre 2022, 16:10:03 CEST Tom Lane a écrit :\n> Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> >> Yeah, I think the same rules around scope apply as\n> >> tuplesort_gettupleslot() with copy==false. We could do it by adding a\n> >> copy flag to the existing function, but I'd rather not add the\n> >> branching to that function. 
It's probably just better to duplicate it\n> >> and adjust.\n> > \n> > For the record, I tried to see if gcc would optimize the function by\n> > generating two different versions when copy is true or false, thus getting \nrid\n> > of the branching while still having only one function to deal with.\n> \n> TBH, I think this is completely ridiculous over-optimization.\n> There's exactly zero evidence that a second copy of the function\n> would improve performance, or do anything but contribute to code\n> bloat (which does have a distributed performance cost).\n\nI wasn't commenting on the merit of the optimization, but just that I tried to \nget gcc to apply it itself, which it doesn't.\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 16:15:25 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Thu, Sep 29, 2022 at 7:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH, I think this is completely ridiculous over-optimization.\n> There's exactly zero evidence that a second copy of the function\n> would improve performance, or do anything but contribute to code\n> bloat (which does have a distributed performance cost).\n\nI thought that that was unjustified myself.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Sep 2022 08:07:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Hi David, Tom, all,\n\n\n> I've just pushed the disable byref Datums patch I posted earlier. I\n> only made a small adjustment to make use of the TupleDescAttr() macro.\n> Önder, thank you for the report.\n>\n>\nWith this commit, I re-run the query patterns where we observed the\nproblem, all looks good now. 
Wanted to share this information as fyi.\n\nThanks for the quick turnaround!\n\nOnder KALACI", "msg_date": "Thu, 29 Sep 2022 19:14:06 +0200", "msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" } ]
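An aside for readers skimming the thread above: the `copy` semantics being debated (a copying fetch whose result the caller owns, versus a no-copy variant whose result is only valid until the next fetch) can be illustrated outside PostgreSQL. The sketch below is hypothetical Python, not server code — `ToyDatumSort`, `getdatum` and `drain` are invented names — but it shows why skipping the per-tuple copy removes exactly the `palloc`/`datumCopy` work that dominated the `perf top` profiles quoted earlier.

```python
class ToyDatumSort:
    """Toy stand-in for a tuplesort: it owns the memory of its current value."""

    def __init__(self, values):
        self._sorted = sorted(values)
        self._pos = 0
        self.copies_made = 0  # stands in for per-value datumCopy() calls

    def getdatum(self, copy=True):
        """Return the next value, or None when the sort is exhausted.

        With copy=True the caller gets its own copy (safe to keep around).
        With copy=False the caller borrows the sort's value, which is only
        valid until the next fetch -- the scope rule discussed in the thread.
        """
        if self._pos >= len(self._sorted):
            return None
        val = self._sorted[self._pos]
        self._pos += 1
        if copy:
            self.copies_made += 1
            return bytes(val)  # fresh allocation, caller-owned
        return val             # borrowed reference, no allocation


def drain(sort, copy):
    """Fetch every value from the sort, counting how many were returned."""
    n = 0
    while sort.getdatum(copy=copy) is not None:
        n += 1
    return n
```

In the real API the no-copy contract matches what the thread describes for tuplesort_gettupleslot() with copy==false: the caller must not rely on the value after the next fetch or after the sort is ended.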
[ { "msg_contents": ">CREATE TABLE t1 (a text);\n>CREATE TABLE t2 (a text);\n\n>-- make the text a little large by adding 100000000000\n>INSERT INTO t1 SELECT (100000000000+i%1000)::text FROM\n>generate_series(0,10000000) i;\n\n>-- make the text a little large by adding 100000000000\n>INSERT INTO t2 SELECT (100000000000+i%10000)::text FROM\n>generate_series(0,10000000) i;\n\n>-- to simplify the explain plan, not strictly necessary\n>SET max_parallel_workers_per_gather TO 0;\n\n>-- these two are necessary so that the problem is triggered\n>-- these are helpful to use Merge join and avoid materialization\n>SET enable_hashjoin TO false;\n>SET enable_material TO false;\n\n>-- the join is on a TEXT column\n>-- when the join is on INT column with a similar setup, I do not observe\n>this problem\n>SELECT count(*) FROM t1 JOIN t2 USING (a);\n>```\n\n>The explain output for the query like the following:\n>```\n>explain SELECT count(*) FROM t1 JOIN t2 USING (a);\n\nI run your test here with a fix attached.\n\nCan you retake your test with the patch attached?\nregards,\n\nRanier Vilela", "msg_date": "Wed, 28 Sep 2022 13:56:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "A potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "Hi,\n\nThanks for replying so quickly!\n\nI run your test here with a fix attached.\n>\n> Can you retake your test with the patch attached?\n>\n>\n> Unfortunately, with the patch, I still see the memory usage increase and\nget the OOMs\n\nThanks,\nOnder KALACI\n\nHi,Thanks for replying so quickly!I run your test here with a fix attached.Can you retake your test with the patch attached?Unfortunately, with the patch, I still see the memory usage increase and get the OOMsThanks,Onder KALACI", "msg_date": "Wed, 28 Sep 2022 19:23:53 +0200", "msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A 
potential memory leak on Merge Join when Sort node is not below\n Materialize node" }, { "msg_contents": "On Wed, Sep 28, 2022 at 14:24, Önder Kalacı <onderkalaci@gmail.com>\nwrote:\n\n> Hi,\n>\n> Thanks for replying so quickly!\n>\n> I run your test here with a fix attached.\n>>\n>> Can you retake your test with the patch attached?\n>>\n>>\n>> Unfortunately, with the patch, I still see the memory usage increase and\n> get the OOMs\n>\nThanks for sharing the result.\n\nregards,\nRanier Vilela", "msg_date": "Wed, 28 Sep 2022 14:29:57 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A potential memory leak on Merge Join when Sort node is not below\n Materialize node" } ]
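For anyone who wants to repeat the test above at different scales — for example while trying out a candidate fix, as Ranier did — the quoted reproducer can be generated from a small script. This is only a convenience sketch; `build_repro_sql` is an invented helper, and the row counts and GUC settings simply mirror the ones quoted earlier in the thread.

```python
def build_repro_sql(rows=10_000_000, t1_mod=1_000, t2_mod=10_000):
    """Emit the SQL for the merge-join memory reproducer quoted above."""
    return "\n".join([
        "CREATE TABLE t1 (a text);",
        "CREATE TABLE t2 (a text);",
        "-- make the text a little large by adding 100000000000",
        f"INSERT INTO t1 SELECT (100000000000+i%{t1_mod})::text FROM",
        f"generate_series(0,{rows}) i;",
        f"INSERT INTO t2 SELECT (100000000000+i%{t2_mod})::text FROM",
        f"generate_series(0,{rows}) i;",
        "-- force a Merge Join without a Materialize node",
        "SET max_parallel_workers_per_gather TO 0;",
        "SET enable_hashjoin TO false;",
        "SET enable_material TO false;",
        "-- the join must be on a TEXT (byref) column to trigger the problem",
        "SELECT count(*) FROM t1 JOIN t2 USING (a);",
    ])
```

Feed the output to psql and watch the backend's memory while the final query runs.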
[ { "msg_contents": "Hi hackers,\n\nThe proposed patch removes the redundant `fixOwner` argument.\n\n\"\"\"\nThe fixOwner bool argument ended up always being true, so it doesn't do much\nanymore. Removing it doesn't necessarily affect the performance a lot, but at\nleast improves the readability. The procedure is static thus the extension\nauthors are not going to be upset.\n\"\"\"\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 28 Sep 2022 20:14:23 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Refactor UnpinBuffer()" }, { "msg_contents": "On Wed, Sep 28, 2022 at 08:14:23PM +0300, Aleksander Alekseev wrote:\n> + ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n> +\n> /* not moving as we're likely deleting it soon anyway */\n> ref = GetPrivateRefCountEntry(b, false);\n> Assert(ref != NULL);\n> -\n> - if (fixOwner)\n> - ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n\nIs it safe to move the call to ResourceOwnerForgetBuffer() to before the\ncall to GetPrivateRefCountEntry()? 
From my quick skim of the code, it\n seems like it should be safe, but I thought I'd ask the question.\nOtherwise, LGTM.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 14:08:28 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "HI,\n\nOn Sep 29, 2022, 05:08 +0800, Nathan Bossart <nathandbossart@gmail.com>, wrote:\n> On Wed, Sep 28, 2022 at 08:14:23PM +0300, Aleksander Alekseev wrote:\n> > + ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n> > +\n> > /* not moving as we're likely deleting it soon anyway */\n> > ref = GetPrivateRefCountEntry(b, false);\n> > Assert(ref != NULL);\n> > -\n> > - if (fixOwner)\n> > - ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n+1, Good catch.\n>\n> Is it safe to move the call to ResourceOwnerForgetBuffer() to before the\n> call to GetPrivateRefCountEntry()? From my quick skim of the code, it\n> seems like it should be safe, but I thought I'd ask the question.\nSame question, have a look, it doesn’t seem to matter.\n\nRegards,\nZhang Mingli", "msg_date": "Thu, 29 Sep 2022 09:05:18 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "Nathan, Zhang,\n\nThanks for the review!\n\n> Is it safe to move the call to ResourceOwnerForgetBuffer() to before the\n> call to GetPrivateRefCountEntry()? From my quick skim of the code, it\n> seems like it should be safe, but I thought I'd ask the question.\n>\n> Same question, have a look, it doesn’t seem to matter.\n\nYep, I had some doubts here as well but it seems to be safe.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 29 Sep 2022 11:22:24 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "On Thu, Sep 29, 2022 at 1:52 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> > Is it safe to move the call to ResourceOwnerForgetBuffer() to before the\n> > call to GetPrivateRefCountEntry()? 
From my quick skim of the code, it\n> > seems like it should be safe, but I thought I'd ask the question.\n> >\n> > Same question, have a look, it doesn’t seem to matter.\n>\n> Yep, I had some doubts here as well but it seems to be safe.\n\nThe commit 2d115e47c861878669ba0814b3d97a4e4c347e8b that removed the\nlast UnpinBuffer() call with fixOwner as false in ReleaseBuffer().\nThis commit is pretty old and +1 for removing the unused function\nparameter.\n\nAlso, it looks like changing the order of GetPrivateRefCountEntry()\nand ResourceOwnerForgetBuffer() doesn't have any effect as they are\nindependent, but do we want to actually do that if there's no specific\nreason?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 14:29:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "Hi Bharath,\n\n> Also, it looks like changing the order of GetPrivateRefCountEntry()\n> and ResourceOwnerForgetBuffer() doesn't have any effect as they are\n> independent, but do we want to actually do that if there's no specific\n> reason?\n\nIf we keep the order as it is now the code will become:\n\n```\n ref = GetPrivateRefCountEntry(b, false);\n Assert(ref != NULL);\n\n ResourceOwnerForgetBuffer(CurrentResourceOwner, b);\n\n Assert(ref->refcount > 0);\n ref->refcount--;\n if (ref->refcount == 0)\n```\n\nI figured it would not hurt to gather all the calls and Asserts\nrelated to `ref` together. This is the only reason why I choose to\nrearrange the order of the calls in the patch.\n\nSo, no strong opinion in this respect from my side. 
I'm fine with\nkeeping the existing order.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 29 Sep 2022 14:47:51 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "I've marked this one as ready-for-committer.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 10:35:20 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor UnpinBuffer()" }, { "msg_contents": "On Thu, Sep 29, 2022 at 10:35:20AM -0700, Nathan Bossart wrote:\n> I've marked this one as ready-for-committer.\n\nUnpinBuffer() is local to bufmgr.c, so it would not be an issue for\nexternal code, and that's 10 callers that don't need to worry about\nthat anymore. 2d115e4 is from 2015, and nobody has used this option\nsince, additionally.\n\nAnyway, per the rule of consistency with the surroundings (see\nReleaseBuffer() and ReleaseAndReadBuffer()), it seems to me that there\nis a good case for keeping the adjustment of CurrentResourceOwner\nbefore any refcount checks. I have also kept a mention to\nCurrentResourceOwner in the top comment of the function, and applied\nthat.\n--\nMichael", "msg_date": "Fri, 30 Sep 2022 15:59:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Refactor UnpinBuffer()" } ]
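A footnote on the ordering question that came up in the review above: ResourceOwnerForgetBuffer() and GetPrivateRefCountEntry() operate on independent bookkeeping structures — the resource owner's list of remembered buffers versus the backend-private refcount table — which is why swapping their order "doesn't seem to matter". A toy model (hypothetical Python with invented names, not server code) makes that order-independence checkable:

```python
def unpin(owner_buffers, private_refcount, buf, forget_owner_first):
    """Toy model of the UnpinBuffer() bookkeeping discussed above.

    owner_buffers models the resource owner's remembered buffers;
    private_refcount models the backend-private refcount table.
    The two operations touch independent structures, so either
    order yields the same final state.
    """
    if forget_owner_first:
        owner_buffers.remove(buf)       # ResourceOwnerForgetBuffer() first
    ref = private_refcount[buf]         # GetPrivateRefCountEntry()
    assert ref > 0
    private_refcount[buf] = ref - 1
    if not forget_owner_first:
        owner_buffers.remove(buf)       # ... or ForgetBuffer() afterwards
    if private_refcount[buf] == 0:
        del private_refcount[buf]       # last local pin released
    return owner_buffers, private_refcount
```

Running it both ways on the same starting state ends in the same place, mirroring the conclusion reached in the thread.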
[ { "msg_contents": "OK, so the recent commit and revert of the 56-bit relfilenode patch\nrevealed a few issues that IMHO need design-level input. Let me try to\nsurface those here, starting a new thread to separate this discussion\nfrom the clutter:\n\n1. Commit Record Alignment. ParseCommitRecord() and ParseAbortRecord()\nare dependent on every subsidiary structure that can be added to a\ncommit or abort record requiring exactly 4-byte alignment. IMHO, this\nseems awfully fragile, even leaving the 56-bit patch aside. Prepare\nrecords seem to have a much saner scheme: they've also got a bunch of\ndifferent things that can be stuck onto the main record, but they\nmaxalign each top-level thing that they stick in there. So\nParsePrepareRecord() doesn't have to make any icky alignment\nassumptions the way ParseCommitRecord() and ParseAbortRecord() do.\nUnfortuantely, that scheme doesn't work as well for commit records,\nbecause the very first top-level thing only needs 2 bytes. We're\ncurrently using 4, and it would obviously be nicer to cut that down to\n2 than to have it go up to 8. We could try to rejigger things around\nsomehow to avoid needing that 2-byte quantity in there as a separate\ntoplevel item, but I'm not quite sure how to do that, or we could just\ncopy everything to ensure alignment, but that seems kind of expensive.\n\nIf we don't decide to do either of those things, we should at least\nbetter document, and preferably enforce via assets, the requirement\nthat these structs be exactly 4-byte aligned, so that nobody else\nmakes the same mistake in the future.\n\n2. WAL Size. Block references in the WAL are by RelFileLocator, so if\nyou make RelFileLocators bigger, WAL gets bigger. 
We'd have to test\nthe exact impact of this, but it seems a bit scary: if you have a WAL\nstream with few FPIs doing DML on a narrow table, probably most\nrecords will contain 1 block reference (and occasionally more, but I\nguess most will use BKPBLOCK_SAME_REL) and adding 4 bytes to that\nblock reference feels like it might add up to something significant. I\ndon't really see any way around this, either: if you make relfilenode\nvalues wider, they take up more space. Perhaps there's a way to claw\nthat back elsewhere, or we could do something really crazy like switch\nto variable-width representations of integer quantities in WAL\nrecords, but there doesn't seem to be any simple way forward other\nthan, you know, deciding that we're willing to pay the cost of the\nadditional WAL volume.\n\n3. Sinval Message Size. Sinval messages are 16 bytes right now.\nThey'll have to grow to 20 bytes if we do this. There's even less room\nfor bit-squeezing here than there is for the WAL stuff. I'm skeptical\nthat this really matters, but Tom seems concerned.\n\n4. Other Uses of RelFileLocator. There are a bunch of structs I\nhaven't looked into yet that also embed RelFileLocator, which may have\ntheir own issues with alignment, padding, and/or size: ginxlogSplit,\nginxlogDeletePage, ginxlogUpdateMeta, gistxlogPageReuse,\nxl_heap_new_cid, xl_btree_reuse_page, LogicalRewriteMappingData,\nxl_smgr_truncate, xl_seq_rec, ReorderBufferChange, FileTag. I think a\nbunch of these are things that get written into WAL, but at least some\nof them seem like they probably don't get written into WAL enough to\nmatter. Needs more investigation, though.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Sep 2022 17:05:53 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "problems with making relfilenodes 56-bits" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> 3. Sinval Message Size. 
Sinval messages are 16 bytes right now.\n> They'll have to grow to 20 bytes if we do this. There's even less room\n> for bit-squeezing here than there is for the WAL stuff. I'm skeptical\n> that this really matters, but Tom seems concerned.\n\nAs far as that goes, I'm entirely prepared to accept a conclusion\nthat the benefits of widening relfilenodes justify whatever space\nor speed penalties may exist there. However, we cannot honestly\nmake that conclusion if we haven't measured said penalties.\nThe same goes for the other issues you raise here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Sep 2022 19:08:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Wed, Sep 28, 2022 at 4:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As far as that goes, I'm entirely prepared to accept a conclusion\n> that the benefits of widening relfilenodes justify whatever space\n> or speed penalties may exist there. However, we cannot honestly\n> make that conclusion if we haven't measured said penalties.\n> The same goes for the other issues you raise here.\n\nI generally agree, but the devil is in the details.\n\nI tend to agree with Robert that many individual WAL record types just\ndon't appear frequently enough to matter (it also helps that even the\nper-record space overhead with wider 56-bit relfilenodes isn't so\nbad). Just offhand I'd say that ginxlogSplit, ginxlogDeletePage,\nginxlogUpdateMeta, gistxlogPageReuse and xl_btree_reuse_page are\nlikely to be in this category (though would be nice to see some\nnumbers for those).\n\nI'm much less sure about the other record types. Any WAL records with\na variable number of relfilenode entries seem like they might be more\nof a problem. But I'm not ready to accept that that cannot be\nameliorated in some way. Just for example, it wouldn't be impossible to\ndo some kind of varbyte encoding for some record types. 
How many times\nwill the cluster actually need billions of relfilenodes? It has to\nwork, but maybe it can be suboptimal from a space overhead\nperspective.\n\nI'm not saying that we need to do anything fancy just yet. I'm\njust saying that there definitely *are* options. Maybe it's not really\nnecessary to come up with something like a varbyte encoding, and maybe\nthe complexity it imposes just won't be worth it -- I really have no\nopinion on that just yet.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 28 Sep 2022 16:24:13 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Thu, 29 Sep 2022, 00:06 Robert Haas, <robertmhaas@gmail.com> wrote:\n>\n> 2. WAL Size. Block references in the WAL are by RelFileLocator, so if\n> you make RelFileLocators bigger, WAL gets bigger. We'd have to test\n> the exact impact of this, but it seems a bit scary: if you have a WAL\n> stream with few FPIs doing DML on a narrow table, probably most\n> records will contain 1 block reference (and occasionally more, but I\n> guess most will use BKPBLOCK_SAME_REL) and adding 4 bytes to that\n> block reference feels like it might add up to something significant. I\n> don't really see any way around this, either: if you make relfilenode\n> values wider, they take up more space. 
Perhaps there's a way to claw\n> that back elsewhere, or we could do something really crazy like switch\n> to variable-width representations of integer quantities in WAL\n> records, but there doesn't seem to be any simple way forward other\n> than, you know, deciding that we're willing to pay the cost of the\n> additional WAL volume.\n\nRe: WAL volume and record size optimization\n\nI've been working off and on with WAL for some time now due to [0] and\nthe interest of Neon in the area, and I think we can reduce the size\nof the base record by a significant margin:\n\nCurrently, our minimal WAL record is exactly 24 bytes: length (4B),\nTransactionId (4B), previous record pointer (8B), flags (1B), redo\nmanager (1B), 2 bytes of padding and lastly the 4-byte CRC. Of these\nfields, TransactionID could reasonably be omitted for certain WAL\nrecords (as example: index insertions don't really need the XID).\nAdditionally, the length field could be made to be variable length,\nand any padding is just plain bad (adding 4 bytes to all\ninsert/update/delete/lock records was frowned upon).\n\nI'm working on a prototype patch for a more bare-bones WAL record\nheader of which the only required fields would be prevptr (8B), CRC\n(4B), rmgr (1B) and flags (1B) for a minimal size of 14 bytes. 
I don't\nyet know the performance of this, but considering that there will\nbe a lot more conditionals in header decoding it might be slower for\nany one backend, but faster overall (less overall IOps)\n\nThe flags field would be indications for additional information: [flag\nname (bits): explanation (additional xlog header data in bytes)]\n- len_size(0..1): xlog record size is at most xlrec_header_only (0B),\nuint8_max(1B), uint16_max(2B), uint32_max(4B)\n- has_xid (2): contains transaction ID of logging transaction (4B, or\nprobably 8B when we introduce 64-bit xids)\n- has_cid (3): contains the command ID of the logging statement (4B)\n(rationale for logging CID in [0], now in record header because XID is\nincluded there as well, and both are required for consistent\nsnapshots.)\n- has_rminfo (4): has non-zero redo-manager flags field (1B)\n(rationale for separate field [1], non-zero allows 1B space\noptimization for one of each RMGR's operations)\n- special_rel (5): pre-existing definition\n- check_consistency (6): pre-existing definition\n- unset (7): no meaning defined yet. Could be used for full record\ncompression, or other purposes.\n\nA normal record header (XLOG record with at least some registered\ndata) would be only 15 to 17 bytes (0-1B rminfo + 1-2B in xl_len), and\none with XID only up to 21 bytes. 
So, when compared to the current\nXLogRecord format, we would in general recover 2 or 3 bytes from the\nxl_tot_len field, 1 or 2 bytes from the alignment hole, and\npotentially the 4 bytes of the xid when that data is considered\nuseless during recovery, or physical or logical replication.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://postgr.es/m/CAEze2WhmU8WciEgaVPZm71vxFBOpp8ncDc%3DSdEHHsW6HS%2Bk9zw%40mail.gmail.com\n[1] https://postgr.es/m/20220715173731.6t3km5cww3f5ztfq%40awork3.anarazel.de\n\n\n", "msg_date": "Thu, 29 Sep 2022 18:24:12 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Thu, Sep 29, 2022 at 12:24 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Currently, our minimal WAL record is exactly 24 bytes: length (4B),\n> TransactionId (4B), previous record pointer (8B), flags (1B), redo\n> manager (1B), 2 bytes of padding and lastly the 4-byte CRC. Of these\n> fields, TransactionID could reasonably be omitted for certain WAL\n> records (as example: index insertions don't really need the XID).\n> Additionally, the length field could be made to be variable length,\n> and any padding is just plain bad (adding 4 bytes to all\n> insert/update/delete/lock records was frowned upon).\n\nRight. I was shocked when I realized that we had two bytes of padding\nin there, considering that numerous rmgrs are stealing bits from the\n1-byte field that identifies the record type. My question was: why\naren't we exposing those 2 bytes for rmgr-type-specific use? Or for\nsomething like xl_xact_commit, we could get rid of xl_xact_info if we\nhad those 2 bytes to work with.\n\nRight now, I see that a bare commit record is 34 bytes which rounds\nout to 40. With the trick above, we could shave off 4 bytes bringing\nthe size to 30 which would round to 32. 
That's a pretty significant\nsavings, although it'd be a lot better if we could get some kind of\nsavings for DML records which could be much higher frequency.\n\n> I'm working on a prototype patch for a more bare-bones WAL record\n> header of which the only required fields would be prevptr (8B), CRC\n> (4B), rmgr (1B) and flags (1B) for a minimal size of 14 bytes. I don't\n> yet know the performance of this, but the considering that there will\n> be a lot more conditionals in header decoding it might be slower for\n> any one backend, but faster overall (less overall IOps)\n>\n> The flags field would be indications for additional information: [flag\n> name (bits): explanation (additional xlog header data in bytes)]\n> - len_size(0..1): xlog record size is at most xlrec_header_only (0B),\n> uint8_max(1B), uint16_max(2B), uint32_max(4B)\n> - has_xid (2): contains transaction ID of logging transaction (4B, or\n> probably 8B when we introduce 64-bit xids)\n> - has_cid (3): contains the command ID of the logging statement (4B)\n> (rationale for logging CID in [0], now in record header because XID is\n> included there as well, and both are required for consistent\n> snapshots.\n> - has_rminfo (4): has non-zero redo-manager flags field (1B)\n> (rationale for separate field [1], non-zero allows 1B space\n> optimization for one of each RMGR's operations)\n> - special_rel (5): pre-existing definition\n> - check_consistency (6): pre-existing definition\n> - unset (7): no meaning defined yet. Could be used for full record\n> compression, or other purposes.\n\nInteresting. One fly in the ointment here is that WAL records start on\n8-byte boundaries (probably MAXALIGN boundaries, but I didn't check\nthe details). And after the 24-byte header, there's a 2-byte header\n(or 5-byte header) introducing the payload data (see\nXLR_BLOCK_ID_DATA_SHORT/LONG). 
So if the size of the actual payload\ndata is a multiple of 8, and is short enough that we use the short\ndata header, we waste 6 bytes. If the data length is a multiple of 4,\nwe waste 2 bytes. And those are probably really common cases. So the\nbig improvements probably come from saving 2 bytes or 6 bytes or 10\nbytes, and saving say 3 or 5 is probably not much better than 2. Or at\nleast that's what I'm guessing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 17:58:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Thu, Sep 29, 2022 at 2:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> 2. WAL Size. Block references in the WAL are by RelFileLocator, so if\n> you make RelFileLocators bigger, WAL gets bigger. We'd have to test\n> the exact impact of this, but it seems a bit scary\n\nI have done some testing around this area to see the impact on WAL\nsize especially when WAL sizes are smaller, with a very simple test\nwith insert/update/delete I can see around an 11% increase in WAL size\n[1] then I did some more test with pgbench with smaller scale\nfactor(1) there I do not see a significant increase in the WAL size\nalthough it increases WAL size around 1-2%. 
[2].\n\n[1]\ncheckpoint;\ndo $$\ndeclare\n lsn1 pg_lsn;\n lsn2 pg_lsn;\n diff float;\nbegin\n select pg_current_wal_lsn() into lsn1;\n CREATE TABLE test(a int);\n for counter in 1..1000 loop\n INSERT INTO test values(1);\n UPDATE test set a=a+1;\n DELETE FROM test where a=1;\n end loop;\n DROP TABLE test;\n select pg_current_wal_lsn() into lsn2;\n select pg_wal_lsn_diff(lsn2, lsn1) into diff;\n raise notice '%', diff/1024;\nend; $$;\n\nwal generated head: 66199.09375 kB\nwal generated patch: 73906.984375 kB\nwal-size increase: 11%\n\n[2]\n./pgbench -i postgres\n./pgbench -c1 -j1 -t 30000 -M prepared postgres\nwal generated head: 30780 kB\nwal generated patch: 31284 kB\nwal-size increase: ~1-2%\n\nI have done further analysis to understand why, on the pgbench workload, the\nWAL size increases by only 1-2%. With waldump I could see that the WAL size\nper transaction increased from 566 (on head) to 590 (with patch), which is\naround 4%; but the total WAL size difference after 30k transactions is just\n1-2%, and I think that is because there are other records, like FPI, which\nare not impacted.\n\nConclusion: As suspected, with very small WAL record sizes in a very\ntargeted test case we can see a significant 11% increase in WAL size, but\nwith a pgbench-style workload the increase in WAL size is much smaller.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 15:36:11 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 15:36:11 +0530, Dilip Kumar wrote:\n> I have done some testing around this area to see the impact on WAL\n> size especially when WAL sizes are smaller, with a very simple test\n> with insert/update/delete I can see around an 11% increase in WAL size\n> [1] then I did some more test with pgbench with smaller scale\n> factor(1) there I do not see a significant 
increase in the WAL size\n> although it increases WAL size around 1-2%. [2].\n\nI think it'd be interesting to look at per-record-type stats between two\nequivalent workload, to see where practical workloads suffer the most\n(possibly with fpw=off, to make things more repeatable).\n\nI think it'd be an OK tradeoff to optimize WAL usage for a few of the worst to\npay off for 56bit relfilenodes. The class of problems foreclosed is large\nenough to \"waste\" \"improvement potential\" on this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 17:20:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Fri, Sep 30, 2022 at 5:20 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be an OK tradeoff to optimize WAL usage for a few of the worst to\n> pay off for 56bit relfilenodes. The class of problems foreclosed is large\n> enough to \"waste\" \"improvement potential\" on this.\n\nI agree overall.\n\nA closely related but distinct question occurs to me: if we're going\nto be \"wasting\" space on alignment padding in certain cases one way or\nanother, can we at least recognize those cases and take advantage at\nthe level of individual WAL record formats? In other words: So far\nwe've been discussing the importance of not going over a critical\nthreshold for certain WAL records. But it might also be valuable to\nconsider recognizing that that's inevitable, and that we might as well\nmake the most of it by including one or two other things.\n\nThis seems most likely to matter when managing the problem of negative\ncompression with per-WAL-record compression schemes for things like\narrays of page offset numbers [1]. 
If (say) a given compression scheme\n\"wastes\" space for arrays of only 1-3 items, but we already know that\nthe relevant space will all be lost to alignment needed by code one\nlevel down in any case, does it really count as waste? We're likely\nalways going to have some kind of negative compression, but you do get\nto influence where and when the negative compression happens.\n\nNot sure how relevant this will turn out to be, but seems worth\nconsidering. More generally, thinking about how things work across\nmultiple layers of abstraction seems like it could be valuable in\nother ways.\n\n[1] https://postgr.es/m/CAH2-WzmLCn2Hx9tQLdmdb+9CkHKLyWD2bsz=PmRebc4dAxjy6g@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Sep 2022 18:44:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Sat, Oct 1, 2022 at 5:50 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-09-30 15:36:11 +0530, Dilip Kumar wrote:\n> > I have done some testing around this area to see the impact on WAL\n> > size especially when WAL sizes are smaller, with a very simple test\n> > with insert/update/delete I can see around an 11% increase in WAL size\n> > [1] then I did some more test with pgbench with smaller scale\n> > factor(1) there I do not see a significant increase in the WAL size\n> > although it increases WAL size around 1-2%. [2].\n>\n> I think it'd be interesting to look at per-record-type stats between two\n> equivalent workload, to see where practical workloads suffer the most\n> (possibly with fpw=off, to make things more repeatable).\n\nWhile testing pgbench, I dumped the wal sizes using waldump. So in\npgbench case, most of the record sizes increased by 4 bytes as they\ninclude single block references and the same is true for the other\ntest case I sent. Here is the wal dump of what the sizes look like\nfor a single pgbench transaction[1]. 
Maybe for seeing these changes\nwith the different workloads we can run some of the files from the\nregression test and compare the individual wal sizes.\n\nHead:\nrmgr: Heap len (rec/tot): 54/ 54, tx: 867, lsn:\n0/02DD1280, prev 0/02DD1250, desc: LOCK off 44: xid 867: flags 0x01\nLOCK_ONLY EXCL_LOCK , blkref #0: rel 1663/5/16424 blk 226\nrmgr: Heap len (rec/tot): 171/ 171, tx: 867, lsn:\n0/02DD12B8, prev 0/02DD1280, desc: UPDATE off 44 xmax 867 flags 0x11 ;\nnew off 30 xmax 0, blkref #0: rel 1663/5/16424 blk 1639, blkref #1:\nrel 1663/5/16424 blk 226\nrmgr: Btree len (rec/tot): 64/ 64, tx: 867, lsn:\n0/02DD1368, prev 0/02DD12B8, desc: INSERT_LEAF off 290, blkref #0: rel\n1663/5/16432 blk 39\nrmgr: Heap len (rec/tot): 78/ 78, tx: 867, lsn:\n0/02DD13A8, prev 0/02DD1368, desc: HOT_UPDATE off 15 xmax 867 flags\n0x10 ; new off 19 xmax 0, blkref #0: rel 1663/5/16427 blk 0\nrmgr: Heap len (rec/tot): 74/ 74, tx: 867, lsn:\n0/02DD13F8, prev 0/02DD13A8, desc: HOT_UPDATE off 9 xmax 867 flags\n0x10 ; new off 10 xmax 0, blkref #0: rel 1663/5/16425 blk 0\nrmgr: Heap len (rec/tot): 79/ 79, tx: 867, lsn:\n0/02DD1448, prev 0/02DD13F8, desc: INSERT off 9 flags 0x08, blkref #0:\nrel 1663/5/16434 blk 0\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 867, lsn:\n0/02DD1498, prev 0/02DD1448, desc: COMMIT 2022-10-01 11:24:03.464437\nIST\n\n\nPatch:\nrmgr: Heap len (rec/tot): 58/ 58, tx: 818, lsn:\n0/0218BEB0, prev 0/0218BE80, desc: LOCK off 34: xid 818: flags 0x01\nLOCK_ONLY EXCL_LOCK , blkref #0: rel 1663/5/100004 blk 522\nrmgr: Heap len (rec/tot): 175/ 175, tx: 818, lsn:\n0/0218BEF0, prev 0/0218BEB0, desc: UPDATE off 34 xmax 818 flags 0x11 ;\nnew off 8 xmax 0, blkref #0: rel 1663/5/100004 blk 1645, blkref #1:\nrel 1663/5/100004 blk 522\nrmgr: Btree len (rec/tot): 68/ 68, tx: 818, lsn:\n0/0218BFA0, prev 0/0218BEF0, desc: INSERT_LEAF off 36, blkref #0: rel\n1663/5/100010 blk 89\nrmgr: Heap len (rec/tot): 82/ 82, tx: 818, lsn:\n0/0218BFE8, prev 0/0218BFA0, desc: HOT_UPDATE off 66 xmax 818 
flags\n0x10 ; new off 90 xmax 0, blkref #0: rel 1663/5/100007 blk 0\nrmgr: Heap len (rec/tot): 78/ 78, tx: 818, lsn:\n0/0218C058, prev 0/0218BFE8, desc: HOT_UPDATE off 80 xmax 818 flags\n0x10 ; new off 81 xmax 0, blkref #0: rel 1663/5/100005 blk 0\nrmgr: Heap len (rec/tot): 83/ 83, tx: 818, lsn:\n0/0218C0A8, prev 0/0218C058, desc: INSERT off 80 flags 0x08, blkref\n#0: rel 1663/5/100011 blk 0\nrmgr: Transaction len (rec/tot): 46/ 46, tx: 818, lsn:\n0/0218C100, prev 0/0218C0A8, desc: COMMIT 2022-10-01 11:11:03.564063\nIST\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 1 Oct 2022 11:44:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> I think it'd be interesting to look at per-record-type stats between two\n> equivalent workload, to see where practical workloads suffer the most\n> (possibly with fpw=off, to make things more repeatable).\n\nI would expect, and Dilip's results seem to confirm, the effect to be\npretty uniform: basically, nearly every record gets bigger by 4 bytes.\nThat's because most records contain at least one block reference, and\nif they contain multiple block references, likely all but one will be\nmarked BKPBLOCK_SAME_REL, so we pay the cost just once.\n\nBecause of alignment padding, the practical effect is probably that\nabout half of the records get bigger by 8 bytes and the other half\ndon't get bigger at all. But I see no reason to believe that things\nare any better or worse than that. 
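(That "about half" estimate can be checked exhaustively over the record-length residues modulo the alignment — a sketch assuming 8-byte MAXALIGN:)

```python
# How much does the MAXALIGN-padded on-disk size grow when a record gets
# 4 bytes longer? Exhaustive over all length residues mod 8; the concrete
# lengths 24..31 are arbitrary representatives of each residue class.

def maxalign(n, align=8):
    return (n + align - 1) & ~(align - 1)

growth = [maxalign(n + 4) - maxalign(n) for n in range(24, 32)]

# Half the residues absorb the extra 4 bytes in existing padding (growth 0);
# the other half cross an alignment boundary and pay a full 8 bytes.
assert sorted(set(growth)) == [0, 8]
assert growth.count(0) == growth.count(8) == 4
```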
Most interesting record types are\ngoing to contain some kind of variable-length payload, so the chances\nthat a 4 byte size increase pushes you across a MAXALIGN boundary seem\nto be no better or worse than fifty-fifty.\n\n> I think it'd be an OK tradeoff to optimize WAL usage for a few of the worst to\n> pay off for 56bit relfilenodes. The class of problems foreclosed is large\n> enough to \"waste\" \"improvement potential\" on this.\n\nI thought about trying to buy back some space elsewhere, and I think\nthat would be a reasonable approach to getting this committed if we\ncould find a way to do it. However, I don't see a terribly obvious way\nof making it happen. Trying to do it by optimizing specific WAL record\ntypes seems like a real pain in the neck, because there's tons of\ndifferent WAL records that all have the same problem. Trying to do it\nin a generic way makes more sense, and the fact that we have 2 padding\nbytes available in XLogRecord seems like a place to start looking, but\nthe way forward from there is not clear to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 3 Oct 2022 08:12:39 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-03 08:12:39 -0400, Robert Haas wrote:\n> On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd be interesting to look at per-record-type stats between two\n> > equivalent workload, to see where practical workloads suffer the most\n> > (possibly with fpw=off, to make things more repeatable).\n>\n> I would expect, and Dilip's results seem to confirm, the effect to be\n> pretty uniform: basically, nearly every record gets bigger by 4 bytes.\n> That's because most records contain at least one block reference, and\n> if they contain multiple block references, likely all but one will be\n> marked BKPBLOCK_SAME_REL, so we 
pay the cost just once.\n\nBut it doesn't really matter that much if an already large record gets a bit\nbigger. Whereas it does matter if it's a small record. Focussing on optimizing\nthe record types where the increase is large seems like a potential way\nforward to me, even if we can't find something generic.\n\n\n> I thought about trying to buy back some space elsewhere, and I think\n> that would be a reasonable approach to getting this committed if we\n> could find a way to do it. However, I don't see a terribly obvious way\n> of making it happen.\n\nI think there's plenty potential...\n\n\n> Trying to do it by optimizing specific WAL record\n> types seems like a real pain in the neck, because there's tons of\n> different WAL records that all have the same problem.\n\nI am not so sure about that. Improving a bunch of the most frequent small\nrecords might buy you back enough on just about every workload to be OK.\n\nI put the top record sizes for an installcheck run with full_page_writes off\nat the bottom. Certainly our regression tests aren't generally\nrepresentative. But I think it still decently highlights how just improving a\nfew records could buy you back more than enough.\n\n\n> Trying to do it in a generic way makes more sense, and the fact that we have\n> 2 padding bytes available in XLogRecord seems like a place to start looking,\n> but the way forward from there is not clear to me.\n\nRandom idea: xl_prev is large. 
Store a full xl_prev in the page header, but\nonly store a 2 byte offset from the page header xl_prev within each record.\n\nGreetings,\n\nAndres Freund\n\nby total size:\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nHeap/INSERT 1041666 ( 50.48) 106565255 ( 50.54) 0 ( 0.00) 106565255 ( 43.92)\nBtree/INSERT_LEAF 352196 ( 17.07) 24067672 ( 11.41) 0 ( 0.00) 24067672 ( 9.92)\nHeap/DELETE 250852 ( 12.16) 13546008 ( 6.42) 0 ( 0.00) 13546008 ( 5.58)\nHash/INSERT 108499 ( 5.26) 7811928 ( 3.70) 0 ( 0.00) 7811928 ( 3.22)\nTransaction/COMMIT 16053 ( 0.78) 6402657 ( 3.04) 0 ( 0.00) 6402657 ( 2.64)\nGist/PAGE_UPDATE 57225 ( 2.77) 5217100 ( 2.47) 0 ( 0.00) 5217100 ( 2.15)\nGin/UPDATE_META_PAGE 23943 ( 1.16) 4539970 ( 2.15) 0 ( 0.00) 4539970 ( 1.87)\nGin/INSERT 27004 ( 1.31) 3623998 ( 1.72) 0 ( 0.00) 3623998 ( 1.49)\nGist/PAGE_SPLIT 448 ( 0.02) 3391244 ( 1.61) 0 ( 0.00) 3391244 ( 1.40)\nSPGist/ADD_LEAF 38968 ( 1.89) 3341696 ( 1.58) 0 ( 0.00) 3341696 ( 1.38)\n...\nXLOG/FPI 7228 ( 0.35) 378924 ( 0.18) 29788166 ( 93.67) 30167090 ( 12.43)\n...\nGin/SPLIT 141 ( 0.01) 13011 ( 0.01) 1187588 ( 3.73) 1200599 ( 0.49)\n...\n -------- -------- -------- --------\nTotal 2063609 210848282 [86.89%] 31802766 [13.11%] 242651048 [100%]\n\n(Included XLOG/FPI and Gin/SPLIT to explain why there's FPIs despite running with fpw=off)\n\nsorted by number of records:\nHeap/INSERT 1041666 ( 50.48) 106565255 ( 50.54) 0 ( 0.00) 106565255 ( 43.92)\nBtree/INSERT_LEAF 352196 ( 17.07) 24067672 ( 11.41) 0 ( 0.00) 24067672 ( 9.92)\nHeap/DELETE 250852 ( 12.16) 13546008 ( 6.42) 0 ( 0.00) 13546008 ( 5.58)\nHash/INSERT 108499 ( 5.26) 7811928 ( 3.70) 0 ( 0.00) 7811928 ( 3.22)\nGist/PAGE_UPDATE 57225 ( 2.77) 5217100 ( 2.47) 0 ( 0.00) 5217100 ( 2.15)\nSPGist/ADD_LEAF 38968 ( 1.89) 3341696 ( 1.58) 0 ( 0.00) 3341696 ( 1.38)\nGin/INSERT 27004 ( 1.31) 3623998 ( 1.72) 0 ( 0.00) 3623998 ( 1.49)\nGin/UPDATE_META_PAGE 23943 ( 1.16) 4539970 ( 2.15) 0 ( 
0.00) 4539970 ( 1.87)\nStandby/LOCK 18451 ( 0.89) 775026 ( 0.37) 0 ( 0.00) 775026 ( 0.32)\nTransaction/COMMIT 16053 ( 0.78) 6402657 ( 3.04) 0 ( 0.00) 6402657 ( 2.64)\n\n\n", "msg_date": "Mon, 3 Oct 2022 10:01:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Mon, 3 Oct 2022, 19:01 Andres Freund, <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-03 08:12:39 -0400, Robert Haas wrote:\n> > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think it'd be interesting to look at per-record-type stats between two\n> > > equivalent workload, to see where practical workloads suffer the most\n> > > (possibly with fpw=off, to make things more repeatable).\n> >\n> > I would expect, and Dilip's results seem to confirm, the effect to be\n> > pretty uniform: basically, nearly every record gets bigger by 4 bytes.\n> > That's because most records contain at least one block reference, and\n> > if they contain multiple block references, likely all but one will be\n> > marked BKPBLOCK_SAME_REL, so we pay the cost just once.\n>\n> But it doesn't really matter that much if an already large record gets a bit\n> bigger. Whereas it does matter if it's a small record. Focussing on optimizing\n> the record types where the increase is large seems like a potential way\n> forward to me, even if we can't find something generic.\n>\n>\n> > I thought about trying to buy back some space elsewhere, and I think\n> > that would be a reasonable approach to getting this committed if we\n> > could find a way to do it. However, I don't see a terribly obvious way\n> > of making it happen.\n>\n> I think there's plenty potential...\n>\n>\n> > Trying to do it by optimizing specific WAL record\n> > types seems like a real pain in the neck, because there's tons of\n> > different WAL records that all have the same problem.\n>\n> I am not so sure about that. 
Improving a bunch of the most frequent small\n> records might buy you back enough on just about every workload to be OK.\n>\n> I put the top record sizes for an installcheck run with full_page_writes off\n> at the bottom. Certainly our regression tests aren't generally\n> representative. But I think it still decently highlights how just improving a\n> few records could buy you back more than enough.\n>\n>\n> > Trying to do it in a generic way makes more sense, and the fact that we have\n> > 2 padding bytes available in XLogRecord seems like a place to start looking,\n> > but the way forward from there is not clear to me.\n>\n> Random idea: xl_prev is large. Store a full xl_prev in the page header, but\n> only store a 2 byte offset from the page header xl_prev within each record.\n\nWith that small xl_prev we may not detect partial page writes in\nrecycled segments; or other issues in the underlying file system. With\nsmall record sizes, the chance of returning incorrect data would be\nsignificant for small records (it would be approximately the chance of\ngetting a record boundary on the underlying page boundary * chance of\ngetting the same MAXALIGN-adjusted size record before the persistence\nboundary). That issue is part of the reason why my proposed change\nupthread still contains the full xl_prev.\n\nA different idea is removing most block_ids from the record, and\noptionally reducing per-block length fields to 1B. Used block ids are\neffectively always sequential, and we only allow 33+4 valid values, so\nwe can use 2 bits to distinguish between 'block belonging to this ID\nfield have at most 255B of data registered' and 'blocks up to this ID\nfollow sequentially without own block ID'. That would save 2N-1 total\nbytes for N blocks. It is scraping the barrel, but I think it is quite\npossible.\n\nLastly, we could add XLR_BLOCK_ID_DATA_MED for values >255 containing\nup to UINT16_MAX lengths. 
That would save 2 bytes for records that\nonly just pass the 255B barrier, where 2B is still a fairly\nsignificant part of the record size.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 3 Oct 2022 19:40:30 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:\n> On Mon, 3 Oct 2022, 19:01 Andres Freund, <andres@anarazel.de> wrote:\n> > Random idea: xl_prev is large. Store a full xl_prev in the page header, but\n> > only store a 2 byte offset from the page header xl_prev within each record.\n> \n> With that small xl_prev we may not detect partial page writes in\n> recycled segments; or other issues in the underlying file system. With\n> small record sizes, the chance of returning incorrect data would be\n> significant for small records (it would be approximately the chance of\n> getting a record boundary on the underlying page boundary * chance of\n> getting the same MAXALIGN-adjusted size record before the persistence\n> boundary). That issue is part of the reason why my proposed change\n> upthread still contains the full xl_prev.\n\nWhat exactly is the theory for this significant increase? I don't think\nxl_prev provides a meaningful protection against torn pages in the first\nplace?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 3 Oct 2022 14:25:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Mon, 3 Oct 2022 at 23:26, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:\n> > On Mon, 3 Oct 2022, 19:01 Andres Freund, <andres@anarazel.de> wrote:\n> > > Random idea: xl_prev is large. 
Store a full xl_prev in the page header, but\n> > > only store a 2 byte offset from the page header xl_prev within each record.\n> >\n> > With that small xl_prev we may not detect partial page writes in\n> > recycled segments; or other issues in the underlying file system. With\n> > small record sizes, the chance of returning incorrect data would be\n> > significant for small records (it would be approximately the chance of\n> > getting a record boundary on the underlying page boundary * chance of\n> > getting the same MAXALIGN-adjusted size record before the persistence\n> > boundary). That issue is part of the reason why my proposed change\n> > upthread still contains the full xl_prev.\n>\n> What exactly is the theory for this significant increase? I don't think\n> xl_prev provides a meaningful protection against torn pages in the first\n> place?\n\nXLog pages don't have checksums, so they do not provide torn page\nprotection capabilities on their own.\nA singular xlog record is protected against torn page writes through\nthe checksum that covers the whole record - if only part of the record\nwas written, we can detect that through the mismatching checksum.\nHowever, if records end at the tear boundary, we must know for certain\nthat any record that starts after the tear is the record that was\nwritten after the one before the tear. Page-local references/offsets\nwould not work, because the record decoding doesn't know which xlog\npage the record should be located on; it could be both the version of\nthe page before it was recycled, or the one after.\nCurrently, we can detect this because the value of xl_prev will point\nto a record far in the past (i.e. 
not the expected value), but with a\npage-local version of xl_prev we would be less likely to detect torn\npages (and thus be unable to handle this without risk of corruption)\ndue to the significant chance of the truncated xl_prev value being the\nsame in both the old and new record.\n\nExample: Page { [ record A ] | tear boundary | [ record B ] } gets\nrecycled and receives a new record C at the place of A with the same\nlength.\n\nWith your proposal, record B would still be a valid record when it\nfollows C; as the page-local serial number/offset reference to the\nprevious record would still match after the torn write.\nWith the current situation and a full LSN in xl_prev, the mismatching\nvalue in the xl_prev pointer allows us to detect this torn page write\nand halt replay, before redoing an old (incorrect) record.\n\nKind regards,\n\nMatthias van de Meent\n\nPS. there are ideas floating around (I heard about this one from\nHeikki) where we could concatenate WAL records into one combined\nrecord that has only one shared xl_prev+crc; which would save these 12\nbytes per record. However, that needs a lot of careful consideration\nto make sure that the persistence guarantee of operations doesn't get\nlost somewhere in the traffic.\n\n\n", "msg_date": "Tue, 4 Oct 2022 15:05:47 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-04 15:05:47 +0200, Matthias van de Meent wrote:\n> On Mon, 3 Oct 2022 at 23:26, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-03 19:40:30 +0200, Matthias van de Meent wrote:\n> > > On Mon, 3 Oct 2022, 19:01 Andres Freund, <andres@anarazel.de> wrote:\n> > > > Random idea: xl_prev is large. 
Store a full xl_prev in the page header, but\n> > > > only store a 2 byte offset from the page header xl_prev within each record.\n> > >\n> > > With that small xl_prev we may not detect partial page writes in\n> > > recycled segments; or other issues in the underlying file system. With\n> > > small record sizes, the chance of returning incorrect data would be\n> > > significant for small records (it would be approximately the chance of\n> > > getting a record boundary on the underlying page boundary * chance of\n> > > getting the same MAXALIGN-adjusted size record before the persistence\n> > > boundary). That issue is part of the reason why my proposed change\n> > > upthread still contains the full xl_prev.\n> >\n> > What exactly is the theory for this significant increase? I don't think\n> > xl_prev provides a meaningful protection against torn pages in the first\n> > place?\n> \n> XLog pages don't have checksums, so they do not provide torn page\n> protection capabilities on their own.\n> A singular xlog record is protected against torn page writes through\n> the checksum that covers the whole record - if only part of the record\n> was written, we can detect that through the mismatching checksum.\n> However, if records end at the tear boundary, we must know for certain\n> that any record that starts after the tear is the record that was\n> written after the one before the tear. Page-local references/offsets\n> would not work, because the record decoding doesn't know which xlog\n> page the record should be located on; it could be both the version of\n> the page before it was recycled, or the one after.\n> Currently, we can detect this because the value of xl_prev will point\n> to a record far in the past (i.e. 
not the expected value), but with a\n> page-local version of xl_prev we would be less likely to detect torn\n> pages (and thus be unable to handle this without risk of corruption)\n> due to the significant chance of the truncated xl_prev value being the\n> same in both the old and new record.\n\nThink this is addressable, see below.\n\n\n> Example: Page { [ record A ] | tear boundary | [ record B ] } gets\n> recycled and receives a new record C at the place of A with the same\n> length.\n> \n> With your proposal, record B would still be a valid record when it\n> follows C; as the page-local serial number/offset reference to the\n> previous record would still match after the torn write.\n> With the current situation and a full LSN in xl_prev, the mismatching\n> value in the xl_prev pointer allows us to detect this torn page write\n> and halt replay, before redoing an old (incorrect) record.\n\nIn this concrete scenario the 8 byte xl_prev doesn't provide *any* protection?\nAs you specified it, C has the same length as A, so B's xl_prev will be the\nsame whether it's a page local offset or the full 8 bytes.\n\n\nThe relevant protection against issues like this isn't xl_prev, it's the\nCRC. We could improve the CRC by using the \"full width\" LSN for xl_prev rather\nthan the offset.\n\n\n> PS. there are ideas floating around (I heard about this one from\n> Heikki) where we could concatenate WAL records into one combined\n> record that has only one shared xl_prev+crc; which would save these 12\n> bytes per record. However, that needs a lot of careful consideration\n> to make sure that the persistence guarantee of operations doesn't get\n> lost somewhere in the traffic.\n\nOne version of that is to move the CRCs to the page header, make the pages\nsmaller (512 bytes / 4K, depending on the hardware), and to pad out partial\npages when flushing them out. 
Rewriting pages is bad for hardware and prevents\nhaving multiple WAL IOs in flight at the same time.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Oct 2022 08:34:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Tue, Oct 4, 2022 at 11:34 AM Andres Freund <andres@anarazel.de> wrote:\n> > Example: Page { [ record A ] | tear boundary | [ record B ] } gets\n> > recycled and receives a new record C at the place of A with the same\n> > length.\n> >\n> > With your proposal, record B would still be a valid record when it\n> > follows C; as the page-local serial number/offset reference to the\n> > previous record would still match after the torn write.\n> > With the current situation and a full LSN in xl_prev, the mismatching\n> > value in the xl_prev pointer allows us to detect this torn page write\n> > and halt replay, before redoing an old (incorrect) record.\n>\n> In this concrete scenario the 8 byte xl_prev doesn't provide *any* protection?\n> As you specified it, C has the same length as A, so B's xl_prev will be the\n> same whether it's a page local offset or the full 8 bytes.\n>\n> The relevant protection against issues like this isn't xl_prev, it's the\n> CRC. We could improve the CRC by using the \"full width\" LSN for xl_prev rather\n> than the offset.\n\nI'm really confused. xl_prev *is* a full-width LSN currently, as I\nunderstand it. So in the scenario that Matthias poses, let's say the\nsegment was previously 000000010000000400000025 and now it's\n000000010000000400000049. So if a given chunk of the page is leftover\nfrom when the page was 000000010000000400000025, it will have xl_prev\nvalues like 4/25xxxxxx. If it's been rewritten since the segment was\nrecycled, it will have xl_prev values like 4/49xxxxxx. So, we can tell\nwhether record B has been overwritten with a new record since the\nsegment was recycled. 
But if we stored only 2 bytes in each xl_prev\nfield, that would no longer be possible.\n\nSo I'm lost. It seems like Matthias has correctly identified a real\nhazard, and not some weird corner case but actually something that\nwill happen regularly. All you need is for the old segment that got\nrecycled to have a record starting at the same place where the page\ntore, and for the previous record to have been the same length as the\none on the new page. Given that there's only <~1024 places on a page\nwhere a record can start, and given that in many workloads the lengths\nof WAL records will be fairly uniform, this doesn't seem unlikely at\nall.\n\nA way to up the chances of detecting this case would be to store only\n2 or 4 bytes of xl_prev on disk, but arrange to include the full\nxl_prev value in the xl_crc calculation. Then your chances of a\ncollision are about 2^-32, or maybe more if you posit that CRC is a\nweak and crappy algorithm, but even then it's strictly better than\njust hoping that there isn't a tear point at a record boundary where\nthe same length record precedes the tear in both the old and new WAL\nsegments. However, on the flip side, even if you assume that CRC is a\nfantastic algorithm with beautiful and state-of-the-art bit mixing,\nthe chances of it failing to notice the problem are still >0, whereas\nthe current algorithm that compares the full xl_prev value is a sure\nthing. 
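(To make the hazard concrete, a toy sketch — the LSNs are made up, and it assumes the truncated variant stores a 2-byte page-local offset, as proposed upthread:)

```python
# Record A was written while the segment held LSNs in the 4/25... range;
# after recycling as 4/49..., record C overwrites it at the same byte
# position with the same length. Record B's prev-pointer survives the tear.

old_prev = 0x0000000425000040   # full xl_prev as written before recycling
new_prev = 0x0000000449000040   # full xl_prev for the same position afterwards

# Full 8-byte xl_prev: the stale record is always detected, because the
# segment's LSN range changed when it was recycled.
assert old_prev != new_prev

# 2-byte page-local offset: both records start at the same page offset, so
# the truncated values collide and stale record B would look valid.
assert (old_prev & 0xFFFF) == (new_prev & 0xFFFF)
```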
Because xl_prev values are never repeated, it's certain that\nwhen a segment is recycled, any values that were legal for the old one\naren't legal in the new one.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 13:36:33 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-04 13:36:33 -0400, Robert Haas wrote:\n> On Tue, Oct 4, 2022 at 11:34 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Example: Page { [ record A ] | tear boundary | [ record B ] } gets\n> > > recycled and receives a new record C at the place of A with the same\n> > > length.\n> > >\n> > > With your proposal, record B would still be a valid record when it\n> > > follows C; as the page-local serial number/offset reference to the\n> > > previous record would still match after the torn write.\n> > > With the current situation and a full LSN in xl_prev, the mismatching\n> > > value in the xl_prev pointer allows us to detect this torn page write\n> > > and halt replay, before redoing an old (incorrect) record.\n> >\n> > In this concrete scenario the 8 byte xl_prev doesn't provide *any* protection?\n> > As you specified it, C has the same length as A, so B's xl_prev will be the\n> > same whether it's a page local offset or the full 8 bytes.\n> >\n> > The relevant protection against issues like this isn't xl_prev, it's the\n> > CRC. We could improve the CRC by using the \"full width\" LSN for xl_prev rather\n> > than the offset.\n>\n> I'm really confused. xl_prev *is* a full-width LSN currently, as I\n> understand it. So in the scenario that Matthias poses, let's say the\n> segment was previously 000000010000000400000025 and now it's\n> 000000010000000400000049. So if a given chunk of the page is leftover\n> from when the page was 000000010000000400000025, it will have xl_prev\n> values like 4/25xxxxxx. 
If it's been rewritten since the segment was\n> recycled, it will have xl_prev values like 4/49xxxxxx. So, we can tell\n> whether record B has been overwritten with a new record since the\n> segment was recycled. But if we stored only 2 bytes in each xl_prev\n> field, that would no longer be possible.\n\nOh, I think I misunderstood the scenario. I was thinking of cases where we\nwrite out a bunch of pages, crash, only some of the pages made it to disk, we\nthen write new ones of the same length, and now find a record after the \"real\"\nend of the WAL to be valid. Not sure how I mentally swallowed the \"recycled\".\n\nFor the recycling scenario to be a problem we'll also need to crash, with\nparts of the page ending up with the new contents and parts of the page ending\nup with the old \"pre recycling\" content, correct? Because without a crash\nwe'll have zeroed out the remainder of the page (well, leaving walreceiver out\nof the picture, grr).\n\nHowever, this can easily happen without any record boundaries on the partially\nrecycled page, so we rely on the CRCs to protect against this.\n\n\nHere I originally wrote a more in-depth explanation of the scenario I was\nthinking about, where we already rely on CRCs to protect us. But, ooph, I think\nthey don't protect us reliably, with today's design. But maybe I'm missing more things\ntoday. Consider the following sequence:\n\n1) we write WAL like this:\n\n[record A][tear boundary][record B, prev A_lsn][tear boundary][record C, prev B_lsn]\n\n2) crash, the sectors with A and C made it to disk, the one with B didn't\n\n3) We replay A, discover B is invalid (possibly zeroes or old contents),\n   insert a new record B' with the same length. 
Now it looks like this:\n\n[record A][tear boundary][record B', prev A_lsn][tear boundary][record C, prev B_lsn]\n\n4) crash, the sector with B' makes it to disk\n\n5) we replay A, B', C, because C has an xl_prev that's compatible with B'\n location and a valid CRC.\n\nOops.\n\nI think this can happen both within a single page and across page boundaries.\n\nI hope I am missing something here?\n\n\n> A way to up the chances of detecting this case would be to store only\n> 2 or 4 bytes of xl_prev on disk, but arrange to include the full\n> xl_prev value in the xl_crc calculation.\n\nRight, that's what I was suggesting as well.\n\n\n> Then your chances of a collision are about 2^-32, or maybe more if you posit\n> that CRC is a weak and crappy algorithm, but even then it's strictly better\n> than just hoping that there isn't a tear point at a record boundary where\n> the same length record precedes the tear in both the old and new WAL\n> segments. However, on the flip side, even if you assume that CRC is a\n> fantastic algorithm with beautiful and state-of-the-art bit mixing, the\n> chances of it failing to notice the problem are still >0, whereas the\n> current algorithm that compares the full xl_prev value is a sure\n> thing. Because xl_prev values are never repeated, it's certain that when a\n> segment is recycled, any values that were legal for the old one aren't legal\n> in the new one.\n\nGiven that we already rely on the CRCs to detect corruption within a single\nrecord spanning tear boundaries, this doesn't cause me a lot of heartburn. But\nI suspect we might need to do something about the scenario I outlined above,\nwhich likely would also increase the protection against this issue.\n\nI think there might be reasonable ways to increase the guarantees based on the\n2 byte xl_prev approach \"alone\". We don't have to store the offset from the\npage header as a plain offset. 
What about storing something like:\n page_offset ^ (page_lsn >> wal_segsz_shift)\n\nI think something like that'd result in prev_not_really_lsn typically not\nsimply matching after recycling. Of course it only provides so much\nprotection, given 16bits...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Oct 2022 11:30:03 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Tue, Oct 4, 2022 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n> Consider the following sequence:\n>\n> 1) we write WAL like this:\n>\n> [record A][tear boundary][record B, prev A_lsn][tear boundary][record C, prev B_lsn]\n>\n> 2) crash, the sectors with A and C made it to disk, the one with B didn't\n>\n> 3) We replay A, discover B is invalid (possibly zeroes or old contents),\n> insert a new record B' with the same length. Now it looks like this:\n>\n> [record A][tear boundary][record B', prev A_lsn][tear boundary][record C, prev B_lsn]\n>\n> 4) crash, the sector with B' makes it to disk\n>\n> 5) we replay A, B', C, because C has an xl_prev that's compatible with B'\n> location and a valid CRC.\n>\n> Oops.\n>\n> I think this can happen both within a single page and across page boundaries.\n>\n> I hope I am missing something here?\n\nIf you are, I don't know what it is off-hand. That seems like a\nplausible scenario to me. It does require the OS to write things out\nof order, and I don't know how likely that is in practice, but the\nanswer probably isn't zero.\n\n> I think there might be reasonable ways to increase the guarantees based on the\n> 2 byte xl_prev approach \"alone\". We don't have to store the offset from the\n> page header as a plain offset. What about storing something like:\n> page_offset ^ (page_lsn >> wal_segsz_shift)\n>\n> I think something like that'd result in prev_not_really_lsn typically not\n> simply matching after recycling. 
Of course it only provides so much\n> protection, given 16bits...\n\nMaybe. That does seem somewhat better, but I feel like it's hard to\nreason about whether it's safe in absolute terms or just resistant to\nthe precise scenario Matthias postulated while remaining vulnerable to\nslightly modified versions.\n\nHow about this: remove xl_prev. widen xl_crc to 64 bits. include the\nCRC of the previous WAL record in the xl_crc calculation. That doesn't\ncut quite as many bytes out of the record size as your proposal, but\nit feels like it should strongly resist pretty much every attack of\nthis general type, with only the minor disadvantage that the more\nexpensive CRC calculation will destroy all hope of getting anything\ncommitted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Oct 2022 14:53:34 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-03 10:01:25 -0700, Andres Freund wrote:\n> On 2022-10-03 08:12:39 -0400, Robert Haas wrote:\n> > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > I thought about trying to buy back some space elsewhere, and I think\n> > that would be a reasonable approach to getting this committed if we\n> > could find a way to do it. However, I don't see a terribly obvious way\n> > of making it happen.\n>\n> I think there's plenty potential...\n\nI lightly dusted off my old varint implementation from [1] and converted the\nRelFileLocator and BlockNumber from fixed width integers to varint ones. 
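To give a flavor of what such an encoding looks like, here is a minimal LEB128-style unsigned varint; this is only an illustrative sketch with made-up helper names, not the actual pg_varint code (which, as noted below, uses a different, big-endian layout):

```c
/* Minimal LEB128-style unsigned varint: 7 payload bits per byte, with
 * the high bit set on every byte except the last. Small values (the
 * common case for fresh OIDs and low block numbers) take a single
 * byte; a full 64-bit value takes ten bytes in this particular scheme. */
#include <stdint.h>
#include <stddef.h>

/* Encodes v into buf (buf must have room for up to 10 bytes).
 * Returns the number of bytes written. */
static size_t
varint_encode(uint64_t v, uint8_t *buf)
{
	size_t		n = 0;

	do
	{
		uint8_t		b = v & 0x7F;

		v >>= 7;
		if (v != 0)
			b |= 0x80;			/* continuation bit */
		buf[n++] = b;
	} while (v != 0);

	return n;
}

/* Decodes a varint from buf into *v. Returns the number of bytes read. */
static size_t
varint_decode(const uint8_t *buf, uint64_t *v)
{
	size_t		n = 0;
	int			shift = 0;
	uint64_t	result = 0;
	uint8_t		b;

	do
	{
		b = buf[n++];
		result |= (uint64_t) (b & 0x7F) << shift;
		shift += 7;
	} while (b & 0x80);

	*v = result;
	return n;
}
```

In a scheme of this shape, the small OIDs and block numbers typical of a fresh cluster cost one or two bytes instead of four apiece, which is where the saving shown below comes from.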
This\nisn't meant as a serious patch, but an experiment to see if this is a path\nworth pursuing.\n\nA run of installcheck in a cluster with autovacuum=off, full_page_writes=off\n(for increased reproducibility) shows a decent saving:\n\nmaster: 241106544 - 230 MB\nvarint: 227858640 - 217 MB\n\nThe average record size goes from 102.7 to 95.7 bytes excluding the remaining\nFPIs, 118.1 to 111.0 including FPIs.\n\nThere are plenty of other spots that could be converted (e.g. the record length\nwhich rarely needs four bytes), this is just meant as a demonstration.\n\n\nI used pg_waldump --stats for that range of WAL to measure the CPU overhead. A\nprofile does show pg_varint_decode_uint64(), but partially that seems to be\noffset by the reduced amount of bytes to CRC. Maybe a ~2% overhead remains.\n\nThat would be tolerable, I think, because waldump --stats pretty much doesn't\ndo anything with the WAL.\n\nBut I suspect there's plenty of optimization potential in the varint\ncode. Right now it e.g. stores data as big endian, and the bswap instructions\ndo show up. And a lot of my bit-maskery could be optimized too.\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 4 Oct 2022 16:49:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Wed, Oct 5, 2022 at 5:19 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I light dusted off my old varint implementation from [1] and converted the\n> RelFileLocator and BlockNumber from fixed width integers to varint ones. 
This\n> isn't meant as a serious patch, but an experiment to see if this is a path\n> worth pursuing.\n>\n> A run of installcheck in a cluster with autovacuum=off, full_page_writes=off\n> (for increased reproducability) shows a decent saving:\n>\n> master: 241106544 - 230 MB\n> varint: 227858640 - 217 MB\n>\n> The average record size goes from 102.7 to 95.7 bytes excluding the remaining\n> FPIs, 118.1 to 111.0 including FPIs.\n>\n\nI have also executed my original test after applying these patches on\ntop of the 56 bit relfilenode patch. So earlier we saw the WAL size\nincreased by 11% (66199.09375 kB to 73906.984375 kB) and after this\npatch now the WAL generated is 58179.2265625. That means in this\nparticular example this patch is reducing the WAL size by 12% even\nwith the 56 bit relfilenode patch.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-uut%2B04AdwvBY_oK_jLvMkwXUpDJj5mXg--nek%2BucApPQ%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 14:46:17 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have also executed my original test after applying these patches on\n> top of the 56 bit relfilenode patch. So earlier we saw the WAL size\n> increased by 11% (66199.09375 kB to 73906.984375 kB) and after this\n> patch now the WAL generated is 58179.2265625. 
That means in this\n> particular example this patch is reducing the WAL size by 12% even\n> with the 56 bit relfilenode patch.\n\nThat's a very promising result, but the question in my mind is how\nmuch work would be required to bring this patch to a committable\nstate?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 08:10:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-10 08:10:22 -0400, Robert Haas wrote:\n> On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have also executed my original test after applying these patches on\n> > top of the 56 bit relfilenode patch. So earlier we saw the WAL size\n> > increased by 11% (66199.09375 kB to 73906.984375 kB) and after this\n> > patch now the WAL generated is 58179.2265625. That means in this\n> > particular example this patch is reducing the WAL size by 12% even\n> > with the 56 bit relfilenode patch.\n> \n> That's a very promising result, but the question in my mind is how\n> much work would be required to bring this patch to a committable\n> state?\n\nThe biggest part clearly is to review the variable width integer patch. It's\nnot a large amount of code, but probably more complicated than average.\n\nOne complication there is that currently the patch assumes:\n * Note that this function, for efficiency, reads 8 bytes, even if the\n * variable integer is less than 8 bytes long. The buffer has to be\n * allocated sufficiently large to account for that fact. The maximum\n * amount of memory read is 9 bytes.\n\nWe could make a less efficient version without that assumption, but I think it\nmight be easier to just guarantee it in the xlog*.c case.\n\n\nUsing it in xloginsert.c is pretty darn simple, code-wise. 
xlogreader is a bit\nharder, although not for intrinsic reasons - the logic underlying\nCOPY_HEADER_FIELD seems unnecessarily complicated to me. The minimal solution\nwould likely be to just wrap the varint reads in another weird macro.\n\n\nLeaving the code issues themselves aside, one important thing would be to\nevaluate what the performance impacts of the varint encoding/decoding are as\npart of a \"full\" server. I suspect it'll vanish in the noise, but we'd need to\nvalidate that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 10 Oct 2022 14:22:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Mon, Oct 10, 2022 at 5:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Oct 10, 2022 at 5:16 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I have also executed my original test after applying these patches on\n> > top of the 56 bit relfilenode patch. So earlier we saw the WAL size\n> > increased by 11% (66199.09375 kB to 73906.984375 kB) and after this\n> > patch now the WAL generated is 58179.2265625. That means in this\n> > particular example this patch is reducing the WAL size by 12% even\n> > with the 56 bit relfilenode patch.\n>\n> That's a very promising result, but the question in my mind is how\n> much work would be required to bring this patch to a committable\n> state?\n\nRight, the results are promising. I have done some more testing with\nmake installcheck WAL size (fpw=off) and I have seen a similar gain\nwith this patch.\n\n1. Head: 272 MB\n2. 56 bit RelfileLocator: 285 MB\n3. 
56 bit RelfileLocator + this patch: 261 MB\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 14:03:28 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Wed, 5 Oct 2022 at 01:50, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-03 10:01:25 -0700, Andres Freund wrote:\n> > On 2022-10-03 08:12:39 -0400, Robert Haas wrote:\n> > > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I thought about trying to buy back some space elsewhere, and I think\n> > > that would be a reasonable approach to getting this committed if we\n> > > could find a way to do it. However, I don't see a terribly obvious way\n> > > of making it happen.\n> >\n> > I think there's plenty potential...\n>\n> I light dusted off my old varint implementation from [1] and converted the\n> RelFileLocator and BlockNumber from fixed width integers to varint ones. This\n> isn't meant as a serious patch, but an experiment to see if this is a path\n> worth pursuing.\n>\n> A run of installcheck in a cluster with autovacuum=off, full_page_writes=off\n> (for increased reproducability) shows a decent saving:\n>\n> master: 241106544 - 230 MB\n> varint: 227858640 - 217 MB\n\nI think a significant part of this improvement comes from the premise\nof starting with a fresh database. 
In that case DBOID\nwill consume 5B for a significant fraction of databases (anything with\nOID >=2^28).\n\nMy point being: I don't think that we should have different WAL\nperformance in databases which is dependent on which OID was assigned\nto that database.\nIn addition, this varlen encoding of relfilenode would mean that\nperformance would drop over time, as a relation's relfile locator is\nupdated to something with a wider number (through VACUUM FULL or other\nrelfilelocator cycling; e.g. re-importing a database). For maximum\nperformance, you'd have to tune your database to have the lowest\npossible database, namespace and relfilelocator numbers; which (in\nolder clusters) implies hacking into the catalogs - which seems like\nan antipattern.\n\nI would have much less issue with this if we had separate counters per\ndatabase (and approximately incremental dbOids), but that's not the\ncase right now.\n\n> The average record size goes from 102.7 to 95.7 bytes excluding the remaining\n> FPIs, 118.1 to 111.0 including FPIs.\n\nThat's quite promising.\n\n> There's plenty other spots that could be converted (e.g. the record length\n> which rarely needs four bytes), this is just meant as a demonstration.\n\nAgreed.\n\n> I used pg_waldump --stats for that range of WAL to measure the CPU overhead. A\n> profile does show pg_varint_decode_uint64(), but partially that seems to be\n> offset by the reduced amount of bytes to CRC. Maybe a ~2% overhead remains.\n>\n> That would be tolerable, I think, because waldump --stats pretty much doesn't\n> do anything with the WAL.\n>\n> But I suspect there's plenty of optimization potential in the varint\n> code. Right now it e.g. stores data as big endian, and the bswap instructions\n> do show up. 
And a lot of my bit-maskery could be optimized too.\n\nOne thing that comes to mind is that we will never see dbOid < 2^8\n(and rarely < 2^14, nor spcOid less than 2^8 for that matter), so\nwe'll probably waste at least one or two bits in the encoding of those\nvalues. That's not the end of the world, but it'd probably be better\nif we could improve on that - up to 6% of the field's disk usage would\nbe wasted on an always-on bit.\n\n----\n\nAttached is a prototype patchset that reduces the WAL record size in\nmany common cases. This is a prototype, as it fails tests due to a\nlocking issue in prepared_xacts that I have not been able to find the\nsource of yet. It also could use some more polishing, but the base\ncase seems quite good. I haven't yet run the numbers though...\n\n0001 - Extract xl_rminfo from xl_info\nSee [0] for more info as to why that's useful, the patch was pulled\nfrom there. It is mainly used to reduce the size of 0002; and mostly\nconsists of find-and-replace of rmgrs extracting their bits from\nxl_info.\n\n0002 - Rework XLogRecord\nThis makes many fields in the xlog header optional, reducing the size\nof many xlog records by several bytes. This implements the design I\nshared in my earlier message [1].\n\n0003 - Rework XLogRecordBlockHeader.\nThis patch could be applied on current head, and saves some bytes in\nper-block data. It potentially saves some bytes per registered\nblock/buffer in the WAL record (max 2 bytes for the first block, after\nthat up to 3). 
See the patch's commit message in the patch for\ndetailed information.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://postgr.es/m/CAEze2WgZti_Bgs-Aw3egsR5PJQpHcYZwZFCJND5MS-O_DX0-Hg%40mail.gmail.com\n[1] https://postgr.es/m/CAEze2WjOFzRzPMPYhH4odSa9OCF2XeZszE3jGJhJzrpdFmyLOw@mail.gmail.com", "msg_date": "Wed, 12 Oct 2022 22:05:30 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-12 22:05:30 +0200, Matthias van de Meent wrote:\n> On Wed, 5 Oct 2022 at 01:50, Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-10-03 10:01:25 -0700, Andres Freund wrote:\n> > > On 2022-10-03 08:12:39 -0400, Robert Haas wrote:\n> > > > On Fri, Sep 30, 2022 at 8:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > I thought about trying to buy back some space elsewhere, and I think\n> > > > that would be a reasonable approach to getting this committed if we\n> > > > could find a way to do it. However, I don't see a terribly obvious way\n> > > > of making it happen.\n> > >\n> > > I think there's plenty potential...\n> >\n> > I light dusted off my old varint implementation from [1] and converted the\n> > RelFileLocator and BlockNumber from fixed width integers to varint ones. This\n> > isn't meant as a serious patch, but an experiment to see if this is a path\n> > worth pursuing.\n> >\n> > A run of installcheck in a cluster with autovacuum=off, full_page_writes=off\n> > (for increased reproducability) shows a decent saving:\n> >\n> > master: 241106544 - 230 MB\n> > varint: 227858640 - 217 MB\n> \n> I think a signficant part of this improvement comes from the premise\n> of starting with a fresh database. 
tablespace OID will indeed most\n> likely be low, but database OID may very well be linearly distributed\n> if concurrent workloads in the cluster include updating (potentially\n> unlogged) TOASTed columns and the databases are not created in one\n> \"big bang\" but over the lifetime of the cluster. In that case DBOID\n> will consume 5B for a significant fraction of databases (anything with\n> OID >=2^28).\n> \n> My point being: I don't think that we should have different WAL\n> performance in databases which is dependent on which OID was assigned\n> to that database.\n\nTo me this is raising the bar to an absurd level. Some minor space usage\nincrease after oid wraparound and for very large block numbers isn't a huge\nissue - if you're in that situation you already have a huge amount of wal.\n\n\n> 0002 - Rework XLogRecord\n> This makes many fields in the xlog header optional, reducing the size\n> of many xlog records by several bytes. This implements the design I\n> shared in my earlier message [1].\n> \n> 0003 - Rework XLogRecordBlockHeader.\n> This patch could be applied on current head, and saves some bytes in\n> per-block data. It potentially saves some bytes per registered\n> block/buffer in the WAL record (max 2 bytes for the first block, after\n> that up to 3). See the patch's commit message in the patch for\n> detailed information.\n\nThe amount of complexity these two introduce seems quite substantial to\nme. Both from a maintenance and a runtime perspective. I think we'd be better
I think we'd be better\noff using building blocks like variable lengths encoded values than open\ncoding it in many places.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 12 Oct 2022 14:13:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Wed, 12 Oct 2022 at 23:13, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-12 22:05:30 +0200, Matthias van de Meent wrote:\n> > On Wed, 5 Oct 2022 at 01:50, Andres Freund <andres@anarazel.de> wrote:\n> > > I light dusted off my old varint implementation from [1] and converted the\n> > > RelFileLocator and BlockNumber from fixed width integers to varint ones. This\n> > > isn't meant as a serious patch, but an experiment to see if this is a path\n> > > worth pursuing.\n> > >\n> > > A run of installcheck in a cluster with autovacuum=off, full_page_writes=off\n> > > (for increased reproducability) shows a decent saving:\n> > >\n> > > master: 241106544 - 230 MB\n> > > varint: 227858640 - 217 MB\n> >\n> > I think a signficant part of this improvement comes from the premise\n> > of starting with a fresh database. tablespace OID will indeed most\n> > likely be low, but database OID may very well be linearly distributed\n> > if concurrent workloads in the cluster include updating (potentially\n> > unlogged) TOASTed columns and the databases are not created in one\n> > \"big bang\" but over the lifetime of the cluster. In that case DBOID\n> > will consume 5B for a significant fraction of databases (anything with\n> > OID >=2^28).\n> >\n> > My point being: I don't think that we should have different WAL\n> > performance in databases which is dependent on which OID was assigned\n> > to that database.\n>\n> To me this is raising the bar to an absurd level. 
Some minor space usage\n> increase after oid wraparound and for very large block numbers isn't a huge\n> issue - if you're in that situation you already have a huge amount of wal.\n\nI didn't want to block all varlen encoding, I just want to make clear\nthat I don't think it's great for performance testing and consistency\nacross installations if WAL size (and thus part of your performance)\nis dependent on which actual database/relation/tablespace combination\nyou're running your workload in.\n\nWith the 56-bit relfilenode, the size of a block reference would\nrealistically differ between 7 bytes and 23 bytes:\n\n- tblspc=0=1B\n  db=16386=3B\n  rel=797=2B (797 = 4 * default # of data relations in a fresh DB in PG14, + 1)\n  block=0=1B\n\nvs\n\n- tsp>=2^28 = 5B\n  dat>=2^28 =5B\n  rel>=2^49 =8B\n  block>=2^28 =5B\n\nThat's a difference of 16 bytes, of which only the block number can\nrealistically be directly influenced by the user (\"just don't have\nrelations larger than X blocks\").\n\nIf applied to Dilip's pgbench transaction data, that would imply a\nminimum per transaction wal usage of 509 bytes, and a maximum per\ntransaction wal usage of 609 bytes. That is nearly a 20% difference in\nWAL size based only on the location of your data, and I'm just not\ncomfortable with that. Users have little or zero control over the\ninternal IDs we assign to these fields, while it would affect\nperformance fairly significantly.\n\n(difference % between min/max wal size is unchanged (within 0.1%)\nafter accounting for record alignment)\n\n> > 0002 - Rework XLogRecord\n> > This makes many fields in the xlog header optional, reducing the size\n> > of many xlog records by several bytes. This implements the design I\n> > shared in my earlier message [1].\n> >\n> > 0003 - Rework XLogRecordBlockHeader.\n> > This patch could be applied on current head, and saves some bytes in\n> > per-block data. It potentially saves some bytes per registered\n> > block/buffer in the WAL record (max 2 bytes for the first block, after\n> > that up to 3). 
It potentially saves some bytes per registered\n> > block/buffer in the WAL record (max 2 bytes for the first block, after\n> > that up to 3). See the patch's commit message in the patch for\n> > detailed information.\n>\n> The amount of complexity these two introduce seems quite substantial to\n> me. Both from an maintenance and a runtime perspective. I think we'd be better\n> off using building blocks like variable lengths encoded values than open\n> coding it in many places.\n\nI guess that's true for length fields, but I don't think dynamic\nheader field presence (the 0002 rewrite, and the omission of\ndata_length in 0003) is that bad. We already have dynamic data\ninclusion through block ids 25x; I'm not sure why we couldn't do that\nmore compactly with bitfields as indicators instead (hence the dynamic\nheader size).\n\nAs for complexity, I think my current patchset is mostly complex due\nto a lack of tooling. Note that decoding makes common use of\nCOPY_HEADER_FIELD, which we don't really have an equivalent for in\nXLogRecordAssemble. I think the code for 0002 would improve\nsignificantly in readability if such construct would be available.\n\nTo reduce complexity in 0003, I could drop the 'repeat id'\noptimization, as that reduces the complexity significantly, at the\ncost of not saving that 1 byte per registered block after the first.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Fri, 14 Oct 2022 00:53:48 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Wed, Oct 12, 2022 at 5:13 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think a signficant part of this improvement comes from the premise\n> > of starting with a fresh database. 
tablespace OID will indeed most\n> > likely be low, but database OID may very well be linearly distributed\n> > if concurrent workloads in the cluster include updating (potentially\n> > unlogged) TOASTed columns and the databases are not created in one\n> > \"big bang\" but over the lifetime of the cluster. In that case DBOID\n> > will consume 5B for a significant fraction of databases (anything with\n> > OID >=2^28).\n> >\n> > My point being: I don't think that we should have different WAL\n> > performance in databases which is dependent on which OID was assigned\n> > to that database.\n>\n> To me this is raising the bar to an absurd level. Some minor space usage\n> increase after oid wraparound and for very large block numbers isn't a huge\n> issue - if you're in that situation you already have a huge amount of wal.\n\nI have to admit that I worried about the same thing that Matthias\nraises, more or less. But I don't know whether I'm right to be\nworried. A variable-length representation of any kind is essentially a\ngamble that values requiring fewer bytes will be more common than\nvalues requiring more bytes, and by enough to justify the overhead\nthat the method has. And, you want it to be more common for each\nindividual user, not just overall. For example, more people are going\nto have small relations than large ones, but nobody wants performance\nto drop off a cliff when the relation passes a certain size threshold.\nNow, it wouldn't drop off a cliff here, but what about someone with a\nreally big, append-only relation? Won't they just end up writing more\nto WAL than with the present system?\n\nMaybe not. 
They might still have some writes to relations other than\nthe very large, append-only relation, and then they could still win.\nAlso, if we assume that the overhead of the variable-length\nrepresentation is never more than 1 byte beyond what is needed to\nrepresent the underlying quantity in the minimal number of bytes, they\nare only going to lose if their relation is already more than half the\nmaximum theoretical size, and if that is the case, they are in danger\nof hitting the size limit anyway. You can argue that there's still a\nrisk here, but it doesn't seem like that bad of a risk.\n\nBut the same thing is not so obvious for, let's say, database OIDs.\nWhat if you just have one or a few databases, but due to the previous\nhistory of the cluster, their OIDs just happen to be big? Then you're\njust behind where you would have been without the patch. Granted, if\nthis happens to you, you will be in the minority, because most users\nare likely to have small database OIDs, but the fact that other people\nare writing less WAL on average isn't going to make you happy about\nwriting more WAL on average. And even for a user for which that\ndoesn't happen, it's not at all unlikely that the gains they see will\nbe less than what we see on a freshly-initdb'd database.\n\nSo I don't really know what the answer is here. I don't think this\ntechnique sucks, but I don't think it's necessarily a categorical win\nfor every case, either. And it even seems hard to reason about which\ncases are likely to be wins and which cases are likely to be losses.\n\n> > 0002 - Rework XLogRecord\n> > This makes many fields in the xlog header optional, reducing the size\n> > of many xlog records by several bytes. This implements the design I\n> > shared in my earlier message [1].\n> >\n> > 0003 - Rework XLogRecordBlockHeader.\n> > This patch could be applied on current head, and saves some bytes in\n> > per-block data. 
It potentially saves some bytes per registered\n> > block/buffer in the WAL record (max 2 bytes for the first block, after\n> > that up to 3). See the patch's commit message in the patch for\n> > detailed information.\n>\n> The amount of complexity these two introduce seems quite substantial to\n> me. Both from a maintenance and a runtime perspective. I think we'd be better\n> off using building blocks like variable lengths encoded values than open\n> coding it in many places.\n\nI agree that this looks pretty ornate as written, but I think there\nmight be some good ideas in here, too. It is also easy to reason about\nthis kind of thing at least in terms of space consumption. It's a bit\nharder to know how things will play out in terms of CPU cycles and\ncode complexity.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 17 Oct 2022 17:14:21 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "Hi,\n\nOn 2022-10-17 17:14:21 -0400, Robert Haas wrote:\n> I have to admit that I worried about the same thing that Matthias\n> raises, more or less. But I don't know whether I'm right to be\n> worried. A variable-length representation of any kind is essentially a\n> gamble that values requiring fewer bytes will be more common than\n> values requiring more bytes, and by enough to justify the overhead\n> that the method has. And, you want it to be more common for each\n> individual user, not just overall. For example, more people are going\n> to have small relations than large ones, but nobody wants performance\n> to drop off a cliff when the relation passes a certain size threshold.\n> Now, it wouldn't drop off a cliff here, but what about someone with a\n> really big, append-only relation? Won't they just end up writing more\n> to WAL than with the present system?\n\nPerhaps. 
But I suspect it'd be a very small increase because they'd be using\nbulk-insert paths in all likelihood anyway, if they managed to get to a very\nlarge relation. And even in that case, if we e.g. were to make the record size\nvariable length, they'd still pretty much never reach that and it'd be an\noverall win.\n\nThe number of people with that large relations, leaving partitioning aside\nwhich'd still benefit as each relation is smaller, strikes me as a very small\npercentage. And as you say, it's not like there's a cliff where everything\nstarts to be horrible.\n\n\n> Maybe not. They might still have some writes to relations other than\n> the very large, append-only relation, and then they could still win.\n> Also, if we assume that the overhead of the variable-length\n> representation is never more than 1 byte beyond what is needed to\n> represent the underlying quantity in the minimal number of bytes, they\n> are only going to lose if their relation is already more than half the\n> maximum theoretical size, and if that is the case, they are in danger\n> of hitting the size limit anyway. You can argue that there's still a\n> risk here, but it doesn't seem like that bad of a risk.\n\nAnother thing here is that I suspect we ought to increase our relation size\nbeyond 4 byte * blocksize at some point - and then we'll have to use variable\nencodings... Admittedly the amount of work needed to get there is substantial.\n\nSomewhat relatedly, I think we, very slowly, should move towards wider OIDs as\nwell. Not having to deal with oid wraparound will be a significant win\n(particularly for toast), but to keep the overhead reasonable, we're going to\nneed variable encodings.\n\n\n> But the same thing is not so obvious for, let's say, database OIDs.\n> What if you just have one or a few databases, but due to the previous\n> history of the cluster, their OIDs just happen to be big? Then you're\n> just behind where you would have been without the patch. 
Granted, if\n> this happens to you, you will be in the minority, because most users\n> are likely to have small database OIDs, but the fact that other people\n> are writing less WAL on average isn't going to make you happy about\n> writing more WAL on average. And even for a user for which that\n> doesn't happen, it's not at all unlikely that the gains they see will\n> be less than what we see on a freshly-initdb'd database.\n\nI agree that going for variable width encodings on the basis of the database\noid field alone would be an unconvincing proposition. But variably encoding\ndatabase oids when we already variably encode other fields seems like a decent\nbet. If you e.g. think of the 56-bit relfilenode field itself - obviously what\nI was thinking about in the first place - it's going to be a win much more\noften.\n\nTo really lose you'd not just have to have a large database oid, but also a\nlarge tablespace and relation oid and a huge block number...\n\n\n> So I don't really know what the answer is here. I don't think this\n> technique sucks, but I don't think it's necessarily a categorical win\n> for every case, either. And it even seems hard to reason about which\n> cases are likely to be wins and which cases are likely to be losses.\n\nTrue. I'm far less concerned than you or Matthias about increasing the size in\nrare cases as long as it wins in the majority of cases. But that doesn't mean\nevery case is easy to consider.\n\n\n> > > 0002 - Rework XLogRecord\n> > > This makes many fields in the xlog header optional, reducing the size\n> > > of many xlog records by several bytes. This implements the design I\n> > > shared in my earlier message [1].\n> > >\n> > > 0003 - Rework XLogRecordBlockHeader.\n> > > This patch could be applied on current head, and saves some bytes in\n> > > per-block data. It potentially saves some bytes per registered\n> > > block/buffer in the WAL record (max 2 bytes for the first block, after\n> > > that up to 3). 
See the patch's commit message in the patch for\n> > > detailed information.\n> >\n> > The amount of complexity these two introduce seems quite substantial to\n> > me. Both from a maintenance and a runtime perspective. I think we'd be better\n> > off using building blocks like variable lengths encoded values than open\n> > coding it in many places.\n> \n> I agree that this looks pretty ornate as written, but I think there\n> might be some good ideas in here, too.\n\nAgreed! Several of the ideas seem orthogonal to using variable encodings, so\nthis isn't really an either / or.\n\n\n> It is also easy to reason about this kind of thing at least in terms of\n> space consumption.\n\nHm, not for me, but...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 19 Oct 2022 12:21:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" }, { "msg_contents": "On Thu, Oct 20, 2022 at 12:51 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-10-17 17:14:21 -0400, Robert Haas wrote:\n> > I have to admit that I worried about the same thing that Matthias\n> > raises, more or less. But I don't know whether I'm right to be\n> > worried. A variable-length representation of any kind is essentially a\n> > gamble that values requiring fewer bytes will be more common than\n> > values requiring more bytes, and by enough to justify the overhead\n> > that the method has. And, you want it to be more common for each\n> > individual user, not just overall. For example, more people are going\n> > to have small relations than large ones, but nobody wants performance\n> > to drop off a cliff when the relation passes a certain size threshold.\n> > Now, it wouldn't drop off a cliff here, but what about someone with a\n> > really big, append-only relation? Won't they just end up writing more\n> > to WAL than with the present system?\n>\n> Perhaps. 
But I suspect it'd be a very small increase because they'd be using\n> bulk-insert paths in all likelihood anyway, if they managed to get to a very\n> large relation. And even in that case, if we e.g. were to make the record size\n> variable length, they'd still pretty much never reach that and it'd be an\n> overall win.\n\nI think the number of cases where we will reduce the WAL size will be\nfar more than the cases where it will slightly increase the size. And\nalso the number of bytes we save in winning cases is far bigger than\nthe number of bytes we add. So IMHO it seems like an overall win,\nat least from the WAL size reduction pov. Do we have any numbers on\nhow much overhead it adds for encoding/decoding?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 20 Oct 2022 14:10:50 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: problems with making relfilenodes 56-bits" } ]
[ { "msg_contents": "Hi,\n\nThe max size for the shmem hash table name is SHMEM_INDEX_KEYSIZE - 1.\nbut when the caller uses a longer hash table name, it doesn't report any error, instead\nit just uses the first SHMEM_INDEX_KEYSIZE -1 chars as the hash table name.\n\nI created some shmem hash tables with the same prefix which was longer than\nSHMEM_INDEX_KEYSIZE - 1, and the size of those hash tables were the same,\nthen only one hash table was created. But I thought those hash tables were created\nsuccessfully.\n\nI know this is a corner case, but it's difficult to figure it out when run into it. So I add\nan assertion to prevent it.\n\n\nThanks,\nXiaoran", "msg_date": "Thu, 29 Sep 2022 01:37:59 +0000", "msg_from": "Xiaoran Wang <wxiaoran@vmware.com>", "msg_from_op": true, "msg_subject": "[patch] Adding an assertion to report too long hash table name" }, { "msg_contents": "LGTM\n+1\n\nOn Thu, Sep 29, 2022 at 9:38 AM Xiaoran Wang <wxiaoran@vmware.com> wrote:\n>\n> Hi,\n>\n> The max size for the shmem hash table name is SHMEM_INDEX_KEYSIZE - 1.\n> but when the caller uses a longer hash table name, it doesn't report any error, instead\n> it just uses the first SHMEM_INDEX_KEYSIZE -1 chars as the hash table name.\n>\n> I created some shmem hash tables with the same prefix which was longer than\n> SHMEM_INDEX_KEYSIZE - 1, and the size of those hash tables were the same,\n> then only one hash table was created. But I thought those hash tables were created\n> successfully.\n>\n> I know this is a corner case, but it's difficult to figure it out when run into it. 
So I add\n> an assertion to prevent it.\n>\n>\n> Thanks,\n> Xiaoran\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Thu, 29 Sep 2022 16:09:51 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] Adding an assertion to report too long hash table name" }, { "msg_contents": "Hi,\n\n\nOn Sep 29, 2022, 09:38 +0800, Xiaoran Wang <wxiaoran@vmware.com>, wrote:\n> Hi,\n> The max size for the shmem hash table name is SHMEM_INDEX_KEYSIZE - 1. but when the caller uses a longer hash table name, it doesn't report any error, instead it just uses the first SHMEM_INDEX_KEYSIZE -1 chars as the hash table name.\n> I created some shmem hash tables with the same prefix which was longer than SHMEM_INDEX_KEYSIZE - 1, and the size of those hash tables were the same, then only one hash table was created. But I thought those hash tables were created successfully.\n> I know this is a corner case, but it's difficult to figure it out when run into it. So I add an assertion to prevent it.\n>\n> Thanks, Xiaoran\nSeems Postgres doesn’t have a case that strlen(name) >= SHMEM_INDEX_KEYSIZE(48).\nThe max length of name I found is 29:\n```\nShmemInitHash(\"Shared Buffer Lookup Table\"\n\n```\nBut it will help for other Databases built on Postgres or later Postgres in case of forgetting to update SHMEM_INDEX_KEYSIZE\nwhen new shmem added with a name longer than current SHMEM_INDEX_KEYSIZE.\nAnd we don’t have such assertion now.\nSo, +1 for the patch.\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 7 Nov 2022 17:04:18 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] Adding an assertion to report too long hash table\n name" } ]
[ { "msg_contents": "We originally did this in 91e9e89dc, but a memory leak was discovered\nas I neglected to pfree the datum which is freshly allocated in\ntuplesort_getdatum. Because that was discovered late in the PG15\ncycle, we opted to just disable the datum sort optimisation for byref\ntypes in 3a5817695.\n\nAs was mentioned in [1], it looks like we could really use a version\nof tuplesort_getdatum which does not palloc a new Datum. nodeSort.c,\nwhen calling tuplesort_gettupleslot passes copy==false, so it would\nmake sense if the datum sort variation didn't do any copying either.\n\nIn the attached patch, I've added a function named\ntuplesort_getdatum_nocopy() which is the same as tuplesort_getdatum()\nonly without the datumCopy(). I opted for the new function rather than\na new parameter in the existing function just to reduce branching and\nadditional needless overhead.\n\nI also looked at the tuplesort_getdatum() call inside\nprocess_ordered_aggregate_single() and made a few changes there so we\ndon't needlessly perform a datumCopy() when we skip a Datum due to\nfinding it the same as the previous Datum in a DISTINCT aggregate\nsituation.\n\nI was also looking at mode_final(). 
Perhaps that could do with the\nsame treatment, I just didn't touch it in the attached patch.\n\nA quick performance test with:\n\ncreate table t1 (a varchar(32) not null, b varchar(32) not null);\ninsert into t1 select md5((x%10)::text),md5((x%10)::text) from\ngenerate_Series(1,1000000)x;\nvacuum freeze t1;\ncreate index on t1(a);\n\nYields a small speedup for the DISTINCT aggregate case.\n\nwork_mem = 256MB\nquery = select max(distinct a), max(distinct b) from t1;\n\nMaster:\nlatency average = 313.197 ms\n\nPatched:\nlatency average = 304.335 ms (about 3% faster)\n\nThe Datum sort in nodeSort.c is more impressive.\n\nquery = select b from t1 order by b offset 1000000;\n\nMaster:\nlatency average = 344.763 ms\n\nPatched:\nlatency average = 268.374 ms (about 28% faster)\n\nI'll add this to the November CF\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvqS6wC5U==k9Hd26E4EQXH3QR67-T4=Q1rQ36NGvjfVSg@mail.gmail.com", "msg_date": "Thu, 29 Sep 2022 18:12:06 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Have nodeSort.c use datum sorts single-value byref types" }, { "msg_contents": "On Thu, 29 Sept 2022 at 18:12, David Rowley <dgrowleyml@gmail.com> wrote:\n> In the attached patch, I've added a function named\n> tuplesort_getdatum_nocopy() which is the same as tuplesort_getdatum()\n> only without the datumCopy(). I opted for the new function rather than\n> a new parameter in the existing function just to reduce branching and\n> additional needless overhead.\n\nPer what was said over on [1], I've adjusted the patch to just add a\n'copy' parameter to tuplesort_getdatum() instead of adding the\ntuplesort_getdatum_nocopy() function.\n\nI also adjusted some code in heapam_index_validate_scan() to pass\ncopy=false to tuplesort_getdatum(). The datum in question here is a\nTID type, so this really only saves a datumCopy() / pfree on 32-bit\nsystems. 
I wasn't too interested in speeding 32-bit systems up with\nthis, it was more a case of being able to remove the #ifndef\nUSE_FLOAT8_BYVAL / pfree code.\n\nI think this is a fairly trivial patch, so if nobody objects, I plan\nto push it in the next few days.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/65629.1664460603%40sss.pgh.pa.us", "msg_date": "Wed, 26 Oct 2022 23:35:37 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Have nodeSort.c use datum sorts single-value byref types" }, { "msg_contents": "On Wed, 26 Oct 2022 at 23:35, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think this is a fairly trivial patch, so if nobody objects, I plan\n> to push it in the next few days.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Fri, 28 Oct 2022 09:26:00 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Have nodeSort.c use datum sorts single-value byref types" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile running `make check LANG=C` with a 32-bit virtual machine,\nI found that it failed at \"aggregates\". PSA the a1b3bca1_regression.diffs.\nIIUC that part has been added by db0d67db. \nI checked out the source, tested, and got the same result. PSA the db0d67db_regression.diffs\n\nI'm not sure about it, but is it expected behavior? I know that we do not have to\nconsider \"row\" ordering.\n\nThe following shows the environment. Please tell me if more information is needed.\n\nOS: RHEL 6.10 server \nArch: i686\nGcc: 4.4.7\n\n$ uname -a\nLinux VMXXXXX 2.6.32-754.41.2.el6.i686 #1 SMP Sat Jul 10 04:21:20 EDT 2021 i686 i686 i386 GNU/Linux\n\nConfigure option: --enable-cassert --enable-debug --enable-tap-tests\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED", "msg_date": "Thu, 29 Sep 2022 06:29:58 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n> While running `make check LANG=C` with a 32-bit virtual machine,\n> I found that it failed at \"aggregates\".\n\nHmm, we're not seeing any such failures in the buildfarm's 32-bit\nanimals, so there must be some additional condition needed to make\nit happen. 
Can you be more specific?\n\nHmm, I was not sure about additional conditions, sorry.\nI could reproduce with followings steps: \n\n$ git clone https://github.com/postgres/postgres.git\n$ cd postgres\n$ ./configure --enable-cassert --enable-debug\n$ make -j2\n$ make check LANG=C\n\n-> aggregates ... FAILED 3562 ms\n\n\n\n\nThe hypervisor of the virtual machine is \" VMware vSphere 7.0\"\n\nAnd I picked another information related with the machine.\nCould you find something?\n\n```\n\npg_config]$ ./pg_config \n...\nCONFIGURE = '--enable-cassert' '--enable-debug'\nCC = gcc -std=gnu99\nCPPFLAGS = -D_GNU_SOURCE\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\nCFLAGS_SL = -fPIC\nLDFLAGS = -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags\nLDFLAGS_EX = \nLDFLAGS_SL = \nLIBS = -lpgcommon -lpgport -lz -lreadline -lrt -ldl -lm \nVERSION = PostgreSQL 16devel\n\n$ locale\nLANG=C\n...\n\n$ arch \ni686\n\n\n$cat /proc/cpuinfo \nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 85\nmodel name : Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz\nstepping : 7\nmicrocode : 83898371\ncpu MHz : 2394.374\ncache size : 36608 KB\nphysical id : 0\nsiblings : 1\ncore id : 0\ncpu cores : 1\napicid : 0\ninitial apicid : 0\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 22\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss nx pdpe1gb rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch arat xsaveopt ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 invpcid avx512f rdseed adx avx512cd md_clear flush_l1d 
arch_capabilities\nbogomips : 4788.74\nclflush size : 64\ncache_alignment : 64\naddress sizes : 43 bits physical, 48 bits virtual\npower management:\n\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 85\nmodel name : Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz\nstepping : 7\nmicrocode : 83898371\ncpu MHz : 2394.374\ncache size : 36608 KB\nphysical id : 2\nsiblings : 1\ncore id : 0\ncpu cores : 1\napicid : 2\ninitial apicid : 2\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 22\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss nx pdpe1gb rdtscp lm constant_tsc arch_perfmon xtopology tsc_reliable nonstop_tsc unfair_spinlock eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch arat xsaveopt ssbd ibrs ibpb stibp fsgsbase bmi1 avx2 smep bmi2 invpcid avx512f rdseed adx avx512cd md_clear flush_l1d arch_capabilities\nbogomips : 4788.74\nclflush size : 64\ncache_alignment : 64\naddress sizes : 43 bits physical, 48 bits virtual\npower management\n```\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Fri, 30 Sep 2022 02:39:40 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n> Hmm, I was not sure about additional conditions, sorry.\n> I could reproduce with followings steps: \n\nI tried this on a 32-bit VM with gcc 11.3, but couldn't reproduce.\nYou said earlier\n\n>> OS: RHEL 6.10 server \n>> Arch: i686\n>> Gcc: 4.4.7\n\nThat is an awfully old compiler; I fear I no longer have anything\ncomparable on a working platform.\n\nThe most likely theory, I think, is that that compiler is 
generating\nslightly different floating-point code causing different plans to\nbe costed slightly differently than what the test case is expecting.\nProbably, the different orderings of the keys in this test case have\nexactly the same cost, or almost exactly, so that different roundoff\nerror could be enough to change the selected plan.\n\nThis probably doesn't have a lot of real-world impact, but it's\nstill annoying on a couple of grounds. Failing regression isn't\nnice, and also this suggests that db0d67db2 is causing us to waste\ntime considering multiple plans with effectively equal costs.\nMaybe that code needs to filter a little harder.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Sep 2022 12:13:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "Hi,\n\nOn 2022-09-30 12:13:11 -0400, Tom Lane wrote:\n> \"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com> writes:\n> > Hmm, I was not sure about additional conditions, sorry.\n> > I could reproduce with followings steps: \n> \n> I tried this on a 32-bit VM with gcc 11.3, but couldn't reproduce.\n> You said earlier\n> \n> >> OS: RHEL 6.10 server \n> >> Arch: i686\n> >> Gcc: 4.4.7\n> \n> That is an awfully old compiler; I fear I no longer have anything\n> comparable on a working platform.\n> \n> The most likely theory, I think, is that that compiler is generating\n> slightly different floating-point code causing different plans to\n> be costed slightly differently than what the test case is expecting.\n> Probably, the different orderings of the keys in this test case have\n> exactly the same cost, or almost exactly, so that different roundoff\n> error could be enough to change the selected plan.\n\nYea. 
I suspect that's because that compiler version doesn't have\n-fexcess-precision=standard:\n\n> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\n\nIt's possible one could work around the issue with -msse -mfpmath=sse instead\nof -fexcess-precision=standard. \n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 30 Sep 2022 09:35:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "I wrote:\n> The most likely theory, I think, is that that compiler is generating\n> slightly different floating-point code causing different plans to\n> be costed slightly differently than what the test case is expecting.\n> Probably, the different orderings of the keys in this test case have\n> exactly the same cost, or almost exactly, so that different roundoff\n> error could be enough to change the selected plan.\n\nI added some debug printouts to get_cheapest_group_keys_order()\nand verified that in the two problematic queries, there are two\ndifferent orderings that have (on my machine) exactly equal lowest\ncost. So the code picks the first of those and ignores the second.\nDifferent roundoff error would be enough to make it do something\nelse.\n\nI find this problematic because \"exactly equal\" costs are not going\nto be unusual. That's because the values that cost_sort_estimate\nrelies on are, sadly, just about completely fictional. It's expecting\nthat it can get a good cost estimate based on:\n\n* procost. In case you hadn't noticed, this is going to be 1 for\njust about every function we might be considering here.\n\n* column width. This is either going to be a constant (e.g. 4\nfor integers) or, again, largely fictional. 
The logic for\nconverting widths to cost multipliers adds yet another layer\nof debatability.\n\n* numdistinct estimates. Sometimes we know what we're talking\nabout there, but often we don't.\n\nSo what I'm afraid we are dealing with here is usually going to\nbe garbage in, garbage out. And we're expending an awful lot\nof code and cycles to arrive at these highly questionable choices.\n\nGiven the previous complaints about db0d67db2, I wonder if it's not\nmost prudent to revert it. I doubt we are going to get satisfactory\nbehavior out of it until there's fairly substantial improvements in\nall these underlying estimates.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Sep 2022 12:57:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "I wrote:\n> Given the previous complaints about db0d67db2, I wonder if it's not\n> most prudent to revert it. I doubt we are going to get satisfactory\n> behavior out of it until there's fairly substantial improvements in\n> all these underlying estimates.\n\nAfter spending some more time looking at the code, I think that that\nis something we absolutely have to discuss. I already complained at\n[1] about how db0d67db2 made very significant changes in sort cost\nestimation behavior, which seem likely to result in significant\nuser-visible plan changes that might or might not be for the better.\nBut I hadn't read any of the code at that point. Now I have, and\nfrankly it's not ready for prime time. Beyond the question of\nwhether we have sufficiently accurate input values, I see these\nissues in and around compute_cpu_sort_cost():\n\n1. The algorithm is said to be based on Sedgewick & Bentley 2002 [2].\nI have the highest regard for those two gentlemen, so I'm quite\nprepared to believe that their estimate of the number of comparisons\nused by Quicksort is good. 
However, the expression given in our\ncomments:\n\n *\tlog(N! / (X1! * X2! * ..)) ~ sum(Xi * log(N/Xi))\n\ndoesn't look much like anything they wrote. More, what we're actually\ndoing is\n\n * We assume all Xi the same because now we don't have any estimation of\n * group sizes, we have only know the estimate of number of groups (distinct\n * values). In that case, formula becomes:\n *\tN * log(NumberOfGroups)\n\nThat's a pretty drastic simplification. No argument is given as to why\nthat's still reliable enough to be useful for the purposes to which this\ncode tries to put it --- especially when you consider that real-world\ndata is more likely to follow Zipf's law than have uniform group sizes.\nIf you're going to go as far as doing this:\n\n * For multi-column sorts we need to estimate the number of comparisons for\n * each individual column - for example with columns (c1, c2, ..., ck) we\n * can estimate that number of comparisons on ck is roughly\n *\tncomparisons(c1, c2, ..., ck) / ncomparisons(c1, c2, ..., c(k-1))\n\nyou'd better pray that your number-of-comparisons estimates are pretty\ndarn good, or what you're going to get out is going to be mostly\nfiction.\n\n2. Sedgewick & Bentley analyzed a specific version of Quicksort,\nwhich is ... um ... not the version we are using. It doesn't look\nto me like the choice of partitioning element is the same. Maybe\nthat doesn't matter much in the end, but there's sure no discussion\nof the point in this patch.\n\nSo at this point I've lost all faith in the estimates being meaningful\nat all. And that's assuming that the simplified algorithm is\nimplemented accurately, which it is not:\n\n3. totalFuncCost is started off at 1.0. Surely that should be zero?\nIf not, at least a comment to justify it would be nice.\n\n4. The code around the add_function_cost call evidently wants to carry\nthe procost lookup result from one column to the next, because it\nskips the lookup when prev_datatype == em->em_datatype. 
However, the\nvalue of funcCost isn't carried across columns, because it's local to\nthe loop. The effect of this is that anyplace where adjacent GROUP BY\ncolumns are of the same datatype, we'll use the fixed 1.0 value of\nfuncCost instead of looking up the real procost. Admittedly, since\nthe real procost is probably also 1.0, this might not mean much in\npractice. Nonetheless it's broken code. (Oh, btw: I doubt that\nusing add_function_cost rather than raw procost is of any value\nwhatsoever if you're just going to pass it a NULL node tree.)\n\n5. I'm pretty dubious about the idea that we can use the rather-random\nfirst element of the EquivalenceClass to determine the datatype that\nwill be compared, much less the average widths of the columns. It's\nentirely possible for an EC to contain both int4 and int8 vars, or\ntext vars of substantially different average widths. I think we\nreally need to be going back to the original GroupClauses and looking\nat the variables named there.\n\n6. Worse than that, we're also using the first element of the\nEquivalenceClass to calculate the number of groups of this sort key.\nThis is FLAT OUT WRONG, as certainly different EC members can have\nvery different stats.\n\n7. The code considers that presorted-key columns do not add to the\ncomparison costs, yet the comment about it claims the opposite:\n\n /*\n * Presorted keys are not considered in the cost above, but we still\n * do have to compare them in the qsort comparator. So make sure to\n * factor in the cost in that case.\n */\n if (i >= nPresortedKeys)\n {\n\nI'm not entirely sure whether the code is broken or the comment is,\nbut at least one of them is. I'm also pretty confused about why\nwe still add such columns' comparison functions to the running\ntotalFuncCost if we think they're not sorted on.\n\n8. 
In the case complained of to start this thread, we're unable\nto perceive any sort-cost difference between \"p, d, c, v\" and\n\"p, c, d, v\", which is a little surprising because that test case\nsets up c with twice as many distinct values as d. Other things\nbeing equal (which they are, because both columns are int4), surely\nthe latter key ordering should be favored in hopes of reducing the\nnumber of times we have to compare the third column. But it's not.\nI think that this can probably be blamed on the early-exit condition\nat the bottom of the loop:\n\n /*\n * Once we get single-row group, it means tuples in the group are\n * unique and we can skip all remaining columns.\n */\n if (tuplesPerPrevGroup <= 1.0)\n break;\n\nOrdering on p already gets us down to 2 tuples per group, so pretty\nmuch any of the other columns as second grouping column will compute\na next group size of 1, and then we don't consider columns beyond that.\n\n9. The is_fake_var() hackery is pretty sad. We should have found a\nbetter solution than that. Maybe estimate_num_groups() needs more\nwork.\n\n10. As I already mentioned, get_width_cost_multiplier() doesn't appear\nto have any foundation in reality; or if it does, the comments sure\nprovide no justification for these particular equations rather than\nsome other ones. The shakiness of the logic can be inferred\nimmediately from the fact that the header comment is fundamentally\nconfused about what it's doing:\n * Return value is in cpu_operator_cost units.\nNo it isn't, it's a pure ratio.\n\n\nIn short, I think the claim that this code provides better sort cost\nestimates than we had before is quite unjustified. 
Maybe it could\nget there eventually, but I do not want to ship v15 with this.\nI think we ought to revert all the changes around cost_sort.\n\nPerhaps we could salvage the GROUP BY changes by just ordering the\ncolumns by decreasing number of groups, which is the only component of\nthe current cost estimation that I think has any detectable connection\nto reality. But I suspect the RMT will favor just reverting the\nwhole thing for v15.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3242058.1659563057%40sss.pgh.pa.us\n[2] The URL given in the code doesn't work anymore, but this does:\nhttps://sedgewick.io/wp-content/uploads/2022/03/2002QuicksortIsOptimal.pdf\n\n\n", "msg_date": "Fri, 30 Sep 2022 15:40:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "I wrote:\n> So at this point I've lost all faith in the estimates being meaningful\n> at all.\n\nI spent some time today looking into the question of what our qsort\ncode actually does. I wrote a quick-n-dirty little test module\n(attached) to measure the number of comparisons qsort really uses\nfor assorted sample inputs. The results will move a bit from run\nto run because of randomization, but the average counts should be\npretty stable I think. 
I got results like these:\n\nregression=# create temp table data as\nselect * from qsort_comparisons(100000);\nSELECT 10\nregression=# select n * log(groups)/log(2) as est, 100*(n * log(groups)/log(2) - avg_cmps)/avg_cmps as pct_err, * from data;\n est | pct_err | n | groups | avg_cmps | min_cmps | max_cmps | note \n--------------------+--------------------+--------+--------+----------+----------+----------+------------------------\n 0 | -100 | 100000 | 1 | 99999 | 99999 | 99999 | all values the same\n 1660964.0474436812 | -5.419880052975057 | 100000 | 100000 | 1756145 | 1722569 | 1835627 | all values distinct\n 100000 | -33.33911061041376 | 100000 | 2 | 150013 | 150008 | 150024 | 2 distinct values\n 400000 | 11.075628618635713 | 100000 | 16 | 360115 | 337586 | 431376 | 16 distinct values\n 600000 | 8.369757612975473 | 100000 | 64 | 553660 | 523858 | 639492 | 64 distinct values\n 800000 | 4.770461016221087 | 100000 | 256 | 763574 | 733898 | 844450 | 256 distinct values\n 1000000 | 1.5540821186618827 | 100000 | 1024 | 984697 | 953830 | 1111384 | 1024 distinct values\n 1457116.0087927429 | 41.97897366170798 | 100000 | 24342 | 1026290 | 994694 | 1089503 | Zipfian, parameter 1.1\n 1150828.9986140348 | 158.28880094758154 | 100000 | 2913 | 445559 | 426575 | 511214 | Zipfian, parameter 1.5\n 578135.9713524659 | 327.6090378488971 | 100000 | 55 | 135202 | 132541 | 213467 | Zipfian, parameter 3.0\n(10 rows)\n\nSo \"N * log(NumberOfGroups)\" is a pretty solid estimate for\nuniformly-sized groups ... except when NumberOfGroups = 1 ... but it\nis a significant overestimate if the groups aren't uniformly sized.\nNow a factor of 2X or 3X isn't awful --- we're very happy to accept\nestimates only that good in other contexts --- but I still wonder\nwhether this is reliable enough to justify the calculations being\ndone in compute_cpu_sort_cost. 
I'm still very afraid that the\nconclusions we're drawing about the sort costs for different column\norders are mostly junk.\n\nIn any case, something's got to be done about the failure at\nNumberOfGroups = 1. Instead of this:\n\n correctedNGroups = Max(1.0, ceil(correctedNGroups));\n per_tuple_cost += totalFuncCost * LOG2(correctedNGroups);\n\nI suggest\n\n if (correctedNGroups > 1.0)\n per_tuple_cost += totalFuncCost * LOG2(correctedNGroups);\n else /* Sorting N all-alike tuples takes only N-1 comparisons */\n per_tuple_cost += totalFuncCost;\n\n(Note that the ceil() here is a complete waste, because all paths leading\nto this produced integral estimates already. Even if they didn't, I see\nno good argument why ceil() makes the result better.)\n\nI'm still of the opinion that we need to revert this code for now.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 01 Oct 2022 15:13:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On Sat, Oct 1, 2022 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I spent some time today looking into the question of what our qsort\n> code actually does. I wrote a quick-n-dirty little test module\n> (attached) to measure the number of comparisons qsort really uses\n> for assorted sample inputs.\n\nReminds me of the other sort testing program that you wrote when the\nB&M code first went in:\n\nhttps://www.postgresql.org/message-id/18732.1142967137@sss.pgh.pa.us\n\nThis was notable for recreating the tests from the original B&M paper.\nThe paper uses various types of test inputs with characteristics that\nwere challenging to the implementation and worth specifically getting\nright. 
For example, \"saw tooth\" input.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 1 Oct 2022 12:26:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Reminds me of the other sort testing program that you wrote when the\n> B&M code first went in:\n> https://www.postgresql.org/message-id/18732.1142967137@sss.pgh.pa.us\n\nHa, I'd totally forgotten about that ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 01 Oct 2022 15:50:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On 10/1/22 3:13 PM, Tom Lane wrote:\r\n\r\n> I'm still of the opinion that we need to revert this code for now.\r\n\r\n[RMT hat, but speaking just for me] reading through Tom's analysis, this \r\nseems to be the safest path forward. I have a few questions to better \r\nunderstand:\r\n\r\n1. How invasive would the revert be?\r\n2. Are the other user-visible items that would be impacted?\r\n3. Is there an option of disabling the feature by default viable?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sat, 1 Oct 2022 16:58:28 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 10/1/22 3:13 PM, Tom Lane wrote:\n>> I'm still of the opinion that we need to revert this code for now.\n\n> [RMT hat, but speaking just for me] reading through Tom's analysis, this \n> seems to be the safest path forward. I have a few questions to better \n> understand:\n\n> 1. How invasive would the revert be?\n\nI've just finished constructing a draft full-reversion patch. 
I'm not\nconfident in this yet; in particular, teasing it apart from 1349d2790\n(\"Improve performance of ORDER BY / DISTINCT aggregates\") was fairly\nmessy. I need to look through the regression test changes and make\nsure that none are surprising. But this is approximately the right\nscope if we rip it out entirely.\n\nI plan to have a look tomorrow at the idea of reverting only the cost_sort\nchanges, and rewriting get_cheapest_group_keys_order() to just sort the\nkeys by decreasing numgroups estimates as I suggested upthread. That\nmight be substantially less messy, because of fewer interactions with\n1349d2790.\n\n> 2. Are the other user-visible items that would be impacted?\n\nSee above. (But note that 1349d2790 is HEAD-only, not in v15.)\n\n> 3. Is there an option of disabling the feature by default viable?\n\nNot one that usefully addresses my concerns. The patch did add an\nenable_group_by_reordering GUC which we could change to default-off,\nbut it does nothing about the cost_sort behavioral changes. I would\nbe a little inclined to rip out that GUC in either case, because\nI doubt that we need it with the more restricted change.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 01 Oct 2022 18:57:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On 10/1/22 6:57 PM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> On 10/1/22 3:13 PM, Tom Lane wrote:\r\n>>> I'm still of the opinion that we need to revert this code for now.\r\n> \r\n>> [RMT hat, but speaking just for me] reading through Tom's analysis, this\r\n>> seems to be the safest path forward. I have a few questions to better\r\n>> understand:\r\n> \r\n>> 1. How invasive would the revert be?\r\n> \r\n> I've just finished constructing a draft full-reversion patch. 
I'm not\r\n> confident in this yet; in particular, teasing it apart from 1349d2790\r\n> (\"Improve performance of ORDER BY / DISTINCT aggregates\") was fairly\r\n> messy. I need to look through the regression test changes and make\r\n> sure that none are surprising. But this is approximately the right\r\n> scope if we rip it out entirely.\r\n> \r\n> I plan to have a look tomorrow at the idea of reverting only the cost_sort\r\n> changes, and rewriting get_cheapest_group_keys_order() to just sort the\r\n> keys by decreasing numgroups estimates as I suggested upthread. That\r\n> might be substantially less messy, because of fewer interactions with\r\n> 1349d2790.\r\n\r\nMaybe this leads to a follow-up question of do we continue to improve \r\nwhat is in HEAD while reverting the code in v15 (particularly if it's \r\neasier to do it that way)?\r\n\r\nI know we're generally not in favor of that approach, but wanted to ask.\r\n\r\n>> 2. Are the other user-visible items that would be impacted?\r\n> \r\n> See above. (But note that 1349d2790 is HEAD-only, not in v15.)\r\n\r\nWith the RMT hat, I'm hyperfocused on PG15 stability. We have plenty of \r\ntime time to stabilize head for v16 :)\r\n\r\n> \r\n>> 3. Is there an option of disabling the feature by default viable?\r\n> \r\n> Not one that usefully addresses my concerns. The patch did add an\r\n> enable_group_by_reordering GUC which we could change to default-off,\r\n> but it does nothing about the cost_sort behavioral changes. I would\r\n> be a little inclined to rip out that GUC in either case, because\r\n> I doubt that we need it with the more restricted change.\r\n\r\nUnderstood.\r\n\r\nI'll wait for your analysis of reverting only the cost_sort changes etc. \r\nmentioned above.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 2 Oct 2022 12:36:52 -0400", "msg_from": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 10/1/22 6:57 PM, Tom Lane wrote:\n>> I plan to have a look tomorrow at the idea of reverting only the cost_sort\n>> changes, and rewriting get_cheapest_group_keys_order() to just sort the\n>> keys by decreasing numgroups estimates as I suggested upthread. That\n>> might be substantially less messy, because of fewer interactions with\n>> 1349d2790.\n\n> Maybe this leads to a follow-up question of do we continue to improve \n> what is in HEAD while reverting the code in v15 (particularly if it's \n> easier to do it that way)?\n\nNo. I see no prospect that the cost_sort code currently in HEAD is going\nto become shippable in the near future. Quite aside from the plain bugs,\nI think it's based on untenable assumptions about how accurately we can\nestimate the CPU costs associated with different sort-column orders.\n\nHaving said that, it's certainly possible that we should do something\ndifferent in HEAD than in v15. We could do the rewrite I suggest above\nin HEAD while doing a straight-up revert in v15. I've been finding that\n1349d2790 is sufficiently entwined with this code that the patches would\nlook significantly different in any case, so that might be the most\nreliable way to proceed in v15.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Oct 2022 13:12:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\n> On Oct 2, 2022, at 1:12 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> \"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n>>> On 10/1/22 6:57 PM, Tom Lane wrote:\n>>> I plan to have a look tomorrow at the idea of reverting only the cost_sort\n>>> changes, and rewriting get_cheapest_group_keys_order() to just sort the\n>>> keys by decreasing numgroups estimates as I suggested upthread. That\n>>> might be substantially less messy, because of fewer interactions with\n>>> 1349d2790.\n> \n>> Maybe this leads to a follow-up question of do we continue to improve \n>> what is in HEAD while reverting the code in v15 (particularly if it's \n>> easier to do it that way)?\n> \n> No. I see no prospect that the cost_sort code currently in HEAD is going\n> to become shippable in the near future. Quite aside from the plain bugs,\n> I think it's based on untenable assumptions about how accurately we can\n> estimate the CPU costs associated with different sort-column orders.\n\nOK.\n\n> Having said that, it's certainly possible that we should do something\n> different in HEAD than in v15. We could do the rewrite I suggest above\n> in HEAD while doing a straight-up revert in v15. I've been finding that\n> 1349d2790 is sufficiently entwined with this code that the patches would\n> look significantly different in any case, so that might be the most\n> reliable way to proceed in v15.\n\nOK. For v15 I am heavily in favor for the least risky approach given the\npoint we are at in the release cycle. The RMT hasn’t met yet to discuss,\nbut from re-reading this thread again, I would recommend to revert\n(i.e. the “straight up revert”).\n\nI’m less opinionated on the approach for what’s in HEAD, but the rewrite\nyou suggest sounds promising.\n\nThanks,\n\nJonathan\n\n", "msg_date": "Sun, 2 Oct 2022 13:32:43 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> OK. 
For v15 I am heavily in favor for the least risky approach given the\n> point we are at in the release cycle. The RMT hasn’t met yet to discuss,\n> but from re-reading this thread again, I would recommend to revert\n> (i.e. the “straight up revert”).\n\nOK by me.\n\n> I’m less opinionated on the approach for what’s in HEAD, but the rewrite\n> you suggest sounds promising.\n\nI'm just about to throw up my hands and go for reversion in both branches,\nbecause I'm now discovering that the code I'd hoped to salvage in\npathkeys.c (get_useful_group_keys_orderings and related) has its very own\nbugs. It's imagining that it can rearrange a PathKeys list arbitrarily\nand then rearrange the GROUP BY SortGroupClause list to match, but that's\neasier said than done, for a couple of different reasons. (I now\nunderstand why db0d67db2 made a cowboy hack in get_eclass_for_sort_expr ...\nbut it's still a cowboy hack with difficult-to-foresee side effects.)\nThere are other things in there that make it painfully obvious that\nthis code wasn't very carefully reviewed, eg XXX comments that should\nhave been followed up and were not, or a reference to a nonexistent\n\"debug_group_by_match_order_by\" flag (maybe that was a GUC at some point?).\n\nOn top of that, it's producing several distinct pathkey orderings for\nthe caller to try, but it's completely unclear to me that the subsequent\nchoice of cheapest path isn't going to largely reduce to the question\nof whether we can accurately estimate the relative costs of different\nsort-column orders. 
Which is exactly what we're finding we can't do.\nSo that end of it seems to need a good deal of rethinking as well.\n\nIn short, this needs a whole lotta work, and I'm not volunteering.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Oct 2022 14:11:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "I wrote:\n> I'm just about to throw up my hands and go for reversion in both branches,\n\nAs attached.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 02 Oct 2022 15:10:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On Mon, 3 Oct 2022 at 08:10, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As attached.\n\nFor the master version, I think it's safe just to get rid of\nPlannerInfo.num_groupby_pathkeys now. I only added that so I could\nstrip off the ORDER BY / DISTINCT aggregate PathKeys from the group by\npathkeys before passing to the functions that rearranged the GROUP BY\nclause.\n\nDavid\n\n\n", "msg_date": "Mon, 3 Oct 2022 09:35:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> For the master version, I think it's safe just to get rid of\n> PlannerInfo.num_groupby_pathkeys now. I only added that so I could\n> strip off the ORDER BY / DISTINCT aggregate PathKeys from the group by\n> pathkeys before passing to the functions that rearranged the GROUP BY\n> clause.\n\nI was kind of unhappy with that data structure too, but from the\nother direction: I didn't like that you were folding aggregate-derived\npathkeys into root->group_pathkeys in the first place. 
That seems like\na kluge that might work all right for the moment but will cause problems\ndown the road. (Despite the issues with the patch at hand, I don't\nthink it's unreasonable to suppose that somebody will have a more\nsuccessful go at optimizing GROUP BY sorting later.) If we keep the\ndata structure like this, I think we absolutely need num_groupby_pathkeys,\nor some other way of recording which pathkeys came from what source.\n\nOne way to manage that would be to insist that the length of\nroot->group_clauses should indicate the number of associated grouping\npathkeys. Right now they might not be the same because we might discover\nsome of the pathkeys to be redundant --- but if we do, ISTM that the\ncorresponding GROUP BY clauses are also redundant and could get dropped.\nThat ties into the stuff I was worried about in [1], though. I'll keep\nthis in mind when I get back to messing with that.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/1657885.1657647073%40sss.pgh.pa.us\n\n\n", "msg_date": "Sun, 02 Oct 2022 16:59:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On Mon, 3 Oct 2022 at 09:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > For the master version, I think it's safe just to get rid of\n> > PlannerInfo.num_groupby_pathkeys now. I only added that so I could\n> > strip off the ORDER BY / DISTINCT aggregate PathKeys from the group by\n> > pathkeys before passing to the functions that rearranged the GROUP BY\n> > clause.\n>\n> I was kind of unhappy with that data structure too, but from the\n> other direction: I didn't like that you were folding aggregate-derived\n> pathkeys into root->group_pathkeys in the first place. That seems like\n> a kluge that might work all right for the moment but will cause problems\n> down the road. 
(Despite the issues with the patch at hand, I don't\n> think it's unreasonable to suppose that somebody will have a more\n> successful go at optimizing GROUP BY sorting later.) If we keep the\n> data structure like this, I think we absolutely need num_groupby_pathkeys,\n> or some other way of recording which pathkeys came from what source.\n\nOk, I don't feel too strongly about removing num_groupby_pathkeys. I'm\nfine to leave it there. However, I'll reserve slight concerns that\nwe'll likely receive sporadic submissions of cleanup patches that\nremove the unused field over the course of the next few years and that\ndealing with those might take up more time than just removing it now\nand putting it back when we need it. We have been receiving quite a\nfew patches along those lines lately.\n\nAs for the slight misuse of group_pathkeys, I guess since there are no\nusers that require just the plain pathkeys belonging to the GROUP BY,\nthen likely the best thing would be just to rename that field to\nsomething like groupagg_pathkeys. Maintaining two separate fields and\nconcatenating them every time we want group_pathkeys does not seem\nthat appealing to me. Seems like a waste of memory and effort. I don't\nwant to hi-jack this thread to discuss that, but if you have a\npreferred course of action, then I'm happy to kick off a discussion on\na new thread.\n\nDavid\n\n\n", "msg_date": "Mon, 3 Oct 2022 10:28:21 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> As for the slight misuse of group_pathkeys, I guess since there are no\n> users that require just the plain pathkeys belonging to the GROUP BY,\n> then likely the best thing would be just to rename that field to\n> something like groupagg_pathkeys. 
Maintaining two separate fields and\n> concatenating them every time we want group_pathkeys does not seem\n> that appealing to me. Seems like a waste of memory and effort. I don't\n> want to hi-jack this thread to discuss that, but if you have a\n> preferred course of action, then I'm happy to kick off a discussion on\n> a new thread.\n\nI don't feel any great urgency to resolve this. Let's wait and see\nwhat comes out of the other thread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 02 Oct 2022 17:36:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On Sun, Oct 02, 2022 at 02:11:12PM -0400, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> OK. For v15 I am heavily in favor for the least risky approach given the\n>> point we are at in the release cycle. The RMT hasn’t met yet to discuss,\n>> but from re-reading this thread again, I would recommend to revert\n>> (i.e. the “straight up revert”).\n> \n> OK by me.\n\nI don't quite see why we would let this code live on HEAD if it\nis not ready to be merged, as there is a risk of creating side issues\nwith things tied to the costing that are still waiting to be merged, so I\nagree that the reversion done on both branches is the way to go for now.\nThis could always be reworked and reproposed in the future.\n\n> I'm just about to throw up my hands and go for reversion in both branches,\n> because I'm now discovering that the code I'd hoped to salvage in\n> pathkeys.c (get_useful_group_keys_orderings and related) has its very own\n> bugs. It's imagining that it can rearrange a PathKeys list arbitrarily\n> and then rearrange the GROUP BY SortGroupClause list to match, but that's\n> easier said than done, for a couple of different reasons. 
(I now\n> understand why db0d67db2 made a cowboy hack in get_eclass_for_sort_expr ...\n> but it's still a cowboy hack with difficult-to-foresee side effects.)\n> There are other things in there that make it painfully obvious that\n> this code wasn't very carefully reviewed, eg XXX comments that should\n> have been followed up and were not, or a reference to a nonexistent\n> \"debug_group_by_match_order_by\" flag (maybe that was a GUC at some point?).\n\nOkay. Ugh.\n--\nMichael", "msg_date": "Mon, 3 Oct 2022 09:45:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "On 10/2/22 8:45 PM, Michael Paquier wrote:\r\n> On Sun, Oct 02, 2022 at 02:11:12PM -0400, Tom Lane wrote:\r\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>>> OK. For v15 I am heavily in favor for the least risky approach given the\r\n>>> point we are at in the release cycle. The RMT hasn’t met yet to discuss,\r\n>>> but from re-reading this thread again, I would recommend to revert\r\n>>> (i.e. the “straight up revert”).\r\n>>\r\n>> OK by me.\r\n> \r\n> I don't quite see why it would be to let this code live on HEAD if it\r\n> is not ready to be merged as there is a risk of creating side issues\r\n> with things tied to the costing still ready to be merged, so I agree\r\n> that the reversion done on both branches is the way to go for now.\r\n> This could always be reworked and reproposed in the future.\r\n\r\n[RMT-hat]\r\n\r\nJust to follow things procedure-wise[1], while there do not seem to be \r\nany objections to reverting through regular community processes, I do \r\nthink the RMT has to make this ask as Tomas (patch committer) has not \r\ncommented and we are up against release deadlines.\r\n\r\nBased on the above discussion, the RMT asks for a revert of db0d67db2 in \r\nthe v15 release. 
The RMT also recommends a revert in HEAD but does not \r\nhave the power to request that.\r\n\r\nWe do hope to see continued work and inclusion of this feature for a \r\nfuture release. We understand that the work on this optimization is \r\ncomplicated and appreciate all of the efforts on it.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/Release_Management_Team", "msg_date": "Mon, 3 Oct 2022 09:58:11 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Based on the above discussion, the RMT asks for a revert of db0d67db2 in \n> the v15 release. The RMT also recommends a revert in HEAD but does not \n> have the power to request that.\n\nRoger, I'll push these shortly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Oct 2022 10:05:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "[ Just for the archives' sake at this point, in case somebody has\nanother go at this feature. ]\n\nI wrote:\n> ... I'm now discovering that the code I'd hoped to salvage in\n> pathkeys.c (get_useful_group_keys_orderings and related) has its very own\n> bugs. It's imagining that it can rearrange a PathKeys list arbitrarily\n> and then rearrange the GROUP BY SortGroupClause list to match, but that's\n> easier said than done, for a couple of different reasons.\n\nIt strikes me that the easy solution here is to *not* rearrange the\nSortGroupClause list at all. What that would be used for later is\nto generate a Unique node's list of columns to compare, but since\nUnique only cares about equality-or-not, there's no strong reason\nwhy it has to compare the columns in the same order they're sorted\nin. 
(Indeed, if anything we should prefer to compare them in the\nopposite order, since the least-significant column should be the\nmost likely to be different from the previous row.)\n\nI'm fairly sure that the just-reverted code is buggy on its\nown terms, in that it might sometimes produce a clause list\nthat's not ordered the same as the pathkeys; but there's no\nvisible misbehavior, because that does not in fact matter.\n\nSo this'd let us simplify the APIs here, in particular PathKeyInfo\nseems unnecessary, because we don't have to pass the SortGroupClause\nlist into or out of the pathkey-reordering logic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Oct 2022 12:08:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" }, { "msg_contents": "\nOn 10/3/22 16:05, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> Based on the above discussion, the RMT asks for a revert of db0d67db2 in \n>> the v15 release. The RMT also recommends a revert in HEAD but does not \n>> have the power to request that.\n> \n> Roger, I'll push these shortly.\n> \n\nThanks for resolving this, and apologies for not noticing this thread\nearlier (and for the bugs in the code, ofc).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Oct 2022 19:21:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Question: test \"aggregates\" failed in 32-bit machine" } ]
[ { "msg_contents": "I already mentioned this in [1]: we can remove a few subcmd types that\nwere added to support exec-time recursion, by keeping a separate flag\nfor it. We're already doing that for alter trigger operations, so this\npatch just extends that to the other subcommand types that need it.\n\nThere's no visible change, just some code simplification.\n\n[1] https://postgr.es/m/20220729184452.2i4xcru3lzey76m6@alvherre.pgsql\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"All rings of power are equal,\nBut some rings of power are more equal than others.\"\n (George Orwell's The Lord of the Rings)", "msg_date": "Thu, 29 Sep 2022 11:00:33 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "do away with ALTER TABLE \"Recurse\" subcmd types" } ]
[ { "msg_contents": "Hi,\n\nThe psql improvement in v15 to output multiple result sets does not\nbehave as one might expect with \\g: the output file or program\nto pipe into is opened/closed on each result set, overwriting the\nprevious ones in the case of \\g file.\n\nExample:\n\npsql -At <<EOF\n-- good (two results output)\nselect 1\\; select 2;\n\n-- bad: ends up with only \"2\" in the file\nselect 1\\; select 2 \\g file\n\nEOF\n\n\nThat problem with \\g is due to PrintQueryTuples() and HandleCopyResult()\nstill having the responsibility to open/close the output stream.\nI think this code should be moved up in the call stack, in\nExecQueryAndProcessResults().\n\nThe first attached patch implements a fix that way.\n\nWhen testing this I've stumbled on another issue nearby: COPY TO\nSTDOUT followed by \\watch should normally produce the error message\n\"\\watch cannot be used with COPY\", but the execution goes into an\ninfinite busy loop instead.\nThis is because ClearOrSaveAllResults() loops over PQgetResult() until\nit returns NULL, but as it turns out, that never happens: it seems\nstuck on a PGRES_COPY_OUT result.\n\nWhile looking to fix that, it occurred to me that it would be\nsimpler to allow \\watch to deal with COPY results rather than\ncontinuing to disallow it. 
ISTM that before v15, the reason\nwhy PSQLexecWatch() did not want to deal with COPY was to not\nbother with a niche use case, rather than because of some\nspecific impossibility with it.\nNow that it calls the generic ExecQueryAndProcessResults() code\nthat can handle COPY transfers, \\watch on a COPY query seems to work\nfine if not disallowed.\nBesides, v15 adds the possibility to feed \\watch output into\na program through PSQL_WATCH_PAGER, and since the copy format is\nthe best format to be consumed by programs, this seems like\na good reason to allow COPY TO STDOUT with it.\n\\watch on a COPY FROM STDIN query doesn't make much sense,\nbut it can be interrupted with ^C if run by mistake, so I don't see a\nneed to disallow it specifically.\n\nSo the second patch fixes the infinite loop problem like that, on top of\nthe first patch.\n\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Thu, 29 Sep 2022 13:10:43 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": true, "msg_subject": "[patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> The psql improvement in v15 to output multiple result sets does not\n> behave as one might expect with \\g: the output file or program\n> to pipe into is opened/closed on each result set, overwriting the\n> previous ones in the case of \\g file.\n\nUgh. I think we'd better fix that before 15.0, else somebody may\nthink this is the new intended behavior and raise compatibility\nconcerns when we fix it. I will see if I can squeeze it in before\nthis afternoon's 15rc2 wrap.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Oct 2022 13:00:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "I wrote:\n> Ugh. 
I think we'd better fix that before 15.0, else somebody may\n> think this is the new intended behavior and raise compatibility\n> concerns when we fix it. I will see if I can squeeze it in before\n> this afternoon's 15rc2 wrap.\n\nPushed after making some corrections.\n\nGiven the time pressure, I did not worry about installing regression\ntest coverage for this stuff, but I wonder if we shouldn't add some.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 03 Oct 2022 15:09:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "\tTom Lane wrote:\n\n> Pushed after making some corrections.\n\nThanks!\n\n> Given the time pressure, I did not worry about installing regression\n> test coverage for this stuff, but I wonder if we shouldn't add some.\n\nCurrently, test/regress/sql/psql.sql doesn't AFAICS write anything\noutside of stdout, but \\g, \\o, \\copy need to write to external\nfiles to be tested properly.\n\nLooking at nearby tests, I see that commit d1029bb5a26 brings\ninteresting additions in test/regress/sql/misc.sql that could be used\nas a model to handle output files. psql.sql could write\ninto PG_ABS_BUILDDIR, then read the files back with \\copy I guess,\nthen output that again to stdout for comparison. 
I'll see if I can get\nthat to work.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 04 Oct 2022 14:58:17 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": true, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tTom Lane wrote:\n>> Given the time pressure, I did not worry about installing regression\n>> test coverage for this stuff, but I wonder if we shouldn't add some.\n\n> Currently, test/regress/sql/psql.sql doesn't AFAICS write anything\n> outside of stdout, but \\g, \\o, \\copy need to write to external\n> files to be tested properly.\n\nYeah, I don't think we can usefully test these in psql.sql, because\nfile-system side effects are bad in that context. But maybe a TAP\ntest could cope?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Oct 2022 09:08:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "Tom Lane wrote:\n\n> > Currently, test/regress/sql/psql.sql doesn't AFAICS write anything\n> > outside of stdout, but \\g, \\o, \\copy need to write to external\n> > files to be tested properly.\n> \n> Yeah, I don't think we can usefully test these in psql.sql, because\n> file-system side effects are bad in that context. 
But maybe a TAP\n> test could cope?\n\nI've come up with the attached using psql.sql only, at least for\n\\g and \\o writing to files.\nThis is a bit more complicated than the usual tests, but not\nthat much.\nAny opinions on this?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Fri, 07 Oct 2022 15:18:54 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": true, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": ">\n> This is a bit more complicated than the usual tests, but not\n> that much.\n> Any opinions on this?\n\n\n+1\n\nI think that because it is more complicated than usual psql, we may want to\ncomment on the intention of the tests and some of the less-than-common psql\nelements (\\set concatenation, resetting \\o, etc). If you see value in that\nI can amend the patch.\n\nAre there any options on COPY (header, formats) that we think we should\ntest as well?", "msg_date": "Mon, 10 Oct 2022 13:58:02 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" }, { "msg_contents": "Bonjour Daniel,\n\nGood catch!
Thanks for the quick fix!\n\nAs usual, what is not tested does not work :-(\n\nAttached is a TAP test to check for the expected behavior of \\g with\nmultiple commands.\n\n-- \nFabien.", "msg_date": "Mon, 10 Oct 2022 20:17:47 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy\n queries" }, { "msg_contents": "\tCorey Huinker wrote:\n\n> I think that because it is more complicated than usual psql, we may want to\n> comment on the intention of the tests and some of the less-than-common psql\n> elements (\\set concatenation, resetting \\o, etc). If you see value in that\n> I can amend the patch.\n\nIf the intentions of some tests appear to be unclear, then yes, sure.\nI don't feel the need to explain the \"how\" though. The other\ncomments in these files say why we're testing such and such a case, but\ndon't go beyond that.\n\n> Are there any options on COPY (header, formats) that we think we should\n> test as well?\n\nThere are COPY tests already in src/test/regress/sql/copy*.sql, which\nhopefully cover the many combinations of options.\n\nFor \\g and \\o the intention behind the tests is to check that the\nquery output goes where it should in all cases. The options that can't\naffect where the results go are not really in scope.\n\nFTR I started a followup thread on this at [1], to be associated with a\nnew CF entry [2]\n\n[1]\nhttps://www.postgresql.org/message-id/flat/25c2bb5b-9012-40f8-8088-774cb764046d%40manitou-mail.org\n\n[2] https://commitfest.postgresql.org/40/4000/\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 01 Nov 2022 13:43:03 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": true, "msg_subject": "Re: [patch] \\g with multiple result sets and \\watch with copy queries" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/14/app-pgrestore.html\nDescription:\n\npg_restore seems to have two ways to restore data:\r\n\r\n--section=data \r\n\r\nor\r\n\r\n --data-only\r\n\r\nThere is this cryptic warning that --data-only is \"similar to but for\nhistorical reasons different from\" --section=data\r\n\r\nBut there is no further explanation of what those differences are or what\nmight be missed or different in your restore if you pick one option or the\nother. Maybe one or the other option is the \"preferred current way\" and one\nis the \"historical way\" or they are aimed at different types of use cases,\nbut that's not clear.", "msg_date": "Thu, 29 Sep 2022 14:30:13 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "request clarification on pg_restore documentation" }, { "msg_contents": "On Thu, Sep 29, 2022 at 02:30:13PM +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/14/app-pgrestore.html\n> Description:\n> \n> pg_restore seems to have two ways to restore data:\n> \n> --section=data \n> \n> or\n> \n> --data-only\n> \n> There is this cryptic warning that --data-only is \"similar to but for\n> historical reasons different from\" --section=data\n> \n> But there is no further explanation of what those differences are or what\n> might be missed or different in your restore if you pick one option or the\n> other. Maybe one or the other option is the \"preferred current way\" and one\n> is the \"historical way\" or they are aimed at different types of use cases,\n> but that's not clear.\n\n[Thread moved from docs to hackers because there are behavioral issues.]\n\nVery good question. 
I dug into this and found this commit which says\n--data-only and --section=data were equivalent:\n\n\tcommit a4cd6abcc9\n\tAuthor: Andrew Dunstan <andrew@dunslane.net>\n\tDate: Fri Dec 16 19:09:38 2011 -0500\n\t\n\t Add --section option to pg_dump and pg_restore.\n\t\n\t Valid values are --pre-data, data and post-data. The option can be\n\t given more than once. --schema-only is equivalent to\n-->\t --section=pre-data --section=post-data. --data-only is equivalent\n-->\t to --section=data.\n\t\nand then this commit which says they are not:\n\n\tcommit 4317e0246c\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Tue May 29 23:22:14 2012 -0400\n\t\n\t Rewrite --section option to decouple it from --schema-only/--data-only.\n\t\n-->\t The initial implementation of pg_dump's --section option supposed that the\n-->\t existing --schema-only and --data-only options could be made equivalent to\n-->\t --section settings. This is wrong, though, due to dubious but long since\n\t set-in-stone decisions about where to dump SEQUENCE SET items, as seen in\n\t bug report from Martin Pitt. (And I'm not totally convinced there weren't\n\t other bugs, either.) Undo that coupling and instead drive --section\n\t filtering off current-section state tracked as we scan through the TOC\n\t list to call _tocEntryRequired().\n\t\n\t To make sure those decisions don't shift around and hopefully save a few\n\t cycles, run _tocEntryRequired() only once per TOC entry and save the result\n\t in a new TOC field. This required minor rejiggering of ACL handling but\n\t also allows a far cleaner implementation of inhibit_data_for_failed_table.\n\t\n\t Also, to ensure that pg_dump and pg_restore have the same behavior with\n\t respect to the --section switches, add _tocEntryRequired() filtering to\n\t WriteToc() and WriteDataChunks(), rather than trying to implement section\n\t filtering in an entirely orthogonal way in dumpDumpableObject(). 
This\n\t required adjusting the handling of the special ENCODING and STDSTRINGS\n\t items, but they were pretty weird before anyway.\n\nand this commit which made them closer:\n\n\tcommit 5a39114fe7\n\tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n\tDate: Fri Oct 26 12:12:42 2012 -0400\n\t\n\t In pg_dump, dump SEQUENCE SET items in the data not pre-data section.\n\t\n\t Represent a sequence's current value as a separate TableDataInfo dumpable\n\t object, so that it can be dumped within the data section of the archive\n-->\t rather than in pre-data. This fixes an undesirable inconsistency between\n-->\t the meanings of \"--data-only\" and \"--section=data\", and also fixes dumping\n\t of sequences that are marked as extension configuration tables, as per a\n\t report from Marko Kreen back in July. The main cost is that we do one more\n\t SQL query per sequence, but that's probably not very meaningful in most\n\t databases.\n\nLooking at the restore code, I see --data-only disabling triggers, while\n--section=data doesn't. I also tested --data-only vs. --section=data in\npg_dump for the regression database and saw the only differences as the\ncreation and comments on large objects, e.g.,\n\n\t-- Name: 2121; Type: BLOB; Schema: -; Owner: postgres\n\t--\n\t\n\tSELECT pg_catalog.lo_create('2121');\n\n\n\tALTER LARGE OBJECT 2121 OWNER TO postgres;\n\n\t--\n\t-- Name: LARGE OBJECT 2121; Type: COMMENT; Schema: -; Owner: postgres\n\t--\n\n\tCOMMENT ON LARGE OBJECT 2121 IS 'testing comments';\n\nbut the large object _data_ was dumped in both cases.\n\nSo, where does this leave us? We know we need --section=data because\nthe pre/post-data options are clearly useful, so why would someone use\n--data-only vs. --section=data. We don't document why to use one rather\nthan the other, so the --data-only option looks useless to me. 
Do we\nremove it, adjust it, or leave it alone?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Thu, 6 Oct 2022 10:28:15 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: request clarification on pg_restore documentation" }, { "msg_contents": "\nThread moved to hackers.\n\n---------------------------------------------------------------------------\n\nOn Thu, Sep 29, 2022 at 02:30:13PM +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/14/app-pgrestore.html\n> Description:\n> \n> pg_restore seems to have two ways to restore data:\n> \n> --section=data \n> \n> or\n> \n> --data-only\n> \n> There is this cryptic warning that --data-only is \"similar to but for\n> historical reasons different from\" --section=data\n> \n> But there is no further explanation of what those differences are or what\n> might be missed or different in your restore if you pick one option or the\n> other. Maybe one or the other option is the \"preferred current way\" and one\n> is the \"historical way\" or they are aimed at different types of use cases,\n> but that's not clear.\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Thu, 6 Oct 2022 10:28:27 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: request clarification on pg_restore documentation" }, { "msg_contents": "\nDoes anyone have a suggestion on how to handle this issue? 
The report\nis a year old.\n\n---------------------------------------------------------------------------\n\nOn Thu, Oct 6, 2022 at 10:28:15AM -0400, Bruce Momjian wrote:\n> On Thu, Sep 29, 2022 at 02:30:13PM +0000, PG Doc comments form wrote:\n> > The following documentation comment has been logged on the website:\n> > \n> > Page: https://www.postgresql.org/docs/14/app-pgrestore.html\n> > Description:\n> > \n> > pg_restore seems to have two ways to restore data:\n> > \n> > --section=data \n> > \n> > or\n> > \n> > --data-only\n> > \n> > There is this cryptic warning that --data-only is \"similar to but for\n> > historical reasons different from\" --section=data\n> > \n> > But there is no further explanation of what those differences are or what\n> > might be missed or different in your restore if you pick one option or the\n> > other. Maybe one or the other option is the \"preferred current way\" and one\n> > is the \"historical way\" or they are aimed at different types of use cases,\n> > but that's not clear.\n> \n> [Thread moved from docs to hackers because there are behavioral issues.]\n> \n> Very good question. I dug into this and found this commit which says\n> --data-only and --section=data were equivalent:\n> \n> \tcommit a4cd6abcc9\n> \tAuthor: Andrew Dunstan <andrew@dunslane.net>\n> \tDate: Fri Dec 16 19:09:38 2011 -0500\n> \t\n> \t Add --section option to pg_dump and pg_restore.\n> \t\n> \t Valid values are --pre-data, data and post-data. The option can be\n> \t given more than once. --schema-only is equivalent to\n> -->\t --section=pre-data --section=post-data. 
--data-only is equivalent\n> -->\t to --section=data.\n> \t\n> and then this commit which says they are not:\n> \n> \tcommit 4317e0246c\n> \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> \tDate: Tue May 29 23:22:14 2012 -0400\n> \t\n> \t Rewrite --section option to decouple it from --schema-only/--data-only.\n> \t\n> -->\t The initial implementation of pg_dump's --section option supposed that the\n> -->\t existing --schema-only and --data-only options could be made equivalent to\n> -->\t --section settings. This is wrong, though, due to dubious but long since\n> \t set-in-stone decisions about where to dump SEQUENCE SET items, as seen in\n> \t bug report from Martin Pitt. (And I'm not totally convinced there weren't\n> \t other bugs, either.) Undo that coupling and instead drive --section\n> \t filtering off current-section state tracked as we scan through the TOC\n> \t list to call _tocEntryRequired().\n> \t\n> \t To make sure those decisions don't shift around and hopefully save a few\n> \t cycles, run _tocEntryRequired() only once per TOC entry and save the result\n> \t in a new TOC field. This required minor rejiggering of ACL handling but\n> \t also allows a far cleaner implementation of inhibit_data_for_failed_table.\n> \t\n> \t Also, to ensure that pg_dump and pg_restore have the same behavior with\n> \t respect to the --section switches, add _tocEntryRequired() filtering to\n> \t WriteToc() and WriteDataChunks(), rather than trying to implement section\n> \t filtering in an entirely orthogonal way in dumpDumpableObject(). 
This\n> \t required adjusting the handling of the special ENCODING and STDSTRINGS\n> \t items, but they were pretty weird before anyway.\n> \n> and this commit which made them closer:\n> \n> \tcommit 5a39114fe7\n> \tAuthor: Tom Lane <tgl@sss.pgh.pa.us>\n> \tDate: Fri Oct 26 12:12:42 2012 -0400\n> \t\n> \t In pg_dump, dump SEQUENCE SET items in the data not pre-data section.\n> \t\n> \t Represent a sequence's current value as a separate TableDataInfo dumpable\n> \t object, so that it can be dumped within the data section of the archive\n> -->\t rather than in pre-data. This fixes an undesirable inconsistency between\n> -->\t the meanings of \"--data-only\" and \"--section=data\", and also fixes dumping\n> \t of sequences that are marked as extension configuration tables, as per a\n> \t report from Marko Kreen back in July. The main cost is that we do one more\n> \t SQL query per sequence, but that's probably not very meaningful in most\n> \t databases.\n> \n> Looking at the restore code, I see --data-only disabling triggers, while\n> --section=data doesn't. I also tested --data-only vs. --section=data in\n> pg_dump for the regression database and saw the only differences as the\n> creation and comments on large objects, e.g.,\n> \n> \t-- Name: 2121; Type: BLOB; Schema: -; Owner: postgres\n> \t--\n> \t\n> \tSELECT pg_catalog.lo_create('2121');\n> \n> \n> \tALTER LARGE OBJECT 2121 OWNER TO postgres;\n> \n> \t--\n> \t-- Name: LARGE OBJECT 2121; Type: COMMENT; Schema: -; Owner: postgres\n> \t--\n> \n> \tCOMMENT ON LARGE OBJECT 2121 IS 'testing comments';\n> \n> but the large object _data_ was dumped in both cases.\n> \n> So, where does this leave us? We know we need --section=data because\n> the pre/post-data options are clearly useful, so why would someone use\n> --data-only vs. --section=data. We don't document why to use one rather\n> than the other, so the --data-only option looks useless to me. 
Do we\n> remove it, adjust it, or leave it alone?\n> \n> -- \n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n> \n> Indecision is a decision. Inaction is an action. Mark Batterson\n> \n> \n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 27 Oct 2023 18:33:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: request clarification on pg_restore documentation" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Does anyone have a suggestion on how to handle this issue?\n\nIt might be that the later decision to change the representation\nof sequence dumps would make it possible to undo 4317e0246c\nand go back to having --schema-only/--data-only be true aliases\nfor --section. But it'd take some research and probably end up\ncausing some behavioral changes (eg. trigger handling as you note).\n\nMuch the same research would be needed if you just wanted to\ndocument the current state of affairs more clearly.\n\nThe real issue here is that --schema-only/--data-only do a few\nthings that are not within --section's remit, such as trigger\nadjustments. Do we want to cause --section to have those effects\ntoo? I dunno. Do we want to give up those extra behaviors?\nAlmost certainly not.\n\nEither way, I'm not personally planning to put effort into that\nanytime soon.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Oct 2023 18:53:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: request clarification on pg_restore documentation" } ]
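Editor's note: as a rough mental model of the --data-only vs. --section=data discussion above, the sketch below maps a handful of archive entry kinds to the documented pre-data/data/post-data sections. It is illustrative Python, not pg_dump's TOC code, and by construction it makes --data-only and --section=data coincide — the thread's point is precisely that the real implementations also diverge on details outside this model, such as large-object DDL in the dump and trigger disabling at restore time.

```python
# Simplified, hand-picked mapping for illustration only; the real TOC has
# many more entry kinds and several historical special cases.
SECTION_OF = {
    "CREATE TABLE":      "pre-data",
    "TABLE DATA":        "data",
    "SEQUENCE SET":      "data",      # moved into the data section by 5a39114fe7
    "LARGE OBJECT DATA": "data",
    "CREATE INDEX":      "post-data",
    "FOREIGN KEY":       "post-data",
    "CREATE TRIGGER":    "post-data",
}

def entries_for(mode):
    """Which entry kinds a given dump/restore mode would keep (sketch)."""
    if mode.startswith("section="):
        wanted = mode.split("=", 1)[1]
        return sorted(e for e, s in SECTION_OF.items() if s == wanted)
    if mode == "data-only":
        return sorted(e for e, s in SECTION_OF.items() if s == "data")
    if mode == "schema-only":
        return sorted(e for e, s in SECTION_OF.items() if s != "data")
    raise ValueError(f"unknown mode: {mode}")
```

In this model `entries_for("data-only")` equals `entries_for("section=data")`; the behavioral differences Bruce observed (trigger handling, large-object creation and comments) live outside the section mapping itself.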
[ { "msg_contents": "I'd like to revive this important discussion.\n\nAs is well described in this fairly recent paper here https://www.vldb.org/pvldb/vol9/p204-leis.pdf (which also looks at Postgres) \"estimation errors quickly grow as the number of joins increases, and that these errors are usually the reason for bad plans\" - I think we can all get behind that statement.\n\nWhile nested loop joins work great when cardinality estimates are correct, they are notoriously bad when the optimizer underestimates and they degrade very fast in such cases - the good vs. bad here is very asymmetric. On the other hand, hash joins degrade much more gracefully - they are considered very robust against underestimation. The above-mentioned paper illustrates that all major DBMSs (including Postgres) tend to underestimate and usually that underestimation increases drastically with the number of joins (see Figures 3+4 of the paper).\n\nNow, a simple approach to guarding against bad plans that arise from underestimation could be to use what I would call a nested-loop-conviction-multiplier based on the current depth of the join tree, e.g. for a base table that multiplier would obviously be 1, but would then grow (e.g.) quadratically. That conviction-multiplier would *NOT* be used to skew the cardinality estimates themselves, but rather be applied to the overall nested loop join cost at each particular stage of the plan when comparing it to other more robust join strategies like hash or sort-merge joins.
That way, when we can be sure to have a good estimate at the bottom of the join tree, we treat all things equal, but favor nested loops less and less as we move up the join tree for the sake of robustness.\nAlso, we can expand the multiplier whenever we fall back to using the default cardinality constant as surely all bets are off at that point - we should definitely treat nested loop joins as out of favor in this instance and that could easily be incorporated by simply increasing the conviction-multiplier.\n\nWhat are your thoughts on this simple idea - is it perhaps too simple?\n\nCheers, Ben\n\n\n", "msg_date": "Thu, 29 Sep 2022 16:32:58 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 7:32 AM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n> Also, we can expand the multiplier whenever we fall back to using the default cardinality constant as surely all bets are off at that point - we should definitely treat nested loop joins as out of favor in this instance and that could easily be incorporated by simply increasing the conviction-multiplier.\n>\n> What are your thoughts on this simple idea - is it perhaps too simple?\n\nOffhand I'd say it's more likely to be too complicated. Without\nmeaning to sound glib, the first question that occurs to me is \"will\nwe also need a conviction multiplier conviction multiplier?\". Anything\nlike that is going to have unintended consequences that might very\nwell be much worse than the problem that you set out to solve.\n\nPersonally I still like the idea of just avoiding unparameterized\nnested loop joins altogether when an \"equivalent\" hash join plan is\navailable.
I think of it as preferring the hash join plan because it\nwill have virtually the same performance characteristics when you have\na good cardinality estimate (probably very often), but far better\nperformance characteristics when you don't. We can perhaps be\napproximately 100% sure that something like that will be true in all\ncases, no matter the details. That seems like a very different concept\nto what you've proposed.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Sep 2022 16:12:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 04:12:06PM -0700, Peter Geoghegan wrote:\n> Offhand I'd say it's more likely to be too complicated. Without\n> meaning to sound glib, the first question that occurs to me is \"will\n> we also need a conviction multiplier conviction multiplier?\". Anything\n> like that is going to have unintended consequences that might very\n> well be much worse than the problem that you set out to solve.\n> \n> Personally I still like the idea of just avoiding unparameterized\n> nested loop joins altogether when an \"equivalent\" hash join plan is\n> available. I think of it as preferring the hash join plan because it\n> will have virtually the same performance characteristics when you have\n> a good cardinality estimate (probably very often), but far better\n> performance characteristics when you don't. We can perhaps be\n> approximately 100% sure that something like that will be true in all\n> cases, no matter the details. 
That seems like a very different concept\n> to what you've proposed.\n\nI think the point the original poster was making, and I have made in the\npast, is that even if two optimizer costs are the same, one might be\nmore penalized by misestimation than the other, and we don't have a good\nway of figuring that into our plan choices.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 19:27:16 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 4:27 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I think the point the original poster was making, and I have made in the\n> past, is that even if two optimizer costs are the same, one might be\n> more penalized by misestimation than the other, and we don't have a good\n> way of figuring that into our plan choices.\n\nRight. But that seems fraught with difficulty. I suspect that the\ncosts that the planner attributes to each plan often aren't very\nreliable in any absolute sense, even when everything is working very\nwell by every available metric. Even a very noisy cost model with\nsomewhat inaccurate selectivity estimates will often pick the cheapest\nplan, or close enough.\n\nHaving a cost-based optimizer that determines the cheapest plan quite\nreliably is one thing.
Taking the same underlying information and\nadding the dimension of risk to it and expecting a useful result is\nquite another -- that seems far harder.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Sep 2022 16:40:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I think the point the original poster as making, and I have made in the\n> past, is that even of two optimizer costs are the same, one might be\n> more penalized by misestimation than the other, and we don't have a good\n> way of figuring that into our plan choices.\n\nAgreed, but dealing with uncertainty in those numbers is an enormous\ntask if you want to do it right. \"Doing it right\", IMV, would start\nout by extending all the selectivity estimation functions to include\nerror bars; then we could have error bars on rowcount estimates and\nthen costs; then we could start adding policies about avoiding plans\nwith too large a possible upper-bound cost. Trying to add such\npolicy with no data to go on is not going to work well.\n\nI think Peter's point is that a quick-n-dirty patch is likely to make\nas many cases worse as it makes better. 
That's certainly my opinion\nabout the topic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 29 Sep 2022 19:46:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 07:46:18PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I think the point the original poster as making, and I have made in the\n> > past, is that even of two optimizer costs are the same, one might be\n> > more penalized by misestimation than the other, and we don't have a good\n> > way of figuring that into our plan choices.\n> \n> Agreed, but dealing with uncertainty in those numbers is an enormous\n> task if you want to do it right. \"Doing it right\", IMV, would start\n> out by extending all the selectivity estimation functions to include\n> error bars; then we could have error bars on rowcount estimates and\n> then costs; then we could start adding policies about avoiding plans\n> with too large a possible upper-bound cost. Trying to add such\n> policy with no data to go on is not going to work well.\n> \n> I think Peter's point is that a quick-n-dirty patch is likely to make\n> as many cases worse as it makes better. That's certainly my opinion\n> about the topic.\n\nAgreed on all points --- I was thinking error bars too.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 19:51:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 4:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Agreed, but dealing with uncertainty in those numbers is an enormous\n> task if you want to do it right. 
\"Doing it right\", IMV, would start\n> out by extending all the selectivity estimation functions to include\n> error bars; then we could have error bars on rowcount estimates and\n> then costs; then we could start adding policies about avoiding plans\n> with too large a possible upper-bound cost. Trying to add such\n> policy with no data to go on is not going to work well.\n\nIn general I suspect that we'd be better off focussing on mitigating\nthe impact at execution time. There are at least a few things that we\ncould do there, at least in theory. Mostly very ambitious, long term\nthings.\n\nI like the idea of just avoiding unparameterized nested loop joins\naltogether when an \"equivalent\" hash join plan is available because\nit's akin to an execution-time mitigation, despite the fact that it\nhappens during planning. While it doesn't actually change anything in\nthe executor, it is built on the observation that we have virtually\neverything to gain and nothing to lose during execution, no matter\nwhat happens.\n\nIt seems like a very small oasis of certainty in a desert of\nuncertainty -- which seems nice, as far as it goes.\n\n> I think Peter's point is that a quick-n-dirty patch is likely to make\n> as many cases worse as it makes better. That's certainly my opinion\n> about the topic.\n\nRight. Though I am actually sympathetic to the idea that users might\ngladly pay a cost for performance stability -- even a fairly large\ncost. That part doesn't seem like the problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 29 Sep 2022 17:06:32 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 07:51:47PM -0400, Bruce Momjian wrote:\n> On Thu, Sep 29, 2022 at 07:46:18PM -0400, Tom Lane wrote:\n> > Agreed, but dealing with uncertainty in those numbers is an enormous\n> > task if you want to do it right.
\"Doing it right\", IMV, would start\n> > out by extending all the selectivity estimation functions to include\n> > error bars; then we could have error bars on rowcount estimates and\n> > then costs; then we could start adding policies about avoiding plans\n> > with too large a possible upper-bound cost. Trying to add such\n> > policy with no data to go on is not going to work well.\n> > \n> > I think Peter's point is that a quick-n-dirty patch is likely to make\n> > as many cases worse as it makes better. That's certainly my opinion\n> > about the topic.\n> \n> Agreed on all points --- I was thinking error bars too.\n\nActually, if we wanted to improve things in this area, we should have a\nset of queries that don't chose optimal plans we can test with.  We used\nto see them a lot before we had extended statistics, but I don't\nremember seeing many recently, let alone a collection of them.  I guess\nthat is good.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  Indecision is a decision.  Inaction is an action.  Mark Batterson\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 22:30:54 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Fri, 30 Sept 2022 at 13:06, Peter Geoghegan <pg@bowt.ie> wrote:\n> I like the idea of just avoiding unparameterized nested loop joins\n> altogether when an \"equivalent\" hash join plan is available because\n> it's akin to an execution-time mitigation, despite the fact that it\n> happens during planning.
While it doesn't actually change anything in\n> the executor, it is built on the observation that we have virtually\n> everything to gain and nothing to lose during execution, no matter\n> what happens.\n\nI'm not sure if it's a good idea to assume that performing\nnon-parameterised Nested Loops when we shouldn't is the only shape of\nplan that causes us problems.\n\nWe also have the case where we assume early start-up plans are\nfavourable. For example:\n\nSELECT * FROM t WHERE a = 1 ORDER BY b LIMIT 10;\n\nwhere we have two indexes, one on t(a) and another on t(b).\n\nShould we use the t(b) index and filter out the rows that don't match\na = 1 and hope we get 10 a=1 rows soon in the t(b) index? or do we use\nt(a) and then perform a sort? Best case for using the t(b) index is\nthat we find 10 a=1 rows in the first 10 rows of the index scan, the\nworst case is that there are no rows with a=1.\n\nHaving something coded into the cost model is a more generic way of\naddressing this issue. Providing we design the cost model correctly,\nwe'd be able to address future issues we discover using which ever\ncost model infrastructure that we design for this.\n\nI understand that what you propose would be a fast way to fix this\nissue. However, if we went and changed the join path creation code to\nnot add non-parameterised nested loop paths when other paths exist,\nthen how could we ever dare to put that code back again when we come\nup with a better solution?\n\nDavid\n\n\n", "msg_date": "Fri, 30 Sep 2022 16:59:49 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "> Right. But that seems fraught with difficulty. I suspect that the\n> costs that the planner attributes to each plan often aren't very\n> reliable in any absolute sense, even when everything is working very\n> well by every available metric.
Even a very noisy cost model with\n> somewhat inaccurate selectivity estimates will often pick the cheapest\n> plan, or close enough.\n\nSure, the absolute cost of a complex plan will always be inaccurate at best.\nMy point is that we can be very confident in the cardinalities of base tables. As the paper states in \"3.1. Estimates for Base Tables\":\n\n\"The median q-error is close to the optimal value of 1 for all systems,\nindicating that the majority of all selections are estimated correctly.\"\n\nThanks to the statistics will practically never be off by an order of magnitude when estimating base table cardinalities.\n\nThe paper also clearly shows (and that certainly coincides with my experience) that those cardinality underestimations grow exponentially as they propagate up the join tree.\n\nGiven the research I'd stipulate that at any given level of the join tree, the current depth is a reasonable indicator of underestimation. Taking that into account (even if only to mitigate nested loops on higher levels) is IMV a principled approach, and not necesseraly a hack.\n\nObviously having something like error bars as proposed by Tom would be even better and perhaps more general, but that is on a whole different level in terms of complexity and I certainly have no idea how we would easily get there.\n\n\n", "msg_date": "Fri, 30 Sep 2022 08:05:31 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "\n> In general I suspect that we'd be better off focussing on mitigating\n> the impact at execution time. There are at least a few things that we\n> could do there, at least in theory. Mostly very ambitious, long term\n> things.\n\nI think these things are orthogonal.
No matter how good the cost model ever gets, we will always have degenerate cases.\nHaving some smarts about that in the executor is surely a good thing, but it shouldn't distract us from improving on the planner front.\n\n> \n> I like the idea of just avoiding unparameterized nested loop joins\n> altogether when an \"equivalent\" hash join plan is available because\n> it's akin to an execution-time mitigation, despite the fact that it\n> happens during planning. While it doesn't actually change anything in\n> the executor, it is built on the observation that we have virtually\n> everything to gain and nothing to lose during execution, no matter\n> what happens.\n\nI agree with you, that those plans are too risky. But let's maybe find a more general way of dealing with this.\n\n> Right. Though I am actually sympathetic to the idea that users might\n> gladly pay a cost for performance stability -- even a fairly large\n> cost. That part doesn't seem like the problem.\n\n\n", "msg_date": "Fri, 30 Sep 2022 08:44:46 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "> Agreed, but dealing with uncertainty in those numbers is an enormous\n> task if you want to do it right. \"Doing it right\", IMV, would start\n> out by extending all the selectivity estimation functions to include\n> error bars; then we could have error bars on rowcount estimates and\n> then costs; then we could start adding policies about avoiding plans\n> with too large a possible upper-bound cost. Trying to add such\n> policy with no data to go on is not going to work well.\n\nError bars would be fantastic, no question.
But that would make things very complex.\nA lot of judgment calls would be necessary for the policy behind upper-bound pruning, picking up on Peter's comment about \"conviction multiplier of conviction multiplier\" ;)\nAlso, the math in deriving those bounds based on the stats and how they propagate up the join tree doesn't seem trivial either.\n\n> I think Peter's point is that a quick-n-dirty patch is likely to make\n> as many cases worse as it makes better. That's certainly my opinion\n> about the topic.\n\nAs in my reply to Peter, I think the join level/depth metric is a simple but principled way of dealing with it, given the referenced research.\nIn the first step, we'd use this merely to be more risk-averse towards nested loop joins as we climb up the join tree - we are not fiddling with the cost model itself, nor the join ordering, just when it comes to considering that particular join algorithm. Later this could be expanded to be more broadly scoped.\n\nPlease not give up on a simple way to reap most of the fruits just yet.\n\n\n", "msg_date": "Fri, 30 Sep 2022 09:09:30 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "> Actually, if we wanted to improve things in this area, we should have a\n> set of queries that don't chose optimal plans we can test with.  We used\n> to see them a lot before we had extended statistics, but I don't\n> remember seeing many recently, let alone a collection of them.  I guess\n> that is good.\n\nIn the VLDB paper they actually created their own \"Join Order Benchmark\", which is publicly available under https://github.com/gregrahn/join-order-benchmark\nIt would probably be more suited for this kind of testing than, e.g.
the TPC benchmarks.\n\nIf there is interest, I could also compile a set of relevant cases based on the message history of the performance mailing list.\n\n\n", "msg_date": "Fri, 30 Sep 2022 10:00:31 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 7:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Agreed, but dealing with uncertainty in those numbers is an enormous\n> task if you want to do it right. \"Doing it right\", IMV, would start\n> out by extending all the selectivity estimation functions to include\n> error bars; then we could have error bars on rowcount estimates and\n> then costs; then we could start adding policies about avoiding plans\n> with too large a possible upper-bound cost. Trying to add such\n> policy with no data to go on is not going to work well.\n\nI think that the point of the paper which started the thread that this\ndiscussion branched from was essentially that trying to add such a\npolicy with no data to go on worked extremely well in practice, and\nother database systems are already doing it, and we're hurting\nourselves by not doing it. And specifically what they did was to\ndisfavor unparameterized nested loops.\n\nAnd I think that actually makes a lot of sense. If you think through\nparameterized nested loops, unparameterized nested loops, hash joins,\nand merge joins, which is basically all the strategies, and you\nimagine having many more or fewer rows on one side of the join or the\nother than you thought, unparameterized nested loops are the standout\ncase. It's the only kind of join where the expense grows as the\nproduct of the sizes of the two inputs.
The other cases tend to be\nmore like O(n+m) rather than O(nm), and even if there are some lg n or\nlg m terms in there too they don't tend to make a whole lot of\ndifference in practice.\n\nSo I think what you would find if you did all of this analysis is\nthat, basically, every time you did the cost computation for a\nparameterized nested loop, a hash join, or a merge join, the error\nbars would be whatever they were, and then when you did a cost\ncomputation for an unparameterized nested loop, the error bars would\nbe way worse. Like, if we assume that the estimates for each side of a\nhash join are off by 100x, then the cost will be off by roughly 100x\nif it still fits in work_mem and by several hundred x if it now spills\nto disk. But if we assume the same thing for an unparameterized nested\nloop, the cost is now off by 10000x. And that is massively, massively\nmore, so clearly we should avoid the unparameterized nested loop. But\nit's unnecessary to do this computation at runtime for every separate\nunparameterized nested loop: we can do it right here, in a generic\nway, for every such loop.\n\nNow, it's true that the actual risk depends on how certain we are of\nthe estimates for the input rels, but that's difficult to quantify and\nI'm not convinced it really matters at all. Like, if the input is a\nbase relation with no filter condition, then the error should be\nsmall, unless the table size changes a lot between planning and\nexecution. If it's the output of a user-defined aggregate, the error\ncould be really, really large. But that has no impact on the\n*relative* dangers of the unparameterized nested loop vs. some other\njoin method. If join A is between two input rels whose sizes are\nprobably known fairly precisely, and join B is between two input rels\nwhose sizes we might be drastically wrong about, then a hash join is\nriskier for join B than it is for join A. But that really does not\nmatter.
What *does* matter is that an unparameterized nested loop is\nriskier for join A than a hash join is for join A; and likewise for\njoin B.\n\nI think we're kind of just making life complicated for ourselves here\nby pretending that unparameterized nested loops are part of some\ngeneral class of uncertainty problems that we need to worry about. In\nsome sense, they are, and David Rowley is right to mention the other\none that comes up pretty frequently. But like that list of two is\npretty much the whole list. I think we've talked ourselves into\nbelieving that this problem is much harder than it really is. Maybe a\nblanket ban on unparameterized nested loops is too strong (or maybe\nit's exactly the right thing) but it can't possibly be wrong to think\nabout that case in particular as something we need to solve. It's the\nonly join method that can go quadratic in easy, realistic scenarios --\nand it often does.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:43:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Fri, Sep 30, 2022 at 10:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But it's unnecessary to do this computation at runtime for every separate\n> unparameterized nested loop: we can do it right here, in a generic\n> way, for every such loop.\n\nIt's not just that the risks are ludicrously high, of course. The\npotential benefits must *also* be very low. It's both factors,\ntogether.\n\n> Now, it's true that the actual risk depends on how certain we are of\n> the estimates for the input rels, but that's difficult to quantify and\n> I'm not convinced it really matters at all.\n\nWe're talking about a problem that is fairly unlikely to occur in\ngeneral, I think -- let's not forget that.
These are presumably rare\nevents that nevertheless cause many real practical problems.\n\nIf we're going to add error bars, why wouldn't we also need error bars\nfor our error bars?\n\n> I think we're kind of just making life complicated for ourselves here\n> by pretending that unparameterized nested loops are part of some\n> general class of uncertainty problems that we need to worry about. In\n> some sense, they are, and David Rowley is right to mention the other\n> one that comes up pretty frequently. But like that list of two is\n> pretty much the whole list. I think we've talked ourselves into\n> believing that this problem is much harder than it really is.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Sep 2022 11:24:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 9:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I understand that what you propose would be a fast way to fix this\n> issue. However, if we went and changed the join path creation code to\n> not add non-parameterised nested loop paths when other paths exist,\n> then how could we ever dare to put that code back again when we come\n> up with a better solution?\n\nBut why would it matter, even then?\n\nI don't deny that something like that could make sense, but I don't\nsee why it should be in tension with this proposal. We're talking\nabout a plan shape that is (in practical terms) inherently\nunreasonable, given the availability of an alternative plan shape. Why\nwouldn't that continue to be true in every such case, forever?\n\nTo put it another way, the proposal seems like taking away something\nthat we don't want to have, ever. It seems like a subtractive thing to\nme. The potential upside of allowing unparameterized nestloop joins\nseems infinitesimal; zero for all practical purposes.
So even with a\nfar more sophisticated framework for \"plan riskiness\" in place, it\nwould still make sense to treat unparameterized nestloop joins as\ninherently undesirable. There is perhaps a theoretical sense in which\nthat isn't quite true, but it's true for all practical purposes, which\nshould be enough.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Sep 2022 11:44:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Thu, Sep 29, 2022 at 11:44 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n> I think these things are orthogonal.\n\nI agree that they're orthogonal. I just meant that execution time\nstrategies seem underexplored in general.\n\n> No matter how good the cost model ever gets, we will always have degenerate cases.\n\nSure, but the model isn't the problem here, really -- not to me. The\nproblem is that the planner can in some cases choose a plan that is\ninherently unreasonable, at least in practical terms. You're talking\nabout uncertainties. But I'm actually talking about the opposite thing\n-- certainty (albeit a limited kind of certainty that applies only to\none narrow set of conditions).\n\nIt's theoretically possible that bogosort will be faster than\nquicksort in some individual cases. After all, bogosort is O(n) in the\nbest case, which is impossible to beat! But there is no practical\nsense in which bogosort could ever be better than quicksort. Having\nfewer choices by just not offering inherently bad choices seems quite\nunrelated to what you're talking about.\n\nFor all I know you might be onto something.
But it really seems\nindependent to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 30 Sep 2022 12:04:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "> Sure, but the model isn't the problem here, really -- not to me. The\n> problem is that the planner can in some cases choose a plan that is\n> inherently unreasonable, at least in practical terms. You're talking\n> about uncertainties. But I'm actually talking about the opposite thing\n> -- certainty (albeit a limited kind of certainty that applies only to\n> one narrow set of conditions).\n\nI absolutely agree and support your proposal to simply not generate those paths at all unless necessary.\n\n> For all I know you might be onto something. But it really seems\n> independent to me.\n\nYeah, I'm sorry if I highjacked this thread for something related but technically different. I just wanted to expand on your proposal by taking into account the join depth and also not just talking about unparametrized nested loop joins. The research is very clear that the uncertainty is proportional to the join level, and that is the point I am trying to focus the discussion on.\n\nI really encourage everyone to read the VLDB paper. BTW, the unnamed proprietary DBMSs in that paper are the 3 big ones from Washington, California and NY, in that order.\n\n\n", "msg_date": "Sat, 01 Oct 2022 00:19:16 +0200", "msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>", "msg_from_op": true, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Fri, Sep 30, 2022 at 2:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's not just that the risks are ludicrously high, of course. The\n> potential benefits must *also* be very low. It's both factors,\n> together.\n\nHmm, maybe.
But it also wouldn't surprise me very much if someone can\ncome up with a test case where a nested loop with a single row (or\nmaybe no rows) on one side or the other and it's significantly faster\nthan any alternative plan. I believe, though, that even if such cases\nexist, they are probably relatively few in number compared to the\ncases where parameterized nested loops hurt, and the savings are\nprobably small compared to the multiple-orders-of-magnitude slowdowns\nthat you can get when a nested loop goes bad. But they might still be\nrelatively large -- 2x, 3x? -- in absolute terms.\n\nIn the prior discussion, the only person who showed a case in which he\nthought that an unparameterized nested loop might be a clear winner\nwas Tom, but it was just sort of a general \"this kind of case might be\na problem\" thing rather than a fully worked out example with real\ntimings. Perhaps someone ought to try to characterize the kinds of\ncases he mentioned, to help us get a clearer feeling about what, if\nanything, we're gaining from the current scheme.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 2 Oct 2022 06:43:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Sun, Oct 2, 2022 at 3:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Sep 30, 2022 at 2:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It's not just that the risks are ludicrously high, of course. The\n> > potential benefits must *also* be very low. It's both factors,\n> > together.\n>\n> Hmm, maybe. But it also wouldn't surprise me very much if someone can\n> come up with a test case where a nested loop with a single row (or\n> maybe no rows) on one side or the other and it's significantly faster\n> than any alternative plan.\n\nThat's certainly possible, but wouldn't the difference all come from\nfixed startup costs?
If we're talking about a single row, with a\nminimal test case, then the potential downside of this more\nconservative strategy might indeed amount to something like a 2x or 3x\nslowdown, if we look at it in isolation. But why measure it that way?\nI think that absolute differences like milliseconds of execution time\nare much more relevant.\n\nReal production databases have many queries with very diverse\ncharacteristics -- there is a lot going on at any given moment. The\nproportion of queries that will be affected either way by avoiding\nunparamaterized nested loop joins is probably going to be very small.\nNobody is going to notice if only a small subset or all queries are\nmaybe 1 ms or 2 ms slower. As long as it's well within the margin of\nnoise in 100% of all cases, it really shouldn't matter.\n\nAFAICT the choice is one of \"bounded, low upside versus unbounded,\nhigh downside\".\n\n> I believe, though, that even if such cases\n> exist, they are probably relatively few in number compared to the\n> cases where parameterized nested loops hurt, and the savings are\n> probably small compared to the multiple-orders-of-magnitude slowdowns\n> that you can get when a nested loop goes bad. But they might still be\n> relatively large -- 2x, 3x? -- in absolute terms.\n\nI suspect it won't even matter if disallowing unparamaterized nested\nloop joins loses on average.\n\nI am reminded of this:\nhttps://en.wikipedia.org/wiki/St._Petersburg_paradox#Expected_utility_theory\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 2 Oct 2022 11:39:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Fri, Sep 30, 2022 at 3:19 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n> > For all I know you might be onto something.
But it really seems\n> > independent to me.\n>\n> Yeah, I'm sorry if I highjacked this thread for something related but technically different.\n\nI certainly wouldn't say that you hijacked the thread. I'm glad that\nyou revived the discussion, in fact.\n\nThe way that I've framed the problem is probably at least somewhat\ncontroversial. In fact I'm almost certain that at least one or two\npeople will flat out disagree with me. But even if everybody else\nalready thought about unparameterized nested loop joins in the same\nterms, it might still be useful to make the arguments that you've\nmade.\n\nWhat I'm saying is that the probability of \"getting it right\" is\nvirtually irrelevant in the case of these unparameterized nested loop\njoin plans specifically. Any probability that's less than 1.0 is\nalready unacceptable, more or less. A probability of 1.0 is never\nunattainable in the real world, no matter what, so why should the true\nprobability (whatever that means) matter at all? The appropriate\ncourse of action will still be \"just don't do that, ever\".\n\nTo me this dynamic seems qualitatively different to other cases, where\nwe might want to give some weight to uncertainty.
Understanding where\nthe boundaries lie between those trickier cases and this simpler case\nseems important and relevant to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 2 Oct 2022 13:22:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On 29/9/2022 21:32, Benjamin Coutu wrote:\n> I'd like to revamp this important discussion.\n> \n> As is well described in this fairly recent paper here https://www.vldb.org/pvldb/vol9/p204-leis.pdf (which also looks at Postgres) \"estimation errors quickly grow as the number of joins increases, and that these errors are usually the reason for bad plans\" - I think we can all get behind that statement.\n> \n> While nested loop joins work great when cardinality estimates are correct, they are notoriously bad when the optimizer underestimates and they degrade very fast in such cases - the good vs. bad here is very asymmetric. On the other hand hash joins degrade much more gracefully - they are considered very robust against underestimation. The above mentioned paper illustrates that all mayor DBMS (including Postgres) tend to underestimate and usually that underestimation increases drastically with the number of joins (see Figures 3+4 of the paper).\n> \n> Now, a simple approach to guarding against bad plans that arise from underestimation could be to use what I would call a nested-loop-conviction-multiplier based on the current depth of the join tree, e.g. for a base table that multiplier would obviously be 1, but would then grow (e.g.) quadratically. That conviction-multiplier would *NOT* be used to skew the cardinality estimates themselves, but rather be applied to the overall nested loop join cost at each particular stage of the plan when comparing it to other more robust join strategies like hash or sort-merge joins.
That way when we can be sure to have a good estimate at the bottom of the join tree we treat all things equal, but favor nested loops less and less as we move up the join tree for the sake of robustness.\n> Also, we can expand the multiplier whenever we fall back to using the default cardinality constant as surely all bets are off at that point - we should definitely treat nested loop joins as out of favor in this instance and that could easily be incorporated by simply increasing the conviction-mutliplier.\n> \n> What are your thoughts on this simple idea - is it perhaps too simple?\nIn my practice, parameterized nested loop reduces, sometimes \ndrastically, execution time. If your query touches a lot of tables but \nextracts only a tiny part of the data, and you have good coverage by \nindexes, PNL works great.\nMoreover, I have pondered extending parameterization through subqueries \nand groupings.\n\nWhat could you say about a different way: hybrid join? In MS SQL Server, \nthey have such a feature [1], and, according to their description, it \nrequires low overhead. They start from HashJoin and switch to NestLoop \nif the inner input contains too small tuples. It solves the issue, Isn't it?\n\n[1] \nhttps://techcommunity.microsoft.com/t5/sql-server-blog/introducing-batch-mode-adaptive-joins/ba-p/385411\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 20 Sep 2023 14:56:53 +0700", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "On Wed, 20 Sept 2023 at 19:56, Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> What could you say about a different way: hybrid join? In MS SQL Server,\n> they have such a feature [1], and, according to their description, it\n> requires low overhead. They start from HashJoin and switch to NestLoop\n> if the inner input contains too small tuples.
It solves the issue, Isn't it?\n\nA complexity which you may not be considering here is that Nested Loop\njoins always preserve the tuple order from the outer side of the join,\nwhereas hash joins will not do this when multi-batching.\n\nI've no idea how the SQL Server engineers solved that.\n\nDavid\n\n> [1]\n> https://techcommunity.microsoft.com/t5/sql-server-blog/introducing-batch-mode-adaptive-joins/ba-p/385411\n\n\n", "msg_date": "Wed, 20 Sep 2023 21:49:51 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" }, { "msg_contents": "\n\nOn Wed, Sep 20, 2023, at 4:49 PM, David Rowley wrote:\n> On Wed, 20 Sept 2023 at 19:56, Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> What could you say about a different way: hybrid join? In MS SQL Server,\n>> they have such a feature [1], and, according to their description, it\n>> requires low overhead. They start from HashJoin and switch to NestLoop\n>> if the inner input contains too small tuples.
It solves the issue, Isn't it?\n>\n> A complexity which you may not be considering here is that Nested Loop\n> joins always preserve the tuple order from the outer side of the join,\n> whereas hash joins will not do this when multi-batching.\n\nMy idea here is the same as MS SQL guys did: prefetch from the HashJoin inner some predefined number of tuples and, if the planner has made a mistake and overestimated it, move hash join inner to NestLoop as an outer.\nThe opposite strategy, \"underestimation\" - starting with NestLoop and switching to HashJoin looks more difficult, but the main question is: is it worthwhile to research?\n\n> I've no idea how the SQL Server engineers solved that.\n\n>> [1]\n>> https://techcommunity.microsoft.com/t5/sql-server-blog/introducing-batch-mode-adaptive-joins/ba-p/385411\n\n-- \nRegards,\nAndrei Lepikhov\n\n\n", "msg_date": "Wed, 20 Sep 2023 22:17:46 +0700", "msg_from": "\"Lepikhov Andrei\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: disfavoring unparameterized nested loops" } ]
[ { "msg_contents": "Hackers,\n\nPer the documentation in TupleTableSlotOps, an AM can choose not to supply a get_heap_tuple function, and instead set this field to NULL. Doing so appears to almost work, but breaks the xmin and xmax returned by an INSERT..ON CONFLICT DO UPDATE..RETURNING. In particular, the call chain ExecOnConflictUpdate -> ExecUpdate -> table_tuple_update seems to expect that upon return from table_tuple_update, the slot will hold onto a copy of the updated tuple, including its header fields. This assumption is inherent in how the slot is later used by the destination receiver. But for TAMs which do not keep a heap tuple copy of their own, the slot will only have copies of (tts_tupleDescriptor, tts_values, tts_isnull) to use to form up a tuple when the receiver asks for one, and that formed-up MinimalTuple won't be preceded by any meaningful header.\n\nI would expect similar problems for an UPDATE..RETURNING, but have not tested that yet.\n\nI'd like to know if others agree with my analysis, and if this is a bug in the RETURNING, or just an unsupported way to design a custom TAM. If the latter, is this documented somewhere? 
For reference, I am working against REL_14_STABLE.\n\n\n\nDetails....\n\nTo illustrate this issue, I expanded the update.sql a little to give a bit more information, which demonstrates that the xmin and xmax returned are not the same as what gets written to the table for the same row, using a custom TAM named \"pile\" which neglects to provide a get_heap_tuple implementation:\n\n\nSELECT tableoid::regclass, xmin, xmax, pg_current_xact_id()::xid, a, b FROM upsert_test;\n tableoid | xmin | xmax | pg_current_xact_id | a | b\n-------------+------+------+--------------------+---+---------------------------\n upsert_test | 756 | 756 | 757 | 1 | Foo, Correlated, Excluded\n upsert_test | 756 | 756 | 757 | 3 | Zoo, Correlated, Excluded\n(2 rows)\n\nINSERT INTO upsert_test VALUES (2, 'Beeble') ON CONFLICT(a)\n DO UPDATE SET (b, a) = (SELECT b || ', Excluded', a from upsert_test i WHERE i.a = excluded.a)\n RETURNING tableoid::regclass, xmin, xmax, pg_current_xact_id()::xid, xmin = pg_current_xact_id()::xid AS xmin_correct, xmax = 0 AS xmax_correct;\n tableoid | xmin | xmax | pg_current_xact_id | xmin_correct | xmax_correct\n-------------+------+------------+--------------------+--------------+--------------\n upsert_test | 140 | 4294967295 | 758 | f | f\n(1 row)\n\nSELECT tableoid::regclass, xmin, xmax, pg_current_xact_id()::xid, a, b FROM upsert_test;\n tableoid | xmin | xmax | pg_current_xact_id | a | b\n-------------+------+------+--------------------+---+---------------------------\n upsert_test | 756 | 756 | 759 | 1 | Foo, Correlated, Excluded\n upsert_test | 756 | 756 | 759 | 3 | Zoo, Correlated, Excluded\n upsert_test | 758 | 0 | 759 | 2 | Beeble\n(3 rows)\n\n\nAdding a bogus Assert, I can get the following stack trace, showing in frame 4 tts_buffer_pile_copy_heap_tuple is called (rather than the tts_buffer_pile_get_heap_tuple which was called in this location prior to changing the get_heap_tuple to NULL). 
In frame 6, pileam_tuple_update is going to see that shouldFree is true and will free the slot's tuple, so the slot's copy won't be valid by the time the dest receiver wants it. That will force a tts_pile_materialize call, but since the slot's tuple will not be valid, the materialize will operate by forming a tuple from the (descriptor,values,isnull) triple, rather than by copying a tuple, and the pile_form_tuple call won't do anything to set the tuple header fields.\n\n\n* thread #1, stop reason = signal SIGSTOP\n * frame #0: 0x00007fff70ea632a libsystem_kernel.dylib`__pthread_kill + 10\n frame #1: 0x00007fff70f62e60 libsystem_pthread.dylib`pthread_kill + 430\n frame #2: 0x00007fff70e2d808 libsystem_c.dylib`abort + 120\n frame #3: 0x000000010c992251 postgres`ExceptionalCondition(conditionName=\"false\", errorType=\"FailedAssertion\", fileName=\"access/pile_slotops.c\", lineNumber=419) at assert.c:69:2\n frame #4: 0x000000010d33f8b8 pile.so`tts_buffer_pile_copy_heap_tuple(slot=0x00007f9edb02c550) at pile_slotops.c:419:3\n frame #5: 0x000000010d33e2f7 pile.so`ExecFetchSlotPileTuple(slot=0x00007f9edb02c550, materialize=true, shouldFree=0x00007ffee3aa3ace) at pile_slotops.c:639:32\n frame #6: 0x000000010d35bccc pile.so`pileam_tuple_update(relation=0x00007f9ed0083c58, otid=0x00007ffee3aa3f30, slot=0x00007f9edb02c550, cid=0, snapshot=0x00007f9edd04b7f0, crosscheck=0x0000000000000000, wait=true, tmfd=0x00007ffee3aa3cf8, lockmode=0x00007ffee3aa3ce8, update_indexes=0x00007ffee3aa3ce6) at pileam_handler.c:327:20\n frame #7: 0x000000010c51bcad postgres`table_tuple_update(rel=0x00007f9ed0083c58, otid=0x00007ffee3aa3f30, slot=0x00007f9edb02c550, cid=0, snapshot=0x00007f9edd04b7f0, crosscheck=0x0000000000000000, wait=true, tmfd=0x00007ffee3aa3cf8, lockmode=0x00007ffee3aa3ce8, update_indexes=0x00007ffee3aa3ce6) at tableam.h:1509:9\n frame #8: 0x000000010c518ec7 postgres`ExecUpdate(mtstate=0x00007f9edd0acd40, resultRelInfo=0x00007f9edd0acf58, tupleid=0x00007ffee3aa3f30, 
oldtuple=0x0000000000000000, slot=0x00007f9edb02c550, planSlot=0x00007f9edd0ad540, epqstate=0x00007f9edd0ace28, estate=0x00007f9edd0abd20, canSetTag=true) at nodeModifyTable.c:1809:12\n frame #9: 0x000000010c51b187 postgres`ExecOnConflictUpdate(mtstate=0x00007f9edd0acd40, resultRelInfo=0x00007f9edd0acf58, conflictTid=0x00007ffee3aa3f30, planSlot=0x00007f9edd0ad540, excludedSlot=0x00007f9edb02ec40, estate=0x00007f9edd0abd20, canSetTag=true, returning=0x00007ffee3aa3f18) at nodeModifyTable.c:2199:15\n frame #10: 0x000000010c518453 postgres`ExecInsert(mtstate=0x00007f9edd0acd40, resultRelInfo=0x00007f9edd0acf58, slot=0x00007f9edb02ec40, planSlot=0x00007f9edd0ad540, estate=0x00007f9edd0abd20, canSetTag=true) at nodeModifyTable.c:870:10\n frame #11: 0x000000010c516fd4 postgres`ExecModifyTable(pstate=0x00007f9edd0acd40) at nodeModifyTable.c:2583:12\n frame #12: 0x000000010c4d4862 postgres`ExecProcNodeFirst(node=0x00007f9edd0acd40) at execProcnode.c:464:9\n frame #13: 0x000000010c4cc6d2 postgres`ExecProcNode(node=0x00007f9edd0acd40) at executor.h:257:9\n frame #14: 0x000000010c4c7d21 postgres`ExecutePlan(estate=0x00007f9edd0abd20, planstate=0x00007f9edd0acd40, use_parallel_mode=false, operation=CMD_INSERT, sendTuples=true, numberTuples=0, direction=ForwardScanDirection, dest=0x00007f9edc00b458, execute_once=true) at execMain.c:1551:10\n frame #15: 0x000000010c4c7bf1 postgres`standard_ExecutorRun(queryDesc=0x00007f9edc00b4f0, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:361:3\n frame #16: 0x000000010c4c7982 postgres`ExecutorRun(queryDesc=0x00007f9edc00b4f0, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:305:3\n frame #17: 0x000000010c7930dc postgres`ProcessQuery(plan=0x00007f9edb021fc0, sourceText=\"WITH aaa AS (SELECT 1 AS a, 'Foo' AS b) INSERT INTO upsert_test\\n VALUES (1, 'Bar') ON CONFLICT(a)\\n DO UPDATE SET (b, a) = (SELECT b, a FROM aaa) RETURNING *;\", params=0x0000000000000000, queryEnv=0x0000000000000000, 
dest=0x00007f9edc00b458, qc=0x00007ffee3aa4408) at pquery.c:160:2\n frame #18: 0x000000010c791f07 postgres`PortalRunMulti(portal=0x00007f9edd028920, isTopLevel=true, setHoldSnapshot=true, dest=0x00007f9edc00b458, altdest=0x000000010cbc8890, qc=0x00007ffee3aa4408) at pquery.c:1274:5\n frame #19: 0x000000010c791835 postgres`FillPortalStore(portal=0x00007f9edd028920, isTopLevel=true) at pquery.c:1023:4\n frame #20: 0x000000010c7913ee postgres`PortalRun(portal=0x00007f9edd028920, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x00007f9edb0220b0, altdest=0x00007f9edb0220b0, qc=0x00007ffee3aa46a0) at pquery.c:760:6\n frame #21: 0x000000010c78c394 postgres`exec_simple_query(query_string=\"WITH aaa AS (SELECT 1 AS a, 'Foo' AS b) INSERT INTO upsert_test\\n VALUES (1, 'Bar') ON CONFLICT(a)\\n DO UPDATE SET (b, a) = (SELECT b, a FROM aaa) RETURNING *;\") at postgres.c:1213:10\n frame #22: 0x000000010c78b3f7 postgres`PostgresMain(argc=1, argv=0x00007ffee3aa49d0, dbname=\"contrib_regression\", username=\"mark.dilger\") at postgres.c:4496:7\n frame #23: 0x000000010c692a59 postgres`BackendRun(port=0x00007f9edc804080) at postmaster.c:4530:2\n frame #24: 0x000000010c691fa5 postgres`BackendStartup(port=0x00007f9edc804080) at postmaster.c:4252:3\n frame #25: 0x000000010c690d0e postgres`ServerLoop at postmaster.c:1745:7\n frame #26: 0x000000010c68e23a postgres`PostmasterMain(argc=8, argv=0x00007f9edac06440) at postmaster.c:1417:11\n frame #27: 0x000000010c565249 postgres`main(argc=8, argv=0x00007f9edac06440) at main.c:209:3\n frame #28: 0x00007fff70d5ecc9 libdyld.dylib`start + 1\n frame #29: 0x00007fff70d5ecc9 libdyld.dylib`start + 1\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 09:04:42 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Should setting TupleTableSlotOps get_heap_tuple=NULL break INSERT..ON\n CONFLICT DO 
UPDATE..RETURNING?" }, { "msg_contents": "Hi,\n\nOn 2022-09-29 09:04:42 -0700, Mark Dilger wrote:\n> Per the documentation in TupleTableSlotOps, an AM can choose not to supply a\n> get_heap_tuple function, and instead set this field to NULL. Doing so\n> appears to almost work, but breaks the xmin and xmax returned by a\n> INSERT..ON CONFLICT DO UPDATE..RETURNING. In particular, the call chain\n> ExecOnConflictUpdate -> ExecUpdate -> table_tuple_update seems to expect\n> that upon return from table_tuple_update, the slot will hold onto a copy of\n> the updated tuple, including its header fields. This assumption is inherent\n> in how the slot is later used by the destination receiver. But for TAMs\n> which do not keep a copy heaptuple of their own, the slot will only have\n> copies of (tts_tupleDescriptor, tts_values, tts_isnull) to use to form up a\n> tuple when the receiver asks for one, and that formed up MinimalTuple won't\n> be preceded by any meaningful header.\n\nI would assume that this can be avoided by the tuple slot implementation, but\nwithout seeing what precisely you did in your pile slot...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 29 Sep 2022 09:22:29 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Should setting TupleTableSlotOps get_heap_tuple=NULL break\n INSERT..ON CONFLICT DO UPDATE..RETURNING?" }, { "msg_contents": "\n\n> On Sep 29, 2022, at 9:22 AM, Andres Freund <andres@anarazel.de> wrote:\n> \n> I would assume that this can be avoided by the tuple slot implementation, but\n> without seeing what precisely you did in your pile slot...\n\n\"pile\" is just a copy of \"heap\" placed into an extension with a slightly smarter version of s/heap/pile/g performed across the sources. It is intended to behave exactly as heap does. Without disabling the get_heap_tuple function, it passes a wide variety of the regression/isolation/tap tests. 
To test the claim made in the TupleTableSlotOps code comments, I disabled that one function:\n\n /*\n * Return a heap tuple \"owned\" by the slot. It is slot's responsibility to\n * free the memory consumed by the heap tuple. If the slot can not \"own\" a\n * heap tuple, it should not implement this callback and should set it as\n * NULL.\n */\n HeapTuple (*get_heap_tuple) (TupleTableSlot *slot);\n\nThat comment suggests that I do not need to keep a copy of the heap tuple, and per the next comment:\n\n /*\n * Return a copy of heap tuple representing the contents of the slot. The\n * copy needs to be palloc'd in the current memory context. The slot\n * itself is expected to remain unaffected. It is *not* expected to have\n * meaningful \"system columns\" in the copy. The copy is not be \"owned\" by\n * the slot i.e. the caller has to take responsibility to free memory\n * consumed by the slot.\n */\n HeapTuple (*copy_heap_tuple) (TupleTableSlot *slot);\n\nI do not need to keep a copy of the \"system columns\". But clearly this doesn't work. When get_heap_tuple=NULL, the AM's tuple_update is at liberty to free the update tuple (per the above documentation) and later return a copy of the slot's tuple sans any \"system columns\" (also per the above documentation) and that's when the core code breaks. It's not the TAM that is broken here, not according to the interface's documentation as I read it. Am I reading it wrong?\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 29 Sep 2022 09:40:02 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Should setting TupleTableSlotOps get_heap_tuple=NULL break\n INSERT..ON CONFLICT DO UPDATE..RETURNING?" } ]
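The header-loss mechanism described in this thread can be reduced to a toy model. `ToyTuple`, `ToySlot`, and `toy_materialize` below are invented for illustration (they are not PostgreSQL types); the sketch only shows why materializing from deformed values cannot recover system columns, while copying an owned tuple preserves them.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-ins, not PostgreSQL's structures. */
typedef struct ToyTuple
{
    unsigned xmin;              /* "system columns" live in the header */
    unsigned xmax;
    int      a;                 /* user column */
} ToyTuple;

typedef struct ToySlot
{
    ToyTuple *tuple;            /* owned copy, or NULL for a virtual slot */
    int       a_value;          /* deformed user-column value */
} ToySlot;

/*
 * Materialize the slot's contents into a fresh tuple.  With an owned
 * tuple we can copy it wholesale, header included; with only the
 * deformed values we must form a new tuple, and the header fields
 * come out zeroed.
 */
static ToyTuple *
toy_materialize(const ToySlot *slot)
{
    ToyTuple *out = calloc(1, sizeof(ToyTuple));

    if (slot->tuple != NULL)
        memcpy(out, slot->tuple, sizeof(ToyTuple));
    else
        out->a = slot->a_value;     /* xmin/xmax stay 0 */
    return out;
}
```

In this model, the core code's expectation after table_tuple_update corresponds to the memcpy branch; a slot implementation with get_heap_tuple = NULL can end up on the form-from-values branch, which matches the bogus xmin/xmax seen in the RETURNING output earlier in the thread.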