[
{
"msg_contents": "Hi,\nI was looking at the following commit:\n\ncommit efc981627a723d91e86865fb363d793282e473d1\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Thu Nov 24 08:21:55 2022 +0900\n\n Rework memory contexts in charge of HBA/ident tokenization\n\nI think when the file cannot be opened, the context should be deleted.\n\nPlease see attached patch.\n\nI also modified one comment where `deleted` would be more appropriate verb\nfor the context.\n\nCheers",
"msg_date": "Wed, 23 Nov 2022 16:54:58 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "cleanup in open_auth_file"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 4:54 PM Ted Yu <yuzhihong@gmail.com> wrote:\n\n> Hi,\n> I was looking at the following commit:\n>\n> commit efc981627a723d91e86865fb363d793282e473d1\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: Thu Nov 24 08:21:55 2022 +0900\n>\n> Rework memory contexts in charge of HBA/ident tokenization\n>\n> I think when the file cannot be opened, the context should be deleted.\n>\n> Please see attached patch.\n>\n> I also modified one comment where `deleted` would be more appropriate verb\n> for the context.\n>\n> Cheers\n>\n\nThinking more on this.\nThe context should be created when the file is successfully opened.\n\nPlease take a look at patch v2.",
"msg_date": "Wed, 23 Nov 2022 17:09:22 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: cleanup in open_auth_file"
},
{
"msg_contents": "On Wed, Nov 23, 2022 at 05:09:22PM -0800, Ted Yu wrote:\n> Thinking more on this.\n> The context should be created when the file is successfully opened.\n\nIndeed. Both operations ought to be done in the reverse order, or we\nwould run into leaks in the postmaster on reload if pg_ident.conf has\nbeen removed, for example, and this is perfectly valid. That's what\nthe previous logic did, actually. Will fix in a minute..\n--\nMichael",
"msg_date": "Thu, 24 Nov 2022 10:19:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cleanup in open_auth_file"
}
]
[
{
"msg_contents": "Hi,\n\nWhile working on something else, I noticed that the proc array group\nXID clearing leader resets procArrayGroupNext of all the followers\natomically along with procArrayGroupMember. ISTM that it's enough for\nthe followers to exit the wait loop and continue if the leader resets\njust procArrayGroupMember, the followers can reset procArrayGroupNext\nby themselves. This relieves the leader a bit, especially when there\nare many followers, as it avoids a bunch of atomic writes and\npg_write_barrier() for the leader .\n\nI'm attaching a small patch with the above change. It doesn't seem to\nbreak anything, the cirrus-ci members are happy\nhttps://github.com/BRupireddy/postgres/tree/allow_processes_to_reset_proc_array_v1.\nIt doesn't seem to show visible benefit on my system, nor hurt anyone\nin my testing [1].\n\nThoughts?\n\n[1]\n# of clients HEAD PATCHED\n1 31661 31512\n2 67134 66789\n4 135084 132372\n8 254174 255384\n16 418598 420903\n32 491922 494183\n64 509824 509451\n128 513298 512838\n256 505191 496266\n512 465208 453588\n768 431304 425736\n1024 398110 397352\n2048 308732 308901\n4096 200355 219480\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 24 Nov 2022 10:43:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow processes to reset procArrayGroupNext themselves instead of\n leader resetting for all the followers"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-24 10:43:46 +0530, Bharath Rupireddy wrote:\n> While working on something else, I noticed that the proc array group\n> XID clearing leader resets procArrayGroupNext of all the followers\n> atomically along with procArrayGroupMember. ISTM that it's enough for\n> the followers to exit the wait loop and continue if the leader resets\n> just procArrayGroupMember, the followers can reset procArrayGroupNext\n> by themselves. This relieves the leader a bit, especially when there\n> are many followers, as it avoids a bunch of atomic writes and\n> pg_write_barrier() for the leader .\n\nI doubt this is a useful change - the leader already has to modify the\nrelevant cacheline (for procArrayGroupMember). That makes it pretty much free\nto modify another field in the same cacheline.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Nov 2022 13:18:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Allow processes to reset procArrayGroupNext themselves instead\n of leader resetting for all the followers"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 2:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-24 10:43:46 +0530, Bharath Rupireddy wrote:\n> > While working on something else, I noticed that the proc array group\n> > XID clearing leader resets procArrayGroupNext of all the followers\n> > atomically along with procArrayGroupMember. ISTM that it's enough for\n> > the followers to exit the wait loop and continue if the leader resets\n> > just procArrayGroupMember, the followers can reset procArrayGroupNext\n> > by themselves. This relieves the leader a bit, especially when there\n> > are many followers, as it avoids a bunch of atomic writes and\n> > pg_write_barrier() for the leader .\n>\n> I doubt this is a useful change - the leader already has to modify the\n> relevant cacheline (for procArrayGroupMember). That makes it pretty much free\n> to modify another field in the same cacheline.\n\nAgreed. Thanks for the response.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 11:23:15 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow processes to reset procArrayGroupNext themselves instead of\n leader resetting for all the followers"
}
]
[
{
"msg_contents": "Looking at a recent pg_upgrade thread I happened to notice that the check for\nroles with a pg_ prefix only reports the error, not the roles it found. Other\nsimilar checks where the user is expected to alter the old cluster typically\nreports the found objects in a textfile. The attached adds reporting to make\nthat class of checks consistent (the check for prepared transactions which also\nisn't reporting is different IMO as it doesn't expect ALTER commands).\n\nAs this check is only executed against the old cluster the patch removes the\ncheck when printing the error.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 24 Nov 2022 12:31:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Report roles in pg_upgrade pg_ prefix check"
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 12:31:09PM +0100, Daniel Gustafsson wrote:\n> Looking at a recent pg_upgrade thread I happened to notice that the check for\n> roles with a pg_ prefix only reports the error, not the roles it found. Other\n> similar checks where the user is expected to alter the old cluster typically\n> reports the found objects in a textfile. The attached adds reporting to make\n> that class of checks consistent (the check for prepared transactions which also\n> isn't reporting is different IMO as it doesn't expect ALTER commands).\n> \n> As this check is only executed against the old cluster the patch removes the\n> check when printing the error.\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Fri, 25 Nov 2022 15:20:02 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Report roles in pg_upgrade pg_ prefix check"
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 12:31:09PM +0100, Daniel Gustafsson wrote:\n> Looking at a recent pg_upgrade thread I happened to notice that the check for\n> roles with a pg_ prefix only reports the error, not the roles it found. Other\n> similar checks where the user is expected to alter the old cluster typically\n> reports the found objects in a textfile. The attached adds reporting to make\n> that class of checks consistent (the check for prepared transactions which also\n> isn't reporting is different IMO as it doesn't expect ALTER commands).\n> \n> As this check is only executed against the old cluster the patch removes the\n> check when printing the error.\n\n+1. A backpatch would be nice, though not strictly mandatory as\nthat's not a bug fix.\n\n+ ntups = PQntuples(res);\n+ i_rolname = PQfnumber(res, \"rolname\");\n\nWould it be worth adding the OID on top of the role name in the\ngenerated report? That would be a free meal.\n--\nMichael",
"msg_date": "Mon, 28 Nov 2022 10:18:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Report roles in pg_upgrade pg_ prefix check"
},
{
"msg_contents": "> On 28 Nov 2022, at 02:18, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Nov 24, 2022 at 12:31:09PM +0100, Daniel Gustafsson wrote:\n>> Looking at a recent pg_upgrade thread I happened to notice that the check for\n>> roles with a pg_ prefix only reports the error, not the roles it found. Other\n>> similar checks where the user is expected to alter the old cluster typically\n>> reports the found objects in a textfile. The attached adds reporting to make\n>> that class of checks consistent (the check for prepared transactions which also\n>> isn't reporting is different IMO as it doesn't expect ALTER commands).\n>> \n>> As this check is only executed against the old cluster the patch removes the\n>> check when printing the error.\n> \n> +1. A backpatch would be nice, though not strictly mandatory as\n> that's not a bug fix.\n\nYeah, it doesn't really qualify since this not a bugfix.\n\n> + ntups = PQntuples(res);\n> + i_rolname = PQfnumber(res, \"rolname\");\n> \n> Would it be worth adding the OID on top of the role name in the\n> generated report? That would be a free meal.\n\nWe are a bit inconsistent in how much details we include in the report\ntextfiles, so could do that without breaking any consistency in reporting.\nLooking at other checks, the below format would match what we already do fairly\nwell:\n\n<rolname> (oid=<oid>)\n\nDone in the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 28 Nov 2022 09:58:46 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Report roles in pg_upgrade pg_ prefix check"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 09:58:46AM +0100, Daniel Gustafsson wrote:\n> We are a bit inconsistent in how much details we include in the report\n> textfiles, so could do that without breaking any consistency in reporting.\n> Looking at other checks, the below format would match what we already do fairly\n> well:\n> \n> <rolname> (oid=<oid>)\n> \n> Done in the attached.\n\nWFM. Thanks!\n--\nMichael",
"msg_date": "Tue, 29 Nov 2022 08:24:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Report roles in pg_upgrade pg_ prefix check"
},
{
"msg_contents": "> On 29 Nov 2022, at 00:24, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Nov 28, 2022 at 09:58:46AM +0100, Daniel Gustafsson wrote:\n>> We are a bit inconsistent in how much details we include in the report\n>> textfiles, so could do that without breaking any consistency in reporting.\n>> Looking at other checks, the below format would match what we already do fairly\n>> well:\n>> \n>> <rolname> (oid=<oid>)\n>> \n>> Done in the attached.\n> \n> WFM. Thanks!\n\nTook another look at it, and applied it. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 13:27:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Report roles in pg_upgrade pg_ prefix check"
}
]
[
{
"msg_contents": "Hi,\nI was looking at :\n\ncommit d09dbeb9bde6b9faabd30e887eff4493331d6424\nAuthor: David Rowley <drowley@postgresql.org>\nDate: Thu Nov 24 17:21:44 2022 +1300\n\n Speedup hash index builds by skipping needless binary searches\n\nIn _hash_pgaddtup(), it seems the indentation is off for the assertion.\n\nPlease take a look at the patch.\n\nThanks",
"msg_date": "Thu, 24 Nov 2022 04:42:31 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "indentation in _hash_pgaddtup()"
},
{
"msg_contents": "> On 24 Nov 2022, at 13:42, Ted Yu <yuzhihong@gmail.com> wrote:\n\n> In _hash_pgaddtup(), it seems the indentation is off for the assertion.\n> \n> Please take a look at the patch.\n\nIndentation is handled by applying src/tools/pgindent to the code, and\nre-running it on this file yields no re-indentation so this is in fact correct\naccording to the pgindent rules.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 24 Nov 2022 15:11:27 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 24 Nov 2022, at 13:42, Ted Yu <yuzhihong@gmail.com> wrote:\n>> In _hash_pgaddtup(), it seems the indentation is off for the assertion.\n\n> Indentation is handled by applying src/tools/pgindent to the code, and\n> re-running it on this file yields no re-indentation so this is in fact correct\n> according to the pgindent rules.\n\nIt is one messy bit of code though --- perhaps a little more thought\nabout where to put line breaks would help? Alternatively, it could\nbe split into multiple statements, along the lines of\n\n#ifdef USE_ASSERT_CHECKING\n if (PageGetMaxOffsetNumber(page) > 0)\n {\n IndexTuple lasttup = PageGetItem(page,\n PageGetItemId(page,\n PageGetMaxOffsetNumber(page)));\n\n Assert(_hash_get_indextuple_hashkey(lasttup) <=\n _hash_get_indextuple_hashkey(itup));\n }\n#endif\n\n(details obviously tweakable)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Nov 2022 10:11:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 7:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 24 Nov 2022, at 13:42, Ted Yu <yuzhihong@gmail.com> wrote:\n> >> In _hash_pgaddtup(), it seems the indentation is off for the assertion.\n>\n> > Indentation is handled by applying src/tools/pgindent to the code, and\n> > re-running it on this file yields no re-indentation so this is in fact\n> correct\n> > according to the pgindent rules.\n>\n> It is one messy bit of code though --- perhaps a little more thought\n> about where to put line breaks would help? Alternatively, it could\n> be split into multiple statements, along the lines of\n>\n> #ifdef USE_ASSERT_CHECKING\n> if (PageGetMaxOffsetNumber(page) > 0)\n> {\n> IndexTuple lasttup = PageGetItem(page,\n> PageGetItemId(page,\n>\n> PageGetMaxOffsetNumber(page)));\n>\n> Assert(_hash_get_indextuple_hashkey(lasttup) <=\n> _hash_get_indextuple_hashkey(itup));\n> }\n> #endif\n>\n> (details obviously tweakable)\n>\n> regards, tom lane\n>\n\nThanks Tom for the suggestion.\n\nHere is patch v2.",
"msg_date": "Thu, 24 Nov 2022 07:29:05 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "On Fri, 25 Nov 2022 at 04:29, Ted Yu <yuzhihong@gmail.com> wrote:\n> Here is patch v2.\n\nAfter running pgindent on v2, I see it still pushes the lines out\nquite far. If I add a new line after PageGetItemId(page, and put the\nvariable assignment away from the variable declaration then it looks a\nbit better. It's still 1 char over the limit.\n\nDavid",
"msg_date": "Fri, 25 Nov 2022 09:24:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> After running pgindent on v2, I see it still pushes the lines out\n> quite far. If I add a new line after PageGetItemId(page, and put the\n> variable assignment away from the variable declaration then it looks a\n> bit better. It's still 1 char over the limit.\n\nIf you wanted to be hard-nosed about 80 character width, you could\npull out the PageGetItemId call into a separate local variable.\nI wasn't going to be quite that picky, but I won't object if that\nseems better to you.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 24 Nov 2022 15:31:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "On Thu, Nov 24, 2022 at 12:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > After running pgindent on v2, I see it still pushes the lines out\n> > quite far. If I add a new line after PageGetItemId(page, and put the\n> > variable assignment away from the variable declaration then it looks a\n> > bit better. It's still 1 char over the limit.\n>\n> If you wanted to be hard-nosed about 80 character width, you could\n> pull out the PageGetItemId call into a separate local variable.\n> I wasn't going to be quite that picky, but I won't object if that\n> seems better to you.\n>\n> regards, tom lane\n>\n\nPatch v4 stores ItemId in a local variable.",
"msg_date": "Thu, 24 Nov 2022 12:39:59 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "On Fri, 25 Nov 2022 at 09:40, Ted Yu <yuzhihong@gmail.com> wrote:\n> Patch v4 stores ItemId in a local variable.\n\nok, I pushed that one. Thank you for working on this.\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Nov 2022 10:11:45 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
},
{
"msg_contents": "On Fri, 25 Nov 2022 at 09:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If you wanted to be hard-nosed about 80 character width, you could\n> pull out the PageGetItemId call into a separate local variable.\n> I wasn't going to be quite that picky, but I won't object if that\n> seems better to you.\n\nI wasn't too worried about the 1 char, but Ted wrote a patch and it\nlooked a little nicer.\n\nDavid\n\n\n",
"msg_date": "Fri, 25 Nov 2022 10:12:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: indentation in _hash_pgaddtup()"
}
]
[
{
"msg_contents": "Hi,\n\nWhile working on something else, I noticed that\nWaitXLogInsertionsToFinish() goes the LWLockWaitForVar() route even\nfor a process that's holding a WAL insertion lock. Typically, a\nprocess holding WAL insertion lock reaches\nWaitXLogInsertionsToFinish() when it's in need of WAL buffer pages for\nits insertion and waits for other older in-progress WAL insertions to\nfinish. This fact guarantees that the process holding a WAL insertion\nlock will never have its insertingAt less than 'upto'.\n\nWith that said, here's a small improvement I can think of, that is, to\navoid calling LWLockWaitForVar() for the WAL insertion lock the caller\nof WaitXLogInsertionsToFinish() currently holds. Since\nLWLockWaitForVar() does a bunch of things - holds interrupts, does\natomic reads, acquires and releases wait list lock and so on, avoiding\nit may be a good idea IMO.\n\nI'm attaching a patch herewith. Here's the cirrus-ci tests -\nhttps://github.com/BRupireddy/postgres/tree/avoid_LWLockWaitForVar_for_currently_held_wal_ins_lock_v1.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 24 Nov 2022 18:13:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-24 18:13:10 +0530, Bharath Rupireddy wrote:\n> With that said, here's a small improvement I can think of, that is, to\n> avoid calling LWLockWaitForVar() for the WAL insertion lock the caller\n> of WaitXLogInsertionsToFinish() currently holds. Since\n> LWLockWaitForVar() does a bunch of things - holds interrupts, does\n> atomic reads, acquires and releases wait list lock and so on, avoiding\n> it may be a good idea IMO.\n\nThat doesn't seem like a big win. We're still going to call LWLockWaitForVar()\nfor all the other locks.\n\nI think we could improve this code more significantly by avoiding the call to\nLWLockWaitForVar() for all locks that aren't acquired or don't have a\nconflicting insertingAt, that'd require just a bit more work to handle systems\nwithout tear-free 64bit writes/reads.\n\nThe easiest way would probably be to just make insertingAt a 64bit atomic,\nthat transparently does the required work to make even non-atomic read/writes\ntear free. Then we could trivially avoid the spinlock in\nLWLockConflictsWithVar(), LWLockReleaseClearVar() and with just a bit more\nwork add a fastpath to LWLockUpdateVar(). We don't need to acquire the wait\nlist lock if there aren't any waiters.\n\nI'd bet that start to have visible effects in a workload with many small\nrecords.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 24 Nov 2022 10:46:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock\n in WaitXLogInsertionsToFinish()"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-11-24 18:13:10 +0530, Bharath Rupireddy wrote:\n> > With that said, here's a small improvement I can think of, that is, to\n> > avoid calling LWLockWaitForVar() for the WAL insertion lock the caller\n> > of WaitXLogInsertionsToFinish() currently holds. Since\n> > LWLockWaitForVar() does a bunch of things - holds interrupts, does\n> > atomic reads, acquires and releases wait list lock and so on, avoiding\n> > it may be a good idea IMO.\n>\n> That doesn't seem like a big win. We're still going to call LWLockWaitForVar()\n> for all the other locks.\n>\n> I think we could improve this code more significantly by avoiding the call to\n> LWLockWaitForVar() for all locks that aren't acquired or don't have a\n> conflicting insertingAt, that'd require just a bit more work to handle systems\n> without tear-free 64bit writes/reads.\n>\n> The easiest way would probably be to just make insertingAt a 64bit atomic,\n> that transparently does the required work to make even non-atomic read/writes\n> tear free. Then we could trivially avoid the spinlock in\n> LWLockConflictsWithVar(), LWLockReleaseClearVar() and with just a bit more\n> work add a fastpath to LWLockUpdateVar(). We don't need to acquire the wait\n> list lock if there aren't any waiters.\n>\n> I'd bet that start to have visible effects in a workload with many small\n> records.\n\nThanks Andres! I quickly came up with the attached patch. I also ran\nan insert test [1], below are the results. I also attached the results\ngraph. The cirrus-ci is happy with the patch -\nhttps://github.com/BRupireddy/postgres/tree/wal_insertion_lock_improvements_v1_2.\nPlease let me know if the direction of the patch seems right.\nclients HEAD PATCHED\n1 1354 1499\n2 1451 1464\n4 3069 3073\n8 5712 5797\n16 11331 11157\n32 22020 22074\n64 41742 42213\n128 71300 76638\n256 103652 118944\n512 111250 161582\n768 99544 161987\n1024 96743 164161\n2048 72711 156686\n4096 54158 135713\n\n[1]\n./configure --prefix=$PWD/inst/ CFLAGS=\"-O3\" > install.log && make -j\n8 install > install.log 2>&1 &\ncd inst/bin\n./pg_ctl -D data -l logfile stop\nrm -rf data logfile\nfree -m\nsudo su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'\nfree -m\n./initdb -D data\n./pg_ctl -D data -l logfile start\n./psql -d postgres -c 'ALTER SYSTEM SET shared_buffers = \"8GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET max_wal_size = \"16GB\";'\n./psql -d postgres -c 'ALTER SYSTEM SET max_connections = \"4096\";'\n./pg_ctl -D data -l logfile restart\n./pgbench -i -s 1 -d postgres\n./psql -d postgres -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT\npgbench_accounts_pkey;\"\ncat << EOF >> insert.sql\n\\set aid random(1, 10 * :scale)\n\\set delta random(1, 100000 * :scale)\nINSERT INTO pgbench_accounts (aid, bid, abalance) VALUES (:aid, :aid, :delta);\nEOF\nulimit -S -n 5000\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -f insert.sql -c$c\n-j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 25 Nov 2022 16:54:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish()"
},
{
"msg_contents": "On 2022-11-25 16:54:19 +0530, Bharath Rupireddy wrote:\n> On Fri, Nov 25, 2022 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think we could improve this code more significantly by avoiding the call to\n> > LWLockWaitForVar() for all locks that aren't acquired or don't have a\n> > conflicting insertingAt, that'd require just a bit more work to handle systems\n> > without tear-free 64bit writes/reads.\n> >\n> > The easiest way would probably be to just make insertingAt a 64bit atomic,\n> > that transparently does the required work to make even non-atomic read/writes\n> > tear free. Then we could trivially avoid the spinlock in\n> > LWLockConflictsWithVar(), LWLockReleaseClearVar() and with just a bit more\n> > work add a fastpath to LWLockUpdateVar(). We don't need to acquire the wait\n> > list lock if there aren't any waiters.\n> >\n> > I'd bet that start to have visible effects in a workload with many small\n> > records.\n> \n> Thanks Andres! I quickly came up with the attached patch. I also ran\n> an insert test [1], below are the results. I also attached the results\n> graph. The cirrus-ci is happy with the patch -\n> https://github.com/BRupireddy/postgres/tree/wal_insertion_lock_improvements_v1_2.\n> Please let me know if the direction of the patch seems right.\n> clients HEAD PATCHED\n> 1 1354 1499\n> 2 1451 1464\n> 4 3069 3073\n> 8 5712 5797\n> 16 11331 11157\n> 32 22020 22074\n> 64 41742 42213\n> 128 71300 76638\n> 256 103652 118944\n> 512 111250 161582\n> 768 99544 161987\n> 1024 96743 164161\n> 2048 72711 156686\n> 4096 54158 135713\n\nNice.\n\n\n> From 293e789f9c1a63748147acd613c556961f1dc5c4 Mon Sep 17 00:00:00 2001\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> Date: Fri, 25 Nov 2022 10:53:56 +0000\n> Subject: [PATCH v1] WAL Insertion Lock Improvements\n> \n> ---\n> src/backend/access/transam/xlog.c | 8 +++--\n> src/backend/storage/lmgr/lwlock.c | 56 +++++++++++++++++--------------\n> src/include/storage/lwlock.h | 7 ++--\n> 3 files changed, 41 insertions(+), 30 deletions(-)\n> \n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index a31fbbff78..b3f758abb3 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -376,7 +376,7 @@ typedef struct XLogwrtResult\n> typedef struct\n> {\n> \tLWLock\t\tlock;\n> -\tXLogRecPtr\tinsertingAt;\n> +\tpg_atomic_uint64\tinsertingAt;\n> \tXLogRecPtr\tlastImportantAt;\n> } WALInsertLock;\n> \n> @@ -1482,6 +1482,10 @@ WaitXLogInsertionsToFinish(XLogRecPtr upto)\n> \t{\n> \t\tXLogRecPtr\tinsertingat = InvalidXLogRecPtr;\n> \n> +\t\t/* Quickly check and continue if no one holds the lock. */\n> +\t\tif (!IsLWLockHeld(&WALInsertLocks[i].l.lock))\n> +\t\t\tcontinue;\n\nI'm not sure this is quite right - don't we need a memory barrier. But I don't\nsee a reason to not just leave this code as-is. I think this should be\noptimized entirely in lwlock.c\n\n\nI'd probably split the change to an atomic from other changes either way.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 16:40:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoid LWLockWaitForVar() for currently held WAL insertion lock\n in WaitXLogInsertionsToFinish()"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 6:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-11-25 16:54:19 +0530, Bharath Rupireddy wrote:\n> > On Fri, Nov 25, 2022 at 12:16 AM Andres Freund <andres@anarazel.de> wrote:\n> > > I think we could improve this code more significantly by avoiding the call to\n> > > LWLockWaitForVar() for all locks that aren't acquired or don't have a\n> > > conflicting insertingAt, that'd require just a bit more work to handle systems\n> > > without tear-free 64bit writes/reads.\n> > >\n> > > The easiest way would probably be to just make insertingAt a 64bit atomic,\n> > > that transparently does the required work to make even non-atomic read/writes\n> > > tear free. Then we could trivially avoid the spinlock in\n> > > LWLockConflictsWithVar(), LWLockReleaseClearVar() and with just a bit more\n> > > work add a fastpath to LWLockUpdateVar(). We don't need to acquire the wait\n> > > list lock if there aren't any waiters.\n> > >\n> > > I'd bet that start to have visible effects in a workload with many small\n> > > records.\n> >\n> > Thanks Andres! I quickly came up with the attached patch. I also ran\n> > an insert test [1], below are the results. I also attached the results\n> > graph. The cirrus-ci is happy with the patch -\n> > https://github.com/BRupireddy/postgres/tree/wal_insertion_lock_improvements_v1_2.\n> > Please let me know if the direction of the patch seems right.\n> > clients HEAD PATCHED\n> > 1 1354 1499\n> > 2 1451 1464\n> > 4 3069 3073\n> > 8 5712 5797\n> > 16 11331 11157\n> > 32 22020 22074\n> > 64 41742 42213\n> > 128 71300 76638\n> > 256 103652 118944\n> > 512 111250 161582\n> > 768 99544 161987\n> > 1024 96743 164161\n> > 2048 72711 156686\n> > 4096 54158 135713\n>\n> Nice.\n\nThanks for taking a look at it.\n\n> > From 293e789f9c1a63748147acd613c556961f1dc5c4 Mon Sep 17 00:00:00 2001\n> > From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > Date: Fri, 25 Nov 2022 10:53:56 +0000\n> > Subject: [PATCH v1] WAL Insertion Lock Improvements\n> >\n> > ---\n> > src/backend/access/transam/xlog.c | 8 +++--\n> > src/backend/storage/lmgr/lwlock.c | 56 +++++++++++++++++--------------\n> > src/include/storage/lwlock.h | 7 ++--\n> > 3 files changed, 41 insertions(+), 30 deletions(-)\n> >\n> > diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> > index a31fbbff78..b3f758abb3 100644\n> > --- a/src/backend/access/transam/xlog.c\n> > +++ b/src/backend/access/transam/xlog.c\n> > @@ -376,7 +376,7 @@ typedef struct XLogwrtResult\n> > typedef struct\n> > {\n> > LWLock lock;\n> > - XLogRecPtr insertingAt;\n> > + pg_atomic_uint64 insertingAt;\n> > XLogRecPtr lastImportantAt;\n> > } WALInsertLock;\n> >\n> > @@ -1482,6 +1482,10 @@ WaitXLogInsertionsToFinish(XLogRecPtr upto)\n> > {\n> > XLogRecPtr insertingat = InvalidXLogRecPtr;\n> >\n> > + /* Quickly check and continue if no one holds the lock. */\n> > + if (!IsLWLockHeld(&WALInsertLocks[i].l.lock))\n> > + continue;\n>\n> I'm not sure this is quite right - don't we need a memory barrier. But I don't\n> see a reason to not just leave this code as-is. I think this should be\n> optimized entirely in lwlock.c\n\nActually, we don't need that at all as LWLockWaitForVar() will return\nimmediately if the lock is free. So, I removed it.\n\n> I'd probably split the change to an atomic from other changes either way.\n\nDone. I've added commit messages to each of the patches.\n\nI've also brought the patch from [1] here as 0003.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACXtQdrGXtb%3DrbUOXddm1wU1vD9z6q_39FQyX0166dq%3D%3DA%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 2 Dec 2022 16:32:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "WAL Insertion Lock Improvements (was: Re: Avoid LWLockWaitForVar()\n for currently held WAL insertion lock in WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "On Fri, Dec 02, 2022 at 04:32:38PM +0530, Bharath Rupireddy wrote:\n> On Fri, Dec 2, 2022 at 6:10 AM Andres Freund <andres@anarazel.de> wrote:\n>> I'm not sure this is quite right - don't we need a memory barrier. But I don't\n>> see a reason to not just leave this code as-is. I think this should be\n>> optimized entirely in lwlock.c\n> \n> Actually, we don't need that at all as LWLockWaitForVar() will return\n> immediately if the lock is free. So, I removed it.\n\nI briefly looked at the latest patch set, and I'm curious how this change\navoids introducing memory ordering bugs. Perhaps I am missing something\nobvious.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 2 Dec 2022 16:31:58 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements (was: Re: Avoid\n LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "Hi,\n\nFWIW, I don't see an advantage in 0003. If it allows us to make something else\nsimpler / faster, cool, but on its own it doesn't seem worthwhile.\n\n\n\n\nOn 2022-12-02 16:31:58 -0800, Nathan Bossart wrote:\n> On Fri, Dec 02, 2022 at 04:32:38PM +0530, Bharath Rupireddy wrote:\n> > On Fri, Dec 2, 2022 at 6:10 AM Andres Freund <andres@anarazel.de> wrote:\n> >> I'm not sure this is quite right - don't we need a memory barrier. But I don't\n> >> see a reason to not just leave this code as-is. I think this should be\n> >> optimized entirely in lwlock.c\n> > \n> > Actually, we don't need that at all as LWLockWaitForVar() will return\n> > immediately if the lock is free. So, I removed it.\n> \n> I briefly looked at the latest patch set, and I'm curious how this change\n> avoids introducing memory ordering bugs. Perhaps I am missing something\n> obvious.\n\nI'm a bit confused too - the comment above talks about LWLockWaitForVar(), but\nthe patches seem to optimize LWLockUpdateVar().\n\n\nI think it'd be safe to optimize LWLockConflictsWithVar(), due to some\npre-existing, quite crufty, code. LWLockConflictsWithVar() says:\n\n\t * Test first to see if it the slot is free right now.\n\t *\n\t * XXX: the caller uses a spinlock before this, so we don't need a memory\n\t * barrier here as far as the current usage is concerned. But that might\n\t * not be safe in general.\n\nwhich happens to be true in the single, indirect, caller:\n\n\t/* Read the current insert position */\n\tSpinLockAcquire(&Insert->insertpos_lck);\n\tbytepos = Insert->CurrBytePos;\n\tSpinLockRelease(&Insert->insertpos_lck);\n\treservedUpto = XLogBytePosToEndRecPtr(bytepos);\n\nI think at the very least we ought to have a comment in\nWaitXLogInsertionsToFinish() highlighting this.\n\n\n\nIt's not at all clear to me that the proposed fast-path for LWLockUpdateVar()\nis safe. 
I think at the very least we could end up missing waiters that we\nshould have woken up.\n\nI think it ought to be safe to do something like\n\npg_atomic_exchange_u64()..\nif (!(pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS))\n return;\n\nbecause the pg_atomic_exchange_u64() will provide the necessary memory\nbarrier.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:30:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements (was: Re: Avoid\n LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> FWIW, I don't see an advantage in 0003. If it allows us to make something else\n> simpler / faster, cool, but on its own it doesn't seem worthwhile.\n\nThanks. I will discard it.\n\n> I think it'd be safe to optimize LWLockConflictsWithVar(), due to some\n> pre-existing, quite crufty, code. LWLockConflictsWithVar() says:\n>\n> * Test first to see if it the slot is free right now.\n> *\n> * XXX: the caller uses a spinlock before this, so we don't need a memory\n> * barrier here as far as the current usage is concerned. But that might\n> * not be safe in general.\n>\n> which happens to be true in the single, indirect, caller:\n>\n> /* Read the current insert position */\n> SpinLockAcquire(&Insert->insertpos_lck);\n> bytepos = Insert->CurrBytePos;\n> SpinLockRelease(&Insert->insertpos_lck);\n> reservedUpto = XLogBytePosToEndRecPtr(bytepos);\n>\n> I think at the very least we ought to have a comment in\n> WaitXLogInsertionsToFinish() highlighting this.\n\nSo, using a spinlock ensures no memory ordering occurs while reading\nlock->state in LWLockConflictsWithVar()? How does spinlock that gets\nacquired and released in the caller WaitXLogInsertionsToFinish()\nitself and the memory barrier in the called function\nLWLockConflictsWithVar() relate here? Can you please help me\nunderstand this a bit?\n\n> It's not at all clear to me that the proposed fast-path for LWLockUpdateVar()\n> is safe. I think at the very least we could end up missing waiters that we\n> should have woken up.\n>\n> I think it ought to be safe to do something like\n>\n> pg_atomic_exchange_u64()..\n> if (!(pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS))\n> return;\n\npg_atomic_exchange_u64(&lock->state, exchange_with_what_?. 
Exchange\nwill change the value no?\n\n> because the pg_atomic_exchange_u64() will provide the necessary memory\n> barrier.\n\nI'm reading some comments [1], are these also true for 64-bit atomic\nCAS? Does it mean that an atomic CAS operation inherently provides a\nmemory barrier? Can you please point me if it's described better\nsomewhere else?\n\n[1]\n * Full barrier semantics.\n */\nstatic inline uint32\npg_atomic_exchange_u32(volatile pg_atomic_uint32 *ptr,\n\n /*\n * Get and clear the flags that are set for this backend. Note that\n * pg_atomic_exchange_u32 is a full barrier, so we're guaranteed that the\n * read of the barrier generation above happens before we atomically\n * extract the flags, and that any subsequent state changes happen\n * afterward.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 12:29:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements (was: Re: Avoid\n LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-08 12:29:54 +0530, Bharath Rupireddy wrote:\n> On Tue, Dec 6, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it'd be safe to optimize LWLockConflictsWithVar(), due to some\n> > pre-existing, quite crufty, code. LWLockConflictsWithVar() says:\n> >\n> > * Test first to see if it the slot is free right now.\n> > *\n> > * XXX: the caller uses a spinlock before this, so we don't need a memory\n> > * barrier here as far as the current usage is concerned. But that might\n> > * not be safe in general.\n> >\n> > which happens to be true in the single, indirect, caller:\n> >\n> > /* Read the current insert position */\n> > SpinLockAcquire(&Insert->insertpos_lck);\n> > bytepos = Insert->CurrBytePos;\n> > SpinLockRelease(&Insert->insertpos_lck);\n> > reservedUpto = XLogBytePosToEndRecPtr(bytepos);\n> >\n> > I think at the very least we ought to have a comment in\n> > WaitXLogInsertionsToFinish() highlighting this.\n> \n> So, using a spinlock ensures no memory ordering occurs while reading\n> lock->state in LWLockConflictsWithVar()?\n\nNo, a spinlock *does* imply ordering. But your patch does remove several of\nthe spinlock acquisitions (via LWLockWaitListLock()). And moved the assignment\nin LWLockUpdateVar() out from under the spinlock.\n\nIf you remove spinlock operations (or other barrier primitives), you need to\nmake sure that such modifications don't break the required memory ordering.\n\n\n> How does spinlock that gets acquired and released in the caller\n> WaitXLogInsertionsToFinish() itself and the memory barrier in the called\n> function LWLockConflictsWithVar() relate here? 
Can you please help me\n> understand this a bit?\n\nThe caller's barrier means that we'll see values that are at least as \"up to\ndate\" as at the time of the barrier (it's a bit more complicated than that, a\nbarrier always needs to be paired with another barrier).\n\n\n> > It's not at all clear to me that the proposed fast-path for LWLockUpdateVar()\n> > is safe. I think at the very least we could end up missing waiters that we\n> > should have woken up.\n> >\n> > I think it ought to be safe to do something like\n> >\n> > pg_atomic_exchange_u64()..\n> > if (!(pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS))\n> > return;\n> \n> pg_atomic_exchange_u64(&lock->state, exchange_with_what_?. Exchange will\n> change the value no?\n\nNot lock->state, but the atomic passed to LWLockUpdateVar(), which we do want\nto update. An pg_atomic_exchange_u64() includes a memory barrier.\n\n\n> > because the pg_atomic_exchange_u64() will provide the necessary memory\n> > barrier.\n> \n> I'm reading some comments [1], are these also true for 64-bit atomic\n> CAS?\n\nYes. See\n/* ----\n * The 64 bit operations have the same semantics as their 32bit counterparts\n * if they are available. Check the corresponding 32bit function for\n * documentation.\n * ----\n */\n\n\n> Does it mean that an atomic CAS operation inherently provides a\n> memory barrier?\n\nYes.\n\n\n> Can you please point me if it's described better somewhere else?\n\nI'm not sure what you'd like to have described more extensively, tbh.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 14:24:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements (was: Re: Avoid\n LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 12:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi\n\nThanks for reviewing.\n\n> FWIW, I don't see an advantage in 0003. If it allows us to make something else\n> simpler / faster, cool, but on its own it doesn't seem worthwhile.\n\nI've discarded this change.\n\n> On 2022-12-02 16:31:58 -0800, Nathan Bossart wrote:\n> > On Fri, Dec 02, 2022 at 04:32:38PM +0530, Bharath Rupireddy wrote:\n> > > On Fri, Dec 2, 2022 at 6:10 AM Andres Freund <andres@anarazel.de> wrote:\n> > >> I'm not sure this is quite right - don't we need a memory barrier. But I don't\n> > >> see a reason to not just leave this code as-is. I think this should be\n> > >> optimized entirely in lwlock.c\n> > >\n> > > Actually, we don't need that at all as LWLockWaitForVar() will return\n> > > immediately if the lock is free. So, I removed it.\n> >\n> > I briefly looked at the latest patch set, and I'm curious how this change\n> > avoids introducing memory ordering bugs. Perhaps I am missing something\n> > obvious.\n>\n> I'm a bit confused too - the comment above talks about LWLockWaitForVar(), but\n> the patches seem to optimize LWLockUpdateVar().\n>\n> I think it'd be safe to optimize LWLockConflictsWithVar(), due to some\n> pre-existing, quite crufty, code. LWLockConflictsWithVar() says:\n>\n> * Test first to see if it the slot is free right now.\n> *\n> * XXX: the caller uses a spinlock before this, so we don't need a memory\n> * barrier here as far as the current usage is concerned. 
But that might\n> * not be safe in general.\n>\n> which happens to be true in the single, indirect, caller:\n>\n> /* Read the current insert position */\n> SpinLockAcquire(&Insert->insertpos_lck);\n> bytepos = Insert->CurrBytePos;\n> SpinLockRelease(&Insert->insertpos_lck);\n> reservedUpto = XLogBytePosToEndRecPtr(bytepos);\n>\n> I think at the very least we ought to have a comment in\n> WaitXLogInsertionsToFinish() highlighting this.\n\nDone.\n\n> It's not at all clear to me that the proposed fast-path for LWLockUpdateVar()\n> is safe. I think at the very least we could end up missing waiters that we\n> should have woken up.\n>\n> I think it ought to be safe to do something like\n>\n> pg_atomic_exchange_u64()..\n> if (!(pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS))\n> return;\n>\n> because the pg_atomic_exchange_u64() will provide the necessary memory\n> barrier.\n\nDone.\n\nI'm attaching the v3 patch with the above review comments addressed.\nHopefully, no memory ordering issues now. FWIW, I've added it to CF\nhttps://commitfest.postgresql.org/42/4141/.\n\nTest results with the v3 patch and insert workload are the same as\nthat of the earlier run - TPS starts to scale at higher clients as\nexpected after 512 clients and peaks at 2X with 2048 and 4096 clients.\n\nHEAD:\n1 1380.411086\n2 1358.378988\n4 2701.974332\n8 5925.380744\n16 10956.501237\n32 20877.513953\n64 40838.046774\n128 70251.744161\n256 108114.321299\n512 120478.988268\n768 99140.425209\n1024 93645.984364\n2048 70111.159909\n4096 55541.804826\n\nv3 PATCHED:\n1 1493.800209\n2 1569.414953\n4 3154.186605\n8 5965.578904\n16 11912.587645\n32 22720.964908\n64 42001.094528\n128 78361.158983\n256 110457.926232\n512 148941.378393\n768 167256.590308\n1024 155510.675372\n2048 147499.376882\n4096 119375.457779\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 24 Jan 2023 19:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements (was: Re: Avoid\n LWLockWaitForVar() for currently held WAL insertion lock in\n WaitXLogInsertionsToFinish())"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 7:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I'm attaching the v3 patch with the above review comments addressed.\n> Hopefully, no memory ordering issues now. FWIW, I've added it to CF\n> https://commitfest.postgresql.org/42/4141/.\n>\n> Test results with the v3 patch and insert workload are the same as\n> that of the earlier run - TPS starts to scale at higher clients as\n> expected after 512 clients and peaks at 2X with 2048 and 4096 clients.\n>\n> HEAD:\n> 1 1380.411086\n> 2 1358.378988\n> 4 2701.974332\n> 8 5925.380744\n> 16 10956.501237\n> 32 20877.513953\n> 64 40838.046774\n> 128 70251.744161\n> 256 108114.321299\n> 512 120478.988268\n> 768 99140.425209\n> 1024 93645.984364\n> 2048 70111.159909\n> 4096 55541.804826\n>\n> v3 PATCHED:\n> 1 1493.800209\n> 2 1569.414953\n> 4 3154.186605\n> 8 5965.578904\n> 16 11912.587645\n> 32 22720.964908\n> 64 42001.094528\n> 128 78361.158983\n> 256 110457.926232\n> 512 148941.378393\n> 768 167256.590308\n> 1024 155510.675372\n> 2048 147499.376882\n> 4096 119375.457779\n\nI slightly modified the comments and attached the v4 patch for further\nreview. I also took perf report - there's a clear reduction in the\nfunctions that are affected by the patch - LWLockWaitListLock,\nWaitXLogInsertionsToFinish, LWLockWaitForVar and\nLWLockConflictsWithVar. Note that I compiled the source code with\n-ggdb for capturing symbols for perf, still the benefit stands at > 2X\nfor a higher number of clients.\n\nHEAD:\n+ 16.87% 0.01% postgres [.] CommitTransactionCommand\n+ 16.86% 0.00% postgres [.] finish_xact_command\n+ 16.81% 0.01% postgres [.] CommitTransaction\n+ 15.09% 0.20% postgres [.] LWLockWaitListLock\n+ 14.53% 0.01% postgres [.] WaitXLogInsertionsToFinish\n+ 14.51% 0.02% postgres [.] LWLockWaitForVar\n+ 11.70% 11.63% postgres [.] pg_atomic_read_u32_impl\n+ 11.66% 0.08% postgres [.] pg_atomic_read_u32\n+ 9.96% 0.03% postgres [.] 
LWLockConflictsWithVar\n+ 4.78% 0.00% postgres [.] LWLockQueueSelf\n+ 1.91% 0.01% postgres [.] pg_atomic_fetch_or_u32\n+ 1.91% 1.89% postgres [.] pg_atomic_fetch_or_u32_impl\n+ 1.73% 0.00% postgres [.] XLogInsert\n+ 1.69% 0.01% postgres [.] XLogInsertRecord\n+ 1.41% 0.01% postgres [.] LWLockRelease\n+ 1.37% 0.47% postgres [.] perform_spin_delay\n+ 1.11% 1.11% postgres [.] spin_delay\n+ 1.10% 0.03% postgres [.] exec_bind_message\n+ 0.91% 0.00% postgres [.] WALInsertLockRelease\n+ 0.91% 0.00% postgres [.] LWLockReleaseClearVar\n+ 0.72% 0.02% postgres [.] LWLockAcquire\n+ 0.60% 0.00% postgres [.] LWLockDequeueSelf\n+ 0.58% 0.00% postgres [.] GetTransactionSnapshot\n 0.58% 0.49% postgres [.] GetSnapshotData\n+ 0.58% 0.00% postgres [.] WALInsertLockAcquire\n+ 0.55% 0.00% postgres [.] XactLogCommitRecord\n\nTPS (compiled with -ggdb for capturing symbols for perf)\n1 1392.512967\n2 1435.899119\n4 3104.091923\n8 6159.305522\n16 11477.641780\n32 22701.000718\n64 41662.425880\n128 23743.426209\n256 89837.651619\n512 65164.221500\n768 66015.733370\n1024 56421.223080\n2048 52909.018072\n4096 40071.146985\n\nPATCHED:\n+ 2.19% 0.05% postgres [.] LWLockWaitListLock\n+ 2.10% 0.01% postgres [.] LWLockQueueSelf\n+ 1.73% 1.71% postgres [.] pg_atomic_read_u32_impl\n+ 1.73% 0.02% postgres [.] pg_atomic_read_u32\n+ 1.72% 0.02% postgres [.] LWLockRelease\n+ 1.65% 0.04% postgres [.] exec_bind_message\n+ 1.43% 0.00% postgres [.] XLogInsert\n+ 1.42% 0.01% postgres [.] WaitXLogInsertionsToFinish\n+ 1.40% 0.03% postgres [.] LWLockWaitForVar\n+ 1.38% 0.02% postgres [.] XLogInsertRecord\n+ 0.93% 0.03% postgres [.] LWLockAcquireOrWait\n+ 0.91% 0.00% postgres [.] GetTransactionSnapshot\n+ 0.91% 0.79% postgres [.] GetSnapshotData\n+ 0.91% 0.00% postgres [.] WALInsertLockRelease\n+ 0.91% 0.00% postgres [.] LWLockReleaseClearVar\n+ 0.53% 0.02% postgres [.] 
ExecInitModifyTable\n\nTPS (compiled with -ggdb for capturing symbols for perf)\n1 1295.296611\n2 1459.079162\n4 2865.688987\n8 5533.724983\n16 10771.697842\n32 20557.499312\n64 39436.423783\n128 42555.639048\n256 73139.060227\n512 124649.665196\n768 131162.826976\n1024 132185.160007\n2048 117377.586644\n4096 88240.336940\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 2 Feb 2023 19:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "+\tpg_atomic_exchange_u64(valptr, val);\n\nnitpick: I'd add a (void) at the beginning of these calls to\npg_atomic_exchange_u64() so that it's clear that we are discarding the\nreturn value.\n\n+\t/*\n+\t * Update the lock variable atomically first without having to acquire wait\n+\t * list lock, so that if anyone looking for the lock will have chance to\n+\t * grab it a bit quickly.\n+\t *\n+\t * NB: Note the use of pg_atomic_exchange_u64 as opposed to just\n+\t * pg_atomic_write_u64 to update the value. Since pg_atomic_exchange_u64 is\n+\t * a full barrier, we're guaranteed that the subsequent atomic read of lock\n+\t * state to check if it has any waiters happens after we set the lock\n+\t * variable to new value here. Without a barrier, we could end up missing\n+\t * waiters that otherwise should have been woken up.\n+\t */\n+\tpg_atomic_exchange_u64(valptr, val);\n+\n+\t/*\n+\t * Quick exit when there are no waiters. This avoids unnecessary lwlock's\n+\t * wait list lock acquisition and release.\n+\t */\n+\tif ((pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS) == 0)\n+\t\treturn;\n\nI think this makes sense. A waiter could queue itself after the exchange,\nbut it'll recheck after queueing. IIUC this is basically how this works\ntoday. We update the value and release the lock before waking up any\nwaiters, so the same principle applies.\n\nOverall, I think this patch is in reasonable shape.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:06:03 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> + pg_atomic_exchange_u64(valptr, val);\n>\n> nitpick: I'd add a (void) at the beginning of these calls to\n> pg_atomic_exchange_u64() so that it's clear that we are discarding the\n> return value.\n\nI did that in the attached v5 patch although it's a mix elsewhere;\nsome doing explicit return value cast with (void) and some not.\n\n> + /*\n> + * Update the lock variable atomically first without having to acquire wait\n> + * list lock, so that if anyone looking for the lock will have chance to\n> + * grab it a bit quickly.\n> + *\n> + * NB: Note the use of pg_atomic_exchange_u64 as opposed to just\n> + * pg_atomic_write_u64 to update the value. Since pg_atomic_exchange_u64 is\n> + * a full barrier, we're guaranteed that the subsequent atomic read of lock\n> + * state to check if it has any waiters happens after we set the lock\n> + * variable to new value here. Without a barrier, we could end up missing\n> + * waiters that otherwise should have been woken up.\n> + */\n> + pg_atomic_exchange_u64(valptr, val);\n> +\n> + /*\n> + * Quick exit when there are no waiters. This avoids unnecessary lwlock's\n> + * wait list lock acquisition and release.\n> + */\n> + if ((pg_atomic_read_u32(&lock->state) & LW_FLAG_HAS_WAITERS) == 0)\n> + return;\n>\n> I think this makes sense. A waiter could queue itself after the exchange,\n> but it'll recheck after queueing. IIUC this is basically how this works\n> today. We update the value and release the lock before waking up any\n> waiters, so the same principle applies.\n\nYes, a waiter right after self-queuing (LWLockQueueSelf) checks for\nthe value (LWLockConflictsWithVar) before it goes and waits until\nawakened in LWLockWaitForVar. 
A waiter added to the queue is\nguaranteed to be woken up by the\nLWLockUpdateVar but before that the lock value is set and we have\npg_atomic_exchange_u64 as a memory barrier, so no memory reordering.\nEssentially, the order of these operations aren't changed. The benefit\nthat we're seeing is from avoiding LWLock's waitlist lock for reading\nand updating the lock value relying on 64-bit atomics.\n\n> Overall, I think this patch is in reasonable shape.\n\nThanks for reviewing. Please see the attached v5 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 9 Feb 2023 11:51:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 11:51:28AM +0530, Bharath Rupireddy wrote:\n> On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Overall, I think this patch is in reasonable shape.\n> \n> Thanks for reviewing. Please see the attached v5 patch.\n\nI'm marking this as ready-for-committer. I think a couple of the comments\ncould use some small adjustments, but that probably doesn't need to hold up\nthis patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 21:49:48 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "Hi Andres Freund\r\n    This patch improves performance significantly. Commitfest 2023-03 is coming to an end; has it not been committed yet because the patch still needs improvement?\r\n\r\nBest wishes\r\n________________________________\r\nFrom: Nathan Bossart <nathandbossart@gmail.com>\r\nSent: 21 February 2023 13:49\r\nTo: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\nCc: Andres Freund <andres@anarazel.de>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\r\nSubject: Re: WAL Insertion Lock Improvements\r\n\r\nOn Thu, Feb 09, 2023 at 11:51:28AM +0530, Bharath Rupireddy wrote:\r\n> On Thu, Feb 9, 2023 at 3:36 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\r\n>> Overall, I think this patch is in reasonable shape.\r\n>\r\n> Thanks for reviewing. Please see the attached v5 patch.\r\n\r\nI'm marking this as ready-for-committer. I think a couple of the comments\r\ncould use some small adjustments, but that probably doesn't need to hold up\r\nthis patch.\r\n\r\n--\r\nNathan Bossart\r\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Mar 2023 02:51:44 +0000",
"msg_from": "adherent postgres <adherent_postgres@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 09:49:48PM -0800, Nathan Bossart wrote:\n> I'm marking this as ready-for-committer. I think a couple of the comments\n> could use some small adjustments, but that probably doesn't need to hold up\n> this patch.\n\nApologies. I was planning to have a thorough look at this patch but\nlife got in the way and I have not been able to study what's happening\non this thread this close to the feature freeze.\n\nAnyway, I am attaching two modules I have written for the sake of this\nthread while beginning my lookup of the patch:\n- lwlock_test.tar.gz, validation module for LWLocks with variable\nwaits. This module can be loaded with shared_preload_libraries to\nhave two LWLocks and two variables in shmem, then have 2 backends play\nping-pong with each other's locks. An isolation test may be possible,\nthough I have not thought hard about it. Just use a SQL sequence like\nthat, for example, with N > 1 (see README):\n Backend 1: SELECT lwlock_test_acquire();\n Backend 2: SELECT lwlock_test_wait(N);\n Backend 1: SELECT lwlock_test_update(N);\n Backend 1: SELECT lwlock_test_release();\n- custom_wal.tar.gz, thin wrapper for LogLogicalMessage() able to\ngenerate N records of size M bytes in a single SQL call. This can be\nused to generate records of various sizes for benchmarking, limiting\nthe overhead of individual calls to pg_logical_emit_message_bytea().\nI have begun gathering numbers with WAL records of various size and\nlength, using pgbench like:\n$ cat script.sql\n\\set record_size 1\n\\set record_number 5000\nSELECT custom_wal(:record_size, :record_number);\n$ pgbench -n -c 500 -t 100 -f script.sql\nSo this limits most the overhead of behind parsing, planning, and most\nof the INSERT logic.\n\nI have been trying to get some reproducible numbers, but I think that\nI am going to need a bigger maching than what I have been using for\nthe last few days, up to 400 connections. 
It is worth noting that\n00d1e02b may influence a bit the results, so we may want to have more\nnumbers with that in place particularly with INSERTs, and one of the\ntests used upthread uses single row INSERTs.\n\nAnother question I had: would it be worth having some tests with\npg_wal/ mounted to a tmpfs so as I/O would not be a bottleneck? It\nshould be instructive to get more measurement with a fixed number of\ntransactions and a rather high amount of concurrent connections (1k at\nleast?), where the contention would be on the variable waits. My\nfirst impression is that records should not be too small if you want\nto see more the effects of this patch, either.\n\nLooking at the patch.. LWLockConflictsWithVar() and\nLWLockReleaseClearVar() are the trivial bits. These are OK.\n\n+ * NB: Note the use of pg_atomic_exchange_u64 as opposed to just\n+ * pg_atomic_write_u64 to update the value. Since pg_atomic_exchange_u64 is\n+ * a full barrier, we're guaranteed that the subsequent shared memory\n+ * reads/writes, if any, happen after we reset the lock variable.\n\nThis mentions that the subsequent read/write operations are safe, so\nthis refers to anything happening after the variable is reset. As\na full barrier, should be also mention that this is also ordered with\nrespect to anything that the caller did before clearing the variable?\nFrom this perspective using pg_atomic_exchange_u64() makes sense to me\nin LWLockReleaseClearVar().\n\n+ * XXX: Use of a spinlock at the beginning of this function to read\n+ * current insert position implies memory ordering. That means that\n+ * the immediate loads and stores to shared memory (for instance,\n+ * in LWLockUpdateVar called via LWLockWaitForVar) don't need an\n+ * explicit memory barrier as far as the current usage is\n+ * concerned. But that might not be safe in general.\n */\nWhat's the part where this is not safe? Based on what I see, this\ncode path is safe because of the previous spinlock. 
This is the same\ncomment as at the beginning of LWLockConflictsWithVar(). Is that\nsomething that we ought to document at the top of LWLockWaitForVar()\nas well? We have one caller of this function currently, but there may\nbe more in the future.\n\n- * you're about to write out.\n+ * you're about to write out. Using an atomic variable for insertingAt avoids\n+ * taking any explicit lock for reads and writes.\n\nHmm. Not sure that we need to comment at all.\n\n-LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)\n+LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)\n[...]\n Assert(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE);\n \n- /* Update the lock's value */\n- *valptr = val;\n\nThe sensitive change is in LWLockUpdateVar(). I am not completely\nsure to understand this removal, though. Does that influence the case\nwhere there are waiters?\n\nAnother thing I was wondering about: how much does the fast-path used\nin LWLockUpdateVar() influence the performance numbers? Am I right to\nguess that it counts for most of the gain seen? Or could it be that\nthe removal of the spin lock in\nLWLockConflictsWithVar()/LWLockWaitForVar() the point that has the\nhighest effect?\n--\nMichael",
"msg_date": "Mon, 10 Apr 2023 13:08:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I have been trying to get some reproducible numbers, but I think that\n> I am going to need a bigger maching than what I have been using for\n> the last few days, up to 400 connections. It is worth noting that\n> 00d1e02b may influence a bit the results, so we may want to have more\n> numbers with that in place particularly with INSERTs, and one of the\n> tests used upthread uses single row INSERTs.\n\nI ran performance tests on the patch with different use-cases. Clearly\nthe patch reduces burden on LWLock's waitlist lock (evident from perf\nreports [1]). However, to see visible impact in the output, the txns\nmust be generating small (between 16 bytes to 2 KB) amounts of WAL in\na highly concurrent manner, check the results below (FWIW, I've zipped\nand attached perf images for better illustration along with test\nsetup).\n\nWhen the txns are generating a small amount of WAL i.e. between 16\nbytes to 2 KB in a highly concurrent manner, the benefit is clearly\nvisible in the TPS more than 2.3X improvement. When the txns are\ngenerating more WAL i.e. 
more than 2 KB, the gain from reduced burden\non waitlist lock is offset by increase in the wait/release for WAL\ninsertion locks and no visible benefit is seen.\n\nAs the amount of WAL each txn generates increases, it looks like the\nbenefit gained from reduced burden on waitlist lock is offset by\nincrease in the wait for WAL insertion locks.\n\nNote that I've used pg_logical_emit_message() for ease of\nunderstanding about the txns generating various amounts of WAL, but\nthe pattern is the same if txns are generating various amounts of WAL\nsay with inserts.\n\ntest-case 1: -T5, WAL ~16 bytes\nclients HEAD PATCHED\n1 1437 1352\n2 1376 1419\n4 2919 2774\n8 5875 6371\n16 11148 12242\n32 22108 23532\n64 41414 46478\n128 85304 85235\n256 83771 152901\n512 61970 141021\n768 56514 118899\n1024 51784 110960\n2048 39141 84150\n4096 16901 45759\n\ntest-case 1: -t1000, WAL ~16 bytes\nclients HEAD PATCHED\n1 1417 1333\n2 1363 1791\n4 2978 2970\n8 5954 6198\n16 11179 11164\n32 23742 24043\n64 45537 44103\n128 84683 91762\n256 80369 146293\n512 61080 132079\n768 57236 118046\n1024 53497 114574\n2048 46423 93588\n4096 42067 85790\n\ntest-case 2: -T5, WAL ~256 bytes\nclients HEAD PATCHED\n1 1521 1386\n2 1647 1637\n4 3088 3270\n8 6011 5631\n16 12778 10317\n32 24117 20006\n64 43966 38199\n128 72660 67936\n256 93096 121261\n512 57247 142418\n768 53782 126218\n1024 50279 109153\n2048 35109 91602\n4096 21184 39848\n\ntest-case 2: -t1000, WAL ~256 bytes\nclients HEAD PATCHED\n1 1265 1389\n2 1522 1258\n4 2802 2775\n8 5875 5422\n16 11664 10853\n32 21961 22145\n64 44304 40851\n128 73278 80494\n256 91172 122287\n512 60966 136734\n768 56590 125050\n1024 52481 124341\n2048 47878 104760\n4096 42838 94121\n\ntest-case 3: -T5, WAL 512 bytes\nclients HEAD PATCHED\n1 1464 1284\n2 1520 1381\n4 2985 2877\n8 6237 5261\n16 11296 10621\n32 22257 20789\n64 40548 37243\n128 66507 59891\n256 92516 97506\n512 56404 119716\n768 51127 112482\n1024 48463 103484\n2048 38079 81424\n4096 18977 
40942\n\ntest-case 3: -t1000, WAL 512 bytes\nclients HEAD PATCHED\n1 1452 1434\n2 1604 1649\n4 3051 2971\n8 5967 5650\n16 10471 10702\n32 20257 20899\n64 39412 36750\n128 62767 61110\n256 81050 89768\n512 56888 122786\n768 51238 114444\n1024 48972 106867\n2048 43451 98847\n4096 40018 111079\n\ntest-case 4: -T5, WAL 1024 bytes\nclients HEAD PATCHED\n1 1405 1395\n2 1638 1607\n4 3176 3207\n8 6271 6024\n16 11653 11103\n32 20530 20260\n64 34313 32367\n128 55939 52079\n256 74355 76420\n512 56506 90983\n768 50088 100410\n1024 44589 99025\n2048 39640 90931\n4096 20942 36035\n\ntest-case 4: -t1000, WAL 1024 bytes\nclients HEAD PATCHED\n1 1330 1304\n2 1615 1366\n4 3117 2667\n8 6179 5390\n16 10524 10426\n32 19819 18620\n64 34844 29731\n128 52180 48869\n256 73284 71396\n512 55714 96014\n768 49336 108100\n1024 46113 102789\n2048 44627 104721\n4096 44979 106189\n\ntest-case 5: -T5, WAL 2048 bytes\nclients HEAD PATCHED\n1 1407 1377\n2 1518 1559\n4 2589 2870\n8 4883 5493\n16 9075 9201\n32 15957 16295\n64 27471 25029\n128 37493 38642\n256 46369 45787\n512 61755 62836\n768 59144 68419\n1024 52495 68933\n2048 48608 72500\n4096 26463 61252\n\ntest-case 5: -t1000, WAL 2048 bytes\nclients HEAD PATCHED\n1 1289 1366\n2 1489 1628\n4 2960 3036\n8 5536 5965\n16 9248 10399\n32 15770 18140\n64 27626 27800\n128 36817 39483\n256 48533 52105\n512 64453 64007\n768 59146 64160\n1024 57637 61756\n2048 59063 62109\n4096 58268 61206\n\ntest-case 6: -T5, WAL 4096 bytes\nclients HEAD PATCHED\n1 1322 1325\n2 1504 1551\n4 2811 2880\n8 5330 5159\n16 8625 8315\n32 12820 13534\n64 19737 19965\n128 26298 24633\n256 34630 29939\n512 34382 36669\n768 33421 33316\n1024 33525 32821\n2048 37053 37752\n4096 37334 39114\n\ntest-case 6: -t1000, WAL 4096 bytes\nclients HEAD PATCHED\n1 1212 1371\n2 1383 1566\n4 2858 2967\n8 5092 5035\n16 8233 8486\n32 13353 13678\n64 19052 20072\n128 24803 24726\n256 34065 33139\n512 31590 32029\n768 31432 31404\n1024 31357 31366\n2048 31465 31508\n4096 32157 32180\n\ntest-case 7: -T5, 
WAL 8192 bytes\nclients HEAD PATCHED\n1 1287 1233\n2 1552 1521\n4 2658 2617\n8 4680 4532\n16 6732 7110\n32 9649 9198\n64 13276 12042\n128 17100 17187\n256 17408 17448\n512 16595 16358\n768 16599 16500\n1024 16975 17300\n2048 19073 19137\n4096 21368 21735\n\ntest-case 7: -t1000, WAL 8192 bytes\nclients HEAD PATCHED\n1 1144 1190\n2 1414 1395\n4 2618 2438\n8 4645 4485\n16 6766 7001\n32 9620 9804\n64 12943 13023\n128 15904 17148\n256 16645 16035\n512 15800 15796\n768 15788 15810\n1024 15814 15817\n2048 17775 17771\n4096 31715 31682\n\n> Looking at the patch.. LWLockConflictsWithVar() and\n> LWLockReleaseClearVar() are the trivial bits. These are OK.\n\nHm, the crux of the patch is avoiding LWLock's waitlist lock for\nreading/writing the lock variable. Essentially, they are important\nbits.\n\n> + * NB: Note the use of pg_atomic_exchange_u64 as opposed to just\n> + * pg_atomic_write_u64 to update the value. Since pg_atomic_exchange_u64 is\n> + * a full barrier, we're guaranteed that the subsequent shared memory\n> + * reads/writes, if any, happen after we reset the lock variable.\n>\n> This mentions that the subsequent read/write operations are safe, so\n> this refers to anything happening after the variable is reset. As\n> a full barrier, should be also mention that this is also ordered with\n> respect to anything that the caller did before clearing the variable?\n> From this perspective using pg_atomic_exchange_u64() makes sense to me\n> in LWLockReleaseClearVar().\n\nWordsmithed that comment a bit.\n\n> + * XXX: Use of a spinlock at the beginning of this function to read\n> + * current insert position implies memory ordering. That means that\n> + * the immediate loads and stores to shared memory (for instance,\n> + * in LWLockUpdateVar called via LWLockWaitForVar) don't need an\n> + * explicit memory barrier as far as the current usage is\n> + * concerned. But that might not be safe in general.\n> */\n> What's the part where this is not safe? 
Based on what I see, this\n> code path is safe because of the previous spinlock. This is the same\n> comment as at the beginning of LWLockConflictsWithVar(). Is that\n> something that we ought to document at the top of LWLockWaitForVar()\n> as well? We have one caller of this function currently, but there may\n> be more in the future.\n\n'But that might not be safe in general' applies only for\nLWLockWaitForVar not for WaitXLogInsertionsToFinish for sure. My bad.\n\nIf there's another caller for LWLockWaitForVar without any spinlock,\nthat's when the LWLockWaitForVar needs to have an explicit memory\nbarrier.\n\nPer a comment upthread\nhttps://www.postgresql.org/message-id/20221205183007.s72oygp63s43dqyz%40awork3.anarazel.de,\nI had a note in WaitXLogInsertionsToFinish before LWLockWaitForVar. I\nnow have modified that comment.\n\n> - * you're about to write out.\n> + * you're about to write out. Using an atomic variable for insertingAt avoids\n> + * taking any explicit lock for reads and writes.\n>\n> Hmm. Not sure that we need to comment at all.\n\nRemoved. I was being verbose. One who understands pg_atomic_uint64 can\nget to that point easily.\n\n> -LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)\n> +LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)\n> [...]\n> Assert(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE);\n>\n> - /* Update the lock's value */\n> - *valptr = val;\n>\n> The sensitive change is in LWLockUpdateVar(). I am not completely\n> sure to understand this removal, though. Does that influence the case\n> where there are waiters?\n\nI'll send about this in a follow-up email to not overload this\nresponse with too much data.\n\n> Another thing I was wondering about: how much does the fast-path used\n> in LWLockUpdateVar() influence the performance numbers? 
Am I right to\n> guess that it counts for most of the gain seen?\n\nI'll send about this in a follow-up email to not overload this\nresponse with too much data.\n\n> Or could it be that\n> the removal of the spin lock in\n> LWLockConflictsWithVar()/LWLockWaitForVar() the point that has the\n> highest effect?\n\nI'll send about this in a follow-up email to not overload this\nresponse with too much data.\n\nI've addressed the above review comments and attached the v6 patch.\n\n[1]\ntest-case 1: -T5, WAL ~16 bytes HEAD:\n+ 81.52% 0.03% postgres [.] __vstrfmon_l_internal\n+ 81.52% 0.00% postgres [.] startup_hacks\n+ 81.52% 0.00% postgres [.] PostmasterMain\n+ 63.95% 1.01% postgres [.] LWLockWaitListLock\n+ 61.93% 0.02% postgres [.] WaitXLogInsertionsToFinish\n+ 61.89% 0.05% postgres [.] LWLockWaitForVar\n+ 48.83% 48.33% postgres [.] pg_atomic_read_u32_impl\n+ 48.78% 0.40% postgres [.] pg_atomic_read_u32\n+ 43.19% 0.12% postgres [.] LWLockConflictsWithVar\n+ 19.81% 0.01% postgres [.] LWLockQueueSelf\n+ 7.86% 2.46% postgres [.] perform_spin_delay\n+ 6.14% 6.06% postgres [.] spin_delay\n+ 5.82% 0.01% postgres [.] pg_atomic_fetch_or_u32\n+ 5.81% 5.76% postgres [.] pg_atomic_fetch_or_u32_impl\n+ 4.00% 0.01% postgres [.] XLogInsert\n+ 3.93% 0.03% postgres [.] XLogInsertRecord\n+ 2.13% 0.02% postgres [.] LWLockRelease\n+ 2.10% 0.03% postgres [.] LWLockAcquire\n+ 1.92% 0.00% postgres [.] LWLockDequeueSelf\n+ 1.87% 0.01% postgres [.] WALInsertLockAcquire\n+ 1.68% 0.04% postgres [.] LWLockAcquireOrWait\n+ 1.64% 0.01% postgres [.] pg_analyze_and_rewrite_fixedparams\n+ 1.62% 0.00% postgres [.] WALInsertLockRelease\n+ 1.62% 0.00% postgres [.] LWLockReleaseClearVar\n+ 1.55% 0.01% postgres [.] parse_analyze_fixedparams\n+ 1.51% 0.00% postgres [.] transformTopLevelStmt\n+ 1.50% 0.00% postgres [.] transformOptionalSelectInto\n+ 1.50% 0.01% postgres [.] transformStmt\n+ 1.47% 0.02% postgres [.] transformSelectStmt\n+ 1.29% 0.01% postgres [.] 
XactLogCommitRecord\n\ntest-case 1: -T5, WAL ~16 bytes PATCHED:\n+ 74.49% 0.04% postgres [.] __vstrfmon_l_internal\n+ 74.49% 0.00% postgres [.] startup_hacks\n+ 74.49% 0.00% postgres [.] PostmasterMain\n+ 51.60% 0.01% postgres [.] finish_xact_command\n+ 51.60% 0.02% postgres [.] CommitTransactionCommand\n+ 51.37% 0.03% postgres [.] CommitTransaction\n+ 49.43% 0.05% postgres [.] RecordTransactionCommit\n+ 46.55% 0.05% postgres [.] XLogFlush\n+ 46.37% 0.85% postgres [.] LWLockWaitListLock\n+ 43.79% 0.02% postgres [.] LWLockQueueSelf\n+ 38.87% 0.03% postgres [.] WaitXLogInsertionsToFinish\n+ 38.79% 0.11% postgres [.] LWLockWaitForVar\n+ 34.99% 34.49% postgres [.] pg_atomic_read_u32_impl\n+ 34.93% 0.35% postgres [.] pg_atomic_read_u32\n+ 6.99% 2.12% postgres [.] perform_spin_delay\n+ 6.64% 0.01% postgres [.] XLogInsert\n+ 6.54% 0.06% postgres [.] XLogInsertRecord\n+ 6.26% 0.08% postgres [.] LWLockAcquireOrWait\n+ 5.31% 5.22% postgres [.] spin_delay\n+ 4.23% 0.04% postgres [.] LWLockRelease\n+ 3.74% 0.01% postgres [.] pg_atomic_fetch_or_u32\n+ 3.73% 3.68% postgres [.] pg_atomic_fetch_or_u32_impl\n+ 3.33% 0.06% postgres [.] LWLockAcquire\n+ 2.97% 0.01% postgres [.] pg_plan_queries\n+ 2.95% 0.01% postgres [.] WALInsertLockAcquire\n+ 2.94% 0.02% postgres [.] planner\n+ 2.94% 0.01% postgres [.] pg_plan_query\n+ 2.92% 0.01% postgres [.] LWLockDequeueSelf\n+ 2.89% 0.04% postgres [.] standard_planner\n+ 2.81% 0.00% postgres [.] WALInsertLockRelease\n+ 2.80% 0.00% postgres [.] LWLockReleaseClearVar\n+ 2.38% 0.07% postgres [.] subquery_planner\n+ 2.35% 0.01% postgres [.] XactLogCommitRecord\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 May 2023 17:57:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
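The "more than 2.3X improvement" figure in the message above can be cross-checked against its first table ("test-case 1: -T5, WAL ~16 bytes") at the high client counts where waitlist-lock contention matters. The snippet below is an editorial cross-check, not part of the original thread:

```python
# TPS values copied from the "test-case 1: -T5, WAL ~16 bytes" table above.
head    = {256: 83771, 512: 61970, 768: 56514, 1024: 51784, 2048: 39141, 4096: 16901}
patched = {256: 152901, 512: 141021, 768: 118899, 1024: 110960, 2048: 84150, 4096: 45759}

speedup = {c: patched[c] / head[c] for c in head}
for clients, s in sorted(speedup.items()):
    print(clients, round(s, 2))
# The peak speedup (~2.7x at 4096 clients) backs the "more than 2.3X" claim.
```

At every client count from 256 up, PATCHED stays at least ~1.8x ahead of HEAD in this run.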
{
"msg_contents": "On Mon, May 8, 2023 at 5:57 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > -LWLockUpdateVar(LWLock *lock, uint64 *valptr, uint64 val)\n> > +LWLockUpdateVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 val)\n> > [...]\n> > Assert(pg_atomic_read_u32(&lock->state) & LW_VAL_EXCLUSIVE);\n> >\n> > - /* Update the lock's value */\n> > - *valptr = val;\n> >\n> > The sensitive change is in LWLockUpdateVar(). I am not completely\n> > sure to understand this removal, though. Does that influence the case\n> > where there are waiters?\n>\n> I'll send about this in a follow-up email to not overload this\n> response with too much data.\n\nIt helps the case when there are no waiters. IOW, it updates value\nwithout waitlist lock when there are no waiters, so no extra waitlist\nlock acquisition/release just to update the value. In turn, it helps\nthe other backend wanting to flush the WAL looking for the new updated\nvalue of insertingAt in WaitXLogInsertionsToFinish(), now the flushing\nbackend can get the new value faster.\n\n> > Another thing I was wondering about: how much does the fast-path used\n> > in LWLockUpdateVar() influence the performance numbers? Am I right to\n> > guess that it counts for most of the gain seen?\n>\n> I'll send about this in a follow-up email to not overload this\n> response with too much data.\n\nThe fastpath exit in LWLockUpdateVar() doesn't seem to influence the\nresults much, see below results. 
However, it avoids waitlist lock\nacquisition when there are no waiters.\n\ntest-case 1: -T5, WAL ~16 bytes\nclients HEAD PATCHED with fastpath PATCHED no fast path\n1 1482 1486 1457\n2 1617 1620 1569\n4 3174 3233 3031\n8 6136 6365 5725\n16 12566 12269 11685\n32 24284 23621 23177\n64 50135 45528 46653\n128 94903 89791 89103\n256 82289 152915 152835\n512 62498 138838 142084\n768 57083 125074 126768\n1024 51308 113593 115930\n2048 41084 88764 85110\n4096 19939 42257 43917\n\n> > Or could it be that\n> > the removal of the spin lock in\n> > LWLockConflictsWithVar()/LWLockWaitForVar() the point that has the\n> > highest effect?\n>\n> I'll send about this in a follow-up email to not overload this\n> response with too much data.\n\nOut of 3 functions that got rid of waitlist lock\nLWLockConflictsWithVar/LWLockWaitForVar, LWLockUpdateVar,\nLWLockReleaseClearVar, perf reports tell that the biggest gain (for\nthe use-cases that I've tried) is for\nLWLockConflictsWithVar/LWLockWaitForVar:\n\ntest-case 1: -T5, WAL ~16 bytes\nHEAD:\n+ 61.89% 0.05% postgres [.] LWLockWaitForVar\n+ 43.19% 0.12% postgres [.] LWLockConflictsWithVar\n+ 1.62% 0.00% postgres [.] LWLockReleaseClearVar\n\nPATCHED:\n+ 38.79% 0.11% postgres [.] LWLockWaitForVar\n 0.40% 0.02% postgres [.] LWLockConflictsWithVar\n+ 2.80% 0.00% postgres [.] LWLockReleaseClearVar\n\ntest-case 6: -T5, WAL 4096 bytes\nHEAD:\n+ 29.66% 0.07% postgres [.] LWLockWaitForVar\n+ 20.94% 0.08% postgres [.] LWLockConflictsWithVar\n 0.19% 0.03% postgres [.] LWLockUpdateVar\n\nPATCHED:\n+ 3.95% 0.08% postgres [.] LWLockWaitForVar\n 0.19% 0.03% postgres [.] LWLockConflictsWithVar\n+ 1.73% 0.00% postgres [.] LWLockReleaseClearVar\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 May 2023 20:18:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
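To quantify "doesn't seem to influence the results much", one can compare the two PATCHED columns of the table above at the client counts (>= 256) where the patch shows its benefit. This is an editorial cross-check, not from the thread:

```python
# PATCHED TPS with and without the fast path, from the table above (>= 256 clients).
with_fastpath = {256: 152915, 512: 138838, 768: 125074, 1024: 113593, 2048: 88764, 4096: 42257}
no_fastpath   = {256: 152835, 512: 142084, 768: 126768, 1024: 115930, 2048: 85110, 4096: 43917}

rel_diff = {c: abs(with_fastpath[c] - no_fastpath[c]) / max(with_fastpath[c], no_fastpath[c])
            for c in with_fastpath}
# Worst case stays within a few percent, i.e. within run-to-run noise.
print(max(rel_diff.values()))
```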
{
"msg_contents": "On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:\n> I ran performance tests on the patch with different use-cases. Clearly\n> the patch reduces burden on LWLock's waitlist lock (evident from perf\n> reports [1]). However, to see visible impact in the output, the txns\n> must be generating small (between 16 bytes to 2 KB) amounts of WAL in\n> a highly concurrent manner, check the results below (FWIW, I've zipped\n> and attached perf images for better illustration along with test\n> setup).\n> \n> When the txns are generating a small amount of WAL i.e. between 16\n> bytes to 2 KB in a highly concurrent manner, the benefit is clearly\n> visible in the TPS more than 2.3X improvement. When the txns are\n> generating more WAL i.e. more than 2 KB, the gain from reduced burden\n> on waitlist lock is offset by increase in the wait/release for WAL\n> insertion locks and no visible benefit is seen.\n> \n> As the amount of WAL each txn generates increases, it looks like the\n> benefit gained from reduced burden on waitlist lock is offset by\n> increase in the wait for WAL insertion locks.\n\nNice.\n\n> test-case 1: -T5, WAL ~16 bytes\n> test-case 1: -t1000, WAL ~16 bytes\n\nI wonder if it's worth doing a couple of long-running tests, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 8 May 2023 16:04:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:\n> On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:\n>> test-case 1: -T5, WAL ~16 bytes\n>> test-case 1: -t1000, WAL ~16 bytes\n> \n> I wonder if it's worth doing a couple of long-running tests, too.\n\nYes, 5s or 1000 transactions per client is too small, though it shows\nthat things are going in the right direction. \n\n(Will reply to the rest in a bit..)\n--\nMichael",
"msg_date": "Tue, 9 May 2023 12:32:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, May 9, 2023 at 9:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:\n> > On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:\n> >> test-case 1: -T5, WAL ~16 bytes\n> >> test-case 1: -t1000, WAL ~16 bytes\n> >\n> > I wonder if it's worth doing a couple of long-running tests, too.\n>\n> Yes, 5s or 1000 transactions per client is too small, though it shows\n> that things are going in the right direction.\n\nI'll pick a test case that generates a reasonable amount of WAL 256\nbytes. What do you think of the following?\n\ntest-case 2: -T900, WAL ~256 bytes (for c in 1 2 4 8 16 32 64 128 256\n512 768 1024 2048 4096 - takes 3.5hrs)\ntest-case 2: -t1000000, WAL ~256 bytes\n\nIf okay, I'll fire the tests.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 9 May 2023 09:24:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, May 09, 2023 at 09:24:14AM +0530, Bharath Rupireddy wrote:\n> I'll pick a test case that generates a reasonable amount of WAL 256\n> bytes. What do you think of the following?\n> \n> test-case 2: -T900, WAL ~256 bytes (for c in 1 2 4 8 16 32 64 128 256\n> 512 768 1024 2048 4096 - takes 3.5hrs)\n> test-case 2: -t1000000, WAL ~256 bytes\n> \n> If okay, I'll fire the tests.\n\nSounds like a sensible duration, yes. What's your setting for\nmin/max_wal_size? I assume that there are still 16GB throttled with\ntarget_completion at 0.9?\n--\nMichael",
"msg_date": "Tue, 9 May 2023 12:57:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, May 9, 2023 at 9:27 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 09, 2023 at 09:24:14AM +0530, Bharath Rupireddy wrote:\n> > I'll pick a test case that generates a reasonable amount of WAL 256\n> > bytes. What do you think of the following?\n> >\n> > test-case 2: -T900, WAL ~256 bytes (for c in 1 2 4 8 16 32 64 128 256\n> > 512 768 1024 2048 4096 - takes 3.5hrs)\n> > test-case 2: -t1000000, WAL ~256 bytes\n> >\n> > If okay, I'll fire the tests.\n>\n> Sounds like a sensible duration, yes. What's your setting for\n> min/max_wal_size? I assume that there are still 16GB throttled with\n> target_completion at 0.9?\n\nBelow is the configuration I've been using. I have been keeping the\ncheckpoints away so far to get expected numbers. Probably, something\nthat I should modify for this long run? Change checkpoint_timeout to\n15 min or so?\n\nmax_wal_size=64GB\ncheckpoint_timeout=1d\nshared_buffers=8GB\nmax_connections=5000\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 9 May 2023 09:34:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, May 09, 2023 at 09:34:56AM +0530, Bharath Rupireddy wrote:\n> Below is the configuration I've been using. I have been keeping the\n> checkpoints away so far to get expected numbers. Probably, something\n> that I should modify for this long run? Change checkpoint_timeout to\n> 15 min or so?\n> \n> max_wal_size=64GB\n> checkpoint_timeout=1d\n> shared_buffers=8GB\n> max_connections=5000\n\nNoted. Something like that should be OK IMO, with all the checkpoints\ngenerated based on the volume generated. With records that have a\nfixed size, this should, I assume, lead to results that could be\ncompared across runs, even if the patched code would lead to more\ncheckpoints generated.\n--\nMichael",
"msg_date": "Tue, 9 May 2023 13:17:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, May 08, 2023 at 08:18:04PM +0530, Bharath Rupireddy wrote:\n> On Mon, May 8, 2023 at 5:57 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> On Mon, Apr 10, 2023 at 9:38 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>> The sensitive change is in LWLockUpdateVar(). I am not completely\n>>> sure to understand this removal, though. Does that influence the case\n>>> where there are waiters?\n>>\n>> I'll send about this in a follow-up email to not overload this\n>> response with too much data.\n> \n> It helps the case when there are no waiters. IOW, it updates value\n> without waitlist lock when there are no waiters, so no extra waitlist\n> lock acquisition/release just to update the value. In turn, it helps\n> the other backend wanting to flush the WAL looking for the new updated\n> value of insertingAt in WaitXLogInsertionsToFinish(), now the flushing\n> backend can get the new value faster.\n\nSure, which is what the memory barrier given by exchange_u64\nguarantees. My thought on this one is that I am not completely sure\nthat we won't miss any waiters that should have been\nawakened.\n\n> The fastpath exit in LWLockUpdateVar() doesn't seem to influence the\n> results much, see below results. However, it avoids waitlist lock\n> acquisition when there are no waiters.\n> \n> test-case 1: -T5, WAL ~16 bytes\n> clients HEAD PATCHED with fastpath PATCHED no fast path\n> 64 50135 45528 46653\n> 128 94903 89791 89103\n> 256 82289 152915 152835\n> 512 62498 138838 142084\n> 768 57083 125074 126768\n> 1024 51308 113593 115930\n> 2048 41084 88764 85110\n> 4096 19939 42257 43917\n\nConsidering that there could be a few percent of noise mixed into\nthat, that's not really surprising as the workload is highly\nconcurrent on inserts so the fast path won't really shine :)\n\nShould we split this patch into two parts, as they aim at tackling two\ndifferent cases then? 
One for LWLockConflictsWithVar() and\nLWLockReleaseClearVar() which are the straight-forward pieces\n(using one pg_atomic_write_u64() in LWLockUpdateVar instead), then\na second for LWLockUpdateVar()?\n\nAlso, the fast path treatment in LWLockUpdateVar() may show some\nbetter benefits when there are really few backends and a bunch of very\nlittle records? Still, even that sounds a bit limited..\n\n> Out of 3 functions that got rid of waitlist lock\n> LWLockConflictsWithVar/LWLockWaitForVar, LWLockUpdateVar,\n> LWLockReleaseClearVar, perf reports tell that the biggest gain (for\n> the use-cases that I've tried) is for\n> LWLockConflictsWithVar/LWLockWaitForVar:\n>\n> test-case 6: -T5, WAL 4096 bytes\n> HEAD:\n> + 29.66% 0.07% postgres [.] LWLockWaitForVar\n> + 20.94% 0.08% postgres [.] LWLockConflictsWithVar\n> 0.19% 0.03% postgres [.] LWLockUpdateVar\n> \n> PATCHED:\n> + 3.95% 0.08% postgres [.] LWLockWaitForVar\n> 0.19% 0.03% postgres [.] LWLockConflictsWithVar\n> + 1.73% 0.00% postgres [.] LWLockReleaseClearVar\n\nIndeed.\n--\nMichael",
"msg_date": "Tue, 9 May 2023 14:10:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, May 09, 2023 at 02:10:20PM +0900, Michael Paquier wrote:\n> Should we split this patch into two parts, as they aim at tackling two\n> different cases then? One for LWLockConflictsWithVar() and\n> LWLockReleaseClearVar() which are the straight-forward pieces\n> (using one pg_atomic_write_u64() in LWLockUpdateVar instead), then\n> a second for LWLockUpdateVar()?\n\nI have been studying that a bit more, and I'd like to take this\nsuggestion back. Apologies for the noise.\n--\nMichael",
"msg_date": "Tue, 9 May 2023 15:25:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:\n> Note that I've used pg_logical_emit_message() for ease of\n> understanding about the txns generating various amounts of WAL, but\n> the pattern is the same if txns are generating various amounts of WAL\n> say with inserts.\n\nSounds good to me to just rely on that for some comparison numbers.\n\n+ * NB: LWLockConflictsWithVar (which is called from\n+ * LWLockWaitForVar) relies on the spinlock used above in this\n+ * function and doesn't use a memory barrier.\n\nThis patch adds the following comment in WaitXLogInsertionsToFinish()\nbecause lwlock.c on HEAD mentions that:\n /*\n * Test first to see if it the slot is free right now.\n *\n * XXX: the caller uses a spinlock before this, so we don't need a memory\n * barrier here as far as the current usage is concerned. But that might\n * not be safe in general.\n */\n\nShould it be something where we'd better be noisy about at the top of\nLWLockWaitForVar()? We don't want to add a memory barrier at the\nbeginning of LWLockConflictsWithVar(), still it strikes me that\nsomebody that aims at using LWLockWaitForVar() may miss this point\nbecause LWLockWaitForVar() is the routine published in lwlock.h, not\nLWLockConflictsWithVar(). This does not need to be really\ncomplicated, say a note at the top of LWLockWaitForVar() along the\nlines of (?):\n\"Be careful that LWLockConflictsWithVar() does not include a memory\nbarrier, hence the caller of this function may want to rely on an\nexplicit barrier or a spinlock to avoid memory ordering issues.\"\n\n>> + * NB: Note the use of pg_atomic_exchange_u64 as opposed to just\n>> + * pg_atomic_write_u64 to update the value. 
Since pg_atomic_exchange_u64 is\n>> + * a full barrier, we're guaranteed that the subsequent shared memory\n>> + * reads/writes, if any, happen after we reset the lock variable.\n>>\n>> This mentions that the subsequent read/write operations are safe, so\n>> this refers to anything happening after the variable is reset. As\n>> a full barrier, should be also mention that this is also ordered with\n>> respect to anything that the caller did before clearing the variable?\n>> From this perspective using pg_atomic_exchange_u64() makes sense to me\n>> in LWLockReleaseClearVar().\n>\n> Wordsmithed that comment a bit.\n\n- * Set the variable's value before releasing the lock, that prevents race\n- * a race condition wherein a new locker acquires the lock, but hasn't yet\n- * set the variables value.\n[...]\n+ * NB: pg_atomic_exchange_u64 is used here as opposed to just\n+ * pg_atomic_write_u64 to update the variable. Since pg_atomic_exchange_u64\n+ * is a full barrier, we're guaranteed that all loads and stores issued\n+ * prior to setting the variable are completed before any loads or stores\n+ * issued after setting the variable.\n\nThis is the same explanation as LWLockUpdateVar(), except that we\nlose the details explaining why we are doing the update before\nreleasing the lock.\n\nIt took me some time, but I have been able to deploy a big box to see\nthe effect of this patch at a rather large scale (64 vCPU, 512G of\nmemory), with the following test characteristics for HEAD and v6:\n- TPS comparison with pgbench and pg_logical_emit_message().\n- Record sizes of 16, 64, 256, 1k, 4k and 16k.\n- Clients and jobs equal at 4, 16, 64, 256, 512, 1024, 2048, 4096.\n- Runs of 3 mins for each of the 48 combinations, meaning 96 runs in\ntotal.\n\nAnd here are the results I got:\nmessage_size_b | 16 | 64 | 256 | 1024 | 4096 | 16k\n------------------|--------|--------|--------|--------|-------|-------\nhead_4_clients | 3026 | 2965 | 2846 | 2880 | 2778 | 2412\nhead_16_clients | 
12087 | 11287 | 11670 | 11100 | 9003 | 5608\nhead_64_clients | 42995 | 44005 | 43592 | 35437 | 21533 | 11273\nhead_256_clients | 106775 | 109233 | 104201 | 80759 | 42118 | 16254\nhead_512_clients | 153849 | 156950 | 142915 | 99288 | 57714 | 16198\nhead_1024_clients | 122102 | 123895 | 114248 | 117317 | 62270 | 16261\nhead_2048_clients | 126730 | 115594 | 109671 | 119564 | 62454 | 16298\nhead_4096_clients | 111564 | 111697 | 119164 | 123483 | 62430 | 16140\nv6_4_clients | 2893 | 2917 | 3087 | 2904 | 2786 | 2262\nv6_16_clients | 12097 | 11387 | 11579 | 11242 | 9228 | 5661\nv6_64_clients | 45124 | 46533 | 42275 | 36124 | 21696 | 11386\nv6_256_clients | 121500 | 125732 | 104328 | 78989 | 41949 | 16254\nv6_512_clients | 164120 | 174743 | 146677 | 98110 | 60228 | 16171\nv6_1024_clients | 168990 | 180710 | 149894 | 117431 | 62271 | 16259\nv6_2048_clients | 165426 | 162893 | 146322 | 132476 | 62468 | 16274\nv6_4096_clients | 161283 | 158732 | 162474 | 135636 | 62461 | 16030\n\nThese tests are not showing me any degradation, and there is a correlation\nbetween the record size and the number of clients where the TPS begins\nto show a difference between HEAD and v6. In short, the\nshorter the record, the better performance gets at a lower client\nnumber, still this required at least 256~512 clients with even\nmessages of 16 bytes. At the end I'm cool with that.\n--\nMichael",
"msg_date": "Wed, 10 May 2023 21:04:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
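The conclusion above — gains for short records only show up once the client count reaches roughly 256~512 — can be read off the 16-byte column of the table. The snippet below is an editorial cross-check, not from the thread, computing v6/HEAD ratios for that column:

```python
# 16-byte-message column of the TPS table above (clients -> TPS).
head = {4: 3026, 16: 12087, 64: 42995, 256: 106775,
        512: 153849, 1024: 122102, 2048: 126730, 4096: 111564}
v6   = {4: 2893, 16: 12097, 64: 45124, 256: 121500,
        512: 164120, 1024: 168990, 2048: 165426, 4096: 161283}

ratio = {c: v6[c] / head[c] for c in head}
for clients, r in sorted(ratio.items()):
    print(clients, round(r, 2))
# Up to 64 clients the ratio stays near 1.0; from 256 clients on, v6 pulls ahead.
```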
{
"msg_contents": "On Tue, May 9, 2023 at 9:24 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 9, 2023 at 9:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, May 08, 2023 at 04:04:10PM -0700, Nathan Bossart wrote:\n> > > On Mon, May 08, 2023 at 05:57:09PM +0530, Bharath Rupireddy wrote:\n> > >> test-case 1: -T5, WAL ~16 bytes\n> > >> test-case 1: -t1000, WAL ~16 bytes\n> > >\n> > > I wonder if it's worth doing a couple of long-running tests, too.\n> >\n> > Yes, 5s or 1000 transactions per client is too small, though it shows\n> > that things are going in the right direction.\n>\n> I'll pick a test case that generates a reasonable amount of WAL 256\n> bytes. What do you think of the following?\n>\n> test-case 2: -T900, WAL ~256 bytes (for c in 1 2 4 8 16 32 64 128 256\n> 512 768 1024 2048 4096 - takes 3.5hrs)\n> test-case 2: -t1000000, WAL ~256 bytes\n>\n> If okay, I'll fire the tests.\n\ntest-case 2: -T900, WAL ~256 bytes - ran for about 3.5 hours and the\nmore than 3X improvement in TPS is seen - 3.11X @ 512 3.79 @ 768, 3.47\n@ 1024, 2.27 @ 2048, 2.77 @ 4096\n\ntest-case 2: -T900, WAL ~256 bytes\nclients HEAD PATCHED\n1 1394 1351\n2 1551 1445\n4 3104 2881\n8 5974 5774\n16 12154 11319\n32 22438 21606\n64 43689 40567\n128 80726 77993\n256 139987 141638\n512 60108 187126\n768 51188 194406\n1024 48766 169353\n2048 46617 105961\n4096 44163 122697\n\ntest-case 2: -t1000000, WAL ~256 bytes - ran for more than 12 hours\nand the maximum improvement is 1.84X @ 1024 client.\n\ntest-case 2: -t1000000, WAL ~256 bytes\nclients HEAD PATCHED\n1 1454 1500\n2 1657 1612\n4 3223 3224\n8 6305 6295\n16 12447 12260\n32 24855 24335\n64 45229 44386\n128 80752 79518\n256 120663 119083\n512 149546 159396\n768 118298 181732\n1024 101829 187492\n2048 107506 191378\n4096 125130 186728\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 10 May 2023 22:40:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Wed, May 10, 2023 at 10:40:20PM +0530, Bharath Rupireddy wrote:\n> test-case 2: -T900, WAL ~256 bytes - ran for about 3.5 hours and the\n> more than 3X improvement in TPS is seen - 3.11X @ 512 3.79 @ 768, 3.47\n> @ 1024, 2.27 @ 2048, 2.77 @ 4096\n>\n> [...]\n>\n> test-case 2: -t1000000, WAL ~256 bytes - ran for more than 12 hours\n> and the maximum improvement is 1.84X @ 1024 client.\n\nThanks. So that's pretty close to what I was seeing when it comes to\nthis message size where you see much more effects under a number of\nclients of at least 512~. Any of these tests have been using fsync =\non, I assume. I think that disabling fsync or just mounting pg_wal to\na tmpfs should show the same pattern for larger record sizes (after 1k\nof message size the curve begins to go down with 512~ clients).\n--\nMichael",
"msg_date": "Thu, 11 May 2023 08:31:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
    "msg_contents": "On Wed, May 10, 2023 at 09:04:47PM +0900, Michael Paquier wrote:\n> It took me some time, but I have been able to deploy a big box to see\n> the effect of this patch at a rather large scale (64 vCPU, 512G of\n> memory), with the following test characteristics for HEAD and v6:\n> - TPS comparison with pgbench and pg_logical_emit_message().\n> - Record sizes of 16, 64, 256, 1k, 4k and 16k.\n> - Clients and jobs equal at 4, 16, 64, 256, 512, 1024, 2048, 4096.\n> - Runs of 3 mins for each of the 48 combinations, meaning 96 runs in\n> total.\n> \n> And here are the results I got:\n> message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16k\n> ------------------|--------|--------|--------|--------|-------|-------\n> head_4_clients | 3026 | 2965 | 2846 | 2880 | 2778 | 2412\n> head_16_clients | 12087 | 11287 | 11670 | 11100 | 9003 | 5608\n> head_64_clients | 42995 | 44005 | 43592 | 35437 | 21533 | 11273\n> head_256_clients | 106775 | 109233 | 104201 | 80759 | 42118 | 16254\n> head_512_clients | 153849 | 156950 | 142915 | 99288 | 57714 | 16198\n> head_1024_clients | 122102 | 123895 | 114248 | 117317 | 62270 | 16261\n> head_2048_clients | 126730 | 115594 | 109671 | 119564 | 62454 | 16298\n> head_4096_clients | 111564 | 111697 | 119164 | 123483 | 62430 | 16140\n> v6_4_clients | 2893 | 2917 | 3087 | 2904 | 2786 | 2262\n> v6_16_clients | 12097 | 11387 | 11579 | 11242 | 9228 | 5661\n> v6_64_clients | 45124 | 46533 | 42275 | 36124 | 21696 | 11386\n> v6_256_clients | 121500 | 125732 | 104328 | 78989 | 41949 | 16254\n> v6_512_clients | 164120 | 174743 | 146677 | 98110 | 60228 | 16171\n> v6_1024_clients | 168990 | 180710 | 149894 | 117431 | 62271 | 16259\n> v6_2048_clients | 165426 | 162893 | 146322 | 132476 | 62468 | 16274\n> v6_4096_clients | 161283 | 158732 | 162474 | 135636 | 62461 | 16030\n\nAnother thing I was wondering is if it would be able to see a\ndifference by reducing the I/O pressure. After mounting pg_wal to a\ntmpfs, I am getting the following table:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16000\n-------------------+--------+--------+--------+--------+--------+-------\n head_4_clients | 86476 | 86592 | 84645 | 76784 | 57887 | 30199\n head_16_clients | 277006 | 278431 | 263238 | 228614 | 143880 | 67237\n head_64_clients | 373972 | 370082 | 352217 | 297377 | 190974 | 96843\n head_256_clients | 144510 | 147077 | 146281 | 189059 | 156294 | 88345\n head_512_clients | 122863 | 119054 | 127790 | 162187 | 142771 | 84109\n head_1024_clients | 140802 | 138728 | 147200 | 172449 | 138022 | 81054\n head_2048_clients | 175950 | 164143 | 154070 | 161432 | 128205 | 76732\n head_4096_clients | 161438 | 158666 | 152057 | 139520 | 113955 | 69335\n v6_4_clients | 87356 | 86985 | 83933 | 76397 | 57352 | 30084\n v6_16_clients | 277466 | 280125 | 259733 | 224916 | 143832 | 66589\n v6_64_clients | 388352 | 386188 | 362358 | 302719 | 190353 | 96687\n v6_256_clients | 365797 | 360114 | 337135 | 266851 | 172252 | 88898\n v6_512_clients | 339751 | 332384 | 308182 | 249624 | 158868 | 84258\n v6_1024_clients | 301294 | 295140 | 276769 | 226034 | 148392 | 80909\n v6_2048_clients | 268846 | 261001 | 247110 | 205332 | 137271 | 76299\n v6_4096_clients | 229322 | 227049 | 217271 | 183708 | 124888 | 69263\n\nThis shows more difference from 64 clients up to 4k records, without\ndegradation noticed across the board.\n--\nMichael",
"msg_date": "Thu, 11 May 2023 15:26:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
    "msg_contents": "On Thu, May 11, 2023 at 11:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 10, 2023 at 09:04:47PM +0900, Michael Paquier wrote:\n> > It took me some time, but I have been able to deploy a big box to see\n> > the effect of this patch at a rather large scale (64 vCPU, 512G of\n> > memory), with the following test characteristics for HEAD and v6:\n> > - TPS comparison with pgbench and pg_logical_emit_message().\n> > - Record sizes of 16, 64, 256, 1k, 4k and 16k.\n> > - Clients and jobs equal at 4, 16, 64, 256, 512, 1024, 2048, 4096.\n> > - Runs of 3 mins for each of the 48 combinations, meaning 96 runs in\n> > total.\n> >\n> > And here are the results I got:\n> > message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16k\n> > ------------------|--------|--------|--------|--------|-------|-------\n> > head_4_clients | 3026 | 2965 | 2846 | 2880 | 2778 | 2412\n> > head_16_clients | 12087 | 11287 | 11670 | 11100 | 9003 | 5608\n> > head_64_clients | 42995 | 44005 | 43592 | 35437 | 21533 | 11273\n> > head_256_clients | 106775 | 109233 | 104201 | 80759 | 42118 | 16254\n> > head_512_clients | 153849 | 156950 | 142915 | 99288 | 57714 | 16198\n> > head_1024_clients | 122102 | 123895 | 114248 | 117317 | 62270 | 16261\n> > head_2048_clients | 126730 | 115594 | 109671 | 119564 | 62454 | 16298\n> > head_4096_clients | 111564 | 111697 | 119164 | 123483 | 62430 | 16140\n> > v6_4_clients | 2893 | 2917 | 3087 | 2904 | 2786 | 2262\n> > v6_16_clients | 12097 | 11387 | 11579 | 11242 | 9228 | 5661\n> > v6_64_clients | 45124 | 46533 | 42275 | 36124 | 21696 | 11386\n> > v6_256_clients | 121500 | 125732 | 104328 | 78989 | 41949 | 16254\n> > v6_512_clients | 164120 | 174743 | 146677 | 98110 | 60228 | 16171\n> > v6_1024_clients | 168990 | 180710 | 149894 | 117431 | 62271 | 16259\n> > v6_2048_clients | 165426 | 162893 | 146322 | 132476 | 62468 | 16274\n> > v6_4096_clients | 161283 | 158732 | 162474 | 135636 | 62461 | 16030\n>\n> Another thing I was wondering is if it would be able to see a\n> difference by reducing the I/O pressure. After mounting pg_wal to a\n> tmpfs, I am getting the following table:\n> message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16000\n> -------------------+--------+--------+--------+--------+--------+-------\n> head_4_clients | 86476 | 86592 | 84645 | 76784 | 57887 | 30199\n> head_16_clients | 277006 | 278431 | 263238 | 228614 | 143880 | 67237\n> head_64_clients | 373972 | 370082 | 352217 | 297377 | 190974 | 96843\n> head_256_clients | 144510 | 147077 | 146281 | 189059 | 156294 | 88345\n> head_512_clients | 122863 | 119054 | 127790 | 162187 | 142771 | 84109\n> head_1024_clients | 140802 | 138728 | 147200 | 172449 | 138022 | 81054\n> head_2048_clients | 175950 | 164143 | 154070 | 161432 | 128205 | 76732\n> head_4096_clients | 161438 | 158666 | 152057 | 139520 | 113955 | 69335\n> v6_4_clients | 87356 | 86985 | 83933 | 76397 | 57352 | 30084\n> v6_16_clients | 277466 | 280125 | 259733 | 224916 | 143832 | 66589\n> v6_64_clients | 388352 | 386188 | 362358 | 302719 | 190353 | 96687\n> v6_256_clients | 365797 | 360114 | 337135 | 266851 | 172252 | 88898\n> v6_512_clients | 339751 | 332384 | 308182 | 249624 | 158868 | 84258\n> v6_1024_clients | 301294 | 295140 | 276769 | 226034 | 148392 | 80909\n> v6_2048_clients | 268846 | 261001 | 247110 | 205332 | 137271 | 76299\n> v6_4096_clients | 229322 | 227049 | 217271 | 183708 | 124888 | 69263\n>\n> This shows more difference from 64 clients up to 4k records, without\n> degradation noticed across the board.\n\nImpressive. I further covered the following test cases. There's a\nclear gain with the patch i.e. reducing burden on LWLock's waitlist\nlock is helping out.\n\nfsync=off, -T120:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16384\n-------------------+--------+--------+--------+--------+--------+--------\n head_1_clients | 33609 | 33862 | 32975 | 29722 | 21842 | 10606\n head_2_clients | 60583 | 60524 | 57833 | 53582 | 38583 | 20120\n head_4_clients | 115209 | 114012 | 114077 | 102991 | 73452 | 39179\n head_8_clients | 181786 | 177592 | 174404 | 155350 | 98642 | 41406\n head_16_clients | 313750 | 309024 | 295375 | 253101 | 159328 | 73617\n head_32_clients | 406456 | 416809 | 400527 | 344573 | 213756 | 96322\n head_64_clients | 199619 | 197948 | 198871 | 208606 | 221751 | 107762\n head_128_clients | 108576 | 108727 | 107606 | 112137 | 173998 | 106976\n head_256_clients | 75303 | 74983 | 73986 | 76100 | 148209 | 98080\n head_512_clients | 62559 | 60189 | 59588 | 61102 | 131803 | 90534\n head_768_clients | 55650 | 54486 | 54813 | 55515 | 120707 | 88009\n head_1024_clients | 54709 | 52395 | 51672 | 52910 | 113904 | 86116\n head_2048_clients | 48640 | 47098 | 46787 | 47582 | 98394 | 80766\n head_4096_clients | 43205 | 42709 | 42591 | 43649 | 88903 | 72362\n v6_1_clients | 33337 | 32877 | 31880 | 29372 | 21695 | 10596\n v6_2_clients | 60125 | 60682 | 58770 | 53709 | 38390 | 20266\n v6_4_clients | 115338 | 114053 | 114232 | 93527 | 74409 | 40437\n v6_8_clients | 179472 | 183899 | 175474 | 154547 | 101807 | 43508\n v6_16_clients | 318181 | 318580 | 296591 | 258094 | 159351 | 74758\n v6_32_clients | 439681 | 447005 | 428459 | 367307 | 218511 | 97635\n v6_64_clients | 473440 | 478388 | 464287 | 394825 | 244365 | 109194\n v6_128_clients | 384433 | 412694 | 405916 | 366046 | 232421 | 110274\n v6_256_clients | 312480 | 303635 | 291900 | 307573 | 206784 | 104171\n v6_512_clients | 218560 | 189207 | 216267 | 252513 | 186762 | 97918\n v6_768_clients | 168432 | 155493 | 145941 | 226616 | 178178 | 95435\n v6_1024_clients | 150300 | 132078 | 134657 | 224515 | 172950 | 94356\n v6_2048_clients | 126941 | 120189 | 120702 | 195684 | 158683 | 88055\n v6_4096_clients | 163993 | 140795 | 139702 | 170149 | 139740 | 78907\n\npg_wal on tmpfs, -T180:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16384\n-------------------+--------+--------+--------+--------+--------+--------\n head_1_clients | 32956 | 32766 | 32244 | 29772 | 22094 | 11212\n head_2_clients | 60093 | 60382 | 58825 | 53812 | 39764 | 20953\n head_4_clients | 117178 | 104986 | 112060 | 103416 | 75588 | 39753\n head_8_clients | 177556 | 179926 | 173413 | 156684 | 107727 | 42001\n head_16_clients | 311033 | 313842 | 298362 | 261298 | 165293 | 76183\n head_32_clients | 425750 | 433988 | 419193 | 370925 | 227392 | 101638\n head_64_clients | 227463 | 219832 | 221421 | 235603 | 236601 | 113677\n head_128_clients | 117188 | 116847 | 118414 | 123605 | 194533 | 111480\n head_256_clients | 80596 | 80541 | 79130 | 83949 | 167529 | 102401\n head_512_clients | 64912 | 63610 | 63209 | 65554 | 146882 | 94936\n head_768_clients | 59050 | 57082 | 57061 | 58966 | 133336 | 92389\n head_1024_clients | 56880 | 54951 | 54864 | 56554 | 125270 | 90893\n head_2048_clients | 52148 | 49603 | 50422 | 50692 | 110789 | 86659\n head_4096_clients | 47001 | 46992 | 46075 | 47793 | 99617 | 77762\n v6_1_clients | 32915 | 32854 | 31676 | 29341 | 21956 | 11220\n v6_2_clients | 59592 | 59146 | 58106 | 53235 | 38973 | 20943\n v6_4_clients | 113947 | 114897 | 97349 | 104630 | 73628 | 40719\n v6_8_clients | 177996 | 179673 | 176190 | 156831 | 104183 | 42884\n v6_16_clients | 312284 | 317065 | 300130 | 268788 | 165765 | 77299\n v6_32_clients | 443101 | 450025 | 436774 | 380398 | 229081 | 101916\n v6_64_clients | 450794 | 469633 | 470252 | 411374 | 253232 | 113722\n v6_128_clients | 413357 | 399514 | 386713 | 364070 | 236133 | 112780\n v6_256_clients | 264674 | 252701 | 268273 | 296090 | 208050 | 105477\n v6_512_clients | 196481 | 154815 | 158316 | 238805 | 188363 | 99507\n v6_768_clients | 139839 | 132645 | 131391 | 219846 | 179226 | 97808\n v6_1024_clients | 124540 | 119543 | 120140 | 206740 | 174657 | 96629\n v6_2048_clients | 118793 | 113033 | 113881 | 190997 | 161421 | 91888\n v6_4096_clients | 156341 | 156971 | 131391 | 177024 | 146564 | 84096\n\n--enable-atomics=no, -T60:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16384\n-------------------+-------+-------+-------+-------+-------+-------\n head_1_clients | 1701 | 1686 | 1636 | 1693 | 1523 | 1331\n head_2_clients | 1751 | 1712 | 1698 | 1769 | 1690 | 1579\n head_4_clients | 3328 | 3370 | 3405 | 3495 | 3107 | 2713\n head_8_clients | 6580 | 6521 | 6459 | 6370 | 5470 | 4253\n head_16_clients | 13433 | 13476 | 12986 | 11461 | 9249 | 6313\n head_32_clients | 25697 | 26729 | 24879 | 20862 | 14344 | 9454\n head_64_clients | 51499 | 48322 | 46297 | 35224 | 20970 | 13241\n head_128_clients | 56777 | 57177 | 59129 | 47687 | 27591 | 16007\n head_256_clients | 9555 | 10041 | 9526 | 9830 | 13179 | 15776\n head_512_clients | 5795 | 5871 | 5809 | 5954 | 5828 | 15647\n head_768_clients | 4322 | 4366 | 4782 | 4624 | 4853 | 12959\n head_1024_clients | 4003 | 3789 | 3647 | 3865 | 4160 | 7991\n head_2048_clients | 2687 | 2573 | 2569 | 2829 | 2918 | 5462\n head_4096_clients | 1694 | 1802 | 1813 | 1948 | 2256 | 5862\n v6_1_clients | 1560 | 1595 | 1690 | 1621 | 1526 | 1374\n v6_2_clients | 1737 | 1736 | 1738 | 1663 | 1601 | 1568\n v6_4_clients | 3575 | 3583 | 3449 | 3137 | 3157 | 2788\n v6_8_clients | 6660 | 6900 | 6802 | 6158 | 5605 | 4521\n v6_16_clients | 14084 | 12991 | 13485 | 12628 | 10025 | 6211\n v6_32_clients | 26408 | 24652 | 24672 | 21441 | 14966 | 9753\n v6_64_clients | 49537 | 47703 | 45583 | 33524 | 21476 | 13259\n v6_128_clients | 86938 | 79745 | 73740 | 53007 | 34863 | 15901\n v6_256_clients | 20391 | 21433 | 21730 | 30836 | 43821 | 15891\n v6_512_clients | 13128 | 12181 | 12309 | 11596 | 14744 | 15851\n v6_768_clients | 10511 | 9942 | 9713 | 9373 | 10181 | 15964\n v6_1024_clients | 9264 | 8745 | 8031 | 7500 | 8762 | 15198\n v6_2048_clients | 6070 | 5724 | 5939 | 5987 | 5513 | 10828\n v6_4096_clients | 4322 | 4035 | 3616 | 3637 | 5628 | 10970\n\n--enable-spinlocks=no, -T60:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16384\n-------------------+--------+--------+--------+--------+-------+-------\n head_1_clients | 1644 | 1716 | 1701 | 1636 | 1569 | 1368\n head_2_clients | 1779 | 1875 | 1728 | 1728 | 1770 | 1568\n head_4_clients | 3448 | 3569 | 3330 | 3324 | 3319 | 2780\n head_8_clients | 6159 | 6996 | 6893 | 6401 | 6308 | 4423\n head_16_clients | 13195 | 13810 | 13139 | 12892 | 10744 | 6714\n head_32_clients | 26752 | 26834 | 25749 | 21739 | 18071 | 9706\n head_64_clients | 52303 | 49759 | 47785 | 36625 | 26993 | 13685\n head_128_clients | 98325 | 89753 | 83276 | 62302 | 38515 | 16005\n head_256_clients | 128075 | 124396 | 111059 | 97165 | 56941 | 15779\n head_512_clients | 140908 | 132622 | 126363 | 119113 | 62572 | 15919\n head_768_clients | 118694 | 111764 | 109464 | 120368 | 62129 | 15905\n head_1024_clients | 102542 | 99007 | 94291 | 109485 | 62680 | 16039\n head_2048_clients | 57994 | 57003 | 57410 | 60350 | 62487 | 16091\n head_4096_clients | 33995 | 32944 | 34174 | 33483 | 61071 | 15655\n v6_1_clients | 1743 | 1711 | 1722 | 1655 | 1588 | 1378\n v6_2_clients | 1714 | 1830 | 1767 | 1667 | 1725 | 1518\n v6_4_clients | 3638 | 3602 | 3594 | 3452 | 3216 | 2713\n v6_8_clients | 7047 | 6671 | 7148 | 6342 | 5577 | 4573\n v6_16_clients | 13885 | 13247 | 13951 | 13037 | 10570 | 6391\n v6_32_clients | 27766 | 27230 | 27079 | 22911 | 17152 | 9700\n v6_64_clients | 50748 | 51548 | 47852 | 36479 | 27232 | 13290\n v6_128_clients | 97611 | 89554 | 85009 | 67349 | 37046 | 16005\n v6_256_clients | 124475 | 128603 | 108888 | 95277 | 55021 | 15785\n v6_512_clients | 181639 | 176544 | 152852 | 120914 | 62674 | 15921\n v6_768_clients | 188600 | 180691 | 158997 | 128740 | 62402 | 15979\n v6_1024_clients | 191845 | 180830 | 161597 | 143032 | 62426 | 15985\n v6_2048_clients | 179227 | 168906 | 173510 | 149689 | 62721 | 16090\n v6_4096_clients | 156613 | 152795 | 154231 | 134587 | 62245 | 15781\n\n--enable-atomics=no --enable-spinlocks=no, -T60:\n message_size_b | 16 | 64 | 256 | 1024 | 4096 | 16384\n-------------------+-------+-------+-------+-------+-------+-------\n head_1_clients | 1644 | 1768 | 1726 | 1698 | 1544 | 1344\n head_2_clients | 1805 | 1829 | 1746 | 1869 | 1730 | 1565\n head_4_clients | 3562 | 3606 | 3571 | 3656 | 3145 | 2704\n head_8_clients | 6921 | 7051 | 6774 | 6676 | 5999 | 4425\n head_16_clients | 13418 | 13998 | 13634 | 12640 | 9782 | 6440\n head_32_clients | 21716 | 21690 | 21124 | 18977 | 14050 | 9168\n head_64_clients | 27085 | 26498 | 26108 | 23048 | 17843 | 13278\n head_128_clients | 26704 | 26373 | 25845 | 24056 | 19777 | 15922\n head_256_clients | 24694 | 24586 | 24148 | 22525 | 23523 | 15852\n head_512_clients | 21364 | 21143 | 20697 | 20334 | 21770 | 15870\n head_768_clients | 16985 | 16618 | 16544 | 16511 | 17360 | 15945\n head_1024_clients | 13133 | 13640 | 13521 | 13716 | 14202 | 16020\n head_2048_clients | 8051 | 8140 | 7711 | 8673 | 9027 | 15091\n head_4096_clients | 4692 | 4549 | 4924 | 4908 | 6853 | 14752\n v6_1_clients | 1676 | 1722 | 1781 | 1681 | 1527 | 1394\n v6_2_clients | 1868 | 1706 | 1868 | 1842 | 1762 | 1573\n v6_4_clients | 3668 | 3591 | 3449 | 3556 | 3309 | 2707\n v6_8_clients | 7279 | 6818 | 6842 | 6846 | 5888 | 4283\n v6_16_clients | 13604 | 13364 | 14099 | 12851 | 9959 | 6271\n v6_32_clients | 22899 | 22453 | 22488 | 20127 | 15970 | 8915\n v6_64_clients | 33289 | 32943 | 32280 | 28683 | 22885 | 13215\n v6_128_clients | 43614 | 42954 | 41336 | 36660 | 29107 | 15928\n v6_256_clients | 46542 | 46593 | 45673 | 41064 | 38759 | 15850\n v6_512_clients | 36303 | 35923 | 34640 | 32828 | 38359 | 15913\n v6_768_clients | 29654 | 29822 | 29317 | 28703 | 34194 | 15903\n v6_1024_clients | 25871 | 25219 | 25801 | 25099 | 29323 | 16015\n v6_2048_clients | 16497 | 17041 | 16401 | 17128 | 19656 | 15962\n v6_4096_clients | 10067 | 10873 | 10702 | 10540 | 12909 | 16041\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 May 2023 07:35:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Fri, May 12, 2023 at 07:35:20AM +0530, Bharath Rupireddy wrote:\n> --enable-atomics=no, -T60:\n> --enable-spinlocks=no, -T60:\n> --enable-atomics=no --enable-spinlocks=no, -T60:\n\nThanks for these extra tests, I have not done these specific cases but\nthe profiles look similar to what I've seen myself. If I recall\ncorrectly the fallback implementation of atomics just uses spinlocks\ninternally to force the barriers required.\n--\nMichael",
"msg_date": "Sat, 13 May 2023 07:56:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
    "msg_contents": "On Wed, May 10, 2023 at 5:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> + * NB: LWLockConflictsWithVar (which is called from\n> + * LWLockWaitForVar) relies on the spinlock used above in this\n> + * function and doesn't use a memory barrier.\n>\n> This patch adds the following comment in WaitXLogInsertionsToFinish()\n> because lwlock.c on HEAD mentions that:\n> /*\n> * Test first to see if it the slot is free right now.\n> *\n> * XXX: the caller uses a spinlock before this, so we don't need a memory\n> * barrier here as far as the current usage is concerned. But that might\n> * not be safe in general.\n> */\n>\n> Should it be something where we'd better be noisy about at the top of\n> LWLockWaitForVar()? We don't want to add a memory barrier at the\n> beginning of LWLockConflictsWithVar(), still it strikes me that\n> somebody that aims at using LWLockWaitForVar() may miss this point\n> because LWLockWaitForVar() is the routine published in lwlock.h, not\n> LWLockConflictsWithVar(). This does not need to be really\n> complicated, say a note at the top of LWLockWaitForVar() among the\n> lines of (?):\n> \"Be careful that LWLockConflictsWithVar() does not include a memory\n> barrier, hence the caller of this function may want to rely on an\n> explicit barrier or a spinlock to avoid memory ordering issues.\"\n\n+1. Now, we have comments in 3 places to warn about the\nLWLockConflictsWithVar not using memory barrier - one in\nWaitXLogInsertionsToFinish, one in LWLockWaitForVar and another one\n(existing) in LWLockConflictsWithVar specifying where exactly a memory\nbarrier is needed if the caller doesn't use a spinlock. Looks fine to\nme.\n\n> + * NB: pg_atomic_exchange_u64 is used here as opposed to just\n> + * pg_atomic_write_u64 to update the variable. Since pg_atomic_exchange_u64\n> + * is a full barrier, we're guaranteed that all loads and stores issued\n> + * prior to setting the variable are completed before any loads or stores\n> + * issued after setting the variable.\n>\n> This is the same explanation as LWLockUpdateVar(), except that we\n> lose the details explaining why we are doing the update before\n> releasing the lock.\n\nI think what I have so far seems more verbose explaining what a\nbarrier does and all that. I honestly think we don't need to be that\nverbose, thanks to README.barrier.\n\nI simplified those 2 comments as the following:\n\n * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n * the variable is updated before releasing the lock.\n\n * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n * the variable is updated before waking up waiters.\n\nPlease find the attached v7 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 18 May 2023 11:18:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Thu, May 18, 2023 at 11:18:25AM +0530, Bharath Rupireddy wrote:\n> I think what I have so far seems more verbose explaining what a\n> barrier does and all that. I honestly think we don't need to be that\n> verbose, thanks to README.barrier.\n\nAgreed. This file is a mine of information.\n\n> I simplified those 2 comments as the following:\n> \n> * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n> * the variable is updated before releasing the lock.\n> \n> * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n> * the variable is updated before waking up waiters.\n> \n> Please find the attached v7 patch.\n\nNit. These sentences seem to be worded a bit weirdly to me. How\nabout:\n\"pg_atomic_exchange_u64 has full barrier semantics, ensuring that the\nvariable is updated before (releasing the lock|waking up waiters).\"\n\n+ * Be careful that LWLockConflictsWithVar() does not include a memory barrier,\n+ * hence the caller of this function may want to rely on an explicit barrier or\n+ * a spinlock to avoid memory ordering issues.\n\nThanks, this addition looks OK to me.\n--\nMichael",
"msg_date": "Fri, 19 May 2023 15:54:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Fri, May 19, 2023 at 12:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 18, 2023 at 11:18:25AM +0530, Bharath Rupireddy wrote:\n> > I think what I have so far seems more verbose explaining what a\n> > barrier does and all that. I honestly think we don't need to be that\n> > verbose, thanks to README.barrier.\n>\n> Agreed. This file is a mine of information.\n>\n> > I simplified those 2 comments as the following:\n> >\n> > * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n> > * the variable is updated before releasing the lock.\n> >\n> > * NB: pg_atomic_exchange_u64, having full barrier semantics will ensure\n> > * the variable is updated before waking up waiters.\n> >\n> > Please find the attached v7 patch.\n>\n> Nit. These sentences seem to be worded a bit weirdly to me. How\n> about:\n> \"pg_atomic_exchange_u64 has full barrier semantics, ensuring that the\n> variable is updated before (releasing the lock|waking up waiters).\"\n\nI get it. How about the following similar to what\nProcessProcSignalBarrier() has?\n\n+ * Note that pg_atomic_exchange_u64 is a full barrier, so we're guaranteed\n+ * that the variable is updated before waking up waiters.\n+ */\n\n+ * Note that pg_atomic_exchange_u64 is a full barrier, so we're guaranteed\n+ * that the variable is updated before releasing the lock.\n */\n\nPlease find the attached v8 patch with the above change.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 19 May 2023 20:34:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Fri, May 19, 2023 at 08:34:16PM +0530, Bharath Rupireddy wrote:\n> I get it. How about the following similar to what\n> ProcessProcSignalBarrier() has?\n> \n> + * Note that pg_atomic_exchange_u64 is a full barrier, so we're guaranteed\n> + * that the variable is updated before waking up waiters.\n> + */\n> \n> + * Note that pg_atomic_exchange_u64 is a full barrier, so we're guaranteed\n> + * that the variable is updated before releasing the lock.\n> */\n> \n> Please find the attached v8 patch with the above change.\n\nSimpler and consistent, nice. I don't have much more to add, so I\nhave switched the patch as RfC.\n--\nMichael",
"msg_date": "Mon, 22 May 2023 09:26:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, May 22, 2023 at 09:26:25AM +0900, Michael Paquier wrote:\n> Simpler and consistent, nice. I don't have much more to add, so I\n> have switched the patch as RfC.\n\nWhile at PGcon, Andres has asked me how many sockets are in the\nenvironment I used for the tests, and lscpu tells me the following,\nwhich is more than 1:\nCPU(s): 64\nOn-line CPU(s) list: 0-63\nCore(s) per socket: 16\nSocket(s): 2\nNUMA node(s): 2\n\n@Andres: Were there any extra tests you wanted to be run for more\ninput?\n--\nMichael",
"msg_date": "Wed, 31 May 2023 07:35:55 -0400",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Wed, May 31, 2023 at 5:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 22, 2023 at 09:26:25AM +0900, Michael Paquier wrote:\n> > Simpler and consistent, nice. I don't have much more to add, so I\n> > have switched the patch as RfC.\n>\n> While at PGcon, Andres has asked me how many sockets are in the\n> environment I used for the tests,\n\nI'm glad to know that the feature was discussed at PGCon.\n\n> and lscpu tells me the following,\n> which is more than 1:\n> CPU(s): 64\n> On-line CPU(s) list: 0-63\n> Core(s) per socket: 16\n> Socket(s): 2\n> NUMA node(s): 2\n\nMine says this:\n\nCPU(s): 96\n On-line CPU(s) list: 0-95\nCore(s) per socket: 24\nSocket(s): 2\nNUMA:\n NUMA node(s): 2\n NUMA node0 CPU(s): 0-23,48-71\n NUMA node1 CPU(s): 24-47,72-95\n\n> @Andres: Were there any extra tests you wanted to be run for more\n> input?\n\n@Andres Freund please let us know your thoughts.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 5 Jun 2023 08:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, Jun 05, 2023 at 08:00:00AM +0530, Bharath Rupireddy wrote:\n> On Wed, May 31, 2023 at 5:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> @Andres: Were there any extra tests you wanted to be run for more\n>> input?\n> \n> @Andres Freund please let us know your thoughts.\n\nErr, ping. It seems like this thread is waiting on input from you,\nAndres?\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 09:20:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On 2023-07-11 09:20:45 +0900, Michael Paquier wrote:\n> On Mon, Jun 05, 2023 at 08:00:00AM +0530, Bharath Rupireddy wrote:\n> > On Wed, May 31, 2023 at 5:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> @Andres: Were there any extra tests you wanted to be run for more\n> >> input?\n> > \n> > @Andres Freund please let us know your thoughts.\n> \n> Err, ping. It seems like this thread is waiting on input from you,\n> Andres?\n\nLooking. Sorry for not getting to this earlier.\n\n- Andres\n\n\n",
"msg_date": "Thu, 13 Jul 2023 14:04:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
    "msg_contents": "Hi,\n\nOn 2023-07-13 14:04:31 -0700, Andres Freund wrote:\n> From b74b6e953cb5a7e7ea1a89719893f6ce9e231bba Mon Sep 17 00:00:00 2001\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> Date: Fri, 19 May 2023 15:00:21 +0000\n> Subject: [PATCH v8] Optimize WAL insertion lock acquisition and release\n>\n> This commit optimizes WAL insertion lock acquisition and release\n> in the following way:\n\nI think this commit does too many things at once.\n\n\n> 1. WAL insertion lock's variable insertingAt is currently read and\n> written with the help of lwlock's wait list lock to avoid\n> torn-free reads/writes. This wait list lock can become a point of\n> contention on a highly concurrent write workloads. Therefore, make\n> insertingAt a 64-bit atomic which inherently provides torn-free\n> reads/writes.\n\n\"inherently\" is a bit strong, given that we emulate 64bit atomics where not\navailable...\n\n\n> 2. LWLockUpdateVar currently acquires lwlock's wait list lock even when\n> there are no waiters at all. Add a fastpath exit to LWLockUpdateVar when\n> there are no waiters to avoid unnecessary locking.\n\nI don't think there's enough of an explanation for why this isn't racy.\n\nThe reason it's, I think, safe, is that anyone using LWLockConflictsWithVar()\nwill do so twice in a row, with a barrier inbetween. But that really relies on\nwhat I explain in the paragraph below:\n\n\n\n> It also adds notes on why LWLockConflictsWithVar doesn't need a\n> memory barrier as far as its current usage is concerned.\n\nI don't think:\n\t\t\t * NB: LWLockConflictsWithVar (which is called from\n\t\t\t * LWLockWaitForVar) relies on the spinlock used above in this\n\t\t\t * function and doesn't use a memory barrier.\n\nhelps to understand why any of this is safe to a meaningful degree.\n\n\nThe existing comments obviously aren't sufficient to explain this, but\nthe reason it's, I think, safe today, is that we are only waiting for\ninsertions that started before WaitXLogInsertionsToFinish() was called. The\nlack of memory barriers in the loop means that we might see locks as \"unused\"\nthat have *since* become used, which is fine, because they only can be for\nlater insertions that we wouldn't want to wait on anyway.\n\nNot taking a lock to acquire the current insertingAt value means that we might\nsee older insertingAt value. Which should also be fine, because if we read a\ntoo old value, we'll add ourselves to the queue, which contains atomic\noperations.\n\n\n\n> diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c\n> index 59347ab951..82266e6897 100644\n> --- a/src/backend/storage/lmgr/lwlock.c\n> +++ b/src/backend/storage/lmgr/lwlock.c\n> @@ -1547,9 +1547,8 @@ LWLockAcquireOrWait(LWLock *lock, LWLockMode mode)\n> * *result is set to true if the lock was free, and false otherwise.\n> */\n> static bool\n> -LWLockConflictsWithVar(LWLock *lock,\n> -\t\t\t\t\t uint64 *valptr, uint64 oldval, uint64 *newval,\n> -\t\t\t\t\t bool *result)\n> +LWLockConflictsWithVar(LWLock *lock, pg_atomic_uint64 *valptr, uint64 oldval,\n> +\t\t\t\t\t uint64 *newval, bool *result)\n> {\n> \tbool\t\tmustwait;\n> \tuint64\t\tvalue;\n> @@ -1572,13 +1571,11 @@ LWLockConflictsWithVar(LWLock *lock,\n> \t*result = false;\n>\n> \t/*\n> -\t * Read value using the lwlock's wait list lock, as we can't generally\n> -\t * rely on atomic 64 bit reads/stores. TODO: On platforms with a way to\n> -\t * do atomic 64 bit reads/writes the spinlock should be optimized away.\n> +\t * Reading the value atomically ensures that we don't need any explicit\n> +\t * locking. Note that in general, 64 bit atomic APIs in postgres inherently\n> +\t * provide explicit locking for the platforms without atomics support.\n> \t */\n\nThis comment seems off to me. Using atomics doesn't guarantee not needing\nlocking. It just guarantees that we are reading a non-torn value.\n\n\n\n> @@ -1605,9 +1602,14 @@ LWLockConflictsWithVar(LWLock *lock,\n> *\n> * Note: this function ignores shared lock holders; if the lock is held\n> * in shared mode, returns 'true'.\n> + *\n> + * Be careful that LWLockConflictsWithVar() does not include a memory barrier,\n> + * hence the caller of this function may want to rely on an explicit barrier or\n> + * a spinlock to avoid memory ordering issues.\n> */\n\ns/careful/aware/?\n\ns/spinlock/implied barrier via spinlock or lwlock/?\n\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Thu, 13 Jul 2023 15:47:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 4:17 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-07-13 14:04:31 -0700, Andres Freund wrote:\n> > From b74b6e953cb5a7e7ea1a89719893f6ce9e231bba Mon Sep 17 00:00:00 2001\n> > From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> > Date: Fri, 19 May 2023 15:00:21 +0000\n> > Subject: [PATCH v8] Optimize WAL insertion lock acquisition and release\n> >\n> > This commit optimizes WAL insertion lock acquisition and release\n> > in the following way:\n>\n> I think this commit does too many things at once.\n\nI've split the patch into three - 1) Make insertingAt 64-bit atomic.\n2) Have better commenting on why there's no memory barrier or spinlock\nin and around LWLockWaitForVar call sites. 3) Have a quick exit for\nLWLockUpdateVar.\n\n> > 1. WAL insertion lock's variable insertingAt is currently read and\n> > written with the help of lwlock's wait list lock to avoid\n> > torn-free reads/writes. This wait list lock can become a point of\n> > contention on a highly concurrent write workloads. Therefore, make\n> > insertingAt a 64-bit atomic which inherently provides torn-free\n> > reads/writes.\n>\n> \"inherently\" is a bit strong, given that we emulate 64bit atomics where not\n> available...\n\nModified.\n\n> > 2. LWLockUpdateVar currently acquires lwlock's wait list lock even when\n> > there are no waiters at all. Add a fastpath exit to LWLockUpdateVar when\n> > there are no waiters to avoid unnecessary locking.\n>\n> I don't think there's enough of an explanation for why this isn't racy.\n>\n> The reason it's, I think, safe, is that anyone using LWLockConflictsWithVar()\n> will do so twice in a row, with a barrier inbetween. But that really relies on\n> what I explain in the paragraph below:\n\nThe twice-in-a-row lock acquisition protocol used by LWLockWaitForVar\nis what helps us have a quick exit in LWLockUpdateVar. 
This is because\nLWLockWaitForVar ensures that waiters are added to the wait queue even if\nLWLockUpdateVar thinks that there aren't waiters. Is my understanding\ncorrect here?\n\n> > It also adds notes on why LWLockConflictsWithVar doesn't need a\n> > memory barrier as far as its current usage is concerned.\n>\n> I don't think:\n> * NB: LWLockConflictsWithVar (which is called from\n> * LWLockWaitForVar) relies on the spinlock used above in this\n> * function and doesn't use a memory barrier.\n>\n> helps to understand why any of this is safe to a meaningful degree.\n>\n> The existing comments aren't obviously aren't sufficient to explain this, but\n> the reason it's, I think, safe today, is that we are only waiting for\n> insertions that started before WaitXLogInsertionsToFinish() was called. The\n> lack of memory barriers in the loop means that we might see locks as \"unused\"\n> that have *since* become used, which is fine, because they only can be for\n> later insertions that we wouldn't want to wait on anyway.\n\nRight.\n\n> Not taking a lock to acquire the current insertingAt value means that we might\n> see older insertingAt value. Which should also be fine, because if we read a\n> too old value, we'll add ourselves to the queue, which contains atomic\n> operations.\n\nRight. Reading an older value makes us add ourselves to the queue in LWLockWaitForVar,\nand we should be woken up eventually by LWLockUpdateVar.\n\nThis matches my understanding. I used more or less your above\nwording in the 0002 patch.\n\n> > /*\n> > - * Read value using the lwlock's wait list lock, as we can't generally\n> > - * rely on atomic 64 bit reads/stores. TODO: On platforms with a way to\n> > - * do atomic 64 bit reads/writes the spinlock should be optimized away.\n> > + * Reading the value atomically ensures that we don't need any explicit\n> > + * locking. 
Note that in general, 64 bit atomic APIs in postgres inherently\n> > + * provide explicit locking for the platforms without atomics support.\n> > */\n>\n> This comment seems off to me. Using atomics doesn't guarantee not needing\n> locking. It just guarantees that we are reading a non-torn value.\n\nModified the comment.\n\n> > @@ -1605,9 +1602,14 @@ LWLockConflictsWithVar(LWLock *lock,\n> > *\n> > * Note: this function ignores shared lock holders; if the lock is held\n> > * in shared mode, returns 'true'.\n> > + *\n> > + * Be careful that LWLockConflictsWithVar() does not include a memory barrier,\n> > + * hence the caller of this function may want to rely on an explicit barrier or\n> > + * a spinlock to avoid memory ordering issues.\n> > */\n>\n> s/careful/aware/?\n>\n> s/spinlock/implied barrier via spinlock or lwlock/?\n\nDone.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 20 Jul 2023 14:38:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 02:38:29PM +0530, Bharath Rupireddy wrote:\n> On Fri, Jul 14, 2023 at 4:17 AM Andres Freund <andres@anarazel.de> wrote:\n>> I think this commit does too many things at once.\n> \n> I've split the patch into three - 1) Make insertingAt 64-bit atomic.\n> 2) Have better commenting on why there's no memory barrier or spinlock\n> in and around LWLockWaitForVar call sites. 3) Have a quick exit for\n> LWLockUpdateVar.\n\nFWIW, I was kind of already OK with 0001, as it shows most of the\ngains observed while 0003 had a limited impact:\nhttps://www.postgresql.org/message-id/CALj2ACWgeOPEKVY-TEPvno%3DVatyzrb-vEEP8hN7QqrQ%3DyPRupA%40mail.gmail.com\n\nIt is kind of a no-brainer to replace the spinlocks with atomic reads\nand writes there.\n\n>> I don't think:\n>> * NB: LWLockConflictsWithVar (which is called from\n>> * LWLockWaitForVar) relies on the spinlock used above in this\n>> * function and doesn't use a memory barrier.\n>>\n>> helps to understand why any of this is safe to a meaningful degree.\n>>\n>> The existing comments aren't obviously aren't sufficient to explain this, but\n>> the reason it's, I think, safe today, is that we are only waiting for\n>> insertions that started before WaitXLogInsertionsToFinish() was called. The\n>> lack of memory barriers in the loop means that we might see locks as \"unused\"\n>> that have *since* become used, which is fine, because they only can be for\n>> later insertions that we wouldn't want to wait on anyway.\n> \n> Right.\n\nFWIW, I always have a hard time coming back to this code and see it\nrely on undocumented assumptions with code in lwlock.c while we need\nto keep an eye in xlog.c (we take a spinlock there while the variable\nwait logic relies on it for ordering @-@). So the proposal of getting\nmore documentation in place via 0002 goes in the right direction.\n\n>> This comment seems off to me. Using atomics doesn't guarantee not needing\n>> locking. 
It just guarantees that we are reading a non-torn value.\n> \n> Modified the comment.\n\n- /*\n- * Read value using the lwlock's wait list lock, as we can't generally\n- * rely on atomic 64 bit reads/stores. TODO: On platforms with a way to\n- * do atomic 64 bit reads/writes the spinlock should be optimized away.\n- */\n- LWLockWaitListLock(lock);\n- value = *valptr;\n- LWLockWaitListUnlock(lock);\n+ /* Reading atomically avoids getting a torn value */\n+ value = pg_atomic_read_u64(valptr);\n\nShould this specify that this is specifically important for platforms\nwhere reading a uint64 could lead to a torn value read, if you apply\nthis term in this context? Sounding picky, I would make that a bit\nlonger, say something like that:\n\"Reading this value atomically is safe even on platforms where uint64\ncannot be read without observing a torn value.\"\n\nOnly xlogprefetcher.c uses the term \"torn\" for a value by the way, but\nfor a write.\n\n>>> @@ -1605,9 +1602,14 @@ LWLockConflictsWithVar(LWLock *lock,\n>>> *\n>>> * Note: this function ignores shared lock holders; if the lock is held\n>>> * in shared mode, returns 'true'.\n>>> + *\n>>> + * Be careful that LWLockConflictsWithVar() does not include a memory barrier,\n>>> + * hence the caller of this function may want to rely on an explicit barrier or\n>>> + * a spinlock to avoid memory ordering issues.\n>>> */\n>>\n>> s/careful/aware/?\n>>\n>> s/spinlock/implied barrier via spinlock or lwlock/?\n> \n> Done.\n\nOkay to mention a LWLock here, even if the sole caller of this routine\nrelies on a spinlock.\n\n0001 looks OK-ish seen from here. Thoughts?\n--\nMichael",
"msg_date": "Fri, 21 Jul 2023 14:59:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Fri, Jul 21, 2023 at 11:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> + /* Reading atomically avoids getting a torn value */\n> + value = pg_atomic_read_u64(valptr);\n>\n> Should this specify that this is specifically important for platforms\n> where reading a uint64 could lead to a torn value read, if you apply\n> this term in this context? Sounding picky, I would make that a bit\n> longer, say something like that:\n> \"Reading this value atomically is safe even on platforms where uint64\n> cannot be read without observing a torn value.\"\n>\n> Only xlogprefetcher.c uses the term \"torn\" for a value by the way, but\n> for a write.\n\nDone.\n\n> 0001 looks OK-ish seen from here. Thoughts?\n\nYes, it looks safe to me too. FWIW, 0001 essentially implements what\nan existing TODO comment introduced by commit 008608b9d5106 says:\n\n /*\n * Read value using the lwlock's wait list lock, as we can't generally\n * rely on atomic 64 bit reads/stores. TODO: On platforms with a way to\n * do atomic 64 bit reads/writes the spinlock should be optimized away.\n */\n\nI'm attaching v10 patch set here - 0001 has modified the comment as\nabove, no other changes in patch set.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 22 Jul 2023 13:08:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Sat, Jul 22, 2023 at 01:08:49PM +0530, Bharath Rupireddy wrote:\n> Yes, it looks safe to me too.\n\n0001 has been now applied. I have done more tests while looking at\nthis patch since yesterday and was surprised to see higher TPS numbers\non HEAD with the same tests as previously, and the patch was still\nshining with more than 256 clients.\n\n> FWIW, 0001 essentially implements what\n> an existing TODO comment introduced by commit 008608b9d5106 says:\n\nWe really need to do something in terms of documentation with\nsomething like 0002, so I'll try to look at that next. Regarding\n0003, I don't know. I think that we'd better look more into cases\nwhere it shows actual benefits for specific workloads (like workloads\nwith a fixed rate of read and/or write operations?).\n--\nMichael",
"msg_date": "Tue, 25 Jul 2023 16:43:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-25 16:43:16 +0900, Michael Paquier wrote:\n> On Sat, Jul 22, 2023 at 01:08:49PM +0530, Bharath Rupireddy wrote:\n> > Yes, it looks safe to me too.\n> \n> 0001 has been now applied. I have done more tests while looking at\n> this patch since yesterday and was surprised to see higher TPS numbers\n> on HEAD with the same tests as previously, and the patch was still\n> shining with more than 256 clients.\n> \n> > FWIW, 0001 essentially implements what\n> > an existing TODO comment introduced by commit 008608b9d5106 says:\n> \n> We really need to do something in terms of documentation with\n> something like 0002, so I'll try to look at that next. Regarding\n> 0003, I don't know. I think that we'd better look more into cases\n> where it shows actual benefits for specific workloads (like workloads\n> with a fixed rate of read and/or write operations?).\n\nFWIW, I'm working on a patch that replaces WAL insert locks as a whole,\nbecause they don't scale all that well. If there's no very clear improvements,\nI'm not sure it's worth putting too much effort into polishing them all that\nmuch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Jul 2023 09:49:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-25 16:43:16 +0900, Michael Paquier wrote:\n> On Sat, Jul 22, 2023 at 01:08:49PM +0530, Bharath Rupireddy wrote:\n> > Yes, it looks safe to me too.\n> \n> 0001 has been now applied. I have done more tests while looking at\n> this patch since yesterday and was surprised to see higher TPS numbers\n> on HEAD with the same tests as previously, and the patch was still\n> shining with more than 256 clients.\n\nJust a small heads up:\n\nI just rebased my aio tree over the commit and promptly, on the first run, saw\na hang. I did some debugging on that. Unfortunately repeated runs haven't\nrepeated that hang, despite quite a bit of trying.\n\nThe symptom I was seeing is that all running backends were stuck in\nLWLockWaitForVar(), even though the value they're waiting for had\nchanged. Which obviously \"shouldn't be possible\".\n\nIt's of course possible that this is AIO specific, but I didn't see anything\nin stacks to suggest that.\n\n\nI do wonder if this possibly exposed an undocumented prior dependency on the\nvalue update always happening under the list lock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Jul 2023 12:57:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 12:57:37PM -0700, Andres Freund wrote:\n> I just rebased my aio tree over the commit and promptly, on the first run, saw\n> a hang. I did some debugging on that. Unfortunately repeated runs haven't\n> repeated that hang, despite quite a bit of trying.\n> \n> The symptom I was seeing is that all running backends were stuck in\n> LWLockWaitForVar(), even though the value they're waiting for had\n> changed. Which obviously \"shouldn't be possible\".\n\nHmm. I've also spent a few days looking at this past report that made\nthe LWLock part what it is today, but I don't quite see immediately\nhow it would be possible to reach a state where all the backends are\nwaiting for an update that's not happening:\nhttps://www.postgresql.org/message-id/CAMkU=1zLztROwH3B42OXSB04r9ZMeSk3658qEn4_8+b+K3E7nQ@mail.gmail.com\n\nAll the assumptions of this code and its dependencies with\nxloginsert.c are hard to come by.\n\n> It's of course possible that this is AIO specific, but I didn't see anything\n> in stacks to suggest that.\n\nOr AIO handles the WAL syncs so quickly that it has more chances in\nshowing a race condition here?\n\n> I do wonder if this possibly exposed an undocumented prior dependency on the\n> value update always happening under the list lock.\n\nI would not be surprised by that.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 07:39:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 09:49:01AM -0700, Andres Freund wrote:\n> FWIW, I'm working on a patch that replaces WAL insert locks as a whole,\n> because they don't scale all that well.\n\nWhat were you looking at here? Just wondering.\n--\nMichael",
"msg_date": "Wed, 26 Jul 2023 07:40:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "Hi,\n\nOn 2023-07-26 07:40:31 +0900, Michael Paquier wrote:\n> On Tue, Jul 25, 2023 at 09:49:01AM -0700, Andres Freund wrote:\n> > FWIW, I'm working on a patch that replaces WAL insert locks as a whole,\n> > because they don't scale all that well.\n>\n> What were you looking at here? Just wondering.\n\nHere's what I had written offlist a few days ago:\n\n> The basic idea is to have a ringbuffer of in-progress insertions. The\n> acquisition of a position in the ringbuffer is done at the same time as\n> advancing the reserved LSN, using a 64bit xadd. The trick that makes that\n> possible is to use the high bits of the atomic for the position in the\n> ringbuffer. The point of using the high bits is that they wrap around, without\n> affecting the rest of the value.\n>\n> Of course, when using xadd, we can't keep the \"prev\" pointer in the\n> atomic. That's where the ringbuffer comes into play. Whenever one inserter has\n> determined the byte pos of its insertion, it updates the \"prev byte pos\" in\n> the *next* ringbuffer entry.\n>\n> Of course that means that insertion N+1 needs to wait for N to set the prev\n> position - but that happens very quickly. In my very hacky prototype the\n> relevant path (which for now just spins) is reached very rarely, even when\n> massively oversubscribed. While I've not implemented that, N+1 could actually\n> do the first \"iteration\" in CopyXLogRecordToWAL() before it needs the prev\n> position, the COMP_CRC32C() could happen \"inside\" the buffer.\n>\n>\n> There's a fair bit of trickiness in turning that into something working, of\n> course :). Ensuring that the ring buffer of insertions doesn't wrap around is\n> non-trivial. 
Nor is it trivial to ensure that the \"reduced\" space LSN in the\n> atomic can't overflow.\n>\n> I do wish MAX_BACKENDS were smaller...\n>\n>\n> Until last night I thought all my schemes would continue to need something\n> like the existing WAL insertion locks, to implement\n> WALInsertLockAcquireExclusive().\n>\n> But I think I came up with an idea to do away with that (not even prototyped\n> yet): Use one bit in the atomic that indicates that no new insertions are\n> allowed. Whenever the xadd finds that old value actually was locked, it\n> \"aborts\" the insertion, and instead waits for a condition variable (or\n> something similar). Of course that's after modifying the atomic - to deal with\n> that the \"lock holder\" reverts all modifications that have been made to the\n> atomic when releasing the \"lock\", they weren't actually successful and all\n> those backends will retry.\n>\n> Except that this doesn't quite suffice - XLogInsertRecord() needs to be able\n> to \"roll back\", when it finds that we now need to log FPIs. I can't quite see\n> how to make that work with what I describe above. The only idea I have so far\n> is to just waste the space with a NOOP record - it should be pretty rare. At\n> least if we updated RedoRecPtr eagerly (or just stopped this stupid business\n> of having an outdated backend-local copy).\n>\n>\n>\n> My prototype shows this idea to be promising. It's a tad slower at low\n> concurrency, but much better at high concurrency. I think most if not all of\n> the low-end overhead isn't inherent, but comes from having both old and new\n> infrastructure in place (as well as a lot of debugging cruft).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 25 Jul 2023 15:46:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 04:43:16PM +0900, Michael Paquier wrote:\n> We really need to do something in terms of documentation with\n> something like 0002, so I'll try to look at that next.\n\nI have applied a slightly-tweaked version of 0002 as of 66d86d4 to\nimprove a bit the documentation of the area, and switched the CF entry\nas committed.\n\n(I got interested in what Andres has seen on his latest AIO branch, so\nI have a few extra benchmarks running in the background on HEAD, but\nnothing able to freeze all the backends yet waiting for a variable\nupdate. These are still running now.)\n--\nMichael",
"msg_date": "Thu, 27 Jul 2023 13:36:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 1:27 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > 0001 has been now applied. I have done more tests while looking at\n> > this patch since yesterday and was surprised to see higher TPS numbers\n> > on HEAD with the same tests as previously, and the patch was still\n> > shining with more than 256 clients.\n>\n> Just a small heads up:\n>\n> I just rebased my aio tree over the commit and promptly, on the first run, saw\n> a hang. I did some debugging on that. Unfortunately repeated runs haven't\n> repeated that hang, despite quite a bit of trying.\n\nHm. Please share workload details, test scripts, system info and any\nspecial settings for running in my setup.\n\n> The symptom I was seeing is that all running backends were stuck in\n> LWLockWaitForVar(), even though the value they're waiting for had\n> changed. Which obviously \"shouldn't be possible\".\n\nWere the backends stuck there indefinitely? IOW, did they get into a deadlock?\n\n> It's of course possible that this is AIO specific, but I didn't see anything\n> in stacks to suggest that.\n>\n> I do wonder if this possibly exposed an undocumented prior dependency on the\n> value update always happening under the list lock.\n\nI'm going through the other thread mentioned by Michael Paquier. I'm\nwondering if the deadlock issue illustrated here\nhttps://www.postgresql.org/message-id/55BB50D3.9000702%40iki.fi is\nshowing up again, because 71e4cc6b8e reduced the contention on\nwaitlist lock and made things *a bit* faster.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 28 Jul 2023 16:57:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 04:43:16PM +0900, Michael Paquier wrote:\n> 0001 has been now applied. I have done more tests while looking at\n> this patch since yesterday and was surprised to see higher TPS numbers\n> on HEAD with the same tests as previously, and the patch was still\n> shining with more than 256 clients.\n\nI found this code when searching for callers that use atomic exchanges as\natomic writes with barriers (for a separate thread [0]). Can't we use\npg_atomic_write_u64() here since the locking functions that follow should\nserve as barriers?\n\nI've attached a patch to demonstrate what I'm thinking. This might be more\nperformant, although maybe less so after commit 64b1fb5. Am I missing\nsomething obvious here? If not, I might rerun the benchmarks to see\nwhether it makes any difference.\n\n[0] https://www.postgresql.org/message-id/flat/20231110205128.GB1315705%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 18 Dec 2023 22:00:29 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
},
{
"msg_contents": "On Mon, Dec 18, 2023 at 10:00:29PM -0600, Nathan Bossart wrote:\n> I found this code when searching for callers that use atomic exchanges as\n> atomic writes with barriers (for a separate thread [0]). Can't we use\n> pg_atomic_write_u64() here since the locking functions that follow should\n> serve as barriers?\n\nThe full barrier guaranteed by pg_atomic_exchange_u64() in\nLWLockUpdateVar() was also necessary for the shortcut based on\nread_u32() to see if there are no waiters, but it has been discarded\nin the later versions of the patch because it did not influence\nperformance under heavy WAL inserts.\n\nSo you mean to rely on the full barriers taken by the fetches in\nLWLockRelease() and LWLockWaitListLock()? Hmm, I got the impression\nthat pg_atomic_exchange_u64() with its full barrier was necessary in\nthese two paths so that all the loads and stores are completed *before*\nupdating these variables. So I am not sure I see why it would be\nsafe to switch to a write with no barrier.\n\n> I've attached a patch to demonstrate what I'm thinking. This might be more\n> performant, although maybe less so after commit 64b1fb5. Am I missing\n> something obvious here? If not, I might rerun the benchmarks to see\n> whether it makes any difference.\n\nI was wondering as well if the numbers we got upthread would go up\nafter what you have committed to improve the exchanges. :)\nAny change in this area should be strictly benchmarked. \n--\nMichael",
"msg_date": "Thu, 21 Dec 2023 14:38:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: WAL Insertion Lock Improvements"
}
] |
[
{
"msg_contents": "Hi,\n\nTo the developers working on pg_rman, are there any plans to support PG15\nand when might pg_rman source be released?\nThe latest version of pg_rman, V1.3.14, appears to be incompatible with\nPG15.\n(In PG15, pg_start_backup()/pg_stop_backup() have been renamed.)",
"msg_date": "Fri, 25 Nov 2022 01:31:12 +0900",
"msg_from": "T Adachi <adachi.t01@gmail.com>",
"msg_from_op": true,
"msg_subject": "Does pg_rman support PG15?"
},
{
"msg_contents": "> On 24 Nov 2022, at 17:31, T Adachi <adachi.t01@gmail.com> wrote:\n\n> To the developers working on pg_rman, are there any plans to support PG15\n> and when might pg_rman source be released?\n> The latest version of pg_rman, V1.3.14, appears to be incompatible with PG15.\n> (In PG15, pg_start_backup()/pg_stop_backup() have been renamed.)\n\nDiscussions on individual extensions is best kept within the forum that the\nextension use for discussions. For pg_rman I recommend opening an issue on\ntheir Github project page.\n\n\thttps://github.com/ossc-db/pg_rman\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 13:13:50 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Does pg_rman support PG15?"
}
] |
[
{
"msg_contents": "Hello,\n\nI found that qual_is_pushdown_safe() has an argument \"subquery\"\nthat is not used in the function. This argument has not been\nreferred to since the commit 964c0d0f80e485dd3a4073e073ddfd9bfdda90b2.\n\nI think we can remove this if there is no special reason. \n\nRegards,\nYugo Nagata \n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Fri, 25 Nov 2022 15:27:02 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 2:27 PM Yugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> I found that qual_is_pushdown_safe() has an argument \"subquery\"\n> that is not used in the function. This argument has not been\n> referred to since the commit 964c0d0f80e485dd3a4073e073ddfd9bfdda90b2.\n>\n> I think we can remove this if there is no special reason.\n\n\n+1. In 964c0d0f the checks in qual_is_pushdown_safe() that need to\nreference 'subquery' were moved to subquery_is_pushdown_safe(), so param\n'subquery' is not used any more. I think we can just remove it.\n\nI wonder if we need to revise the comment atop qual_is_pushdown_safe()\ntoo which says\n\n * rinfo is a restriction clause applying to the given subquery (whose RTE\n * has index rti in the parent query).\n\nsince there is no 'given subquery' after we remove it from the params.\n\nThanks\nRichard",
"msg_date": "Fri, 25 Nov 2022 16:05:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n> +1. In 964c0d0f the checks in qual_is_pushdown_safe() that need to\n> reference 'subquery' were moved to subquery_is_pushdown_safe(), so param\n> 'subquery' is not used any more. I think we can just remove it.\n> \n> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n> too which says\n> \n> * rinfo is a restriction clause applying to the given subquery (whose RTE\n> * has index rti in the parent query).\n> \n> since there is no 'given subquery' after we remove it from the params.\n\nWhen it comes to specific subpaths of the tree, it is sometimes good\nto keep some symmetry in the arguments of the sub-routines used, but\nthat does not seem to apply much to allpaths.c. Removing that is fine\nby me, so let's do this.\n--\nMichael",
"msg_date": "Mon, 28 Nov 2022 11:54:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 11:54:45AM +0900, Michael Paquier wrote:\n> On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n>> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n>> too which says\n>> \n>> * rinfo is a restriction clause applying to the given subquery (whose RTE\n>> * has index rti in the parent query).\n>> \n>> since there is no 'given subquery' after we remove it from the params.\n\nI was thinking about this point, and it seems to me that we could just\ndo s/the given subquery/a subquery/. But perhaps you have a different\nview on the matter?\n--\nMichael",
"msg_date": "Mon, 28 Nov 2022 16:40:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "On Mon, 28 Nov 2022 16:40:52 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Nov 28, 2022 at 11:54:45AM +0900, Michael Paquier wrote:\n> > On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n> >> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n> >> too which says\n> >> \n> >> * rinfo is a restriction clause applying to the given subquery (whose RTE\n> >> * has index rti in the parent query).\n> >> \n> >> since there is no 'given subquery' after we remove it from the params.\n> \n> I was thinking about this point, and it seems to me that we could just\n> do s/the given subquery/a subquery/. But perhaps you have a different\n> view on the matter?\n\n+1\nI also was just about to send a patch updated as so, and this is attached.\n\nRegards,\nYugo Nagata\n\n> --\n> Michael\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>",
"msg_date": "Mon, 28 Nov 2022 16:56:36 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 3:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> > On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n> >> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n> >> too which says\n> >>\n> >> * rinfo is a restriction clause applying to the given subquery (whose\n> RTE\n> >> * has index rti in the parent query).\n> >>\n> >> since there is no 'given subquery' after we remove it from the params.\n>\n> I was thinking about this point, and it seems to me that we could just\n> do s/the given subquery/a subquery/. But perhaps you have a different\n> view on the matter?\n\n\nI think the new wording is good. Thanks for the change.\n\nThanks\nRichard\n\nOn Mon, Nov 28, 2022 at 3:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n>> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n>> too which says\n>> \n>> * rinfo is a restriction clause applying to the given subquery (whose RTE\n>> * has index rti in the parent query).\n>> \n>> since there is no 'given subquery' after we remove it from the params.\n\nI was thinking about this point, and it seems to me that we could just\ndo s/the given subquery/a subquery/. But perhaps you have a different\nview on the matter? I think the new wording is good. Thanks for the change.ThanksRichard",
"msg_date": "Mon, 28 Nov 2022 16:12:46 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Nov 28, 2022 at 11:54:45AM +0900, Michael Paquier wrote:\n>> On Fri, Nov 25, 2022 at 04:05:13PM +0800, Richard Guo wrote:\n> I wonder if we need to revise the comment atop qual_is_pushdown_safe()\n> too which says\n> \n> * rinfo is a restriction clause applying to the given subquery (whose RTE\n> * has index rti in the parent query).\n> \n> since there is no 'given subquery' after we remove it from the params.\n\n> I was thinking about this point, and it seems to me that we could just\n> do s/the given subquery/a subquery/. But perhaps you have a different\n> view on the matter?\n\nMy viewpoint is that this change is misguided. Even if the current\ncoding of qual_is_pushdown_safe doesn't happen to reference the\nsubquery, it might need to in future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 09:15:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
},
{
"msg_contents": "> On 28 Nov 2022, at 15:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> My viewpoint is that this change is misguided. Even if the current\n> coding of qual_is_pushdown_safe doesn't happen to reference the\n> subquery, it might need to in future.\n\nIf I understand the code correctly the variable has some value in terms of\n\"documenting the code\" (for lack of better terminology), and I would assume\nvirtually every modern compiler to figure out it's not needed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 28 Nov 2022 15:31:33 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove a unused argument from qual_is_pushdown_safe"
}
] |
[
{
"msg_contents": "During DecodeCommit() for skipping a transaction we use ReadRecPtr to\ncheck whether to skip this transaction or not. Whereas in\nReorderBufferCanStartStreaming() we use EndRecPtr to check whether to\nstream or not. Generally it will not create a problem but if the\ncommit record itself is adding some changes to the transaction(e.g.\nsnapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\nEndRecPtr then streaming will decide to stream the transaction where\nas DecodeCommit will decide to skip it. And for handling this case in\nReorderBufferForget() we call stream_abort().\n\nSo ideally if we are planning to skip the transaction we should never\nstream it hence there is no need to stream abort such transaction in\ncase of skip.\n\nIn this patch I have fixed the skip condition in the streaming case\nand also added an assert inside ReorderBufferForget() to ensure that\nthe transaction should have never been streamed.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 25 Nov 2022 13:35:24 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid streaming the transaction which are skipped (in corner cases)"
},
{
"msg_contents": "Excellent catch. We were looking at this code last week and wondered\nthe purpose of this abort. Probably we should have some macro or\nfunction to decided whether to skip a transaction based on log record.\nThat will avoid using different values in different places.\n\nOn Fri, Nov 25, 2022 at 1:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> During DecodeCommit() for skipping a transaction we use ReadRecPtr to\n> check whether to skip this transaction or not. Whereas in\n> ReorderBufferCanStartStreaming() we use EndRecPtr to check whether to\n> stream or not. Generally it will not create a problem but if the\n> commit record itself is adding some changes to the transaction(e.g.\n> snapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\n> EndRecPtr then streaming will decide to stream the transaction where\n> as DecodeCommit will decide to skip it. And for handling this case in\n> ReorderBufferForget() we call stream_abort().\n>\n> So ideally if we are planning to skip the transaction we should never\n> stream it hence there is no need to stream abort such transaction in\n> case of skip.\n>\n> In this patch I have fixed the skip condition in the streaming case\n> and also added an assert inside ReorderBufferForget() to ensure that\n> the transaction should have never been streamed.\n>\n> --\n> Regards,\n> Dilip Kumar\n> EnterpriseDB: http://www.enterprisedb.com\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 25 Nov 2022 16:04:25 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 1:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> During DecodeCommit() for skipping a transaction we use ReadRecPtr to\n> check whether to skip this transaction or not. Whereas in\n> ReorderBufferCanStartStreaming() we use EndRecPtr to check whether to\n> stream or not. Generally it will not create a problem but if the\n> commit record itself is adding some changes to the transaction(e.g.\n> snapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\n> EndRecPtr then streaming will decide to stream the transaction where\n> as DecodeCommit will decide to skip it. And for handling this case in\n> ReorderBufferForget() we call stream_abort().\n>\n\nThe other cases are probably where we don't have FilterByOrigin or\ndbid check, for example, XLOG_HEAP2_NEW_CID/XLOG_XACT_INVALIDATIONS.\nWe anyway actually don't send anything for such cases except empty\nstart/stop messages. Can we add some flag to txn which says that there\nis at least one change like DML that we want to stream? Then we can\nuse that flag to decide whether to stream or not.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Nov 2022 17:38:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 4:04 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Excellent catch. We were looking at this code last week and wondered\n> the purpose of this abort. Probably we should have some macro or\n> function to decided whether to skip a transaction based on log record.\n> That will avoid using different values in different places.\n\nWe do have a common function i.e. SnapBuildXactNeedsSkip() but there\nare two problems 1) it has a dependency on the input parameter so the\nresult may vary based on the input 2) this is only checked based on\nthe LSN but there are other factors dbid and originid based on those\nalso transaction could be skipped during DecodeCommit. So I think one\npossible solution could be to remember a dbid and originid in\nReorderBufferTXN as soon as we get the first change which has valid\nvalues for these parameters. And now as you suggested have a common\nfunction that will be used by streaming as well as by DecodeCommit to\ndecide on whether to skip or not.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Nov 2022 10:58:55 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 5:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 25, 2022 at 1:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > During DecodeCommit() for skipping a transaction we use ReadRecPtr to\n> > check whether to skip this transaction or not. Whereas in\n> > ReorderBufferCanStartStreaming() we use EndRecPtr to check whether to\n> > stream or not. Generally it will not create a problem but if the\n> > commit record itself is adding some changes to the transaction(e.g.\n> > snapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\n> > EndRecPtr then streaming will decide to stream the transaction where\n> > as DecodeCommit will decide to skip it. And for handling this case in\n> > ReorderBufferForget() we call stream_abort().\n> >\n>\n> The other cases are probably where we don't have FilterByOrigin or\n> dbid check, for example, XLOG_HEAP2_NEW_CID/XLOG_XACT_INVALIDATIONS.\n> We anyway actually don't send anything for such cases except empty\n> start/stop messages. Can we add some flag to txn which says that there\n> is at least one change like DML that we want to stream?\n>\n\nWe can probably think of using txn_flags for this purpose.\n\n> Then we can\n> use that flag to decide whether to stream or not.\n>\n\nThe other possibility here is to use ReorderBufferTXN's base_snapshot.\nNormally, we don't stream unless the base_snapshot is set. However,\nthere are a few challenges (a) the base_snapshot is set in\nSnapBuildProcessChange which is called before DecodeInsert, and\nsimilar APIs. So, now even if we filter out insert due origin or\ndb_id, the base_snapshot will be set. (b) we currently set it to\nexecute invalidations even when there are no changes to send to\ndownstream.\n\nFor (a), I guess we can split SnapBuildProcessChange, such that the\npart where we set the base snapshot will be done after DecodeInsert\nand similar APIs decide to queue the change. 
I am less sure about\npoint (b), ideally, if we don't need a snapshot to execute\ninvalidations, I think we can avoid setting base_snapshot even in that\ncase. Then we can use it as a check whether we want to stream\nanything.\n\nI feel this approach required a lot more changes and bit riskier as\ncompared to having a flag in txn_flags.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 26 Nov 2022 12:15:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 10:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Nov 25, 2022 at 4:04 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Excellent catch. We were looking at this code last week and wondered\n> > the purpose of this abort. Probably we should have some macro or\n> > function to decided whether to skip a transaction based on log record.\n> > That will avoid using different values in different places.\n>\n> We do have a common function i.e. SnapBuildXactNeedsSkip() but there\n> are two problems 1) it has a dependency on the input parameter so the\n> result may vary based on the input 2) this is only checked based on\n> the LSN but there are other factors dbid and originid based on those\n> also transaction could be skipped during DecodeCommit. So I think one\n> possible solution could be to remember a dbid and originid in\n> ReorderBufferTXN as soon as we get the first change which has valid\n> values for these parameters.\n>\n\nBut is the required information say 'dbid' available in all records,\nfor example, what about XLOG_XACT_INVALIDATIONS? The other thing to\nconsider in this regard is if we are planning to have additional\ninformation as mentioned by me in another to decide whether to stream\nor not then the additional checks may be redundant anyway. It is a\ngood idea to have a common check at both places but if not, we can at\nleast add some comments to say why the check is different.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 26 Nov 2022 16:55:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 25, 2022 at 5:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Nov 25, 2022 at 1:35 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > During DecodeCommit() for skipping a transaction we use ReadRecPtr to\n> > > check whether to skip this transaction or not. Whereas in\n> > > ReorderBufferCanStartStreaming() we use EndRecPtr to check whether to\n> > > stream or not. Generally it will not create a problem but if the\n> > > commit record itself is adding some changes to the transaction(e.g.\n> > > snapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\n> > > EndRecPtr then streaming will decide to stream the transaction where\n> > > as DecodeCommit will decide to skip it. And for handling this case in\n> > > ReorderBufferForget() we call stream_abort().\n> > >\n> >\n> > The other cases are probably where we don't have FilterByOrigin or\n> > dbid check, for example, XLOG_HEAP2_NEW_CID/XLOG_XACT_INVALIDATIONS.\n> > We anyway actually don't send anything for such cases except empty\n> > start/stop messages. Can we add some flag to txn which says that there\n> > is at least one change like DML that we want to stream?\n> >\n>\n> We can probably think of using txn_flags for this purpose.\n\nIn the attached patch I have used txn_flags to identify whether it has\nany streamable change or not and the transaction will not be selected\nfor streaming unless it has at least one streamable change.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 27 Nov 2022 11:03:07 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Sun, Nov 27, 2022 1:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Sat, Nov 26, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Fri, Nov 25, 2022 at 5:38 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Fri, Nov 25, 2022 at 1:35 PM Dilip Kumar <dilipbalaut@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > During DecodeCommit() for skipping a transaction we use ReadRecPtr to\r\n> > > > check whether to skip this transaction or not. Whereas in\r\n> > > > ReorderBufferCanStartStreaming() we use EndRecPtr to check whether\r\n> to\r\n> > > > stream or not. Generally it will not create a problem but if the\r\n> > > > commit record itself is adding some changes to the transaction(e.g.\r\n> > > > snapshot) and if the \"start_decoding_at\" is in between ReadRecPtr and\r\n> > > > EndRecPtr then streaming will decide to stream the transaction where\r\n> > > > as DecodeCommit will decide to skip it. And for handling this case in\r\n> > > > ReorderBufferForget() we call stream_abort().\r\n> > > >\r\n> > >\r\n> > > The other cases are probably where we don't have FilterByOrigin or\r\n> > > dbid check, for example,\r\n> XLOG_HEAP2_NEW_CID/XLOG_XACT_INVALIDATIONS.\r\n> > > We anyway actually don't send anything for such cases except empty\r\n> > > start/stop messages. 
Can we add some flag to txn which says that there\r\n> > > is at least one change like DML that we want to stream?\r\n> > >\r\n> >\r\n> > We can probably think of using txn_flags for this purpose.\r\n> \r\n> In the attached patch I have used txn_flags to identify whether it has\r\n> any streamable change or not and the transaction will not be selected\r\n> for streaming unless it has at least one streamable change.\r\n> \r\n\r\nThanks for your patch.\r\n\r\nI saw that the patch added a check when selecting largest transaction, but in\r\naddition to ReorderBufferCheckMemoryLimit(), the transaction can also be\r\nstreamed in ReorderBufferProcessPartialChange(). Should we add the check in\r\nthis function, too?\r\n\r\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\r\nindex 9a58c4bfb9..108737b02f 100644\r\n--- a/src/backend/replication/logical/reorderbuffer.c\r\n+++ b/src/backend/replication/logical/reorderbuffer.c\r\n@@ -768,7 +768,8 @@ ReorderBufferProcessPartialChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\r\n \t */\r\n \tif (ReorderBufferCanStartStreaming(rb) &&\r\n \t\t!(rbtxn_has_partial_change(toptxn)) &&\r\n-\t\trbtxn_is_serialized(txn))\r\n+\t\trbtxn_is_serialized(txn) &&\r\n+\t\trbtxn_has_streamable_change(txn))\r\n \t\tReorderBufferStreamTXN(rb, toptxn);\r\n }\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Mon, 28 Nov 2022 08:16:03 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 1:46 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Thanks for your patch.\n>\n> I saw that the patch added a check when selecting largest transaction, but in\n> addition to ReorderBufferCheckMemoryLimit(), the transaction can also be\n> streamed in ReorderBufferProcessPartialChange(). Should we add the check in\n> this function, too?\n>\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index 9a58c4bfb9..108737b02f 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -768,7 +768,8 @@ ReorderBufferProcessPartialChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> */\n> if (ReorderBufferCanStartStreaming(rb) &&\n> !(rbtxn_has_partial_change(toptxn)) &&\n> - rbtxn_is_serialized(txn))\n> + rbtxn_is_serialized(txn) &&\n> + rbtxn_has_streamable_change(txn))\n> ReorderBufferStreamTXN(rb, toptxn);\n> }\n\nYou are right we need this in ReorderBufferProcessPartialChange() as\nwell. I will fix this in the next version.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 15:19:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 3:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Nov 28, 2022 at 1:46 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Thanks for your patch.\n> >\n> > I saw that the patch added a check when selecting largest transaction, but in\n> > addition to ReorderBufferCheckMemoryLimit(), the transaction can also be\n> > streamed in ReorderBufferProcessPartialChange(). Should we add the check in\n> > this function, too?\n> >\n> > diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> > index 9a58c4bfb9..108737b02f 100644\n> > --- a/src/backend/replication/logical/reorderbuffer.c\n> > +++ b/src/backend/replication/logical/reorderbuffer.c\n> > @@ -768,7 +768,8 @@ ReorderBufferProcessPartialChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> > */\n> > if (ReorderBufferCanStartStreaming(rb) &&\n> > !(rbtxn_has_partial_change(toptxn)) &&\n> > - rbtxn_is_serialized(txn))\n> > + rbtxn_is_serialized(txn) &&\n> > + rbtxn_has_streamable_change(txn))\n> > ReorderBufferStreamTXN(rb, toptxn);\n> > }\n>\n> You are right we need this in ReorderBufferProcessPartialChange() as\n> well. I will fix this in the next version.\n\nFixed this in the attached patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 29 Nov 2022 09:37:51 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Tuesday, November 29, 2022 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> On Mon, Nov 28, 2022 at 3:19 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > On Mon, Nov 28, 2022 at 1:46 PM shiy.fnst@fujitsu.com\r\n> > <shiy.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > Thanks for your patch.\r\n> > >\r\n> > > I saw that the patch added a check when selecting largest\r\n> > > transaction, but in addition to ReorderBufferCheckMemoryLimit(), the\r\n> > > transaction can also be streamed in\r\n> > > ReorderBufferProcessPartialChange(). Should we add the check in this\r\n> function, too?\r\n> > >\r\n> > > diff --git a/src/backend/replication/logical/reorderbuffer.c\r\n> > > b/src/backend/replication/logical/reorderbuffer.c\r\n> > > index 9a58c4bfb9..108737b02f 100644\r\n> > > --- a/src/backend/replication/logical/reorderbuffer.c\r\n> > > +++ b/src/backend/replication/logical/reorderbuffer.c\r\n> > > @@ -768,7 +768,8 @@\r\n> ReorderBufferProcessPartialChange(ReorderBuffer *rb, ReorderBufferTXN\r\n> *txn,\r\n> > > */\r\n> > > if (ReorderBufferCanStartStreaming(rb) &&\r\n> > > !(rbtxn_has_partial_change(toptxn)) &&\r\n> > > - rbtxn_is_serialized(txn))\r\n> > > + rbtxn_is_serialized(txn) &&\r\n> > > + rbtxn_has_streamable_change(txn))\r\n> > > ReorderBufferStreamTXN(rb, toptxn); }\r\n> >\r\n> > You are right we need this in ReorderBufferProcessPartialChange() as\r\n> > well. I will fix this in the next version.\r\n> \r\n> Fixed this in the attached patch.\r\n\r\nThanks for updating the patch.\r\n\r\nI have few comments about the patch.\r\n\r\n1.\r\n\r\n1.1.\r\n-\t/* For streamed transactions notify the remote node about the abort. 
*/\r\n-\tif (rbtxn_is_streamed(txn))\r\n-\t\trb->stream_abort(rb, txn, lsn);\r\n+\t/* the transaction which is being skipped shouldn't have been streamed */\r\n+\tAssert(!rbtxn_is_streamed(txn));\r\n\r\n1.2\r\n-\t\trbtxn_is_serialized(txn))\r\n+\t\trbtxn_is_serialized(txn) &&\r\n+\t\trbtxn_has_streamable_change(txn))\r\n \t\tReorderBufferStreamTXN(rb, toptxn);\r\n\r\nIn the above two places, I think we should do the check for the top-level\r\ntransaction(e.g. toptxn) because the patch only set flag for the top-level\r\ntransaction.\r\n\r\n2.\r\n\r\n+\t/*\r\n+\t * If there are any streamable changes getting queued then get the top\r\n+\t * transaction and mark it has streamable change. This is required for\r\n+\t * streaming in-progress transactions, the in-progress transaction will\r\n+\t * not be selected for streaming unless it has at least one streamable\r\n+\t * change.\r\n+\t */\r\n+\tif (change->action == REORDER_BUFFER_CHANGE_INSERT ||\r\n+\t\tchange->action == REORDER_BUFFER_CHANGE_UPDATE ||\r\n+\t\tchange->action == REORDER_BUFFER_CHANGE_DELETE ||\r\n+\t\tchange->action == REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT ||\r\n+\t\tchange->action == REORDER_BUFFER_CHANGE_TRUNCATE)\r\n\r\nI think that a transaction that contains REORDER_BUFFER_CHANGE_MESSAGE can also be\r\nconsidered as streamable. Is there a reason that we don't check it here ?\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Tue, 29 Nov 2022 06:53:44 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "Hi Dilip,\n\nOn Tue, Nov 29, 2022 at 9:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > You are right we need this in ReorderBufferProcessPartialChange() as\n> > well. I will fix this in the next version.\n>\n> Fixed this in the attached patch.\n>\n\nI focused my attention on SnapBuildXactNeedsSkip() usages and I see\nthey are using different end points of WAL record\n1 decode.c logicalmsg_decode 594\nSnapBuildXactNeedsSkip(builder, buf->origptr)))\n2 decode.c DecodeTXNNeedSkip 1250 return\n(SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n3 reorderbuffer.c AssertTXNLsnOrder 897 if\n(SnapBuildXactNeedsSkip(ctx->snapshot_builder,\nctx->reader->EndRecPtr))\n4 reorderbuffer.c ReorderBufferCanStartStreaming 3922\n!SnapBuildXactNeedsSkip(builder, ctx->reader->EndRecPtr))\n5 snapbuild.c SnapBuildXactNeedsSkip 429\nSnapBuildXactNeedsSkip(SnapBuild *builder, XLogRecPtr ptr)\n\nThe first two are using origin ptr and the last two are using end ptr.\nyou have fixed the fourth one. Do we need to fix the third one as\nwell?\n\nProbably we need to create two wrappers (macros) around\nSnapBuildXactNeedsSkip(), one which accepts a XLogRecordBuffer and\nother which accepts XLogReaderState. Then use those. That way at least\nwe have logic unified as to which XLogRecPtr to use.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 2 Dec 2022 16:58:30 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 4:58 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Dilip,\n>\n> On Tue, Nov 29, 2022 at 9:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > >\n> > > You are right we need this in ReorderBufferProcessPartialChange() as\n> > > well. I will fix this in the next version.\n> >\n> > Fixed this in the attached patch.\n> >\n>\n> I focused my attention on SnapBuildXactNeedsSkip() usages and I see\n> they are using different end points of WAL record\n> 1 decode.c logicalmsg_decode 594\n> SnapBuildXactNeedsSkip(builder, buf->origptr)))\n> 2 decode.c DecodeTXNNeedSkip 1250 return\n> (SnapBuildXactNeedsSkip(ctx->snapshot_builder, buf->origptr) ||\n> 3 reorderbuffer.c AssertTXNLsnOrder 897 if\n> (SnapBuildXactNeedsSkip(ctx->snapshot_builder,\n> ctx->reader->EndRecPtr))\n> 4 reorderbuffer.c ReorderBufferCanStartStreaming 3922\n> !SnapBuildXactNeedsSkip(builder, ctx->reader->EndRecPtr))\n> 5 snapbuild.c SnapBuildXactNeedsSkip 429\n> SnapBuildXactNeedsSkip(SnapBuild *builder, XLogRecPtr ptr)\n>\n> The first two are using origin ptr and the last two are using end ptr.\n> you have fixed the fourth one. Do we need to fix the third one as\n> well?\n>\n\nI think we can change the third one as well but I haven't tested it.\nAdding Sawada-San for his inputs as it is added in commit 16b1fe0037.\nIn any case, I think we can do that as a separate patch because it is\nnot directly related to the streaming case we are trying to solve as\npart of this patch.\n\n> Probably we need to create two wrappers (macros) around\n> SnapBuildXactNeedsSkip(), one which accepts a XLogRecordBuffer and\n> other which accepts XLogReaderState. Then use those. That way at least\n> we have logic unified as to which XLogRecPtr to use.\n>\n\nI don't know how that will be an improvement because both those have\nthe start and end locations of the record.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 3 Dec 2022 10:37:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 12:23 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, November 29, 2022 12:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have few comments about the patch.\n>\n> 1.\n>\n> 1.1.\n> - /* For streamed transactions notify the remote node about the abort. */\n> - if (rbtxn_is_streamed(txn))\n> - rb->stream_abort(rb, txn, lsn);\n> + /* the transaction which is being skipped shouldn't have been streamed */\n> + Assert(!rbtxn_is_streamed(txn));\n>\n> 1.2\n> - rbtxn_is_serialized(txn))\n> + rbtxn_is_serialized(txn) &&\n> + rbtxn_has_streamable_change(txn))\n> ReorderBufferStreamTXN(rb, toptxn);\n>\n> In the above two places, I think we should do the check for the top-level\n> transaction(e.g. toptxn) because the patch only set flag for the top-level\n> transaction.\n>\n\nAmong these, the first one seems okay because it will check both the\ntransaction and its subtransactions from that path and none of those\nshould be marked as streamed. I have fixed the second one in the\nattached patch.\n\n> 2.\n>\n> + /*\n> + * If there are any streamable changes getting queued then get the top\n> + * transaction and mark it has streamable change. This is required for\n> + * streaming in-progress transactions, the in-progress transaction will\n> + * not be selected for streaming unless it has at least one streamable\n> + * change.\n> + */\n> + if (change->action == REORDER_BUFFER_CHANGE_INSERT ||\n> + change->action == REORDER_BUFFER_CHANGE_UPDATE ||\n> + change->action == REORDER_BUFFER_CHANGE_DELETE ||\n> + change->action == REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT ||\n> + change->action == REORDER_BUFFER_CHANGE_TRUNCATE)\n>\n> I think that a transaction that contains REORDER_BUFFER_CHANGE_MESSAGE can also be\n> considered as streamable. Is there a reason that we don't check it here ?\n>\n\nNo, I don't see any reason not to do this check for\nREORDER_BUFFER_CHANGE_MESSAGE.\n\nApart from the above, I have slightly adjusted the comments in the\nattached. Do let me know what you think of the attached.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 3 Dec 2022 17:07:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Saturday, December 3, 2022 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Tue, Nov 29, 2022 at 12:23 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, November 29, 2022 12:08 PM Dilip Kumar\r\n> <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > I have few comments about the patch.\r\n> >\r\n> > 1.\r\n> >\r\n> > 1.1.\r\n> > - /* For streamed transactions notify the remote node about the abort.\r\n> */\r\n> > - if (rbtxn_is_streamed(txn))\r\n> > - rb->stream_abort(rb, txn, lsn);\r\n> > + /* the transaction which is being skipped shouldn't have been\r\n> streamed */\r\n> > + Assert(!rbtxn_is_streamed(txn));\r\n> >\r\n> > 1.2\r\n> > - rbtxn_is_serialized(txn))\r\n> > + rbtxn_is_serialized(txn) &&\r\n> > + rbtxn_has_streamable_change(txn))\r\n> > ReorderBufferStreamTXN(rb, toptxn);\r\n> >\r\n> > In the above two places, I think we should do the check for the\r\n> > top-level transaction(e.g. toptxn) because the patch only set flag for\r\n> > the top-level transaction.\r\n> >\r\n> \r\n> Among these, the first one seems okay because it will check both the transaction\r\n> and its subtransactions from that path and none of those should be marked as\r\n> streamed. I have fixed the second one in the attached patch.\r\n> \r\n> > 2.\r\n> >\r\n> > + /*\r\n> > + * If there are any streamable changes getting queued then get the\r\n> top\r\n> > + * transaction and mark it has streamable change. This is required\r\n> for\r\n> > + * streaming in-progress transactions, the in-progress transaction will\r\n> > + * not be selected for streaming unless it has at least one streamable\r\n> > + * change.\r\n> > + */\r\n> > + if (change->action == REORDER_BUFFER_CHANGE_INSERT ||\r\n> > + change->action == REORDER_BUFFER_CHANGE_UPDATE ||\r\n> > + change->action == REORDER_BUFFER_CHANGE_DELETE ||\r\n> > + change->action ==\r\n> REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT ||\r\n> > + change->action ==\r\n> REORDER_BUFFER_CHANGE_TRUNCATE)\r\n> >\r\n> > I think that a transaction that contains REORDER_BUFFER_CHANGE_MESSAGE\r\n> > can also be considered as streamable. Is there a reason that we don't check it\r\n> here ?\r\n> >\r\n> \r\n> No, I don't see any reason not to do this check for\r\n> REORDER_BUFFER_CHANGE_MESSAGE.\r\n> \r\n> Apart from the above, I have slightly adjusted the comments in the attached. Do\r\n> let me know what you think of the attached.\r\n\r\nThanks for updating the patch. It looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Sun, 4 Dec 2022 11:44:15 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Sun, Dec 4, 2022 at 5:14 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Saturday, December 3, 2022 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Apart from the above, I have slightly adjusted the comments in the attached. Do\n> > let me know what you think of the attached.\n>\n> Thanks for updating the patch. It looks good to me.\n>\n\nI feel the function name ReorderBufferLargestTopTXN() is slightly\nmisleading because it also checks some of the streaming properties\n(like whether the TXN has partial changes and whether it contains any\nstreamable change). Shall we rename it to\nReorderBufferLargestStreamableTopTXN() or something like that?\n\nThe other point to consider is whether we need to have a test case for\nthis patch. I think before this patch if the size of DDL changes in a\ntransaction exceeds logical_decoding_work_mem, the empty streams will\nbe output in the plugin but after this patch, there won't be any such\nstream.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Dec 2022 08:59:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Dec 4, 2022 at 5:14 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Saturday, December 3, 2022 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Apart from the above, I have slightly adjusted the comments in the attached. Do\n> > > let me know what you think of the attached.\n> >\n> > Thanks for updating the patch. It looks good to me.\n> >\n>\n> I feel the function name ReorderBufferLargestTopTXN() is slightly\n> misleading because it also checks some of the streaming properties\n> (like whether the TXN has partial changes and whether it contains any\n> streamable change). Shall we rename it to\n> ReorderBufferLargestStreamableTopTXN() or something like that?\n\nYes that makes sense\n\n> The other point to consider is whether we need to have a test case for\n> this patch. I think before this patch if the size of DDL changes in a\n> transaction exceeds logical_decoding_work_mem, the empty streams will\n> be output in the plugin but after this patch, there won't be any such\n> stream.\n\nYes, we can do that, I will make these two changes.\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:21:10 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 9:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Dec 5, 2022 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Dec 4, 2022 at 5:14 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Saturday, December 3, 2022 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Apart from the above, I have slightly adjusted the comments in the attached. Do\n> > > > let me know what you think of the attached.\n> > >\n> > > Thanks for updating the patch. It looks good to me.\n> > >\n> >\n> > I feel the function name ReorderBufferLargestTopTXN() is slightly\n> > misleading because it also checks some of the streaming properties\n> > (like whether the TXN has partial changes and whether it contains any\n> > streamable change). Shall we rename it to\n> > ReorderBufferLargestStreamableTopTXN() or something like that?\n>\n> Yes that makes sense\n\nI have done this change in the attached patch.\n\n> > The other point to consider is whether we need to have a test case for\n> > this patch. I think before this patch if the size of DDL changes in a\n> > transaction exceeds logical_decoding_work_mem, the empty streams will\n> > be output in the plugin but after this patch, there won't be any such\n> > stream.\n\nI tried this test, but I think generating 64k data with just CID\nmessages will make the test case really big. I tried using multiple\nsessions such that one session makes the reorder buffer full but\ncontains partial changes so that we try to stream another transaction\nbut that is not possible in an automated test to consistently generate\nthe partial change.\n\nI think we need something like this[1] so that we can better control\nthe streaming.\n\n[1]https://www.postgresql.org/message-id/OSZPR01MB631042582805A8E8615BC413FD329%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Dec 2022 15:41:21 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Dec 5, 2022 at 9:21 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sun, Dec 4, 2022 at 5:14 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Saturday, December 3, 2022 7:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > Apart from the above, I have slightly adjusted the comments in the attached. Do\n> > > > > let me know what you think of the attached.\n> > > >\n> > > > Thanks for updating the patch. It looks good to me.\n> > > >\n> > >\n> > > I feel the function name ReorderBufferLargestTopTXN() is slightly\n> > > misleading because it also checks some of the streaming properties\n> > > (like whether the TXN has partial changes and whether it contains any\n> > > streamable change). Shall we rename it to\n> > > ReorderBufferLargestStreamableTopTXN() or something like that?\n> >\n> > Yes that makes sense\n>\n> I have done this change in the attached patch.\n>\n> > > The other point to consider is whether we need to have a test case for\n> > > this patch. I think before this patch if the size of DDL changes in a\n> > > transaction exceeds logical_decoding_work_mem, the empty streams will\n> > > be output in the plugin but after this patch, there won't be any such\n> > > stream.\n>\n> I tried this test, but I think generating 64k data with just CID\n> messages will make the test case really big. I tried using multiple\n> sessions such that one session makes the reorder buffer full but\n> contains partial changes so that we try to stream another transaction\n> but that is not possible in an automated test to consistently generate\n> the partial change.\n>\n\nI also don't see a way to achieve it in an automated way because both\ntoast and speculative inserts are part of one statement, so we need a\nreal concurrent test to make it happen. Can anyone else think of a way\nto achieve it?\n\n> I think we need something like this[1] so that we can better control\n> the streaming.\n>\n\n+1. The additional advantage would be that we can generate parallel\napply and new streaming tests with much lesser data. Shi-San, can you\nplease start a new thread for the GUC patch proposed by you as\nindicated by Dilip?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Dec 2022 16:27:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Mon, Dec 5, 2022 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Dec 5, 2022 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > I think we need something like this[1] so that we can better control\r\n> > the streaming.\r\n> >\r\n> \r\n> +1. The additional advantage would be that we can generate parallel\r\n> apply and new streaming tests with much lesser data. Shi-San, can you\r\n> please start a new thread for the GUC patch proposed by you as\r\n> indicated by Dilip?\r\n> \r\n\r\nOK, I started a new thread for it. [1]\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB63104E7449DBE41932DB19F1FD1B9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Tue, 6 Dec 2022 06:25:38 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 11:55 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Dec 5, 2022 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I think we need something like this[1] so that we can better control\n> > > the streaming.\n> > >\n> >\n> > +1. The additional advantage would be that we can generate parallel\n> > apply and new streaming tests with much lesser data. Shi-San, can you\n> > please start a new thread for the GUC patch proposed by you as\n> > indicated by Dilip?\n> >\n>\n> OK, I started a new thread for it. [1]\n>\n\nThanks. I think it is better to go ahead with this patch and once we\ndecide what is the right thing to do in terms of GUC then we can try\nto add additional tests for this. Anyway, it is not that the code\nadded by this patch is not getting covered by existing tests. What do\nyou think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:28:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 9:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 11:55 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Mon, Dec 5, 2022 6:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 5, 2022 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I think we need something like this[1] so that we can better control\n> > > > the streaming.\n> > > >\n> > >\n> > > +1. The additional advantage would be that we can generate parallel\n> > > apply and new streaming tests with much lesser data. Shi-San, can you\n> > > please start a new thread for the GUC patch proposed by you as\n> > > indicated by Dilip?\n> > >\n> >\n> > OK, I started a new thread for it. [1]\n> >\n>\n> Thanks. I think it is better to go ahead with this patch and once we\n> decide what is the right thing to do in terms of GUC then we can try\n> to add additional tests for this. Anyway, it is not that the code\n> added by this patch is not getting covered by existing tests. What do\n> you think?\n\nThat makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:30:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 12:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Wed, Dec 7, 2022 at 9:28 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Dec 6, 2022 at 11:55 AM shiy.fnst@fujitsu.com\r\n> > <shiy.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Mon, Dec 5, 2022 6:57 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Mon, Dec 5, 2022 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com>\r\n> wrote:\r\n> > > > >\r\n> > > > > I think we need something like this[1] so that we can better control\r\n> > > > > the streaming.\r\n> > > > >\r\n> > > >\r\n> > > > +1. The additional advantage would be that we can generate parallel\r\n> > > > apply and new streaming tests with much lesser data. Shi-San, can you\r\n> > > > please start a new thread for the GUC patch proposed by you as\r\n> > > > indicated by Dilip?\r\n> > > >\r\n> > >\r\n> > > OK, I started a new thread for it. [1]\r\n> > >\r\n> >\r\n> > Thanks. I think it is better to go ahead with this patch and once we\r\n> > decide what is the right thing to do in terms of GUC then we can try\r\n> > to add additional tests for this. Anyway, it is not that the code\r\n> > added by this patch is not getting covered by existing tests. What do\r\n> > you think?\r\n> \r\n> That makes sense to me.\r\n> \r\n\r\n+1\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 7 Dec 2022 04:05:03 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid streaming the transaction which are skipped (in corner\n cases)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 9:35 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Dec 7, 2022 12:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > >\n> > > Thanks. I think it is better to go ahead with this patch and once we\n> > > decide what is the right thing to do in terms of GUC then we can try\n> > > to add additional tests for this. Anyway, it is not that the code\n> > > added by this patch is not getting covered by existing tests. What do\n> > > you think?\n> >\n> > That makes sense to me.\n> >\n>\n> +1\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:23:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid streaming the transaction which are skipped (in corner\n cases)"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen doing some other related work, I noticed that when decoding the COMMIT\nrecord via SnapBuildCommitTxn()-> SnapBuildDistributeNewCatalogSnapshot() we\nwill add a new snapshot to all transactions including the one being decoded(just committed one).\n\nBut since we've already done required modifications in the snapshot for the\ncurrent transaction being decoded(in SnapBuildCommitTxn()), so I think we can\navoid adding a snapshot for it again.\n\nHere is the patch to improve this.\nThoughts ?\n\nBest regards,\nHou zhijie",
"msg_date": "Fri, 25 Nov 2022 08:58:03 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Avoid distributing new catalog snapshot again for the transaction\n being decoded."
},
{
"msg_contents": "Hi Hou,\nThanks for the patch. With a simple condition, we can eliminate the\nneed to queueing snapshot change in the current transaction and then\napplying it. Saves some memory and computation. This looks useful.\n\nWhen the queue snapshot change is processed in\nReorderBufferProcessTXN(), we call SetupHistoricSnapshot(). But I\ndidn't find a path through which SetupHistoricSnapshot() is called\nfrom SnapBuildCommitTxn(). Either I missed some code path or it's not\nneeded. Can you please enlighten me?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Nov 25, 2022 at 2:28 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> When doing some other related work, I noticed that when decoding the COMMIT\n> record via SnapBuildCommitTxn()-> SnapBuildDistributeNewCatalogSnapshot() we\n> will add a new snapshot to all transactions including the one being decoded(just committed one).\n>\n> But since we've already done required modifications in the snapshot for the\n> current transaction being decoded(in SnapBuildCommitTxn()), so I think we can\n> avoid adding a snapshot for it again.\n>\n> Here is the patch to improve this.\n> Thoughts ?\n>\n> Best regards,\n> Hou zhijie\n>\n\n\n",
"msg_date": "Fri, 25 Nov 2022 17:29:01 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid distributing new catalog snapshot again for the transaction\n being decoded."
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 5:30 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Hou,\n> Thanks for the patch. With a simple condition, we can eliminate the\n> need to queueing snapshot change in the current transaction and then\n> applying it. Saves some memory and computation. This looks useful.\n>\n> When the queue snapshot change is processed in\n> ReorderBufferProcessTXN(), we call SetupHistoricSnapshot(). But I\n> didn't find a path through which SetupHistoricSnapshot() is called\n> from SnapBuildCommitTxn().\n>\n\nIt will be called after SnapBuildCommitTxn() via\nReorderBufferCommit()->ReorderBufferReplay()->ReorderBufferProcessTXN().\nBut, I think what I don't see is how the snapshot we build in\nSnapBuildCommitTxn() will be assigned as a historic snapshot. We\nassign base_snapshot as a historic snapshot and the new snapshot we\nbuild in SnapBuildCommitTxn() is assigned as base_snapshot only if the\nsame is not set previously. I might be missing something but if that\nis true then I don't think the patch is correct, OTOH I would expect\nsome existing tests to fail if this change is incorrect.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 26 Nov 2022 17:20:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid distributing new catalog snapshot again for the transaction\n being decoded."
},
{
"msg_contents": "On Sat, Nov 26, 2022 at 19:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Nov 25, 2022 at 5:30 PM Ashutosh Bapat\r\n> <ashutosh.bapat.oss@gmail.com> wrote:\r\n> >\r\n> > Hi Hou,\r\n> > Thanks for the patch. With a simple condition, we can eliminate the\r\n> > need to queueing snapshot change in the current transaction and then\r\n> > applying it. Saves some memory and computation. This looks useful.\r\n> >\r\n> > When the queue snapshot change is processed in\r\n> > ReorderBufferProcessTXN(), we call SetupHistoricSnapshot(). But I\r\n> > didn't find a path through which SetupHistoricSnapshot() is called\r\n> > from SnapBuildCommitTxn().\r\n> >\r\n> \r\n> It will be called after SnapBuildCommitTxn() via ReorderBufferCommit()-\r\n> >ReorderBufferReplay()->ReorderBufferProcessTXN().\r\n> But, I think what I don't see is how the snapshot we build in\r\n> SnapBuildCommitTxn() will be assigned as a historic snapshot. We assign\r\n> base_snapshot as a historic snapshot and the new snapshot we build in\r\n> SnapBuildCommitTxn() is assigned as base_snapshot only if the same is not set\r\n> previously. I might be missing something but if that is true then I don't think the\r\n> patch is correct, OTOH I would expect some existing tests to fail if this change is\r\n> incorrect.\r\n\r\nHi,\r\n\r\nI also think that the snapshot we build in SnapBuildCommitTxn() will not be\r\nassigned as a historic snapshot. But I think that when the commandID message is\r\nprocessed in the function ReorderBufferProcessTXN, the snapshot of the current\r\ntransaction will be updated. And I also did some tests and found no problems. \r\n\r\nHere is my detailed analysis:\r\n\r\nI think that when a transaction internally modifies the catalog, a record of\r\nXLOG_HEAP2_NEW_CID will be inserted into the WAL (see function\r\nlog_heap_new_cid). Then during logical decoding, this record will be decoded\r\ninto a change of type REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID (see function\r\nReorderBufferAddNewCommandId). And I think the function ReorderBufferProcessTXN\r\nwould update the HistoricSnapshot (member subxip and member curcid of snapshot)\r\nwhen processing this change.\r\n\r\nAnd here is only one small comment:\r\n\r\n+\t\t * We've already done required modifications in snapshot for the\r\n+\t\t * transaction that just committed, so there's no need to add a new\r\n+\t\t * snapshot for the transaction again.\r\n+\t\t */\r\n+\t\tif (xid == txn->xid)\r\n+\t\t\tcontinue;\r\n\r\nThis comment seems a bit inaccurate. How about the comment below?\r\n```\r\nWe don't need to add a snapshot to the transaction that just committed as it\r\nwill be able to see the new catalog contents after processing the new commandID\r\nin ReorderBufferProcessTXN.\r\n```\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 29 Nov 2022 08:49:20 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Avoid distributing new catalog snapshot again for the transaction\n being decoded."
}
] |
[
{
"msg_contents": "Hi,\n(In light of commit 7b2ccc5e03bf16d1e1bbabca25298108c839ec52)\n\nIn RelationBuildDesc(), we have:\n\n if (relation->rd_rel->relhasrules)\n RelationBuildRuleLock(relation);\n\nI wonder if we should check relation->rd_rules after the call\nto RelationBuildRuleLock().\n\nYour comment is appreciated.",
"msg_date": "Fri, 25 Nov 2022 07:56:13 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "checking rd_rules in RelationBuildDesc"
},
{
"msg_contents": "Ted Yu <yuzhihong@gmail.com> writes:\n> I wonder if we should check relation->rd_rules after the call\n> to RelationBuildRuleLock().\n\nThat patch is both pointless and wrong. There is some\nvalue in updating relhasrules in the catalog, so that future\nrelcache loads don't uselessly call RelationBuildRuleLock;\nbut we certainly can't try to do so right there. That being\nthe case, making the relcache be out of sync with what's on\ndisk cannot have any good consequences. The most likely\neffect is that it would block later logic from fixing things\ncorrectly. There is logic in VACUUM to clean out obsolete\nrelhasrules flags (see vac_update_relstats), but I suspect\nthat would no longer work properly if we did this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Nov 2022 11:17:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: checking rd_rules in RelationBuildDesc"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 8:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ted Yu <yuzhihong@gmail.com> writes:\n> > I wonder if we should check relation->rd_rules after the call\n> > to RelationBuildRuleLock().\n>\n> That patch is both pointless and wrong. There is some\n> value in updating relhasrules in the catalog, so that future\n> relcache loads don't uselessly call RelationBuildRuleLock;\n> but we certainly can't try to do so right there. That being\n> the case, making the relcache be out of sync with what's on\n> disk cannot have any good consequences. The most likely\n> effect is that it would block later logic from fixing things\n> correctly. There is logic in VACUUM to clean out obsolete\n> relhasrules flags (see vac_update_relstats), but I suspect\n> that would no longer work properly if we did this.\n>\n> regards, tom lane\n>\nHi,\nThanks for evaluating the patch.\n\nThe change was originating from what we have in\nRelationCacheInitializePhase3():\n\n if (relation->rd_rel->relhasrules && relation->rd_rules ==\nNULL)\n {\n RelationBuildRuleLock(relation);\n if (relation->rd_rules == NULL)\n relation->rd_rel->relhasrules = false;\n\nFYI\n\nOn Fri, Nov 25, 2022 at 8:17 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Ted Yu <yuzhihong@gmail.com> writes:\n> I wonder if we should check relation->rd_rules after the call\n> to RelationBuildRuleLock().\n\nThat patch is both pointless and wrong. There is some\nvalue in updating relhasrules in the catalog, so that future\nrelcache loads don't uselessly call RelationBuildRuleLock;\nbut we certainly can't try to do so right there. That being\nthe case, making the relcache be out of sync with what's on\ndisk cannot have any good consequences. The most likely\neffect is that it would block later logic from fixing things\ncorrectly. There is logic in VACUUM to clean out obsolete\nrelhasrules flags (see vac_update_relstats), but I suspect\nthat would no longer work properly if we did this.\n\n regards, tom laneHi,Thanks for evaluating the patch.The change was originating from what we have in RelationCacheInitializePhase3(): if (relation->rd_rel->relhasrules && relation->rd_rules == NULL) { RelationBuildRuleLock(relation); if (relation->rd_rules == NULL) relation->rd_rel->relhasrules = false;FYI",
"msg_date": "Fri, 25 Nov 2022 08:22:58 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: checking rd_rules in RelationBuildDesc"
}
] |
[
{
"msg_contents": "After further contemplation of bug #17691 [1], I've concluded that\nwhat I did in commit c9b0c678d was largely misguided. For one\nthing, the new hlCover() algorithm no longer finds shortest-possible\ncover strings: if your query is \"x & y\" and the text is like\n\"... x ... x ... y ...\", then the selected cover string will run\nfrom the first occurrence of x to the y, whereas the old algorithm\nwould have correctly selected \"x ... y\". For another thing, the\nmaximum-cover-length hack that I added in 78e73e875 to band-aid\nover the performance issues of the original c9b0c678d patch means\nthat various scenarios no longer work as well as they used to,\nwhich is the proximate cause of the complaints in bug #17691.\n\nWhat I'm now thinking is that the original hlCover() algorithm was\nfine (if underdocumented) *as long as it's only asked to deal with\nAND-like semantics*. Given that restriction, its approach of\nfinding the latest first occurrence of any query term, and then\nbacking up to the earliest last occurrence, is visibly correct;\nand we weren't hearing of any performance issues with it either.\nThe problem is that this approach fails miserably for tsqueries\nthat aren't pure ANDs, because it will insist on including query\nterms that aren't required for a match and indeed could prevent a\nmatch.\n\nSo what we need is to find a way to fold the other sorts of queries\ninto an AND context. After a couple of false starts, I came up\nwith the attached patch. It builds on the existing TS_phrase_execute\ninfrastructure, which already produces an ExecPhraseData structure\ncontaining an exact list of the match locations for a phrase subquery.\nIt's easy to get an ExecPhraseData structure for a single-lexeme\nmatch, too. We can handle plain ANDs by forming lists of\nExecPhraseData structs (with implicit AND semantics across the list).\nWe can handle ORs by union'ing the component ExecPhraseData structs.\nAnd we can handle NOTs by just dropping them, because we don't want\nts_headline to prioritize matches to such words. There are some fine\npoints but it all seems to work pretty well.\n\nHence I propose the attached patchset. 0001 adds some test cases that\nI felt necessary after examining code coverage reports; I split this\nout mainly so that 0002 clearly shows the behavioral changes from\ncurrent code. 0002 adds TS_execute_locations() which does what I just\ndescribed, and rewrites hlCover() into something that's a spiritual\ndescendant of the original algorithm. It can't be quite the same\nas before because the location data that TS_execute_locations() returns\nis measured in lexeme indexes not word indexes, so some translation is\nneeded.\n\nNotable here is that a couple of regression test results revert\nto what they were before c9b0c678d. They're both instances of\npreferring \"x ... x ... y\" to \"x ... y\", which I argued was okay\nat the time, but I now see the error in that.\n\nAlthough we back-patched c9b0c678d, I'm inclined to propose this\nfor HEAD only. The misbehaviors it's fixing are less bad than what\nwe were dealing with in that patch.\n\nBTW, while experimenting with this I realized that tsvector's limitation\nof lexeme indexes to be at most 16383 is really quite disastrous for\nphrase searches. That limitation was arguably okay before we had phrase\nsearching, but now it seems untenable. For example, tsvector entries like\n\"foo:16383 bar:16383\" will not match \"foo <-> bar\" because the phrase\nmatch code wants their lexeme positions to differ by one. This basically\nmeans that phrase searches do not work beyond ~20K words into a document.\nI'm not sure what to do about that exactly, but I think we need to do\nsomething.\n\nWhatever we might do about that would be largely orthogonal to this\npatch, in any case. I'll stick this in the January CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17691-93cef39a14663963%40postgresql.org",
"msg_date": "Fri, 25 Nov 2022 14:52:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "On 2022-Nov-25, Tom Lane wrote:\n\n> After further contemplation of bug #17691 [1], I've concluded that\n> what I did in commit c9b0c678d was largely misguided. For one\n> thing, the new hlCover() algorithm no longer finds shortest-possible\n> cover strings: if your query is \"x & y\" and the text is like\n> \"... x ... x ... y ...\", then the selected cover string will run\n> from the first occurrence of x to the y, whereas the old algorithm\n> would have correctly selected \"x ... y\". For another thing, the\n> maximum-cover-length hack that I added in 78e73e875 to band-aid\n> over the performance issues of the original c9b0c678d patch means\n> that various scenarios no longer work as well as they used to,\n> which is the proximate cause of the complaints in bug #17691.\n\nI came across #17556 which contains a different test for this, and I'm\nnot sure that this patch changes things completely for the better. In\nthat bug report, Alex Malek presents this example\n\nselect ts_headline('baz baz baz ipsum ' || repeat(' foo ',4998) || 'labor',\n $$'ipsum' & 'labor'$$::tsquery,\n\t 'StartSel={, StopSel=}, MaxFragments=100, MaxWords=7, MinWords=3'),\n\tts_headline('baz baz baz ipsum ' || repeat(' foo ',4999) || 'labor',\n $$'ipsum' & 'labor'$$::tsquery,\n\t 'StartSel={, StopSel=}, MaxFragments=100, MaxWords=7, MinWords=3');\n\nwhich returns, in the current HEAD, the following\n ts_headline │ ts_headline \n─────────────────────┼─────────────\n {ipsum} ... {labor} │ baz baz baz\n(1 fila)\n\nThat is, once past the 5000 words of distance, it fails to find a good\ncover, but before that it returns an acceptable headline. However,\nafter your proposed patch, we get this:\n\n ts_headline │ ts_headline \n─────────────┼─────────────\n {ipsum} │ {ipsum}\n(1 fila)\n\nwhich is an improvement in the second case, though perhaps not as much\nas we would like, and definitely not an improvement in the first case.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you have nothing to say, maybe you need just the right tool to help you\nnot say it.\" (New York Times, about Microsoft PowerPoint)\n\n\n",
"msg_date": "Mon, 16 Jan 2023 13:23:03 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I came across #17556 which contains a different test for this, and I'm\n> not sure that this patch changes things completely for the better.\n\nThanks for looking at my patch. However ...\n\n> That is, once past the 5000 words of distance, it fails to find a good\n> cover, but before that it returns an acceptable headline. However,\n> after your proposed patch, we get this:\n\n> ts_headline │ ts_headline \n> ─────────────┼─────────────\n> {ipsum} │ {ipsum}\n> (1 fila)\n\nI get this with the patch:\n\n ts_headline | ts_headline \n---------------------+---------------------\n {ipsum} ... {labor} | {ipsum} ... {labor}\n(1 row)\n\nwhich is what I'd expect, because it removes the artificial limit on\ncover length that I added in 78e73e875. So I'm wondering why you got a\ndifferent result. Maybe something to do with locale? I tried it in\nC and en_US.utf8.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Jan 2023 11:28:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "On 2023-Jan-16, Tom Lane wrote:\n\n> I get this with the patch:\n> \n> ts_headline | ts_headline \n> ---------------------+---------------------\n> {ipsum} ... {labor} | {ipsum} ... {labor}\n> (1 row)\n> \n> which is what I'd expect, because it removes the artificial limit on\n> cover length that I added in 78e73e875. So I'm wondering why you got a\n> different result.\n\nIndeed, that's what I get now, too, after re-applying the patches.\nI find no way to reproduce the bogus result I got yesterday.\n\nSorry for the noise.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n",
"msg_date": "Tue, 17 Jan 2023 09:32:11 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "I tried this other test, based on looking at the new regression tests\nyou added,\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (idle & painted)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n ts_headline \n─────────────────────────────────────────\n motion, ↵\n As <b>idle</b> as a <b>painted</b> Ship↵\n Upon\n(1 fila)\n\nand was surprised that the match for the 'day & drink' arm of the OR\ndisappears from the reported headline.\n\nThis is what 15 reports for the same query:\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (idle & painted)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n ts_headline \n───────────────────────────────────────────────────────────\n <b>Day</b> after <b>day</b>, <b>day</b> after <b>day</b>,↵\n We stuck ... motion, ↵\n As <b>idle</b> as a <b>painted</b> Ship ↵\n Upon\n(1 fila)\n\nI think this was better.\n\n15 seems to fail in other ways; for instance, 'drink' is not highlighted in the\nheadline when the OR matches, but if the other arm of the OR doesn't match, it\nis; for example both 15 and master return the same for this one:\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. 
Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (mountain & backpack)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n ts_headline \n───────────────────────────────────────────────────────────\n <b>Day</b> after <b>day</b>, <b>day</b> after <b>day</b>,↵\n We stuck ... drop to <b>drink</b>. ↵\n S. T. Coleridge\n(1 fila)\n\n\n\nAnother thing I think might be a regression is the way fragments are\nselected. Consider what happens if I change the \"idle & painted\" in the\nearlier query to \"idle <-> painted\", and MaxWords is kept low:\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (idle <-> painted)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n ts_headline \n───────────────────────────────────────────────\n <b>day</b>, ↵\n We stuck, nor breath nor motion, ↵\n As <b>idle</b> ... <b>painted</b> Ship ↵\n Upon a <b>painted</b> Ocean. ↵\n Water, water, every ... drop to <b>drink</b>.↵\n S. T. Coleridge\n(1 fila)\n\nNote that it chose to put a fragment delimiter exactly in the middle of the\nphrase match, where the stop words are. If I raise MaxWords, it is of course\nmuch better, I suppose because the word limit doesn't force a new fragment,\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. 
Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (idle <-> painted)'), 'MaxFragments=5, MaxWords=25, MinWords=4');\n ts_headline \n──────────────────────────────────────────────────\n after <b>day</b>, <b>day</b> after <b>day</b>, ↵\n We stuck, nor breath nor motion, ↵\n As <b>idle</b> as a <b>painted</b> Ship ↵\n Upon a <b>painted</b> Ocean. ↵\n Water, water, every where ... boards did shrink;↵\n Water, water, every where, ↵\n Nor any drop to <b>drink</b>. ↵\n S. T. Coleridge\n(1 fila)\n\nBut in 15, the query with low MaxWords does this instead, where the\nfragment delimiter occurs just *before* the phrasal match.\n\nSELECT ts_headline('english', '\nDay after day, day after day,\n We stuck, nor breath nor motion,\nAs idle as a painted Ship\n Upon a painted Ocean.\nWater, water, every where\n And all the boards did shrink;\nWater, water, every where,\n Nor any drop to drink.\nS. T. Coleridge (1772-1834)\n', to_tsquery('english', '(day & drink) | (idle <-> painted)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n ts_headline \n───────────────────────────────────────────────────────────\n <b>Day</b> after <b>day</b>, <b>day</b> after <b>day</b>,↵\n We stuck ... <b>idle</b> as a <b>painted</b> Ship ↵\n Upon a <b>painted</b> Ocean ... drop to <b>drink</b>. ↵\n S. T. Coleridge\n(1 fila)\n\n(Both 15 and master highlight 'painted' in the \"Upon a painted Ocean\"\nverse, which perhaps they shouldn't do, since it's not preceded by\n'idle'.)\n\n\n(I think it's super annoying that the fragment separation algorithm\nfails to preserve newlines between verses as it adds the '...'\nseparator. But I guess poetry is not the main use case for text search\nanyway, so it probably doesn't matter much.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Every machine is a smoke machine if you operate it wrong enough.\"\nhttps://twitter.com/libseybieda/status/1541673325781196801\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:09:42 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I tried this other test, based on looking at the new regression tests\n> you added,\n\n> SELECT ts_headline('english', '\n> Day after day, day after day,\n> We stuck, nor breath nor motion,\n> As idle as a painted Ship\n> Upon a painted Ocean.\n> Water, water, every where\n> And all the boards did shrink;\n> Water, water, every where,\n> Nor any drop to drink.\n> S. T. Coleridge (1772-1834)\n> ', to_tsquery('english', '(day & drink) | (idle & painted)'), 'MaxFragments=5, MaxWords=9, MinWords=4');\n> ts_headline \n> ─────────────────────────────────────────\n> motion, ↵\n> As <b>idle</b> as a <b>painted</b> Ship↵\n> Upon\n> (1 fila)\n\n> and was surprised that the match for the 'day & drink' arm of the OR\n> disappears from the reported headline.\n\nI'd argue that that's exactly what should happen. It's supposed to\nfind as-short-as-possible cover strings that satisfy the query.\nIn this case, satisfying the 'day & drink' condition would require\nnearly the entire input text, whereas 'idle & painted' can be\nsatisfied in just the third line. So what you get is the third line,\nslightly expanded because of some later rules that like to add\ncontext if the cover is shorter than MaxWords. I don't find anything\nparticularly principled about the old behavior:\n\n> <b>Day</b> after <b>day</b>, <b>day</b> after <b>day</b>,↵\n> We stuck ... motion, ↵\n> As <b>idle</b> as a <b>painted</b> Ship ↵\n> Upon\n\nIt's including hits for \"day\" into the cover despite the lack of any\nnearby match to \"drink\".\n\n> Another thing I think might be a regression is the way fragments are\n> selected. Consider what happens if I change the \"idle & painted\" in the\n> earlier query to \"idle <-> painted\", and MaxWords is kept low:\n\nOf course, \"idle <-> painted\" is satisfied nowhere in this text\n(the words are there, but not adjacent). So the cover has to\nrun from the last 'day' to the 'drink'. 
I think v15 is deciding\nthat it runs from the first 'day' to the 'drink', which while not\nexactly wrong is not the shortest cover. The rest of this is just\nminor variations in what mark_hl_fragments() decides to do based\non the precise cover string it's given. I don't dispute that\nmark_hl_fragments() could be improved, but this patch doesn't touch\nits algorithm and I think that doing so should be material for a\ndifferent patch. (I have no immediate ideas about what would be\na better algorithm for it, anyway.)\n\n> (Both 15 and master highlight 'painted' in the \"Upon a painted Ocean\"\n> verse, which perhaps they shouldn't do, since it's not preceded by\n> 'idle'.)\n\nYeah, and 'idle' too. Once it's chosen a string to show, it'll\nhighlight all the query words within that string, whether they\nconstitute part of the match or not. I can see arguments on both\nsides of doing it that way; it was probably more sensible before\nwe had phrase match than it is now. But again, changing that phase\nof the processing is outside the scope of this patch. I'm just\ntrying to undo the damage I did to the cover-string-selection phase.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Jan 2023 17:55:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
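The shortest-cover behavior Tom describes — for a query like `x & y`, pick the smallest stretch of text containing all required terms, so `... x ... x ... y ...` yields `x ... y` rather than a span from the first `x` — can be illustrated with a small sketch. This is a toy sliding-window model over plain word lists, not PostgreSQL's actual `hlCover()` implementation:

```python
def shortest_cover(words, terms):
    """Return (start, end) indices of the smallest window of `words`
    containing every term in `terms`, or None if no window exists.
    Toy model of shortest-cover selection, not PostgreSQL's hlCover()."""
    terms = set(terms)
    need = len(terms)
    counts = {}
    have = 0
    best = None
    left = 0
    for right, w in enumerate(words):
        if w in terms:
            counts[w] = counts.get(w, 0) + 1
            if counts[w] == 1:
                have += 1
        # shrink from the left while the window still satisfies the query
        while have == need:
            if best is None or right - left < best[1] - best[0]:
                best = (left, right)
            lw = words[left]
            if lw in terms:
                counts[lw] -= 1
                if counts[lw] == 0:
                    have -= 1
            left += 1
    return best
```

On `"x a x b y"` with terms `{x, y}` this picks the window starting at the *second* `x`, matching the behavior Tom says the old (pre-c9b0c678d) algorithm had and this patch restores.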
{
"msg_contents": "On 2023-Jan-18, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > and was surprised that the match for the 'day & drink' arm of the OR\n> > disappears from the reported headline.\n> \n> I'd argue that that's exactly what should happen. It's supposed to\n> find as-short-as-possible cover strings that satisfy the query.\n\nOK, that makes sense.\n\n> I don't find anything particularly principled about the old behavior:\n> \n> > <b>Day</b> after <b>day</b>, <b>day</b> after <b>day</b>,↵\n> > We stuck ... motion, ↵\n> > As <b>idle</b> as a <b>painted</b> Ship ↵\n> > Upon\n> \n> It's including hits for \"day\" into the cover despite the lack of any\n> nearby match to \"drink\".\n\nI suppose it would be possible to put 'day' and 'drink' in two different\nfragments: since the query has a & operator for them, the words don't\nnecessarily have to be nearby. But OK, your argument for this being the\nshortest result is sensible.\n\nI do wonder, though, if it's effectively usable for somebody building a\nsearch interface on top. If I'm ranking the results from several\ndocuments, and this document comes on top of others that only match one\narm of the OR query, then I would like to be able to show the matches\nfor both arms of the OR.\n\n> > Another thing I think might be a regression is the way fragments are\n> > selected. Consider what happens if I change the \"idle & painted\" in the\n> > earlier query to \"idle <-> painted\", and MaxWords is kept low:\n> \n> Of course, \"idle <-> painted\" is satisfied nowhere in this text\n> (the words are there, but not adjacent).\n\nOh, I see the problem, and it is my misunderstanding: the <-> operator\nis counting the words in between, even if they are stop words. I\nunderstood from the docs that those words were ignored, but that is not\nso. 
I misread the phraseto_tsquery doc as though they were explaining\nthe <-> operator.\n\nAnother experiment shows that the headline becomes \"complete\" only if I\nspecify the exact distance in the <N> operator:\n\nSELECT dist, ts_headline('simple', 'As idle as a painted Ship', to_tsquery('simple', format('(idle <%s> painted)', dist)), 'MaxFragments=5, MaxWords=8, MinWords=4') from generate_series(1, 4) dist;\n dist │ ts_headline \n──────┼──────────────────────────────────────\n 1 │ As <b>idle</b> as a\n 2 │ As <b>idle</b> as a\n 3 │ <b>idle</b> as a <b>painted</b> Ship\n 4 │ As <b>idle</b> as a\n(4 filas)\n\nI again have to question how valuable in practice is a <N> operator\nthat's so strict that I have to know exactly how many stop words I want\nthere to be in between the phrase search. For some reason, in my mind I\nhad it as \"at most N words, ignoring stop words\", but that's not what it\nis.\n\nAnyway, I don't think this needs to stop your current patch.\n\n> So the cover has to run from the last 'day' to the 'drink'. I think\n> v15 is deciding that it runs from the first 'day' to the 'drink',\n> which while not exactly wrong is not the shortest cover.\n\nSounds fair.\n\n> The rest of this is just minor variations in what mark_hl_fragments()\n> decides to do based on the precise cover string it's given. I don't\n> dispute that mark_hl_fragments() could be improved, but this patch\n> doesn't touch its algorithm and I think that doing so should be\n> material for a different patch.\n\nUnderstood and agreed.\n\n> > (Both 15 and master highlight 'painted' in the \"Upon a painted Ocean\"\n> > verse, which perhaps they shouldn't do, since it's not preceded by\n> > 'idle'.)\n> \n> Yeah, and 'idle' too. Once it's chosen a string to show, it'll\n> highlight all the query words within that string, whether they\n> constitute part of the match or not. 
I can see arguments on both\n> sides of doing it that way; it was probably more sensible before\n> we had phrase match than it is now. But again, changing that phase\n> of the processing is outside the scope of this patch. I'm just\n> trying to undo the damage I did to the cover-string-selection phase.\n\nAll clear then.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)\n\n\n",
"msg_date": "Thu, 19 Jan 2023 11:16:29 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-Jan-18, Tom Lane wrote:\n>> It's including hits for \"day\" into the cover despite the lack of any\n>> nearby match to \"drink\".\n\n> I suppose it would be possible to put 'day' and 'drink' in two different\n> fragments: since the query has a & operator for them, the words don't\n> necessarily have to be nearby. But OK, your argument for this being the\n> shortest result is sensible.\n\n> I do wonder, though, if it's effectively usable for somebody building a\n> search interface on top. If I'm ranking the results from several\n> documents, and this document comes on top of others that only match one\n> arm of the OR query, then I would like to be able to show the matches\n> for both arms of the OR.\n\nThe fundamental problem with the case you're posing is that MaxWords\nis too small to allow the 'day & drink' match to be shown as a whole.\nIf you make MaxWords large enough then you do find it including\n(and highlighting) 'drink', but I'm not sure we should stress out\nabout what happens otherwise.\n\n> Oh, I see the problem, and it is my misunderstanding: the <-> operator\n> is counting the words in between, even if they are stop words.\n\nYeah. AFAICS this is a very deliberate, longstanding decision,\nso I'm hesitant to change it. 
Your test case with 'simple'\nproves little, because there are no stop words in 'simple':\n\nregression=# select to_tsvector('simple', 'As idle as a painted Ship');\n to_tsvector \n----------------------------------------------\n 'a':4 'as':1,3 'idle':2 'painted':5 'ship':6\n(1 row)\n\nHowever, when I switch to 'english':\n\nregression=# select to_tsvector('english', 'As idle as a painted Ship');\n to_tsvector \n----------------------------\n 'idl':2 'paint':5 'ship':6\n(1 row)\n\nthe stop words are gone, but the recorded positions remain the same.\nSo this is really a matter of how to_tsvector chooses to count word\npositions, it's not the fault of the <-> construct in particular.\n\nI'm not convinced that this particular behavior is wrong, anyway.\nThe user of text search isn't supposed to have to think about\nwhich words are stopwords or not, so I think that it's entirely\nsensible for 'idle as a painted' to match 'idle <3> painted'.\nMaybe the docs need some adjustment? But in any case that's\nmaterial for a different thread.\n\n> I again have to question how valuable in practice is a <N> operator\n> that's so strict that I have to know exactly how many stop words I want\n> there to be in between the phrase search. For some reason, in my mind I\n> had it as \"at most N words, ignoring stop words\", but that's not what it\n> is.\n\nYeah, I recall discussing \"up to N words\" semantics for this in the\npast, but nobody has made that happen.\n\n> Anyway, I don't think this needs to stop your current patch.\n\nMany thanks for looking at it!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Jan 2023 11:13:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
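Tom's point — lexeme positions are assigned before stop words are dropped, and `a <n> b` tests the exact positional gap — can be modeled with a short sketch. The stop list and tokenizer here are toy stand-ins, not PostgreSQL's dictionaries:

```python
STOP_WORDS = {"as", "a"}  # tiny illustrative stop list, not PostgreSQL's

def to_positions(text):
    """Number every word 1-based, then keep only non-stop-words with their
    original positions -- mimicking how to_tsvector yields
    'idl':2 'paint':5 'ship':6 for 'As idle as a painted Ship'."""
    out = {}
    for pos, w in enumerate(text.lower().split(), start=1):
        if w not in STOP_WORDS:
            out.setdefault(w, []).append(pos)
    return out

def phrase_dist_match(positions, a, b, n):
    """True if some occurrence of `b` sits exactly n positions after `a`:
    the strict semantics of the <n> operator discussed above."""
    return any(pb - pa == n
               for pa in positions.get(a, [])
               for pb in positions.get(b, []))
```

This reproduces Alvaro's table: only the exact distance (`<3>` for 'idle ... painted', positions 2 and 5) matches, because the intervening stop words still occupy positions.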
{
"msg_contents": "Hi,\n\n19.01.2023 19:13, Tom Lane wrote:\n> Alvaro Herrera<alvherre@alvh.no-ip.org> writes:\n>\n>> Anyway, I don't think this needs to stop your current patch.\n> Many thanks for looking at it!\n\nI've found that starting from commit 5a617d75 this query:\nSELECT ts_headline('english', 'To be, or not to be', to_tsquery('english', 'or'));\n\ninvokes a valgrind-detected error:\n==00:00:00:03.950 3241424== Invalid read of size 1\n==00:00:00:03.950 3241424== at 0x8EE2A9: TS_execute_locations_recurse (tsvector_op.c:2046)\n==00:00:00:03.950 3241424== by 0x8EE220: TS_execute_locations (tsvector_op.c:2017)\n==00:00:00:03.950 3241424== by 0x77EAC4: prsd_headline (wparser_def.c:2677)\n==00:00:00:03.950 3241424== by 0x94536C: FunctionCall3Coll (fmgr.c:1173)\n==00:00:00:03.950 3241424== by 0x778648: ts_headline_byid_opt (wparser.c:322)\n==00:00:00:03.950 3241424== by 0x9440F5: DirectFunctionCall3Coll (fmgr.c:836)\n==00:00:00:03.950 3241424== by 0x778763: ts_headline_byid (wparser.c:343)\n==00:00:00:03.950 3241424== by 0x4AC9F2: ExecInterpExpr (execExprInterp.c:752)\n==00:00:00:03.950 3241424== by 0x4AEDFE: ExecInterpExprStillValid (execExprInterp.c:1838)\n==00:00:00:03.950 3241424== by 0x636A7E: ExecEvalExprSwitchContext (executor.h:344)\n==00:00:00:03.950 3241424== by 0x63E92D: evaluate_expr (clauses.c:4843)\n==00:00:00:03.950 3241424== by 0x63DA53: evaluate_function (clauses.c:4345)\n...\n(Initially I had encountered an asan-detected heap-buffer-overflow with a\nmore informative document.)\n\nBut the less-verbose call:\nSELECT ts_headline('', '');\n\ndiscovers a different error even on 5a617d75~1:\n==00:00:00:04.113 3139158== Conditional jump or move depends on uninitialised value(s)\n==00:00:00:04.113 3139158== at 0x77B44F: mark_fragment (wparser_def.c:2100)\n==00:00:00:04.113 3139158== by 0x77E2F2: mark_hl_words (wparser_def.c:2519)\n==00:00:00:04.113 3139158== by 0x77E891: prsd_headline (wparser_def.c:2610)\n==00:00:00:04.113 3139158== by 0x944B68: 
FunctionCall3Coll (fmgr.c:1173)\n==00:00:00:04.113 3139158== by 0x778648: ts_headline_byid_opt (wparser.c:322)\n==00:00:00:04.113 3139158== by 0x9438F1: DirectFunctionCall3Coll (fmgr.c:836)\n==00:00:00:04.113 3139158== by 0x7787B6: ts_headline (wparser.c:352)\n==00:00:00:04.113 3139158== by 0x4AC9F2: ExecInterpExpr (execExprInterp.c:752)\n==00:00:00:04.113 3139158== by 0x4AEDFE: ExecInterpExprStillValid (execExprInterp.c:1838)\n==00:00:00:04.113 3139158== by 0x50BD0C: ExecEvalExprSwitchContext (executor.h:344)\n==00:00:00:04.113 3139158== by 0x50BD84: ExecProject (executor.h:378)\n==00:00:00:04.113 3139158== by 0x50BFBB: ExecResult (nodeResult.c:136)\n==00:00:00:04.113 3139158==\n\nI've reproduced it on REL9_4_STABLE (REL9_4_15) and it looks like the defect\nin mark_hl_words():\n int bestb = -1,\n beste = -1;\n int bestlen = -1;\n int pose = 0,\n...\n if (highlight == 0)\n {\n while (hlCover(prs, query, &p, &q))\n {\n...\n if (bestlen < 0)\n {\n curlen = 0;\n for (i = 0; i < prs->curwords && curlen < min_words; i++)\n {\n if (!NONWORDTOKEN(prs->words[i].type))\n curlen++;\n pose = i;\n }\n bestb = 0;\n beste = pose;\n }\n...\n// here we have bestb == 0, beste == 0, but no prs->words in this case\n for (i = bestb; i <= beste; i++)\n {\n if (prs->words[i].item)\n prs->words[i].selected = 1;\n\nexists since 2a0083ede.\n(Sorry for the distraction.)\n\nBest regards,\nAlexander",
"msg_date": "Thu, 6 Apr 2023 20:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
},
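The defect Alexander quotes can be boiled down to a toy model of that fallback path: when the parsed document has zero words, the `for` scan never executes, yet the inclusive range `[bestb, beste] = [0, 0]` is still handed to the highlighting loop, which then reads `prs->words[0]` of an empty array. This is a simplified Python rendering of the quoted C control flow, not the real `mark_hl_words()`:

```python
def select_best_range(num_words, min_words):
    """Toy model of the mark_hl_words() fallback quoted above: scan up to
    min_words words, then return the inclusive range [bestb, beste].
    With num_words == 0 the loop body never runs, yet (0, 0) is returned,
    so the caller would index words[0] -- the uninitialised read valgrind
    flagged for SELECT ts_headline('', '')."""
    pose = 0
    curlen = 0
    i = 0
    while i < num_words and curlen < min_words:
        curlen += 1  # pretend every token is a word (no NONWORDTOKEN here)
        pose = i
        i += 1
    bestb, beste = 0, pose
    return bestb, beste
```

For an empty document the returned inclusive range still covers one (nonexistent) element, while for a normal document it covers the first `min_words` words.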
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> I've found that starting from commit 5a617d75 this query:\n> SELECT ts_headline('english', 'To be, or not to be', to_tsquery('english', 'or'));\n> invokes a valgrind-detected error:\n> ==00:00:00:03.950 3241424== Invalid read of size 1\n\nOn my machine, I also see PG-reported errors such as \"unrecognized\noperator: 0\". It's a live bug all right. We need to be more careful\nabout empty tsqueries.\n\n> But the less-verbose call:\n> SELECT ts_headline('', '');\n\n> discovers a different error even on 5a617d75~1:\n> ==00:00:00:04.113 3139158== Conditional jump or move depends on uninitialised value(s)\n> ==00:00:00:04.113 3139158== at 0x77B44F: mark_fragment (wparser_def.c:2100)\n\nYeah, this one seems to be ancient sloppiness. I don't think it has\nany bad effect beyond annoying valgrind, but I fixed it anyway.\n\nThanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Apr 2023 15:55:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Rethinking the implementation of ts_headline()"
}

] |
[
{
    "msg_contents": "Hi.\n\nHere is another batch of assorted fixes for the head branch.\n\n1. Avoid useless pointer increment\n(src/backend/utils/activity/pgstat_shmem.c)\nThe pointer *p is used in creating the dsa memory,\nnot p + MAXALIGN(pgstat_dsa_init_size()).\n\n2. Discard result unused (src/backend/access/transam/xlogrecovery.c)\nSome compilers raise warnings about discarding the return value of strtoul.\n\n3. Fix dead code (src/bin/pg_dump/pg_dump.c)\ntbinfo->relkind == RELKIND_MATVIEW is always true, so \"INDEX\"\nis never hit.\nPer Coverity.\n\n4. Fix dead code (src/backend/utils/adt/formatting.c)\nNp->sign == '+' is different from \"!= '-'\" and different from \"!= '+'\",\nso the else is never hit.\nPer Coverity.\n\n5. Use boolean operator with boolean operands\n(b/src/backend/commands/tablecmds.c)\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 25 Nov 2022 18:27:04 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> 5. Use boolean operator with boolean operands\n> (b/src/backend/commands/tablecmds.c)\n\ntablecmds.c: right. Since 074c5cfbf\n\npg_dump.c: right. Since b08dee24a\n\n> 4. Fix dead code (src/backend/utils/adt/formatting.c)\n> Np->sign == '+', is different than \"!= '-'\" and is different than \"!= '+'\"\n> So the else is never hit.\n\nformatting.c: I don't see the problem.\n\n\tif (Np->sign != '-')\n\t...\n\telse if (Np->sign != '+' && IS_PLUS(Np->Num))\n\t...\n\nYou said that the \"else\" is never hit, but why ?\nWhat if sign is 0 ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 20 Dec 2022 18:51:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 06:51:34PM -0600, Justin Pryzby wrote:\n> On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> > 5. Use boolean operator with boolean operands\n> > (b/src/backend/commands/tablecmds.c)\n> \n> tablecmds.c: right. Since 074c5cfbf\n\nMost of this does not seem to be really worth poking at.\n\n newcons = AddRelationNewConstraints(rel, NIL,\n list_make1(copyObject(constr)),\n- recursing | is_readd, /* allow_merge */\n+ recursing || is_readd, /* allow_merge */\nThere is no damage here, but that looks like a typo so no objections\non this one.\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 16:10:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
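Michael calls `recursing | is_readd` a harmless typo: on boolean operands, bitwise OR yields the same truth value as logical OR, but unlike `||` it does not short-circuit, and on wider integer flags the two can differ. A Python sketch of the distinction, using `|` and `or` as stand-ins for C's `|` and `||` (the helper names here are illustrative, not from the patch):

```python
def check(log, name, value):
    """Record that this operand was evaluated, then return its value."""
    log.append(name)
    return value

def bitwise_or(log):
    # `|` evaluates BOTH operands, like C's bitwise |
    return check(log, "a", True) | check(log, "b", True)

def logical_or(log):
    # `or` short-circuits, like C's ||: the second operand is
    # never evaluated once the first is true
    return check(log, "a", True) or check(log, "b", True)
```

Both spellings produce the same truth value for `recursing | is_readd` — which is why there was "no damage" — but only the `||` form expresses the intended logical-OR semantics.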
{
    "msg_contents": "Thanks for looking at this.\n\nEm ter., 20 de dez. de 2022 às 21:51, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> > 5. Use boolean operator with boolean operands\n> > (b/src/backend/commands/tablecmds.c)\n>\n> tablecmds.c: right. Since 074c5cfbf\n>\n> pg_dump.c: right. Since b08dee24a\n>\n> > 4. Fix dead code (src/backend/utils/adt/formatting.c)\n> > Np->sign == '+', is different than \"!= '-'\" and is different than \"!=\n> '+'\"\n> > So the else is never hit.\n>\n> formatting.c: I don't see the problem.\n>\n> if (Np->sign != '-')\n> ...\n> else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> ...\n>\n> You said that the \"else\" is never hit, but why ?\n>\nMaybe this part of the patch is wrong.\nThe only case for the first if not handled is sign == '-',\nsign == '-' is handled by else.\nSo always the \"else is true\", because sign == '+' is\nhandled by the first if.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Dec 2022 14:51:38 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "Em qua., 21 de dez. de 2022 às 04:10, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Tue, Dec 20, 2022 at 06:51:34PM -0600, Justin Pryzby wrote:\n> > On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> > > 5. Use boolean operator with boolean operands\n> > > (b/src/backend/commands/tablecmds.c)\n> >\n> > tablecmds.c: right. Since 074c5cfbf\n>\n> Most of this does not seem to be really worth poking at.\n>\n> newcons = AddRelationNewConstraints(rel, NIL,\n> list_make1(copyObject(constr)),\n> - recursing | is_readd, /*\n> allow_merge */\n> + recursing || is_readd, /*\n> allow_merge */\n> There is no damage here, but that looks like a typo so no objections\n> on this one.\n>\n\nThanks Michael, for the commit.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 22 Dec 2022 11:35:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "Em ter., 20 de dez. de 2022 às 21:51, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> > 5. Use boolean operator with boolean operands\n> > (b/src/backend/commands/tablecmds.c)\n>\n> tablecmds.c: right. Since 074c5cfbf\n>\n> pg_dump.c: right. Since b08dee24a\n>\n> > 4. Fix dead code (src/backend/utils/adt/formatting.c)\n> > Np->sign == '+', is different than \"!= '-'\" and is different than \"!=\n> '+'\"\n> > So the else is never hit.\n>\n> formatting.c: I don't see the problem.\n>\n> if (Np->sign != '-')\n> ...\n> else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> ...\n>\n> You said that the \"else\" is never hit, but why ?\n>\nThis is a Coverity report.\n\ndead_error_condition: The condition Np->Num->flag & 0x200 cannot be true.\n5671 else if (Np->sign != '+' && IS_PLUS(Np->Num))\n\nCID 1501076 (#1 of 1): Logically dead code (DEADCODE)dead_error_line: Execution\ncannot reach this statement: Np->Num->flag &= 0xffffffff....\n\nSo, the dead code is because IS_PUS(Np->Num) is already tested and cannot\nbe true on else.\n\nv1 patch attached.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 22 Dec 2022 14:29:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 02:29:11PM -0300, Ranier Vilela wrote:\n> Em ter., 20 de dez. de 2022 às 21:51, Justin Pryzby <pryzby@telsasoft.com> escreveu:\n> \n> > On Fri, Nov 25, 2022 at 06:27:04PM -0300, Ranier Vilela wrote:\n> > > 5. Use boolean operator with boolean operands\n> > > (b/src/backend/commands/tablecmds.c)\n> >\n> > tablecmds.c: right. Since 074c5cfbf\n> >\n> > pg_dump.c: right. Since b08dee24a\n> >\n> > > 4. Fix dead code (src/backend/utils/adt/formatting.c)\n> > > Np->sign == '+', is different than \"!= '-'\" and is different than \"!=\n> > '+'\"\n> > > So the else is never hit.\n> >\n> > formatting.c: I don't see the problem.\n> >\n> > if (Np->sign != '-')\n> > ...\n> > else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> > ...\n> >\n> > You said that the \"else\" is never hit, but why ?\n>\n> This is a Coverity report.\n> \n> dead_error_condition: The condition Np->Num->flag & 0x200 cannot be true.\n> 5671 else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> \n> CID 1501076 (#1 of 1): Logically dead code (DEADCODE)dead_error_line: Execution\n> cannot reach this statement: Np->Num->flag &= 0xffffffff....\n> \n> So, the dead code is because IS_PUS(Np->Num) is already tested and cannot\n> be true on else.\n\nMakes sense now (in your first message, you said that the problem was\nwith \"sign\", and the patch didn't address the actual problem in\nIS_PLUS()).\n\nOne can look and find that the unreachable code was introduced at\n7a3e7b64a.\n\nWith your proposed change, the unreachable line is hit by regression\ntests, which is an improvment. As is the change to pg_dump.c.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Dec 2022 19:08:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 8:08 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> Makes sense now (in your first message, you said that the problem was\n> with \"sign\", and the patch didn't address the actual problem in\n> IS_PLUS()).\n>\n> One can look and find that the unreachable code was introduced at\n> 7a3e7b64a.\n>\n> With your proposed change, the unreachable line is hit by regression\n> tests, which is an improvment. As is the change to pg_dump.c.\n\nBut that now reachable line just unsets a flag that we previously found\nunset, right?\nAnd if that line was unreachable, then surely the previous flag-clearing\noperation is too?\n\n5669 994426 : if (IS_MINUS(Np->Num)) // <- also always\nfalse\n5670 0 : Np->Num->flag &= ~NUM_F_MINUS;\n5671 : }\n5672 524 : else if (Np->sign != '+' && IS_PLUS(Np->Num))\n5673 0 : Np->Num->flag &= ~NUM_F_PLUS;\n\nhttps://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n\nI'm inclined to turn the dead unsets into asserts.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Jan 2023 12:15:24 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:15:24PM +0700, John Naylor wrote:\n> On Fri, Dec 23, 2022 at 8:08 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > Makes sense now (in your first message, you said that the problem was\n> > with \"sign\", and the patch didn't address the actual problem in\n> > IS_PLUS()).\n> >\n> > One can look and find that the unreachable code was introduced at\n> > 7a3e7b64a.\n> >\n> > With your proposed change, the unreachable line is hit by regression\n> > tests, which is an improvment. As is the change to pg_dump.c.\n> \n> But that now reachable line just unsets a flag that we previously found\n> unset, right?\n\nGood point.\n\n> And if that line was unreachable, then surely the previous flag-clearing\n> operation is too?\n> \n> 5669 994426 : if (IS_MINUS(Np->Num)) // <- also always\n> false\n> 5670 0 : Np->Num->flag &= ~NUM_F_MINUS;\n> 5671 : }\n> 5672 524 : else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> 5673 0 : Np->Num->flag &= ~NUM_F_PLUS;\n> \n> https://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n> \n> I'm inclined to turn the dead unsets into asserts.\n\nTo be clear - did you mean like this ?\n\ndiff --git a/src/backend/utils/adt/formatting.c b/src/backend/utils/adt/formatting.c\nindex a4b524ea3ac..848956879f5 100644\n--- a/src/backend/utils/adt/formatting.c\n+++ b/src/backend/utils/adt/formatting.c\n@@ -5662,15 +5662,13 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char *inout,\n \t\t}\n \t\telse\n \t\t{\n+\t\t\tAssert(!IS_MINUS(Np->Num));\n+\t\t\tAssert(!IS_PLUS(Np->Num));\n \t\t\tif (Np->sign != '-')\n \t\t\t{\n \t\t\t\tif (IS_BRACKET(Np->Num) && IS_FILLMODE(Np->Num))\n \t\t\t\t\tNp->Num->flag &= ~NUM_F_BRACKET;\n-\t\t\t\tif (IS_MINUS(Np->Num))\n-\t\t\t\t\tNp->Num->flag &= ~NUM_F_MINUS;\n \t\t\t}\n-\t\t\telse if (Np->sign != '+' && IS_PLUS(Np->Num))\n-\t\t\t\tNp->Num->flag &= ~NUM_F_PLUS;\n \n \t\t\tif (Np->sign == '+' && IS_FILLMODE(Np->Num) && IS_LSIGN(Np->Num) == false)\n 
\t\t\t\tNp->sign_wrote = true;\t/* needn't sign */\n\n\n",
"msg_date": "Wed, 11 Jan 2023 23:34:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:34 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Jan 12, 2023 at 12:15:24PM +0700, John Naylor wrote:\n> > On Fri, Dec 23, 2022 at 8:08 AM Justin Pryzby <pryzby@telsasoft.com>\nwrote:\n> >\n> > > Makes sense now (in your first message, you said that the problem was\n> > > with \"sign\", and the patch didn't address the actual problem in\n> > > IS_PLUS()).\n> > >\n> > > One can look and find that the unreachable code was introduced at\n> > > 7a3e7b64a.\n> > >\n> > > With your proposed change, the unreachable line is hit by regression\n> > > tests, which is an improvment. As is the change to pg_dump.c.\n> >\n> > But that now reachable line just unsets a flag that we previously found\n> > unset, right?\n>\n> Good point.\n>\n> > And if that line was unreachable, then surely the previous flag-clearing\n> > operation is too?\n> >\n> > 5669 994426 : if (IS_MINUS(Np->Num)) // <- also\nalways\n> > false\n> > 5670 0 : Np->Num->flag &= ~NUM_F_MINUS;\n> > 5671 : }\n> > 5672 524 : else if (Np->sign != '+' &&\nIS_PLUS(Np->Num))\n> > 5673 0 : Np->Num->flag &= ~NUM_F_PLUS;\n> >\n> >\nhttps://coverage.postgresql.org/src/backend/utils/adt/formatting.c.gcov.html\n> >\n> > I'm inclined to turn the dead unsets into asserts.\n>\n> To be clear - did you mean like this ?\n>\n> diff --git a/src/backend/utils/adt/formatting.c\nb/src/backend/utils/adt/formatting.c\n> index a4b524ea3ac..848956879f5 100644\n> --- a/src/backend/utils/adt/formatting.c\n> +++ b/src/backend/utils/adt/formatting.c\n> @@ -5662,15 +5662,13 @@ NUM_processor(FormatNode *node, NUMDesc *Num,\nchar *inout,\n> }\n> else\n> {\n> + Assert(!IS_MINUS(Np->Num));\n> + Assert(!IS_PLUS(Np->Num));\n> if (Np->sign != '-')\n> {\n> if (IS_BRACKET(Np->Num) &&\nIS_FILLMODE(Np->Num))\n> Np->Num->flag &= ~NUM_F_BRACKET;\n> - if (IS_MINUS(Np->Num))\n> - Np->Num->flag &= ~NUM_F_MINUS;\n> }\n> - else if (Np->sign != '+' && IS_PLUS(Np->Num))\n> - Np->Num->flag &= ~NUM_F_PLUS;\n>\n> if \n(Np->sign == '+' && IS_FILLMODE(Np->Num) &&\nIS_LSIGN(Np->Num) == false)\n> Np->sign_wrote = true; /* needn't sign */\n\nI was originally thinking of something more localized:\n\n--- a/src/backend/utils/adt/formatting.c\n+++ b/src/backend/utils/adt/formatting.c\n@@ -5666,11 +5666,10 @@ NUM_processor(FormatNode *node, NUMDesc *Num, char\n*inout,\n {\n if (IS_BRACKET(Np->Num) && IS_FILLMODE(Np->Num))\n Np->Num->flag &= ~NUM_F_BRACKET;\n- if (IS_MINUS(Np->Num))\n- Np->Num->flag &= ~NUM_F_MINUS;\n+ Assert(!IS_MINUS(Np->Num));\n }\n- else if (Np->sign != '+' && IS_PLUS(Np->Num))\n- Np->Num->flag &= ~NUM_F_PLUS;\n+ else if (Np->sign != '+')\n+ Assert(!IS_PLUS(Np->Num));\n\n...but arguably the earlier check is close enough that it's silly to assert\nin the \"else\" branch, and I'd be okay with just nuking those lines. Another\nthing that caught my attention is the assumption that unsetting a bit is so\nexpensive that we have to first check if it's set, so we may as well remove\n\"IS_BRACKET(Np->Num)\" as well.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 12 Jan 2023 13:44:00 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "I wrote:\n> ...but arguably the earlier check is close enough that it's silly to\nassert in the \"else\" branch, and I'd be okay with just nuking those lines.\nAnother thing that caught my attention is the assumption that unsetting a\nbit is so expensive that we have to first check if it's set, so we may as\nwell remove \"IS_BRACKET(Np->Num)\" as well.\n\nThe attached is what I mean. I'll commit this this week unless there are\nobjections.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 16 Jan 2023 13:28:02 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "Em seg., 16 de jan. de 2023 às 03:28, John Naylor <\njohn.naylor@enterprisedb.com> escreveu:\n\n>\n> I wrote:\n> > ...but arguably the earlier check is close enough that it's silly to\n> assert in the \"else\" branch, and I'd be okay with just nuking those lines.\n> Another thing that caught my attention is the assumption that unsetting a\n> bit is so expensive that we have to first check if it's set, so we may as\n> well remove \"IS_BRACKET(Np->Num)\" as well.\n>\n> The attached is what I mean. I'll commit this this week unless there are\n> objections.\n>\n+1 looks good to me.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Jan 2023 08:20:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 1:28 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n>\n> I wrote:\n> > ...but arguably the earlier check is close enough that it's silly to\nassert in the \"else\" branch, and I'd be okay with just nuking those lines.\nAnother thing that caught my attention is the assumption that unsetting a\nbit is so expensive that we have to first check if it's set, so we may as\nwell remove \"IS_BRACKET(Np->Num)\" as well.\n>\n> The attached is what I mean. I'll commit this this week unless there are\nobjections.\n\nI've pushed this and the cosmetic fix in pg_dump. Those were the only\nthings I saw that had some interest, so I closed the CF entry.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 17 Jan 2023 14:36:56 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
},
{
"msg_contents": "Em ter., 17 de jan. de 2023 às 04:37, John Naylor <\njohn.naylor@enterprisedb.com> escreveu:\n\n>\n> On Mon, Jan 16, 2023 at 1:28 PM John Naylor <john.naylor@enterprisedb.com>\n> wrote:\n> >\n> >\n> > I wrote:\n> > > ...but arguably the earlier check is close enough that it's silly to\n> assert in the \"else\" branch, and I'd be okay with just nuking those lines.\n> Another thing that caught my attention is the assumption that unsetting a\n> bit is so expensive that we have to first check if it's set, so we may as\n> well remove \"IS_BRACKET(Np->Num)\" as well.\n> >\n> > The attached is what I mean. I'll commit this this week unless there are\n> objections.\n>\n> I've pushed this and the cosmetic fix in pg_dump. Those were the only\n> things I saw that had some interest, so I closed the CF entry.\n>\nThanks for the commits.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 17 Jan 2023 08:26:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small miscellaneus fixes (Part II)"
}
] |
[
{
"msg_contents": "For various reasons (see below) it's preferable to build on Windows with\nStrawberry Perl. This works OK if you're building with Msys2, I upgraded\nthe instance on the machine that runs fairywren and drongo today, and\nfairywren seems fine. Not so drongo, however. First it encountered a\nbuild error, which I attempted to cure using something I found in the\nvim sources [1]. That worked, but then there's a failure at runtime like\nthis:\n\n2022-11-25 21:33:01.073 UTC [3764:3] pg_regress LOG: statement: CREATE EXTENSION IF NOT EXISTS \"plperl\"\nsrc/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\n2022-11-25 21:33:01.100 UTC [5048:5] LOG: server process (PID 3764) exited with exit code 1\n\nI don't know how to debug this. I'm not sure if there's some flag we\ncould set which would cure it. It looks like a 1 bit difference, but I\nhaven't found what that bit corresponds to.\n\nThat leaves ActiveState Perl as the alternative. The chocolatey package\nis no longer maintained, and the last maintained package is no longer\ninstallable. The maintainer says [2]:\n\n To everybody having problems with the ActivePerl installer. I'm\n sorry, but I'm not maintaining the package anymore and the reason\n for this is as follows. The Chocolatey moderation team requires,\n that the download URL and checksum for the actual binary package are\n static (for security reasons). However, nowadays ActiveState\n provides the community edition download as a weekly build only. This\n means that the checksum of the URL changes every week. (The\n Chocolatey moderation team proposed that we would setup automation\n that would update the version of the Chocolatey package weekly too.\n While this would kind-of work, it would still mean that only the\n latest version ever would work and every time when the version would\n update there would be a short but annoying time window when even the\n latest package would be broken. I think this is not the way to go.)\n Thus I contacted ActiveState and asked for their support. They were\n very friendly and promised to take over the maintenance of the\n Chocolatey installer themselves and fix the problem. In my opinion\n this is really the best solution and I hope that it will get fixed\n soon by the new maintainers. So, for any further questions, it is\n probably best to contact the ActiveState support directly.\n\nHowever, nothing has actually been done along these lines AFAICT.\n\nI could download the installer from ActiveState, but they want me to\nsign up for an account before I do that, which I'm not happy about. And\nwhile I think our use probably comes within their license terms, IANAL\nand I'm not dead sure, whereas Strawberry is licensed under the usual\nperl license terms.\n\nThe upshot of this is that I have disabled building drongo with perl for\nnow. There will possibly be some fallout with cross version upgrades,\nwhich I will try to prevent.\n\n\ncheers\n\n\nandrew\n\n\n[1]\nhttps://git.postgresql.org/pg/commitdiff/171c7fffaa4a3f2b000f980ecb33c2f7441a9a03\n\n[2] https://community.chocolatey.org/packages/ActivePerl#comment-5484577151\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 18:48:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "MSVC vs Perl"
},
{
"msg_contents": "\nOn 2022-11-25 Fr 18:48, Andrew Dunstan wrote:\n> For various reasons (see below) it's preferable to build on Windows with\n> Strawberry Perl. This works OK if you're building with Msys2, I upgraded\n> the instance on the machine that runs fairywren and drongo today, and\n> fairywren seems fine. Not so drongo, however. First it encountered a\n> build error, which I attempted to cure using something I found in the\n> vim sources [1]. That worked, but then there's a failure at runtime like\n> this:\n>\n> 2022-11-25 21:33:01.073 UTC [3764:3] pg_regress LOG: statement: CREATE EXTENSION IF NOT EXISTS \"plperl\"\n> src/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\n> 2022-11-25 21:33:01.100 UTC [5048:5] LOG: server process (PID 3764) exited with exit code 1\n>\n> I don't know how to debug this. I'm not sure if there's some flag we\n> could set which would cure it. It looks like a 1 bit difference, but I\n> haven't found what that bit corresponds to.\n\n\n\nI just saw Andres's post on -committers about this. Will see if that\nhelps me.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 18:52:38 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: MSVC vs Perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-25 18:48:26 -0500, Andrew Dunstan wrote:\n> I could download the installer from ActiveState, but they want me to\n> sign up for an account before I do that, which I'm not happy about. And\n> while I think our use probably comes within their license terms, IANAL\n> and I'm not dead sure, whereas Strawberry is licensed under the usual\n> perl license terms.\n\nFWIW, my impression is that strawberry perl is of uh, dubious quality. It's\ndefinitely rarely updated. I think going with msys' ucrt perl might be the\nbest choice.\n\nmsys ucrt perl IIRC has the same issues mentioned in [1], but otherwise\nworked.\n\nI wonder if we ought to add a script that installs most of the windows build\ndependencies from msys. I think there's a few where we can't use msys built\nversions (due to too much gcc specific stuff ending up in headers), but IIRC\nmost things were usable. It's just too much work to have everyone do this\nstuff manually.\n\nRegards,\n\nAndres\n\n[1] https://www.postgresql.org/message-id/20220130221659.tlyr2lbw3wk22owg%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Nov 2022 16:43:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: MSVC vs Perl"
},
{
"msg_contents": "\nOn 2022-11-25 Fr 18:52, Andrew Dunstan wrote:\n> On 2022-11-25 Fr 18:48, Andrew Dunstan wrote:\n>> For various reasons (see below) it's preferable to build on Windows with\n>> Strawberry Perl. This works OK if you're building with Msys2, I upgraded\n>> the instance on the machine that runs fairywren and drongo today, and\n>> fairywren seems fine. Not so drongo, however. First it encountered a\n>> build error, which I attempted to cure using something I found in the\n>> vim sources [1]. That worked, but then there's a failure at runtime like\n>> this:\n>>\n>> 2022-11-25 21:33:01.073 UTC [3764:3] pg_regress LOG: statement: CREATE EXTENSION IF NOT EXISTS \"plperl\"\n>> src/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\n>> 2022-11-25 21:33:01.100 UTC [5048:5] LOG: server process (PID 3764) exited with exit code 1\n>>\n>> I don't know how to debug this. I'm not sure if there's some flag we\n>> could set which would cure it. It looks like a 1 bit difference, but I\n>> haven't found what that bit corresponds to.\n\n\nOK, so this cures the problem for drongo:\n\n\ndiff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\nindex 83a3e40425..dc6b94b74f 100644\n--- a/src/tools/msvc/Mkvcbuild.pm\n+++ b/src/tools/msvc/Mkvcbuild.pm\n@@ -707,6 +707,7 @@ sub mkvcbuild\n print \"CFLAGS recommended by Perl: $Config{ccflags}\\n\";\n print \"CFLAGS to compile embedded Perl: \",\n (join ' ', map { \"-D$_\" } @perl_embed_ccflags), \"\\n\";\n+ push @perl_embed_ccflags,'NO_THREAD_SAFE_LOCALE';\n foreach my $f (@perl_embed_ccflags)\n {\n $plperl->AddDefine($f);\n\n\n\nI'll see if it also works for bowerbird.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 26 Nov 2022 09:43:19 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: MSVC vs Perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-26 09:43:19 -0500, Andrew Dunstan wrote:\n> OK, so this cures the problem for drongo:\n> \n> \n> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n> index 83a3e40425..dc6b94b74f 100644\n> --- a/src/tools/msvc/Mkvcbuild.pm\n> +++ b/src/tools/msvc/Mkvcbuild.pm\n> @@ -707,6 +707,7 @@ sub mkvcbuild\n> ��������������� print \"CFLAGS recommended by Perl: $Config{ccflags}\\n\";\n> ��������������� print \"CFLAGS to compile embedded Perl: \",\n> ����������������� (join ' ', map { \"-D$_\" } @perl_embed_ccflags), \"\\n\";\n> +�������������� push @perl_embed_ccflags,'NO_THREAD_SAFE_LOCALE';\n> ��������������� foreach my $f (@perl_embed_ccflags)\n> ��������������� {\n> ����������������������� $plperl->AddDefine($f);\n\nThis likely is just a test patch, in case it is not, it seems we should add\nNO_THREAD_SAFE_LOCALE to @perl_embed_ccflags before printing it.\n\n\nDo we need a \"configure\" check for this? I guess it's ok to define this\nwhenever building with msvc - I don't currently see a scenario where it could\nhurt. We already define flags unconditionally, c.f. PLPERL_HAVE_UID_GID.\n\nGiven how fragile the embedding is (we've had several prior iterations of\nproblems around this), I think it'd be good to test that the current flags\navoid the \"got handshake key\" at configure time, rather than having to debug\nruntime failures.\n\nAs noted by Noah in [1], the Mkvcbuild.pm actually has code to do so - but\nonly does for 32bit builds.\n\nI don't think it's worth generalizing this for src/tools/msvc at this point,\nbut it might be worth copying the test to meson and running the binary (except\nwhen cross building, of course).\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://postgr.es/m/20220130231432.GA2658915%40rfd.leadboat.com\n\n\n",
"msg_date": "Sat, 26 Nov 2022 13:05:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: MSVC vs Perl"
},
{
"msg_contents": "\nOn 2022-11-26 Sa 16:05, Andres Freund wrote:\n> Hi,\n>\n> On 2022-11-26 09:43:19 -0500, Andrew Dunstan wrote:\n>> OK, so this cures the problem for drongo:\n>>\n>>\n>> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n>> index 83a3e40425..dc6b94b74f 100644\n>> --- a/src/tools/msvc/Mkvcbuild.pm\n>> +++ b/src/tools/msvc/Mkvcbuild.pm\n>> @@ -707,6 +707,7 @@ sub mkvcbuild\n>>                 print \"CFLAGS recommended by Perl: $Config{ccflags}\\n\";\n>>                 print \"CFLAGS to compile embedded Perl: \",\n>>                   (join ' ', map { \"-D$_\" } @perl_embed_ccflags), \"\\n\";\n>> +               push @perl_embed_ccflags,'NO_THREAD_SAFE_LOCALE';\n>>                 foreach my $f (@perl_embed_ccflags)\n>>                 {\n>>                         $plperl->AddDefine($f);\n> This likely is just a test patch, in case it is not, it seems we should add\n> NO_THREAD_SAFE_LOCALE to @perl_embed_ccflags before printing it.\n\nSure\n\n> Do we need a \"configure\" check for this? I guess it's ok to define this\n> whenever building with msvc - I don't currently see a scenario where it could\n> hurt. We already define flags unconditionally, c.f. PLPERL_HAVE_UID_GID.\n>\n> Given how fragile the embedding is (we've had several prior iterations of\n> problems around this), I think it'd be good to test that the current flags\n> avoid the \"got handshake key\" at configure time, rather than having to debug\n> runtime failures.\n>\n> As noted by Noah in [1], the Mkvcbuild.pm actually has code to do so - but\n> only does for 32bit builds.\n>\n> I don't think it's worth generalizing this for src/tools/msvc at this point,\n> but it might be worth copying the test to meson and running the binary (except\n> when cross building, of course).\n\n\n\nYeah, given that we are planning on ditching this build system as soon\nas we can I'm not inclined to do anything very heroic.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 26 Nov 2022 16:25:29 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: MSVC vs Perl"
},
{
"msg_contents": "\nOn 2022-11-26 Sa 16:25, Andrew Dunstan wrote:\n> On 2022-11-26 Sa 16:05, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-11-26 09:43:19 -0500, Andrew Dunstan wrote:\n>>> OK, so this cures the problem for drongo:\n>>>\n>>>\n>>> diff --git a/src/tools/msvc/Mkvcbuild.pm b/src/tools/msvc/Mkvcbuild.pm\n>>> index 83a3e40425..dc6b94b74f 100644\n>>> --- a/src/tools/msvc/Mkvcbuild.pm\n>>> +++ b/src/tools/msvc/Mkvcbuild.pm\n>>> @@ -707,6 +707,7 @@ sub mkvcbuild\n>>> print \"CFLAGS recommended by Perl: $Config{ccflags}\\n\";\n>>> print \"CFLAGS to compile embedded Perl: \",\n>>> (join ' ', map { \"-D$_\" } @perl_embed_ccflags), \"\\n\";\n>>> + push @perl_embed_ccflags,'NO_THREAD_SAFE_LOCALE';\n>>> foreach my $f (@perl_embed_ccflags)\n>>> {\n>>> $plperl->AddDefine($f);\n>> This likely is just a test patch, in case it is not, it seems we should add\n>> NO_THREAD_SAFE_LOCALE to @perl_embed_ccflags before printing it.\n> Sure\n\n\n\nOK, pushed something like that, after testing that it didn't break my\nremaining ActiveState instance.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 27 Nov 2022 09:38:17 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: MSVC vs Perl"
}
] |
[
{
"msg_contents": "[ resending to -hackers because of list issues ]\n\nOn 2022-05-28 Sa 11:37, Justin Pryzby wrote:\n> I'm \"joining\" a bunch of unresolved threads hoping to present them better since\n> they're related and I'm maintaining them together anyway.\n>\n> https://www.postgresql.org/message-id/flat/20220219234148.GC9008%40telsasoft.com\n> - set TESTDIR from perl rather than Makefile\n\n\nI looked at the latest set here, patch 1 still doesn't look right, I\nthink vc_regress.pl should be setting PG_SUBDIR like the Makefile.global\ndoes.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 26 Nov 2022 10:21:49 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: CI and test improvements"
}
] |
[
{
"msg_contents": "In the thread about TAP format out in pg_regress, Andres pointed out [0] that\nwe allow a test to pass even if the test child process failed. While its\nprobably pretty rare to have a test pass if the process failed, this brings a\nrisk for false positives (and it seems questionable that any regress test will\nhave a child process failing as part of its intended run).\n\nThe attached makes child failures an error condition for the test as a belts\nand suspenders type check. Thoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/20221122235636.4frx7hjterq6bmls@awork3.anarazel.de",
"msg_date": "Sat, 26 Nov 2022 21:11:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-26 21:11:39 +0100, Daniel Gustafsson wrote:\n> In the thread about TAP format out in pg_regress, Andres pointed out [0] that\n> we allow a test to pass even if the test child process failed. While its\n> probably pretty rare to have a test pass if the process failed, this brings a\n> risk for false positives (and it seems questionable that any regress test will\n> have a child process failing as part of its intended run).\n\n> The attached makes child failures an error condition for the test as a belts\n> and suspenders type check. Thoughts?\n\nI wonder if it's the right thing to treat a failed psql that's then also\nignored as \"failed (ignored)\". Perhaps it'd be better to move the statuses[i]\n!= 0 check to before the if (differ)?\n\n\n> -\t\t\tif (differ)\n> +\t\t\tif (differ || statuses[i] != 0)\n> \t\t\t{\n> \t\t\t\tbool\t\tignore = false;\n> \t\t\t\t_stringlist *sl;\n> @@ -1815,7 +1815,7 @@ run_single_test(const char *test, test_start_function startfunc,\n> \t\tdiffer |= newdiff;\n> \t}\n> \n> -\tif (differ)\n> +\tif (differ || exit_status != 0)\n> \t{\n> \t\tstatus(_(\"FAILED\"));\n> \t\tfail_count++;\n\nIt certainly is a bit confusing that we print a psql failure separately from\nthe if \"FAILED\" vs \"ok\" bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Nov 2022 12:55:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "> On 26 Nov 2022, at 21:55, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-26 21:11:39 +0100, Daniel Gustafsson wrote:\n\n>> The attached makes child failures an error condition for the test as a belts\n>> and suspenders type check. Thoughts?\n> \n> I wonder if it's the right thing to treat a failed psql that's then also\n> ignored as \"failed (ignored)\". Perhaps it'd be better to move the statuses[i]\n> != 0 check to before the if (differ)?\n\nI was thinking about that too, but I think you're right. The \"ignore\" part is\nabout the test content and not the test run structure.\n\n> It certainly is a bit confusing that we print a psql failure separately from\n> the if \"FAILED\" vs \"ok\" bit.\n\nI've moved the statuses[i] check before the differ check, such that there is a\nseparate block for this not mixed up with the differs check and printing. It\ndoes duplicate things a little bit but also makes it a lot clearer.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Sat, 26 Nov 2022 22:46:24 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "> On 26 Nov 2022, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> I've moved the statuses[i] check before the differ check, such that there is a\n> separate block for this not mixed up with the differs check and printing.\n\nRebased patch to handle breakage of v2 due to bd8d453e9b.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 22 Feb 2023 15:10:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 15:10:11 +0100, Daniel Gustafsson wrote:\n> > On 26 Nov 2022, at 22:46, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> > I've moved the statuses[i] check before the differ check, such that there is a\n> > separate block for this not mixed up with the differs check and printing.\n> \n> Rebased patch to handle breakage of v2 due to bd8d453e9b.\n\nI think we probably should just apply this? The current behaviour doesn't seem\nright, and I don't see a downside of the new behaviour?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:33:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "> On 22 Feb 2023, at 21:33, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-22 15:10:11 +0100, Daniel Gustafsson wrote:\n\n>> Rebased patch to handle breakage of v2 due to bd8d453e9b.\n> \n> I think we probably should just apply this? The current behaviour doesn't seem\n> right, and I don't see a downside of the new behaviour?\n\nAgreed, I can't think of a regression test where we wouldn't want this. My\nonly concern was if any of the ECPG tests were doing something odd that would\nbreak from this but I can't see anything.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 21:42:21 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 22 Feb 2023, at 21:33, Andres Freund <andres@anarazel.de> wrote:\n>> On 2023-02-22 15:10:11 +0100, Daniel Gustafsson wrote:\n>>> Rebased patch to handle breakage of v2 due to bd8d453e9b.\n\n>> I think we probably should just apply this? The current behaviour doesn't seem\n>> right, and I don't see a downside of the new behaviour?\n\n> Agreed, I can't think of a regression test where we wouldn't want this. My\n> only concern was if any of the ECPG tests were doing something odd that would\n> break from this but I can't see anything.\n\n+1. I was a bit surprised to realize that we might not count such\na case as a failure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 15:55:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
},
{
"msg_contents": "> On 22 Feb 2023, at 21:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 22 Feb 2023, at 21:33, Andres Freund <andres@anarazel.de> wrote:\n>>> On 2023-02-22 15:10:11 +0100, Daniel Gustafsson wrote:\n>>>> Rebased patch to handle breakage of v2 due to bd8d453e9b.\n> \n>>> I think we probably should just apply this? The current behaviour doesn't seem\n>>> right, and I don't see a downside of the new behaviour?\n> \n>> Agreed, I can't think of a regression test where we wouldn't want this. My\n>> only concern was if any of the ECPG tests were doing something odd that would\n>> break from this but I can't see anything.\n> \n> +1. I was a bit surprised to realize that we might not count such\n> a case as a failure.\n\nDone that way, thanks!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 09:53:57 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pg_regress: Treat child process failure as test failure"
}
] |
[
{
"msg_contents": "Hi\n\nApologies for the intermission in activity, among other \"fun\" things had\na family visitation of the influenza virus, which is becoming fashionable\nin these parts again.\n\nIn an attempt to salvage something from the situation I am having a crack at the\nolder entries marked \"Ready for Committer\", some of which probably\nneed some sort of\naction, but it's not always clear (to me at least) what.\n\nIf there's some sort of consensus for individual items, per previous\npractice I'll update the individual threads.\n\nDate in parenthesis is that of the most recent message on the thread.\n\npg_stat_statements: Track statement entry timestamp (2022-04-08)\n----------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3048/\n- https://www.postgresql.org/message-id/flat/72e80e7b160a6eb189df9ef6f068cce3765d37f8.camel@moonset.ru\n\nPatch is applying but no recent activity.\n\npsql - refactor echo code (2022-07-24)\n--------------------------------------\n\n- https://commitfest.postgresql.org/40/3140/\n- https://www.postgresql.org/message-id/flat/alpine.DEB.2.22.394.2105301104400.3020016@pseudo\n\nSeems like a small patch which can be applied easily.\n\nAdd Amcheck option for checking unique constraints in btree indexes (2022-09-28)\n--------------------------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3464/\n- https://www.postgresql.org/message-id/flat/CALT9ZEHRn5xAM5boga0qnrCmPV52bScEK2QnQ1HmUZDD301JEg@mail.gmail.com\n\nSeems to be consensus it is actually RfC, but needs a committer to\nshow interest.\n\n\nmeson PGXS compatibility (2022-10-13)\n-------------------------------------\n\n- https://commitfest.postgresql.org/40/3932/\n- https://www.postgresql.org/message-id/flat/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de\n\nSeems to be RfC.\n\n\npg_receivewal fail to streams when the partial file to write [...] 
(2022-10-13)\n-------------------------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3503/\n- https://www.postgresql.org/message-id/flat/CAHg+QDcVUss6ocOmbLbV5f4YeGLhOCt+1x2yLNfG2H_eDwU8tw@mail.gmail.com\n\nThe author of the latest patch (not the original patch author) indicates this\nneeds further review; should the status be changed? Setting it back to \"Needs\nreview\" seems the obvious thing to do, but it feels like it would put it back in\nthe pool of possibly unreviewe entries (maybe we need a \"Needs futher review\"\n\n\nUpdate relfrozenxmin when truncating temp tables (2022-11-05)\n------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3358/\n- https://www.postgresql.org/message-id/flat/CAM-w4HNNBDeXiXrj0B+_-WvP5NZ6of0RLueqFUZfyqbLcbEfMA@mail.gmail.com\n\nI'll set this one to \"Waiting on Author\" based on Tom's latest feedback.\n\n\nFaster pglz compression (2021-11-17)\n------------------------------------\n\n- https://commitfest.postgresql.org/40/2897/\n- https://www.postgresql.org/message-id/flat/FEF3DC5E-4BC4-44E1-8DEB-DADC67046FE3@yandex-team.ru\n\nThis one, prior to my reminder, has a committer promising to commit but was\ninactive for over a year.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sun, 27 Nov 2022 09:43:47 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "CF 2022-11: entries \"Waiting for Committer\" but no recent activity"
},
{
"msg_contents": "\nOn 11/27/22 01:43, Ian Lawrence Barwick wrote:\n> ...\n> Faster pglz compression (2021-11-17)\n> ------------------------------------\n> \n> - https://commitfest.postgresql.org/40/2897/\n> - https://www.postgresql.org/message-id/flat/FEF3DC5E-4BC4-44E1-8DEB-DADC67046FE3@yandex-team.ru\n> \n> This one, prior to my reminder, has a committer promising to commit but was\n> inactive for over a year.\n> \n\nUgh, I see that slacker is me, so I'll get this committed (unless\nsomeone else wants to).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 27 Nov 2022 01:49:02 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CF 2022-11: entries \"Waiting for Committer\" but no recent\n activity"
}
] |
[
{
"msg_contents": "I got confused about how we were managing EquivalenceClass pointers\nin the copy/equal infrastructure, and it took me awhile to remember\nthat the reason it works is that gen_node_support.pl has hard-wired\nknowledge about that. I think that's something we'd be best off\ndropping in favor of explicit annotations on affected fields.\nHence, I propose the attached. This results in zero change in\nthe generated copy/equal code.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 26 Nov 2022 20:39:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Removing another gen_node_support.pl special case"
},
{
"msg_contents": "On 27.11.22 02:39, Tom Lane wrote:\n> I got confused about how we were managing EquivalenceClass pointers\n> in the copy/equal infrastructure, and it took me awhile to remember\n> that the reason it works is that gen_node_support.pl has hard-wired\n> knowledge about that. I think that's something we'd be best off\n> dropping in favor of explicit annotations on affected fields.\n> Hence, I propose the attached. This results in zero change in\n> the generated copy/equal code.\n\nI suppose the question is whether this behavior is something that is a \nproperty of the EquivalenceClass type as such or something that is \nspecific to each individual field.\n\n\n\n",
"msg_date": "Mon, 28 Nov 2022 17:25:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing another gen_node_support.pl special case"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 27.11.22 02:39, Tom Lane wrote:\n>> I got confused about how we were managing EquivalenceClass pointers\n>> in the copy/equal infrastructure, and it took me awhile to remember\n>> that the reason it works is that gen_node_support.pl has hard-wired\n>> knowledge about that. I think that's something we'd be best off\n>> dropping in favor of explicit annotations on affected fields.\n>> Hence, I propose the attached. This results in zero change in\n>> the generated copy/equal code.\n\n> I suppose the question is whether this behavior is something that is a \n> property of the EquivalenceClass type as such or something that is \n> specific to each individual field.\n\nThat's an interesting point, but what I'm on about is that I don't want\nthe behavior buried in gen_node_support.pl.\n\nI think there's a reasonable argument to be made that equal_as_scalar\n*is* a field-level property not a node-level property. I agree that\nfor the copy case you could argue it differently, and I also agree\nthat it seems error-prone to have to remember to label fields this way.\n\nI notice that EquivalenceClass is already marked as no_copy_equal,\nwhich means that gen_node_support.pl can know that emitting a\nrecursive node-copy or node-compare request is a bad idea. What\ndo you think of using the patch as it stands, plus a cross-check\nthat we don't emit COPY_NODE_FIELD or COMPARE_NODE_FIELD if the\ntarget node type is no_copy or no_equal? This is different from\njust silently applying scalar copy/equal, in that (a) it's visibly\nunder the programmer's control, and (b) it's not hard to imagine\nwanting to use other solutions such as copy_as(NULL).\n\n(More generally, I suspect that there are other useful cross-checks\ngen_node_support.pl could be making. I had a to-do item to think\nabout that, but it didn't get to the top of the list yet.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 11:39:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing another gen_node_support.pl special case"
},
{
"msg_contents": "I wrote:\n> I notice that EquivalenceClass is already marked as no_copy_equal,\n> which means that gen_node_support.pl can know that emitting a\n> recursive node-copy or node-compare request is a bad idea. What\n> do you think of using the patch as it stands, plus a cross-check\n> that we don't emit COPY_NODE_FIELD or COMPARE_NODE_FIELD if the\n> target node type is no_copy or no_equal?\n\nConcretely, it seems like something like the attached could be\nuseful, independently of the other change.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 29 Nov 2022 16:34:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing another gen_node_support.pl special case"
},
{
"msg_contents": "On 29.11.22 22:34, Tom Lane wrote:\n> I wrote:\n>> I notice that EquivalenceClass is already marked as no_copy_equal,\n>> which means that gen_node_support.pl can know that emitting a\n>> recursive node-copy or node-compare request is a bad idea. What\n>> do you think of using the patch as it stands, plus a cross-check\n>> that we don't emit COPY_NODE_FIELD or COMPARE_NODE_FIELD if the\n>> target node type is no_copy or no_equal?\n> \n> Concretely, it seems like something like the attached could be\n> useful, independently of the other change.\n\nYes, right now you can easily declare things that don't make sense. \nCross-checks like these look useful.\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 14:00:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing another gen_node_support.pl special case"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 29.11.22 22:34, Tom Lane wrote:\n>> Concretely, it seems like something like the attached could be\n>> useful, independently of the other change.\n\n> Yes, right now you can easily declare things that don't make sense. \n> Cross-checks like these look useful.\n\nChecking my notes from awhile back, there was one other cross-check\nthat I thought was pretty high-priority: verifying that array_size\nfields precede their array fields. Without that, a read function\nwill fail entirely, and a compare function might index off the\nend of an array depending on which array-size field it chooses\nto believe. It seems like an easy mistake to make, too.\n\nI added that and pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 15:25:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing another gen_node_support.pl special case"
}
] |
[
{
"msg_contents": "Hi\n\nThis is an overview of the current patches marked \"Ready for Committer\"\nwhich have been actively worked on during the current CF.\n\nThey largely fall into two categories:\n- active participation from likely committers\n- have been reviewed and marked as \"RfC\", but need committer interest\n\nDates in parentheses represent the last mail on the relevant thread.\n\nmeson PGXS compatibility (2022-10-13)\n-------------------------------------\n\n- https://commitfest.postgresql.org/40/3932/\n- https://www.postgresql.org/message-id/flat/20221005200710.luvw5evhwf6clig6@awork3.anarazel.de\n\nSounds like it is committable, presumably just needs one of the committers on\nthe thread to do it.\n\n\nLet libpq reject unexpected authentication requests (2022-11-16)\n----------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3716/\n- https://www.postgresql.org/message-id/flat/9e5a8ccddb8355ea9fa4b75a1e3a9edc88a70cd3.camel@vmware.com\n- Named commtter: Michael Paquier (michael-kun)\n\nPatch refreshed per Michael's comments.\n\n\nAdd LSN to error messages reported for WAL file (2022-11-17)\n------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3909/\n- https://www.postgresql.org/message-id/flat/CALj2ACWV=FCddsxcGbVOA=cvPyMr75YCFbSQT6g4KDj=gcJK4g@mail.gmail.com\n\nThread consensus is that it is RfC, but needes interest from a committer.\n\nParallel Hash Full Join (2022-11-17)\n------------------------------------\n\n- https://commitfest.postgresql.org/40/2903/\n- https://www.postgresql.org/message-id/flat/CA+hUKG+A6ftXPz4oe92+x8Er+xpGZqto70-Q_ERwRaSyA=afNg@mail.gmail.com\n- Named commtter: Thomas Munro (macdice)\n\nThomas has indicated he will look at this\n\n\nFix assertion failure with barriers in parallel hash join (2022-11-18)\n----------------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3662/\n- 
https://www.postgresql.org/message-id/flat/20200929061142.GA29096@paquier.xyz\n\nThomas has indicated he will look at this\n\n\nXID formatting and SLRU refactorings ... (2022-11-21)\n-----------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3489/\n- https://www.postgresql.org/message-id/flat/CAJ7c6TPDOYBYrnCAeyndkBktO0WG2xSdYduTF0nxq+vfkmTF5Q@mail.gmail.com\n\nPatch has been updated several times very recently; needs interest\nfrom a committer\n\n\nReducing power consumption when idle (2022-11-21)\n-------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3566/\n- https://www.postgresql.org/message-id/flat/CANbhV-HK8yvO_g4vwbmz__vZu_VZ4_jJWsTwmaNMXieTdzQCzQ@mail.gmail.com\n\nThis is mainly awaiting resolution of the decision whether to\ndeprecate or remove\n\"promote_trigger_file\"; seems consensus is towards \"remove\".\n\n\npg_dump - read data for some options from external file (2022-11-22)\n--------------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/2573/\n- https://www.postgresql.org/message-id/flat/CAFj8pRB10wvW0CC9Xq=1XDs=zCQxer3cbLcNZa+qiX4cUH-G_A@mail.gmail.com\n\nUpdated patch, needs interest from committer. 
This one has been\nfloating around for a couple of years...\n\n\nSupport % wildcard in extension upgrade scripts (2022-11-22)\n------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3654/\n- https://www.postgresql.org/message-id/flat/YgakFklJyM5pNdt+@c19\n\nPatch has feedback, needs interest from committer; I haven't looked into it in\ndetail but maybe it needs a bit more discussion?\n\nCompress KnownAssignedXids more frequently (2022-11-22)\n-------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3902/\n- https://www.postgresql.org/message-id/flat/CALdSSPgahNUD_=pB_j=1zSnDBaiOtqVfzo8Ejt5J_k7qZiU1Tw@mail.gmail.com\n\nActive, with feedback from committers.\n\n\nTransaction Management docs (2022-11-23)\n----------------------------------------\n\n- https://commitfest.postgresql.org/40/3899/\n- https://www.postgresql.org/message-id/flat/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com\n\nSeems committable.\n\n\nFix order of checking ICU options in initdb and create database (2022-11-24)\n----------------------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3976/\n- https://www.postgresql.org/message-id/flat/534fed4a262fee534662bd07a691c5ef@postgrespro.ru\n- Named commtter: Peter Eistentraut (petere)\n\nConcerns expressed by Peter, will change to WoA.\n\n\nUse fadvise in wal replay (2022-11-25)\n--------------------------------------\n\n- https://commitfest.postgresql.org/40/3707/\n- https://www.postgresql.org/message-id/flat/CADVKa1WsQMBYK_02_Ji=pbOFnms+CT7TV6qJxDdHsFCiC9V_hw@mail.gmail.com\n\nSeems to be consensus that this patch is small and will bring a small benefit\nwith minimal code. Needs interest from a committer.\n\n\nPGDOCS - Stats views and functions not in order? 
(2022-11-26)\n-------------------------------------------------------------\n\n- https://commitfest.postgresql.org/40/3904/\n- https://www.postgresql.org/message-id/flat/CAHut+Pv8Oa7v06hJb3+HzCtM2u-3oHWMdvXVHhvi7ofB83pNbg@mail.gmail.com\n- Named commtter: Peter Eistentraut (petere)\n\nPartially committed, still WIP.\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sun, 27 Nov 2022 13:29:18 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "CF 2022-11: entries \"Ready for Committer\" with recent activity"
},
{
"msg_contents": "On Sun, Nov 27, 2022 at 01:29:18PM +0900, Ian Lawrence Barwick wrote:\n> Transaction Management docs (2022-11-23)\n> ----------------------------------------\n> \n> - https://commitfest.postgresql.org/40/3899/\n> - https://www.postgresql.org/message-id/flat/CANbhV-E_iy9fmrErxrCh8TZTyenpfo72Hf_XD2HLDppva4dUNA@mail.gmail.com\n> \n> Seems committable.\n\nI plan to commit this in a few hours.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:44:33 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: CF 2022-11: entries \"Ready for Committer\" with recent activity"
}
] |
[
{
"msg_contents": "Hi,\n\nTab completion for ALTER EXTENSION ADD and DROP was missing, this\npatch adds the tab completion for the same.\n\nRegards,\nVignesh",
"msg_date": "Sun, 27 Nov 2022 18:54:27 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "------- Original Message -------\nOn Sunday, November 27th, 2022 at 10:24, vignesh C <vignesh21@gmail.com> wrote:\n\n\n> Hi,\n> \n> Tab completion for ALTER EXTENSION ADD and DROP was missing, this\n> patch adds the tab completion for the same.\n> \n> Regards,\n> Vignesh\n\nHi Vignesh\n\nI've tested your patched on current master and seems to be working properly.\n\n\nI'm starting reviewing some patches here, let's see what more experience hackers\nhas to say about this, but as far I can tell is that is working as expected.\n\n\n--\nMatheus Alcantara\n\n\n\n",
"msg_date": "Sat, 03 Dec 2022 17:34:57 +0000",
"msg_from": "Matheus Alcantara <mths.dev@pm.me>",
"msg_from_op": false,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Sat, Dec 03, 2022 at 05:34:57PM +0000, Matheus Alcantara wrote:\n> I've tested your patched on current master and seems to be working properly.\n> \n> I'm starting reviewing some patches here, let's see what more experience hackers\n> has to say about this, but as far I can tell is that is working as expected.\n\n+ /* ALTER EXTENSION <name> ADD|DROP */\n+ else if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"ADD|DROP\"))\n+ COMPLETE_WITH(\"ACCESS METHOD\", \"AGGREGATE\", \"CAST\", \"COLLATION\",\n+ \"CONVERSION\", \"DOMAIN\", \"EVENT TRIGGER\", \"FOREIGN\",\n+ \"FUNCTION\", \"MATERIALIZED VIEW\", \"OPERATOR\",\n+ \"PROCEDURAL LANGUAGE\", \"PROCEDURE\", \"LANGUAGE\",\n+ \"ROUTINE\", \"SCHEMA\", \"SEQUENCE\", \"SERVER\", \"TABLE\",\n+ \"TEXT SEARCH\", \"TRANSFORM FOR\", \"TYPE\", \"VIEW\");\n+\n+ /* ALTER EXTENSION <name> ADD|DROP FOREIGN*/\n+ else if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"ADD|DROP\", \"FOREIGN\"))\n+ COMPLETE_WITH(\"DATA WRAPPER\", \"TABLE\");\n\nThe DROP could be matched with the objects that are actually part of\nthe so-said extension?\n--\nMichael",
"msg_date": "Mon, 5 Dec 2022 10:23:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Mon, 5 Dec 2022 at 06:53, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Dec 03, 2022 at 05:34:57PM +0000, Matheus Alcantara wrote:\n> > I've tested your patched on current master and seems to be working properly.\n> >\n> > I'm starting reviewing some patches here, let's see what more experience hackers\n> > has to say about this, but as far I can tell is that is working as expected.\n>\n> + /* ALTER EXTENSION <name> ADD|DROP */\n> + else if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"ADD|DROP\"))\n> + COMPLETE_WITH(\"ACCESS METHOD\", \"AGGREGATE\", \"CAST\", \"COLLATION\",\n> + \"CONVERSION\", \"DOMAIN\", \"EVENT TRIGGER\", \"FOREIGN\",\n> + \"FUNCTION\", \"MATERIALIZED VIEW\", \"OPERATOR\",\n> + \"PROCEDURAL LANGUAGE\", \"PROCEDURE\", \"LANGUAGE\",\n> + \"ROUTINE\", \"SCHEMA\", \"SEQUENCE\", \"SERVER\", \"TABLE\",\n> + \"TEXT SEARCH\", \"TRANSFORM FOR\", \"TYPE\", \"VIEW\");\n> +\n> + /* ALTER EXTENSION <name> ADD|DROP FOREIGN*/\n> + else if (Matches(\"ALTER\", \"EXTENSION\", MatchAny, \"ADD|DROP\", \"FOREIGN\"))\n> + COMPLETE_WITH(\"DATA WRAPPER\", \"TABLE\");\n>\n> The DROP could be matched with the objects that are actually part of\n> the so-said extension?\n\nThe modified v2 version has the changes to handle the same. Sorry for\nthe delay as I was working on another project.\n\nRegards,\nVignesh",
"msg_date": "Mon, 2 Jan 2023 13:19:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "At Mon, 2 Jan 2023 13:19:50 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> On Mon, 5 Dec 2022 at 06:53, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > The DROP could be matched with the objects that are actually part of\n> > the so-said extension?\n> \n> The modified v2 version has the changes to handle the same. Sorry for\n> the delay as I was working on another project.\n\nIt suggests the *kinds* of objects that are part of the extension, but\nlists the objects of that kind regardless of dependency. I read\nMichael suggested (and I agree) to restrict the objects (not kinds) to\nactually be a part of the extension. (And not for object kinds.)\n\nHowever I'm not sure it is useful to restrict object kinds since the\noperator already knows what to drop, if you still want to do that, the\nuse of completion_dont_quote looks ugly since the function\n(requote_identifier) is assuming an identifier as input. I didn't\nlooked closer but maybe we need another way to do that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:10:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 12:10:33PM +0900, Kyotaro Horiguchi wrote:\n> It suggests the *kinds* of objects that are part of the extension, but\n> lists the objects of that kind regardless of dependency. I read\n> Michael suggested (and I agree) to restrict the objects (not kinds) to\n> actually be a part of the extension. (And not for object kinds.)\n\nYeah, that's what I meant. Now, if Vignesh does not want to extend\nthat, that's fine for me as well at the end on second thought, as this \ninvolves much more code for each DROP path depending on the object\ntype involved.\n\nAdding the object names after DROP/ADD is useful on its own, and we\nalready have some completion once the object type is specified, so\nsimpler is perhaps just better here.\n--\nMichael",
"msg_date": "Wed, 11 Jan 2023 15:49:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Wed, 11 Jan 2023 at 12:19, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 12:10:33PM +0900, Kyotaro Horiguchi wrote:\n> > It suggests the *kinds* of objects that are part of the extension, but\n> > lists the objects of that kind regardless of dependency. I read\n> > Michael suggested (and I agree) to restrict the objects (not kinds) to\n> > actually be a part of the extension. (And not for object kinds.)\n>\n> Yeah, that's what I meant. Now, if Vignesh does not want to extend\n> that, that's fine for me as well at the end on second thought, as this\n> involves much more code for each DROP path depending on the object\n> type involved.\n>\n> Adding the object names after DROP/ADD is useful on its own, and we\n> already have some completion once the object type is specified, so\n> simpler is perhaps just better here.\n\nI too felt keeping it simpler is better. How about using the simple\nfirst version of patch itself?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:29:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 10:29:25PM +0530, vignesh C wrote:\n> I too felt keeping it simpler is better. How about using the simple\n> first version of patch itself?\n\nOkay, I have just done that, then, after checking that all the object\ntypes were covered (28). Note that PROCEDURAL LANGUAGE has been\nremoved for simplicity.\n--\nMichael",
"msg_date": "Thu, 12 Jan 2023 08:52:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
},
{
"msg_contents": "On Thu, 12 Jan 2023 at 05:22, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 10:29:25PM +0530, vignesh C wrote:\n> > I too felt keeping it simpler is better. How about using the simple\n> > first version of patch itself?\n>\n> Okay, I have just done that, then, after checking that all the object\n> types were covered (28). Note that PROCEDURAL LANGUAGE has been\n> removed for simplicity.\n\nThanks for pushing this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 13 Jan 2023 06:17:59 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: mprove tab completion for ALTER EXTENSION ADD/DROP"
}
] |
[
{
"msg_contents": "Hi,\n\nWe have the following comment in SnapBuildFindSnapshot():\n\n * c) transition from FULL_SNAPSHOT to CONSISTENT.\n *\n * In FULL_SNAPSHOT state (see d) ), and this xl_running_xacts'\n\nIt mentions \"(state d) )\", which seems like a typo of \"(state d)\", but\nthere is no \"state d\" in the first place. Reading the discussion of\nthe commit 955a684e040 that introduced this comment, this was a\ncomment for an old version patch[1]. So I think we can remove this\npart.\n\nI've attached the patch.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20170505004237.edtahvrwb3uwd5rs%40alap3.anarazel.de\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 28 Nov 2022 11:13:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix comment in SnapBuildFindSnapshot"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 11:13:23AM +0900, Masahiko Sawada wrote:\n> We have the following comment in SnapBuildFindSnapshot():\n> \n> * c) transition from FULL_SNAPSHOT to CONSISTENT.\n> *\n> * In FULL_SNAPSHOT state (see d) ), and this xl_running_xacts'\n> \n> It mentions \"(state d) )\", which seems like a typo of \"(state d)\", but\n> there is no \"state d\" in the first place. Reading the discussion of\n> the commit 955a684e040 that introduced this comment, this was a\n> comment for an old version patch[1]. So I think we can remove this\n> part.\n\nHm, yes, that seems right. There are three \"c) states\" in these\nparagraphs, they are incremental steps. Will apply if there are no\nobjections.\n--\nMichael",
"msg_date": "Mon, 28 Nov 2022 16:46:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix comment in SnapBuildFindSnapshot"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 04:46:44PM +0900, Michael Paquier wrote:\n> Hm, yes, that seems right. There are three \"c) states\" in these\n> paragraphs, they are incremental steps. Will apply if there are no\n> objections.\n\nAnd done.\n--\nMichael",
"msg_date": "Tue, 29 Nov 2022 08:54:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix comment in SnapBuildFindSnapshot"
},
{
"msg_contents": "On Tue, Nov 29, 2022 at 8:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Nov 28, 2022 at 04:46:44PM +0900, Michael Paquier wrote:\n> > Hm, yes, that seems right. There are three \"c) states\" in these\n> > paragraphs, they are incremental steps. Will apply if there are no\n> > objections.\n>\n> And done.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 29 Nov 2022 14:41:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix comment in SnapBuildFindSnapshot"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing/testing one of the patches I found the following Assert:\n#0 0x000055c6312ba639 in pgstat_unlink_relation (rel=0x7fb73bcbac58)\nat pgstat_relation.c:161\n#1 0x000055c6312bbb5a in pgstat_relation_delete_pending_cb\n(entry_ref=0x55c6335563a8) at pgstat_relation.c:839\n#2 0x000055c6312b72bc in pgstat_delete_pending_entry\n(entry_ref=0x55c6335563a8) at pgstat.c:1124\n#3 0x000055c6312be3f1 in pgstat_release_entry_ref (key=...,\nentry_ref=0x55c6335563a8, discard_pending=true) at pgstat_shmem.c:523\n#4 0x000055c6312bee9a in pgstat_drop_entry\n(kind=PGSTAT_KIND_RELATION, dboid=5, objoid=40960) at\npgstat_shmem.c:867\n#5 0x000055c6312c034a in AtEOXact_PgStat_DroppedStats\n(xact_state=0x55c6334baac8, isCommit=false) at pgstat_xact.c:97\n#6 0x000055c6312c0240 in AtEOXact_PgStat (isCommit=false,\nparallel=false) at pgstat_xact.c:55\n#7 0x000055c630df8bee in AbortTransaction () at xact.c:2861\n#8 0x000055c630df94fd in AbortCurrentTransaction () at xact.c:3352\n\nI could reproduce this issue with the following steps:\ncreate table t1(c1 int);\nBEGIN;\nCREATE TABLE t (c1 int);\nCREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\nCREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 28 Nov 2022 14:01:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Failed Assert while pgstat_unlink_relation"
},
{
"msg_contents": "At Mon, 28 Nov 2022 14:01:30 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> Hi,\n> \n> While reviewing/testing one of the patches I found the following Assert:\n> #0 0x000055c6312ba639 in pgstat_unlink_relation (rel=0x7fb73bcbac58)\n> at pgstat_relation.c:161\n> #1 0x000055c6312bbb5a in pgstat_relation_delete_pending_cb\n> (entry_ref=0x55c6335563a8) at pgstat_relation.c:839\n> #2 0x000055c6312b72bc in pgstat_delete_pending_entry\n> (entry_ref=0x55c6335563a8) at pgstat.c:1124\n> #3 0x000055c6312be3f1 in pgstat_release_entry_ref (key=...,\n> entry_ref=0x55c6335563a8, discard_pending=true) at pgstat_shmem.c:523\n> #4 0x000055c6312bee9a in pgstat_drop_entry\n> (kind=PGSTAT_KIND_RELATION, dboid=5, objoid=40960) at\n> pgstat_shmem.c:867\n> #5 0x000055c6312c034a in AtEOXact_PgStat_DroppedStats\n> (xact_state=0x55c6334baac8, isCommit=false) at pgstat_xact.c:97\n> #6 0x000055c6312c0240 in AtEOXact_PgStat (isCommit=false,\n> parallel=false) at pgstat_xact.c:55\n> #7 0x000055c630df8bee in AbortTransaction () at xact.c:2861\n> #8 0x000055c630df94fd in AbortCurrentTransaction () at xact.c:3352\n> \n> I could reproduce this issue with the following steps:\n> create table t1(c1 int);\n> BEGIN;\n> CREATE TABLE t (c1 int);\n> CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n\nGood catch!\n\nAtEOXact_PgStat_DroppedStats() visits a relation that has been dropped\nthen wiped (when CLOBBER_FREED_MEMORY) by AtEOXact_RelationCache()\ncalled just before. Since the relcache content directly pointed from\nstats module is lost in this case, the stats side cannot defend itself\nfrom this.\n\nPerhaps RelationDestroyRelation() need to do pgstat_drop_entry() or\nAtEOXact_PgStat_DroppedStats() should be called before\nAtEOXact_RelationCache(). the latter of which is simpler. 
I think we\nneed to test this case, too.\n\ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 8086b857b9..789ff4cc6a 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -2838,6 +2838,7 @@ AbortTransaction(void)\n \t\t\t\t\t\t\t RESOURCE_RELEASE_BEFORE_LOCKS,\n \t\t\t\t\t\t\t false, true);\n \t\tAtEOXact_Buffers(false);\n+\t\tAtEOXact_PgStat(false, is_parallel_worker); /* reads relcache */\n \t\tAtEOXact_RelationCache(false);\n \t\tAtEOXact_Inval(false);\n \t\tAtEOXact_MultiXact();\n@@ -2858,7 +2859,6 @@ AbortTransaction(void)\n \t\tAtEOXact_Files(false);\n \t\tAtEOXact_ComboCid();\n \t\tAtEOXact_HashTables(false);\n-\t\tAtEOXact_PgStat(false, is_parallel_worker);\n \t\tAtEOXact_ApplyLauncher(false);\n \t\tpgstat_report_xact_timestamp(0);\n \t}\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 29 Nov 2022 15:42:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert while pgstat_unlink_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-29 15:42:45 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 28 Nov 2022 14:01:30 +0530, vignesh C <vignesh21@gmail.com> wrote in \n> > Hi,\n> > \n> > While reviewing/testing one of the patches I found the following Assert:\n> > #0 0x000055c6312ba639 in pgstat_unlink_relation (rel=0x7fb73bcbac58)\n> > at pgstat_relation.c:161\n> > #1 0x000055c6312bbb5a in pgstat_relation_delete_pending_cb\n> > (entry_ref=0x55c6335563a8) at pgstat_relation.c:839\n> > #2 0x000055c6312b72bc in pgstat_delete_pending_entry\n> > (entry_ref=0x55c6335563a8) at pgstat.c:1124\n> > #3 0x000055c6312be3f1 in pgstat_release_entry_ref (key=...,\n> > entry_ref=0x55c6335563a8, discard_pending=true) at pgstat_shmem.c:523\n> > #4 0x000055c6312bee9a in pgstat_drop_entry\n> > (kind=PGSTAT_KIND_RELATION, dboid=5, objoid=40960) at\n> > pgstat_shmem.c:867\n> > #5 0x000055c6312c034a in AtEOXact_PgStat_DroppedStats\n> > (xact_state=0x55c6334baac8, isCommit=false) at pgstat_xact.c:97\n> > #6 0x000055c6312c0240 in AtEOXact_PgStat (isCommit=false,\n> > parallel=false) at pgstat_xact.c:55\n> > #7 0x000055c630df8bee in AbortTransaction () at xact.c:2861\n> > #8 0x000055c630df94fd in AbortCurrentTransaction () at xact.c:3352\n> > \n> > I could reproduce this issue with the following steps:\n> > create table t1(c1 int);\n> > BEGIN;\n> > CREATE TABLE t (c1 int);\n> > CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> > CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> \n> Good catch!\n> \n> AtEOXact_PgStat_DroppedStats() visits a relation that has been dropped\n> then wiped (when CLOBBER_FREED_MEMORY) by AtEOXact_RelationCache()\n> called just before. 
Since the relcache content directly pointed from\n> stats module is lost in this case, the stats side cannot defend itself\n> from this.\n> \n> Perhaps RelationDestroyRelation() need to do pgstat_drop_entry() or\n> AtEOXact_PgStat_DroppedStats() should be called before\n> AtEOXact_RelationCache(). the latter of which is simpler. I think we\n> need to test this case, too.\n\nThis doesn't strike me as the right fix. What do you think about my patch at\nhttps://postgr.es/m/20221128210908.hyffmljjylj447nu%40awork3.anarazel.de ,\nleaving the quibbles around error handling aside?\n\nAfaict it fixes the issue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 19:23:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert while pgstat_unlink_relation"
},
{
"msg_contents": "At Thu, 1 Dec 2022 19:23:28 -0800, Andres Freund <andres@anarazel.de> wrote in \n> > AtEOXact_PgStat_DroppedStats() visits a relation that has been dropped\n> > then wiped (when CLOBBER_FREED_MEMORY) by AtEOXact_RelationCache()\n> > called just before. Since the relcache content directly pointed from\n> > stats module is lost in this case, the stats side cannot defend itself\n> > from this.\n> > \n> > Perhaps RelationDestroyRelation() need to do pgstat_drop_entry() or\n> > AtEOXact_PgStat_DroppedStats() should be called before\n> > AtEOXact_RelationCache(). the latter of which is simpler. I think we\n> > need to test this case, too.\n> \n> This doesn't strike me as the right fix. What do you think about my patch at\n> https://postgr.es/m/20221128210908.hyffmljjylj447nu%40awork3.anarazel.de ,\n> leaving the quibbles around error handling aside?\n\nYeah, I didn't like what my patch does...\n\n> Afaict it fixes the issue.\n\nHmm. I see it works for this specific case, but I don't understand why\nit is generally safe.\n\nThe in-xact created relation t1 happened to be scanned during the\nCREATE RULE and a stats entry is attached. So the stats entry loses t1\nat roll-back, then crashes. Thus, if I understand it correctly, it\nseems to me that just unlinking the stats from t1 (when relkind is\nchanged) works.\n\nBut the fix doesn't change the behavior in relkind-not-changing\ncases. If an in-xact-created table gets a stats entry then the\nrelcache entry for t1 is refreshed to a table relation again then the\ntransaction rolls back, crash will happen for the same reason. I'm not\nsure if there is such a case actually.\n\nWhen I tried to check that behavior further, I found that that\nCREATE ROLE is no longer allowed..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 05 Dec 2022 15:20:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert while pgstat_unlink_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-05 15:20:55 +0900, Kyotaro Horiguchi wrote:\n> The in-xact created relation t1 happened to be scanned during the\n> CREATE RULE and a stats entry is attached. So the stats entry loses t1\n> at roll-back, then crashes. Thus, if I understand it correctly, it\n> seems to me that just unlinking the stats from t1 (when relkind is\n> changed) works.\n> \n> But the fix doesn't change the behavior in relkind-not-changing\n> cases. If an in-xact-created table gets a stats entry then the\n> relcache entry for t1 is refreshed to a table relation again then the\n> transaction rolls back, crash will happen for the same reason. I'm not\n> sure if there is such a case actually.\n\nWe unlink the stats in that case already. see RelationDestroyRelation().\n\n\n> When I tried to check that behavior further, I found that that\n> CREATE ROLE is no longer allowed..\n\nI assume you mean RULE, not ROLE? It should still work in 15.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:41:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert while pgstat_unlink_relation"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile reviewing/testing one of the patches I found the following Assert:\n#0 __pthread_kill_implementation (no_tid=0, signo=6,\nthreadid=139624429171648) at ./nptl/pthread_kill.c:44\n#1 __pthread_kill_internal (signo=6, threadid=139624429171648) at\n./nptl/pthread_kill.c:78\n#2 __GI___pthread_kill (threadid=139624429171648,\nsigno=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3 0x00007efcda6e3476 in __GI_raise (sig=sig@entry=6) at\n../sysdeps/posix/raise.c:26\n#4 0x00007efcda6c97f3 in __GI_abort () at ./stdlib/abort.c:79\n#5 0x00005590bf283139 in ExceptionalCondition\n(conditionName=0x5590bf468170 \"rel->pgstat_info->relation == NULL\",\nfileName=0x5590bf46812b \"pgstat_relation.c\", lineNumber=143) at\nassert.c:66\n#6 0x00005590bf0ce5f8 in pgstat_assoc_relation (rel=0x7efcce996a48)\nat pgstat_relation.c:143\n#7 0x00005590beb83046 in initscan (scan=0x5590bfbf4af8, key=0x0,\nkeep_startblock=false) at heapam.c:343\n#8 0x00005590beb8466f in heap_beginscan (relation=0x7efcce996a48,\nsnapshot=0x5590bfb5a520, nkeys=0, key=0x0, parallel_scan=0x0,\nflags=449) at heapam.c:1223\n#9 0x00005590bf02af39 in table_beginscan (rel=0x7efcce996a48,\nsnapshot=0x5590bfb5a520, nkeys=0, key=0x0) at\n../../../src/include/access/tableam.h:891\n#10 0x00005590bf02bf8a in DefineQueryRewrite (rulename=0x5590bfb281d0\n\"_RETURN\", event_relid=16387, event_qual=0x0, event_type=CMD_SELECT,\nis_instead=true, replace=false, action=0x5590bfbf4aa8)\n at rewriteDefine.c:447\n#11 0x00005590bf02b5ab in DefineRule (stmt=0x5590bfb285c0,\nqueryString=0x5590bfb277a8 \"CREATE RULE \\\"_RETURN\\\" AS ON SELECT TO t\nDO INSTEAD SELECT * FROM t1;\") at rewriteDefine.c:213\n\nI could reproduce this issue with the following steps:\ncreate table t1(c int);\nBEGIN;\nCREATE TABLE t (c int);\nSAVEPOINT q;\nCREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\nselect * from t;\nROLLBACK TO q;\nCREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM 
t1;\nROLLBACK;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 28 Nov 2022 14:39:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 8:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi,\n>\n> While reviewing/testing one of the patches I found the following Assert:\n> #0 __pthread_kill_implementation (no_tid=0, signo=6,\n> threadid=139624429171648) at ./nptl/pthread_kill.c:44\n> #1 __pthread_kill_internal (signo=6, threadid=139624429171648) at\n> ./nptl/pthread_kill.c:78\n> #2 __GI___pthread_kill (threadid=139624429171648,\n> signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n> #3 0x00007efcda6e3476 in __GI_raise (sig=sig@entry=6) at\n> ../sysdeps/posix/raise.c:26\n> #4 0x00007efcda6c97f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x00005590bf283139 in ExceptionalCondition\n> (conditionName=0x5590bf468170 \"rel->pgstat_info->relation == NULL\",\n> fileName=0x5590bf46812b \"pgstat_relation.c\", lineNumber=143) at\n> assert.c:66\n> #6 0x00005590bf0ce5f8 in pgstat_assoc_relation (rel=0x7efcce996a48)\n> at pgstat_relation.c:143\n> #7 0x00005590beb83046 in initscan (scan=0x5590bfbf4af8, key=0x0,\n> keep_startblock=false) at heapam.c:343\n> #8 0x00005590beb8466f in heap_beginscan (relation=0x7efcce996a48,\n> snapshot=0x5590bfb5a520, nkeys=0, key=0x0, parallel_scan=0x0,\n> flags=449) at heapam.c:1223\n> #9 0x00005590bf02af39 in table_beginscan (rel=0x7efcce996a48,\n> snapshot=0x5590bfb5a520, nkeys=0, key=0x0) at\n> ../../../src/include/access/tableam.h:891\n> #10 0x00005590bf02bf8a in DefineQueryRewrite (rulename=0x5590bfb281d0\n> \"_RETURN\", event_relid=16387, event_qual=0x0, event_type=CMD_SELECT,\n> is_instead=true, replace=false, action=0x5590bfbf4aa8)\n> at rewriteDefine.c:447\n> #11 0x00005590bf02b5ab in DefineRule (stmt=0x5590bfb285c0,\n> queryString=0x5590bfb277a8 \"CREATE RULE \\\"_RETURN\\\" AS ON SELECT TO t\n> DO INSTEAD SELECT * FROM t1;\") at rewriteDefine.c:213\n>\n> I could reproduce this issue with the following steps:\n> create table t1(c int);\n> BEGIN;\n> CREATE TABLE t (c int);\n> SAVEPOINT q;\n> CREATE RULE \"_RETURN\" AS ON SELECT TO 
t DO INSTEAD SELECT * FROM t1;\n> select * from t;\n> ROLLBACK TO q;\n> CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> ROLLBACK;\n>\n> Regards,\n> Vignesh\n\n\nI think what is happening here is that the previous relation is not\nunlinked when pgstat_init_relation() is called\nbecause the relation is now a view and for relations without storage\nthe relation is not unlinked in pgstat_init_relation()\n\nvoid\npgstat_init_relation(Relation rel)\n{\n char relkind = rel->rd_rel->relkind;\n\n /*\n * We only count stats for relations with storage and partitioned tables\n */\n if (!RELKIND_HAS_STORAGE(relkind) && relkind != RELKIND_PARTITIONED_TABLE)\n {\n rel->pgstat_enabled = false;\n rel->pgstat_info = NULL;\n return;\n }\n\nThere is a logic in DefineQueryRewrite() which converts a relation to\na view when you create such a rule like the test case does.\nSo initially the relation had storage, the pgstat_info is linked,\nthen table is converted to a view, but in init, the previous\nrelation is not unlinked but when it tries to link a new relation, the\nassert fails saying a previous relation is already linked to\npgstat_info\n\nI have made a small patch with a fix, but I am not sure if this is the\nright way to fix this.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Mon, 28 Nov 2022 21:28:11 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> I could reproduce this issue with the following steps:\n> create table t1(c int);\n> BEGIN;\n> CREATE TABLE t (c int);\n> SAVEPOINT q;\n> CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> select * from t;\n> ROLLBACK TO q;\n> CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> ROLLBACK;\n\nUh-huh. I've not bothered to trace this in detail, but presumably\nwhat is happening is that the first CREATE RULE converts the table\nto a view, and then the ROLLBACK undoes that so far as the catalogs\nare concerned, but probably doesn't undo related pg_stats state\nchanges fully. Then we're in a bad state that will cause problems.\n(It still crashes if you replace the second CREATE RULE with\n\"select * from t\".)\n\nAs far as HEAD is concerned, maybe it's time to nuke the whole\nconvert-table-to-view kluge entirely? Only pg_dump older than\n9.4 will emit such code, so we're really about out of reasons\nto keep on maintaining it.\n\nHowever, I'm not sure that removing that code in v15 will fly,\nso maybe we need to make the new pg_stats code a little more\nrobust against the possibility of a relkind change.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:37:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 13:37:16 -0500, Tom Lane wrote:\n> vignesh C <vignesh21@gmail.com> writes:\n> > I could reproduce this issue with the following steps:\n> > create table t1(c int);\n> > BEGIN;\n> > CREATE TABLE t (c int);\n> > SAVEPOINT q;\n> > CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> > select * from t;\n> > ROLLBACK TO q;\n> > CREATE RULE \"_RETURN\" AS ON SELECT TO t DO INSTEAD SELECT * FROM t1;\n> > ROLLBACK;\n> \n> Uh-huh. I've not bothered to trace this in detail, but presumably\n> what is happening is that the first CREATE RULE converts the table\n> to a view, and then the ROLLBACK undoes that so far as the catalogs\n> are concerned, but probably doesn't undo related pg_stats state\n> changes fully. Then we're in a bad state that will cause problems.\n> (It still crashes if you replace the second CREATE RULE with\n> \"select * from t\".)\n\nYea. I haven't yet fully traced through this, but presumably relcache inval\ndoesn't fix this because we don't want to loose pending stats after DDL.\n\nPerhaps we need to add a rule about not swapping pgstat* in\nRelationClearRelation() when relkind changes?\n\n\n> As far as HEAD is concerned, maybe it's time to nuke the whole\n> convert-table-to-view kluge entirely? Only pg_dump older than\n> 9.4 will emit such code, so we're really about out of reasons\n> to keep on maintaining it.\n\nSounds good to me.\n\n\n> However, I'm not sure that removing that code in v15 will fly,\n\nAgreed, at the very least that'd increase memory usage.\n\n\n> so maybe we need to make the new pg_stats code a little more\n> robust against the possibility of a relkind change.\n\nPossibly via the relcache code.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 10:50:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-28 13:37:16 -0500, Tom Lane wrote:\n>> As far as HEAD is concerned, maybe it's time to nuke the whole\n>> convert-table-to-view kluge entirely? Only pg_dump older than\n>> 9.4 will emit such code, so we're really about out of reasons\n>> to keep on maintaining it.\n\n> Sounds good to me.\n\nHere's a draft patch for that. If we apply this to HEAD then\nwe only need that klugery in relcache for v15.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 28 Nov 2022 14:54:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 10:50:13 -0800, Andres Freund wrote:\n> On 2022-11-28 13:37:16 -0500, Tom Lane wrote:\n> > Uh-huh. I've not bothered to trace this in detail, but presumably\n> > what is happening is that the first CREATE RULE converts the table\n> > to a view, and then the ROLLBACK undoes that so far as the catalogs\n> > are concerned, but probably doesn't undo related pg_stats state\n> > changes fully. Then we're in a bad state that will cause problems.\n> > (It still crashes if you replace the second CREATE RULE with\n> > \"select * from t\".)\n> \n> Yea. I haven't yet fully traced through this, but presumably relcache inval\n> doesn't fix this because we don't want to loose pending stats after DDL.\n> \n> Perhaps we need to add a rule about not swapping pgstat* in\n> RelationClearRelation() when relkind changes?\n\nSomething like the attached. Still needs a bit of polish, e.g. adding the test\ncase from above.\n\nI'm a bit uncomfortable adding a function call below\n\t\t * Perform swapping of the relcache entry contents. Within this\n\t\t * process the old entry is momentarily invalid, so there *must* be no\n\t\t * possibility of CHECK_FOR_INTERRUPTS within this sequence. Do it in\n\t\t * all-in-line code for safety.\nbut it's not the first, see MemoryContextSetParent().\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 28 Nov 2022 13:09:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Something like the attached. Still needs a bit of polish, e.g. adding the test\n> case from above.\n\n> I'm a bit uncomfortable adding a function call below\n> \t\t * Perform swapping of the relcache entry contents. Within this\n> \t\t * process the old entry is momentarily invalid, so there *must* be no\n> \t\t * possibility of CHECK_FOR_INTERRUPTS within this sequence. Do it in\n> \t\t * all-in-line code for safety.\n\nUgh. I don't know what pgstat_unlink_relation does, but assuming\nthat it can never throw an error seems like a pretty bad idea,\nespecially when you aren't adding that to its API spec (contrast\nthe comments for MemoryContextSetParent).\n\nCan't that part be done outside the critical section?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:33:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 16:33:20 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Something like the attached. Still needs a bit of polish, e.g. adding the test\n> > case from above.\n>\n> > I'm a bit uncomfortable adding a function call below\n> > \t\t * Perform swapping of the relcache entry contents. Within this\n> > \t\t * process the old entry is momentarily invalid, so there *must* be no\n> > \t\t * possibility of CHECK_FOR_INTERRUPTS within this sequence. Do it in\n> > \t\t * all-in-line code for safety.\n>\n> Ugh. I don't know what pgstat_unlink_relation does, but assuming\n> that it can never throw an error seems like a pretty bad idea,\n\nI don't think it'd be an issue - it just resets the pointer from a pgstat\nentry to the relcache entry.\n\nBut you're right:\n\n> Can't that part be done outside the critical section?\n\nwe can do that. See the attached.\n\n\nDo we have any cases of relcache entries changing their relkind?\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 1 Dec 2022 20:46:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Do we have any cases of relcache entries changing their relkind?\n\nJust the table-to-view hack. I'm not aware that there are any other\ncases, and it seems hard to credit that there ever will be any.\nI think we could get rid of table-to-view in HEAD, and use your patch\nonly in v15.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 00:08:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 00:08:20 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Do we have any cases of relcache entries changing their relkind?\n>\n> Just the table-to-view hack. I'm not aware that there are any other\n> cases, and it seems hard to credit that there ever will be any.\n\nI can see some halfway credible scenarios. E.g. converting a view to a\nmatview, or a table into a partition. I kind of wonder if it's worth keeping\nthe change, just in case we do - it's not that easy to hit...\n\n\n> I think we could get rid of table-to-view in HEAD, and use your patch\n> only in v15.\n\nWFM. I'll push it to 15 tomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 21:39:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-02 00:08:20 -0500, Tom Lane wrote:\n>> Just the table-to-view hack. I'm not aware that there are any other\n>> cases, and it seems hard to credit that there ever will be any.\n\n> I can see some halfway credible scenarios. E.g. converting a view to a\n> matview, or a table into a partition. I kind of wonder if it's worth keeping\n> the change, just in case we do - it's not that easy to hit...\n\nI'd suggest putting in an assertion that the relkind isn't changing,\ninstead. When and if somebody makes a credible feature patch that'd\nrequire relaxing that, we can see what to do.\n\n(There's a couple of places in rewriteHandler.c that could\nperhaps be simplified, too.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 00:48:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi, \n\nOn December 1, 2022 9:48:48 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Andres Freund <andres@anarazel.de> writes:\n>> On 2022-12-02 00:08:20 -0500, Tom Lane wrote:\n>>> Just the table-to-view hack. I'm not aware that there are any other\n>>> cases, and it seems hard to credit that there ever will be any.\n>\n>> I can see some halfway credible scenarios. E.g. converting a view to a\n>> matview, or a table into a partition. I kind of wonder if it's worth keeping\n>> the change, just in case we do - it's not that easy to hit...\n>\n>I'd suggest putting in an assertion that the relkind isn't changing,\n>instead.\n\nSounds like a plan. Will you do that when you remove the table-to-view hack? \n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 01 Dec 2022 21:57:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On December 1, 2022 9:48:48 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd suggest putting in an assertion that the relkind isn't changing,\n>> instead.\n\n> Sounds like a plan. Will you do that when you remove the table-to-view hack? \n\nI'd suggest committing it concurrently with the v15 fix, instead,\nso that there's a cross-reference to what some future hacker might\nneed to install if they remove the assertion.\n\nI guess that means that the table-to-view removal has to go in\nfirst. I should be able to take care of that tomorrow, or if\nyou're in a hurry I don't mind if you commit it for me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 01:03:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 01:03:35 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On December 1, 2022 9:48:48 PM PST, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'd suggest putting in an assertion that the relkind isn't changing,\n> >> instead.\n> \n> > Sounds like a plan. Will you do that when you remove the table-to-view hack? \n> \n> I'd suggest committing it concurrently with the v15 fix, instead,\n> so that there's a cross-reference to what some future hacker might\n> need to install if they remove the assertion.\n\nGood idea.\n\n\n> I guess that means that the table-to-view removal has to go in\n> first. I should be able to take care of that tomorrow, or if\n> you're in a hurry I don't mind if you commit it for me.\n\nNo particular hurry from my end.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:42:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "I wrote:\n> I guess that means that the table-to-view removal has to go in\n> first. I should be able to take care of that tomorrow, or if\n> you're in a hurry I don't mind if you commit it for me.\n\nDone now, feel free to deal with the pgstat problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 12:15:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 12:15:37 -0500, Tom Lane wrote:\n> I wrote:\n> > I guess that means that the table-to-view removal has to go in\n> > first. I should be able to take care of that tomorrow, or if\n> > you're in a hurry I don't mind if you commit it for me.\n> \n> Done now, feel free to deal with the pgstat problem.\n\nThanks. I'm out for a few hours without proper computer access, couldn't\nquite get it finished inbetween your push and now. Will deal with it once I\nget back.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:51:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 09:51:39 -0800, Andres Freund wrote:\n> On 2022-12-02 12:15:37 -0500, Tom Lane wrote:\n> > I wrote:\n> > > I guess that means that the table-to-view removal has to go in\n> > > first. I should be able to take care of that tomorrow, or if\n> > > you're in a hurry I don't mind if you commit it for me.\n> > \n> > Done now, feel free to deal with the pgstat problem.\n> \n> Thanks. I'm out for a few hours without proper computer access, couldn't\n> quite get it finished inbetween your push and now. Will deal with it once I\n> get back.\n\nPushed that now. I debated for a bit whether to backpatch the test all the\nway, but after it took me a while to convince myself that there's no active\nproblem in the older branches, I decided it's a good idea.\n\nThanks Vignesh for the bugreports!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 19:06:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Failed Assert in pgstat_assoc_relation"
}
]
[
{
"msg_contents": "Hi hackers,\n\nA colleague of mine (cc'ed) reported that he was able to pass a NULL\nsnapshot to index_beginscan() and it even worked to a certain degree.\n\nI took my toy extension [1] and replaced the argument with NULL as an\nexperiment:\n\n```\neax=# CREATE EXTENSION experiment;\nCREATE EXTENSION\neax=# SELECT phonebook_lookup_index('Alice');\n phonebook_lookup_index\n------------------------\n -1\n(1 row)\n\neax=# SELECT phonebook_insert('Bob', 456);\n phonebook_insert\n------------------\n 1\n(1 row)\n\neax=# SELECT phonebook_lookup_index('Alice');\n phonebook_lookup_index\n------------------------\n -1\n(1 row)\n\neax=# SELECT phonebook_insert('Alice', 123);\n phonebook_insert\n------------------\n 2\n(1 row)\n\neax=# SELECT phonebook_lookup_index('Alice');\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\n```\n\nSo evidently it really works as long as the index doesn't find any\nmatching rows.\n\nThis could be really confusing for the extension authors so here is a\npatch that adds corresponding Asserts().\n\n[1]: https://github.com/afiskon/postgresql-extensions/tree/main/005-table-access\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 28 Nov 2022 13:07:42 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Check snapshot argument of index_beginscan and family"
},
{
"msg_contents": "Hi, Alexander!\n> A colleague of mine (cc'ed) reported that he was able to pass a NULL\n> snapshot to index_beginscan() and it even worked to a certain degree.\n>\n> I took my toy extension [1] and replaced the argument with NULL as an\n> experiment:\n>\n> ```\n> eax=# CREATE EXTENSION experiment;\n> CREATE EXTENSION\n> eax=# SELECT phonebook_lookup_index('Alice');\n> phonebook_lookup_index\n> ------------------------\n> -1\n> (1 row)\n>\n> eax=# SELECT phonebook_insert('Bob', 456);\n> phonebook_insert\n> ------------------\n> 1\n> (1 row)\n>\n> eax=# SELECT phonebook_lookup_index('Alice');\n> phonebook_lookup_index\n> ------------------------\n> -1\n> (1 row)\n>\n> eax=# SELECT phonebook_insert('Alice', 123);\n> phonebook_insert\n> ------------------\n> 2\n> (1 row)\n>\n> eax=# SELECT phonebook_lookup_index('Alice');\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> ```\n>\n> So evidently it really works as long as the index doesn't find any\n> matching rows.\n>\n> This could be really confusing for the extension authors so here is a\n> patch that adds corresponding Asserts().\n>\n> [1]: https://github.com/afiskon/postgresql-extensions/tree/main/005-table-access\nI think it's a nice catch and worth fixing. The one thing I don't\nagree with is using asserts for handling the error that can appear\nbecause most probably the server is built with assertions off and in\nthis case, there still will be a crash in this case. I'd do this with\nreport ERROR. Otherwise, the patch looks right and worth committing.\n\nKind regards,\nPavel Borisov.\n\n\n",
"msg_date": "Mon, 28 Nov 2022 14:23:29 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Check snapshot argument of index_beginscan and family"
},
{
"msg_contents": "Hi Pavel,\n\nThanks for the feedback!\n\n> I think it's a nice catch and worth fixing. The one thing I don't\n> agree with is using asserts for handling the error that can appear\n> because most probably the server is built with assertions off and in\n> this case, there still will be a crash in this case. I'd do this with\n> report ERROR. Otherwise, the patch looks right and worth committing.\n\nNormally a developer is not supposed to pass NULLs there so I figured\nhaving extra if's in the release builds doesn't worth it. Personally I\nwouldn't mind using ereport() but my intuition tells me that the\ncommitters are not going to approve this :)\n\nLet's see what the rest of the community thinks.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:29:55 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Check snapshot argument of index_beginscan and family"
},
{
"msg_contents": "Hi!\n\nOn Mon, Nov 28, 2022 at 1:30 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Thanks for the feedback!\n>\n> > I think it's a nice catch and worth fixing. The one thing I don't\n> > agree with is using asserts for handling the error that can appear\n> > because most probably the server is built with assertions off and in\n> > this case, there still will be a crash in this case. I'd do this with\n> > report ERROR. Otherwise, the patch looks right and worth committing.\n>\n> Normally a developer is not supposed to pass NULLs there so I figured\n> having extra if's in the release builds doesn't worth it. Personally I\n> wouldn't mind using ereport() but my intuition tells me that the\n> committers are not going to approve this :)\n>\n> Let's see what the rest of the community thinks.\n\nI think this is harmless assertion patch. I'm going to push this if\nno objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 2 Dec 2022 18:18:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Check snapshot argument of index_beginscan and family"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 6:18 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Nov 28, 2022 at 1:30 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > Thanks for the feedback!\n> >\n> > > I think it's a nice catch and worth fixing. The one thing I don't\n> > > agree with is using asserts for handling the error that can appear\n> > > because most probably the server is built with assertions off and in\n> > > this case, there still will be a crash in this case. I'd do this with\n> > > report ERROR. Otherwise, the patch looks right and worth committing.\n> >\n> > Normally a developer is not supposed to pass NULLs there so I figured\n> > having extra if's in the release builds doesn't worth it. Personally I\n> > wouldn't mind using ereport() but my intuition tells me that the\n> > committers are not going to approve this :)\n> >\n> > Let's see what the rest of the community thinks.\n>\n> I think this is harmless assertion patch. I'm going to push this if\n> no objections.\n\nPushed!\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 6 Dec 2022 03:31:04 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Check snapshot argument of index_beginscan and family"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 04:31, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Fri, Dec 2, 2022 at 6:18 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Mon, Nov 28, 2022 at 1:30 PM Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> > > Thanks for the feedback!\n> > >\n> > > > I think it's a nice catch and worth fixing. The one thing I don't\n> > > > agree with is using asserts for handling the error that can appear\n> > > > because most probably the server is built with assertions off and in\n> > > > this case, there still will be a crash in this case. I'd do this with\n> > > > report ERROR. Otherwise, the patch looks right and worth committing.\n> > >\n> > > Normally a developer is not supposed to pass NULLs there so I figured\n> > > having extra if's in the release builds doesn't worth it. Personally I\n> > > wouldn't mind using ereport() but my intuition tells me that the\n> > > committers are not going to approve this :)\n> > >\n> > > Let's see what the rest of the community thinks.\n> >\n> > I think this is harmless assertion patch. I'm going to push this if\n> > no objections.\n>\n> Pushed!\n\nGreat, thanks!\nPavel Borisov.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 17:44:50 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Check snapshot argument of index_beginscan and family"
}
]
[
{
"msg_contents": "Issue in current XactLockTableWait, starting with 3c27944fb2141,\ndiscovered while reviewing https://commitfest.postgresql.org/40/3806/\n\nTest demonstrating the problem is 001-isotest-tuplelock-subxact.v1.patch\n\nA narrative description of the issue follows:\nsession1 - requests multiple nested subtransactions like this:\nBEGIN; ...\nSAVEPOINT subxid1; ...\nSAVEPOINT subxid2; ...\n\nIf another session2 sees an xid from subxid2, it waits. If subxid2\naborts, then session2 sees the abort and can continue processing\nnormally.\nHowever, if subxid2 subcommits, then the lock wait moves from subxid2\nto the topxid. If subxid1 subsequently aborts, it will also abort\nsubxid2, but session2 waiting for subxid2 to complete doesn't see this\nand waits for topxid instead. Which means that it waits longer than it\nshould, and later arriving lock waiters may actually get served first.\n\nSo it's a bug, but not too awful, since in many cases people don't use\nnested subtransactions, and if they do use SAVEPOINTs, they don't\noften close them using RELEASE. And in most cases the extra wait time\nis not very long, hence why nobody ever reported this issue.\n\nThanks to Robert Haas and Julien Tachoires for asking the question\n\"are you sure the existing coding is correct?\". You were both right;\nit is not.\n\nHow to fix? Correct lock wait can be achieved while a subxid is\nrunning if we do either\n* a lock table entry for the subxid OR\n* a subtrans entry that points to its immediate parent\n\nSo we have these options\n\n1. Removing the XactLockTableDelete() call in CommitSubTransaction().\nThat releases lock waiters earlier than expected, which requires\npushups in XactLockTableWait() to cope with that (which are currently\nbroken). Why do we do it? To save shmem space in the lock table should\nanyone want to run a transaction that contains thousands of\nsubtransactions, or more. 
So option (1) alone would eventually cause\nus to run out of space in the lock table and a transaction would\nreceive ERRORs rather than be allowed to run for a long time.\n\n2. In XactLockTableWait(), replace the call to SubTransGetParent(), so\nwe go up the levels one by one as we did before. However, (2) causes\nhuge subtrans contention and if we implemented that and backpatched it\nthe performance issues could be significant. So my feeling is that if\nwe do (2) then we should not backpatch it.\n\nSo both (1) and (2) have issues.\n\nThe main result from patch https://commitfest.postgresql.org/40/3806/\nis that having subtrans point direct to topxid is very good for\nperformance in XidIsInMVCCSnapshot(), and presumably other places\nalso. This bug prevents the simple application of a patch to improve\nperformance. So now we need a stronger mix of ideas to both resolve\nthe bug and fix the subtrans contention issue in HEAD.\n\nMy preferred solution would be a mix of the above, call it option (3)\n\n3.\na) Include the lock table entry for the first 64 subtransactions only,\nso that we limit shmem. For those first 64 entries, have the subtrans\npoint direct to top, since this makes a subtrans lookup into an O(1)\noperation, which is important for performance of later actions.\n\nb) For any subtransactions after first 64, delete the subxid lock on\nsubcommit, to save shmem, but make subtrans point to the immediate\nparent (only), not the topxid. That fixes the bug, but causes\nperformance problems in XidInMVCCSnapshot() and others, so we also do\nc) and d)\n\nc) At top level commit, go back and alter subtrans again for subxids\nso now it points to the topxid, so that we avoid O(N) behavior in\nXidInMVCCSnapshot() and other callers. Additional cost for longer\ntransactions, but it saves considerable cost in later callers that\nneed to call GetTopmostTransaction.\n\nd) Optimize SubTransGetTopmostTransaction() so it retrieves entries\npage-at-a-time. 
This will reduce the contention of repeatedly\nre-visiting the same page(s) and ensure that a page is less often\npaged out when we are still using it.\n\nThoughts?\n\n--\nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Mon, 28 Nov 2022 15:27:56 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 10:28 AM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> So we have these options\n>\n> 1. Removing the XactLockTableDelete() call in CommitSubTransaction().\n> That releases lock waiters earlier than expected, which requires\n> pushups in XactLockTableWait() to cope with that (which are currently\n> broken). Why do we do it? To save shmem space in the lock table should\n> anyone want to run a transaction that contains thousands of\n> subtransactions, or more. So option (1) alone would eventually cause\n> us to run out of space in the lock table and a transaction would\n> receive ERRORs rather than be allowed to run for a long time.\n\nThis seems unprincipled to me. The point of having a lock on the\nsubtransaction in the lock table is so that people can wait for the\nsubtransaction to end. If we don't remove the lock table entry at\nsubtransaction end, then that lock table entry doesn't serve that\npurpose any more.\n\n> 2. In XactLockTableWait(), replace the call to SubTransGetParent(), so\n> we go up the levels one by one as we did before. However, (2) causes\n> huge subtrans contention and if we implemented that and backpatched it\n> the performance issues could be significant. So my feeling is that if\n> we do (2) then we should not backpatch it.\n\nWhat I find suspicious about the coding of this function is that it\ndoesn't distinguish between commits and aborts at all. Like, if a\nsubtransaction commits, the changes don't become globally visible\nuntil the parent transaction also commits. If a subtransaction aborts,\nthough, what happens to the top-level XID doesn't seem to matter at\nall. 
The comment seems to agree:\n\n * Note that this does the right thing for subtransactions: if we wait on a\n * subtransaction, we will exit as soon as it aborts or its top parent commits.\n\nI feel like what I'd expect to see given this comment is code which\n(1) waits until the supplied XID is no longer running, (2) checks\nwhether the XID aborted, and if so return at once, and (3) otherwise\nrecurse to the parent XID. But the code doesn't do that. Perhaps\nthat's not actually the right thing to do, since it seems like a big\nbehavior change, but then I don't understand the comment.\n\nIncidentally, one possible optimization here to try to release locking\ntraffic would be to try waiting for the top-parent first using a\nconditional lock acquisition. If that works, cool. If not, go back\naround and work up the tree level by level. Since that path would only\nbe taken in the unhappy case where we're probably going to have to\nwait anyway, the cost probably wouldn't be too bad.\n\n> My preferred solution would be a mix of the above, call it option (3)\n>\n> 3.\n> a) Include the lock table entry for the first 64 subtransactions only,\n> so that we limit shmem. For those first 64 entries, have the subtrans\n> point direct to top, since this makes a subtrans lookup into an O(1)\n> operation, which is important for performance of later actions.\n>\n> b) For any subtransactions after first 64, delete the subxid lock on\n> subcommit, to save shmem, but make subtrans point to the immediate\n> parent (only), not the topxid. That fixes the bug, but causes\n> performance problems in XidInMVCCSnapshot() and others, so we also do\n> c) and d)\n>\n> c) At top level commit, go back and alter subtrans again for subxids\n> so now it points to the topxid, so that we avoid O(N) behavior in\n> XidInMVCCSnapshot() and other callers. 
Additional cost for longer\n> transactions, but it saves considerable cost in later callers that\n> need to call GetTopmostTransaction.\n>\n> d) Optimize SubTransGetTopmostTransaction() so it retrieves entries\n> page-at-a-time. This will reduce the contention of repeatedly\n> re-visiting the same page(s) and ensure that a page is less often\n> paged out when we are still using it.\n\nI'm honestly not very sanguine about changing pg_subtrans to point\nstraight to the topmost XID. It's only OK to do that if there's\nabsolutely nothing that needs to know the full tree structure, and the\npresent case looks like an instance where we would like to have the\nfull tree structure. I would not be surprised if there are others.\nThat said, it seems a lot less likely that this would be an issue once\nthe top-level transaction is no longer running. At that point, all the\nsubtransactions are no longer running either: they either committed or\nthey rolled back, and I can't see a reason why any code should care\nabout anything other than which of those two things happened. So I\nthink your idea in (c) might have some possibilities.\n\nYou could also flip that idea around and have readers replace\nimmediate parent pointers with top-parent pointers opportunistically,\nbut I'm not sure that's actually any better. As you present it in (c)\nabove, there's a risk of going back and updating CLOG state that no\none will ever look at. But if you flipped it around and did it on the\nread side, then you'd have the risk of a bunch of backends trying to\ndo it at the same time. I'm not sure whether either of those things is\na big problem in practice, or whether both are, or neither.\n\nI agree that it looks possible to optimize\nSubTransGetTopmostTransaction better for the case where we want to\ntraverse multiple or all levels, so that fewer pages are read. 
I don't\nknow to what degree that would affect user-observible performance, but\nI can believe that it could be a win.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 12:38:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On 2022-Nov-28, Simon Riggs wrote:\n\n> A narrative description of the issue follows:\n> session1 - requests multiple nested subtransactions like this:\n> BEGIN; ...\n> SAVEPOINT subxid1; ...\n> SAVEPOINT subxid2; ...\n\n> However, if subxid2 subcommits, then the lock wait moves from subxid2\n> to the topxid.\n\nHmm, do we really do that? Seems very strange .. it sounds to me like\nthe lock should have been transferred to subxid1 (which is subxid2's\nparent), not to the top-level Xid. Maybe what the user wanted was to\nrelease subxid1 before establishing subxid2? Or do they want to\ncontinue to be able to rollback to subxid1 after establishing subxid2?\n(but why?)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 28 Nov 2022 19:53:10 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, 28 Nov 2022 at 18:53, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Nov-28, Simon Riggs wrote:\n>\n> > A narrative description of the issue follows:\n> > session1 - requests multiple nested subtransactions like this:\n> > BEGIN; ...\n> > SAVEPOINT subxid1; ...\n> > SAVEPOINT subxid2; ...\n>\n> > However, if subxid2 subcommits, then the lock wait moves from subxid2\n> > to the topxid.\n>\n> Hmm, do we really do that? Seems very strange .. it sounds to me like\n> the lock should have been transferred to subxid1 (which is subxid2's\n> parent), not to the top-level Xid.\n\nCorrect; that is exactly what I'm saying and why we have a bug since\n3c27944fb2141.\n\n> Maybe what the user wanted was to\n> release subxid1 before establishing subxid2? Or do they want to\n> continue to be able to rollback to subxid1 after establishing subxid2?\n> (but why?)\n\nThis isn't a description of a user's actions, it is a script that\nillustrates the bug in XactLockTableWait().\n\nPerhaps a better example would be nested code blocks with EXCEPTION\nclauses where the outer block fails...\ne.g.\n\nDO $$\nBEGIN\n SELECT 1;\n\n BEGIN\n SELECT 1;\n EXCEPTION WHEN OTHERS THEN\n RAISE NOTICE 's2';\n END;\n\n RAISE division_by_zero; -- now back in outer subxact, which now fails\n\n EXCEPTION WHEN OTHERS THEN\n RAISE NOTICE 's1';\nEND;$$;\n\nOf course, debugging this is harder since there is no way to return\nthe current subxid in SQL.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 28 Nov 2022 19:34:17 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, 28 Nov 2022 at 17:38, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 28, 2022 at 10:28 AM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > So we have these options\n> >\n> > 1. Removing the XactLockTableDelete() call in CommitSubTransaction().\n> > That releases lock waiters earlier than expected, which requires\n> > pushups in XactLockTableWait() to cope with that (which are currently\n> > broken). Why do we do it? To save shmem space in the lock table should\n> > anyone want to run a transaction that contains thousands of\n> > subtransactions, or more. So option (1) alone would eventually cause\n> > us to run out of space in the lock table and a transaction would\n> > receive ERRORs rather than be allowed to run for a long time.\n>\n> This seems unprincipled to me. The point of having a lock on the\n> subtransaction in the lock table is so that people can wait for the\n> subtransaction to end. If we don't remove the lock table entry at\n> subtransaction end, then that lock table entry doesn't serve that\n> purpose any more.\n\nAn easy point to confuse:\n\"subtransaction to end\": The subtransaction is \"still running\" to\nother backends even AFTER it has been subcommitted, but its state now\ntransfers to the parent.\n\nSo the subtransaction doesn't cease running until it aborts, one of\nits parent aborts or top level commit. The subxid lock should, on\nprinciple, exist until one of those events occurs. It doesn't, but\nthat is an optimization, for the stated reason.\n\n(All of the above is current behavior).\n\n> > 2. In XactLockTableWait(), replace the call to SubTransGetParent(), so\n> > we go up the levels one by one as we did before. However, (2) causes\n> > huge subtrans contention and if we implemented that and backpatched it\n> > the performance issues could be significant. 
So my feeling is that if\n> > we do (2) then we should not backpatch it.\n>\n> What I find suspicious about the coding of this function is that it\n> doesn't distinguish between commits and aborts at all. Like, if a\n> subtransaction commits, the changes don't become globally visible\n> until the parent transaction also commits. If a subtransaction aborts,\n> though, what happens to the top-level XID doesn't seem to matter at\n> all. The comment seems to agree:\n>\n> * Note that this does the right thing for subtransactions: if we wait on a\n> * subtransaction, we will exit as soon as it aborts or its top parent commits.\n>\n> I feel like what I'd expect to see given this comment is code which\n> (1) waits until the supplied XID is no longer running, (2) checks\n> whether the XID aborted, and if so return at once, and (3) otherwise\n> recurse to the parent XID. But the code doesn't do that. Perhaps\n> that's not actually the right thing to do, since it seems like a big\n> behavior change, but then I don't understand the comment.\n\nAs I mention above, the correct behavior is that the subxact doesn't\ncease running until it aborts, one of its parent aborts or top level\ncommit.\n\nWhich is slightly different from the comment, which may explain why\nthe bug exists.\n\n> Incidentally, one possible optimization here to try to release locking\n> traffic would be to try waiting for the top-parent first using a\n> conditional lock acquisition. If that works, cool. If not, go back\n> around and work up the tree level by level. Since that path would only\n> be taken in the unhappy case where we're probably going to have to\n> wait anyway, the cost probably wouldn't be too bad.\n\nThat sounds like a potential bug fix (not just an optimization).\n\n> > My preferred solution would be a mix of the above, call it option (3)\n> >\n> > 3.\n> > a) Include the lock table entry for the first 64 subtransactions only,\n> > so that we limit shmem. 
For those first 64 entries, have the subtrans\n> > point direct to top, since this makes a subtrans lookup into an O(1)\n> > operation, which is important for performance of later actions.\n> >\n> > b) For any subtransactions after first 64, delete the subxid lock on\n> > subcommit, to save shmem, but make subtrans point to the immediate\n> > parent (only), not the topxid. That fixes the bug, but causes\n> > performance problems in XidInMVCCSnapshot() and others, so we also do\n> > c) and d)\n> >\n> > c) At top level commit, go back and alter subtrans again for subxids\n> > so now it points to the topxid, so that we avoid O(N) behavior in\n> > XidInMVCCSnapshot() and other callers. Additional cost for longer\n> > transactions, but it saves considerable cost in later callers that\n> > need to call GetTopmostTransaction.\n> >\n> > d) Optimize SubTransGetTopmostTransaction() so it retrieves entries\n> > page-at-a-time. This will reduce the contention of repeatedly\n> > re-visiting the same page(s) and ensure that a page is less often\n> > paged out when we are still using it.\n>\n> I'm honestly not very sanguine about changing pg_subtrans to point\n> straight to the topmost XID. It's only OK to do that if there's\n> absolutely nothing that needs to know the full tree structure, and the\n> present case looks like an instance where we would like to have the\n> full tree structure. I would not be surprised if there are others.\n\nThere are no others. It's a short list and I've checked. I ask others\nto do the same.\n\n> That said, it seems a lot less likely that this would be an issue once\n> the top-level transaction is no longer running. At that point, all the\n> subtransactions are no longer running either: they either committed or\n> they rolled back, and I can't see a reason why any code should care\n> about anything other than which of those two things happened. 
So I\n> think your idea in (c) might have some possibilities.\n>\n> You could also flip that idea around and have readers replace\n> immediate parent pointers with top-parent pointers opportunistically,\n> but I'm not sure that's actually any better. As you present it in (c)\n> above, there's a risk of going back and updating CLOG state that no\n> one will ever look at. But if you flipped it around and did it on the\n> read side, then you'd have the risk of a bunch of backends trying to\n> do it at the same time. I'm not sure whether either of those things is\n> a big problem in practice, or whether both are, or neither.\n\n(Subtrans state, not clog state)\n\nBut that puts the work on the reader rather than the writer.\n\n> I agree that it looks possible to optimize\n> SubTransGetTopmostTransaction better for the case where we want to\n> traverse multiple or all levels, so that fewer pages are read. I don't\n> know to what degree that would affect user-observable performance, but\n> I can believe that it could be a win.\n\nGood.\n\nThanks for commenting.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 28 Nov 2022 19:44:54 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 2:45 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> An easy point to confuse:\n> \"subtransaction to end\": The subtransaction is \"still running\" to\n> other backends even AFTER it has been subcommitted, but its state now\n> transfers to the parent.\n>\n> So the subtransaction doesn't cease running until it aborts, one of\n> its parents aborts or top level commit. The subxid lock should, in\n> principle, exist until one of those events occurs. It doesn't, but\n> that is an optimization, for the stated reason.\n\nThat's not what \"running\" means to me. Running means it's started and\nhasn't yet committed or rolled back.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 14:50:36 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> That's not what \"running\" means to me. Running means it's started and\n> hasn't yet committed or rolled back.\n\nA subxact definitely can't be considered committed until its topmost\nparent commits. However, it could be known to be rolled back before\nits parent. IIUC, the current situation is that we don't take\nadvantage of the latter case but just wait for the topmost parent.\n\nOne thing we need to be pretty careful of here is to not break the\npromise of atomic commit. At topmost commit, all subxacts must\nappear committed simultaneously. It's not quite clear to me whether\nwe need a similar guarantee in the rollback case. It seems like\nwe shouldn't, but maybe I'm missing something, in which case maybe\nthe current behavior is correct?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 15:01:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, Nov 28, 2022 at 3:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> One thing we need to be pretty careful of here is to not break the\n> promise of atomic commit. At topmost commit, all subxacts must\n> appear committed simultaneously. It's not quite clear to me whether\n> we need a similar guarantee in the rollback case. It seems like\n> we shouldn't, but maybe I'm missing something, in which case maybe\n> the current behavior is correct?\n\nAFAICS, the behavior that Simon is complaining about here is an\nexception to the way things work in general, and therefore his\ncomplaint is well-founded. AbortSubTransaction() arranges to release\nresources, whereas CommitSubTransaction() arranges to have them\ntransferred to the parent transaction. This isn't necessarily obvious\nfrom looking at those functions, but if you drill down into the\nAtEO(Sub)Xact functions that they call, that's what happens in nearly\nall cases. Given that subtransaction abort releases resources\nimmediately, it seems pretty fair to wonder what the value is in\nwaiting for its parent or the topmost transaction. I don't see how\nthat can be necessary for correctness.\n\nThe commit message to which Simon (3c27944fb2141) points seems to have\ninadvertently changed the behavior while trying to fix a bug and\nimprove performance. I remember being a bit skeptical about that fix\nat the time. Back in the day, you couldn't XactLockTableWait() unless\nyou knew that the transaction had already started. That commit tried\nto make it so that you could XactLockTableWait() earlier, because\nthat's apparently something that logical replication needs to do. 
But\nthat is a major redefinition of the charter of that function, and I am\nwondering whether it was a mistake to fold together the thing that we\nneed in normal cases (which is to wait for a transaction we know has\nstarted and may not have finished) from the thing we need in the\nlogical decoding case (which apparently has different requirements).\nMaybe we should have solved that problem by finding a way to wait for\nthe transaction to start, and then afterwards wait for it to end. Or\nmaybe we should have altogether different entrypoints for the two\nrequirements. Or maybe using one function is fine but we just need it\nto be more correct. I'm not really sure.\n\nIn short, I think Simon is right that there's a problem and right\nabout which commit caused it, but I'm not sure what I think we ought\nto do about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:10:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
},
{
"msg_contents": "On Mon, 28 Nov 2022 at 21:10, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Nov 28, 2022 at 3:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > One thing we need to be pretty careful of here is to not break the\n> > promise of atomic commit. At topmost commit, all subxacts must\n> > appear committed simultaneously. It's not quite clear to me whether\n> > we need a similar guarantee in the rollback case. It seems like\n> > we shouldn't, but maybe I'm missing something, in which case maybe\n> > the current behavior is correct?\n>\n> In short, I think Simon is right that there's a problem and right\n> about which commit caused it, but I'm not sure what I think we ought\n> to do about it.\n\nI'm comfortable with ignoring it, on the basis that it *is* a\nperformance optimization, but I suggest we keep the test (with\nmodified output) and document the behavior, if we do.\n\nThe really big issue is the loss of performance we get from having\nsubtrans point only to its immediate parent, which makes\nXidInMVCCSnapshot() go really slow in the presence of lots of\nsubtransactions. So ignoring the issue on this thread will open the\ndoor for the optimization posted for this patch:\nhttps://commitfest.postgresql.org/40/3806/\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 29 Nov 2022 12:53:02 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in wait time when waiting on nested subtransaction"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nWhile working on Pluggable TOAST we've detected a defective behavior\non tables with large amounts of TOASTed data - queries freeze and DB stalls.\nFurther investigation led us to the loop with GetNewOidWithIndex function\ncall - when all available Oid already exist in the related TOAST table this\nloop continues infinitely. Data type used for value ID is the UINT32, which\nis\nunsigned int and has a maximum value of *4294967295* which allows\nmaximum 4294967295 records in the TOAST table. It is not a very big amount\nfor modern databases and is the major problem for productive systems.\n\nQuick fix for this problem is limiting GetNewOidWithIndex loops to some\nreasonable amount defined by related macro and returning error if there is\nstill no available Oid. Please check attached patch, any feedback is\nappreciated.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Mon, 28 Nov 2022 18:34:20 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 18:34:20 +0300, Nikita Malakhov wrote:\n> While working on Pluggable TOAST we've detected a defective behavior\n> on tables with large amounts of TOASTed data - queries freeze and DB stalls.\n> Further investigation led us to the loop with GetNewOidWithIndex function\n> call - when all available Oid already exist in the related TOAST table this\n> loop continues infinitely. Data type used for value ID is the UINT32, which\n> is\n> unsigned int and has a maximum value of *4294967295* which allows\n> maximum 4294967295 records in the TOAST table. It is not a very big amount\n> for modern databases and is the major problem for productive systems.\n\nI don't think the absolute number is the main issue - by default external\ntoasting will happen only for bigger datums. 4 billion external datums\ntypically use a lot of space.\n\nIf you hit this easily with your patch, then you likely broke the conditions\nunder which external toasting happens.\n\nIMO the big issue is the global oid counter making it much easier to hit oid\nwraparound. Due to that we end up assigning oids that conflict with existing\ntoast oids much sooner than 4 billion toasted datums.\n\nI think the first step to improve the situation is to not use a global oid\ncounter for toasted values. One way to do that would be to use the sequence\ncode to do oid assignment, but we likely can find a more efficient\nrepresentation.\n\nEventually we should do the obvious thing and make toast ids 64bit wide - to\ncombat the space usage we likely should switch to representing the ids as\nvariable width integers or such, otherwise the space increase would likely be\nprohibitive.\n\n\n> Quick fix for this problem is limiting GetNewOidWithIndex loops to some\n> reasonable amount defined by related macro and returning error if there is\n> still no available Oid. Please check attached patch, any feedback is\n> appreciated.\n\nThis feels like the wrong spot to tackle the issue. 
For one, most of the\nlooping will be in GetNewOidWithIndex(), so limiting looping in\ntoast_save_datum() won't help much. For another, if the limiting were in the\nright place, it'd break currently working cases. Due to oid wraparound it's\npretty easy to hit \"ranges\" of allocated oids, without even getting close to\n2^32 toasted datums.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 12:36:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi!\n\nWe've already encountered this issue on large production databases, and\n4 billion rows is not so much for modern bases, so this issue already arises\nfrom time to time and would arise more and more often. I agree that global\noid counter is the main issue, and better solution would be local counters\nwith larger datatype for value id. This is the right way to solve this\nissue,\nalthough it would take some time. As I understand, global counter was taken\nbecause it looked the fastest way of getting unique ID.\nOk, I'll prepare a patch with it.\n\n>Due to that we end up assigning oids that conflict with existing\n>toast oids much sooner than 4 billion toasted datums.\n\nJust a note: global oid is checked for related TOAST table only, so equal\noids\nin different TOAST tables would not collide.\n\n>Eventually we should do the obvious thing and make toast ids 64bit wide -\nto\n>combat the space usage we likely should switch to representing the ids as\n>variable width integers or such, otherwise the space increase would likely\nbe\n>prohibitive.\n\nI'm already working on it, but I thought that 64-bit value ID won't be\neasily\naccepted by community. I'd be very thankful for any advice on this.\n\nOn Mon, Nov 28, 2022 at 11:36 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-11-28 18:34:20 +0300, Nikita Malakhov wrote:\n> > While working on Pluggable TOAST we've detected a defective behavior\n> > on tables with large amounts of TOASTed data - queries freeze and DB\n> stalls.\n> > Further investigation led us to the loop with GetNewOidWithIndex function\n> > call - when all available Oid already exist in the related TOAST table\n> this\n> > loop continues infinitely. Data type used for value ID is the UINT32,\n> which\n> > is\n> > unsigned int and has a maximum value of *4294967295* which allows\n> > maximum 4294967295 records in the TOAST table. 
It is not a very big\n> amount\n> > for modern databases and is the major problem for productive systems.\n>\n> I don't think the absolute number is the main issue - by default external\n> toasting will happen only for bigger datums. 4 billion external datums\n> typically use a lot of space.\n>\n> If you hit this easily with your patch, then you likely broke the\n> conditions\n> under which external toasting happens.\n>\n> IMO the big issue is the global oid counter making it much easier to hit\n> oid\n> wraparound. Due to that we end up assigning oids that conflict with\n> existing\n> toast oids much sooner than 4 billion toasted datums.\n>\n> I think the first step to improve the situation is to not use a global oid\n> counter for toasted values. One way to do that would be to use the sequence\n> code to do oid assignment, but we likely can find a more efficient\n> representation.\n>\n> Eventually we should do the obvious thing and make toast ids 64bit wide -\n> to\n> combat the space usage we likely should switch to representing the ids as\n> variable width integers or such, otherwise the space increase would likely\n> be\n> prohibitive.\n>\n>\n> > Quick fix for this problem is limiting GetNewOidWithIndex loops to some\n> > reasonable amount defined by related macro and returning error if there\n> is\n> > still no available Oid. Please check attached patch, any feedback is\n> > appreciated.\n>\n> This feels like the wrong spot to tackle the issue. For one, most of the\n> looping will be in GetNewOidWithIndex(), so limiting looping in\n> toast_save_datum() won't help much. For another, if the limiting were in\n> the\n> right place, it'd break currently working cases. 
Due to oid wraparound it's\n> pretty easy to hit \"ranges\" of allocated oids, without even getting close\n> to\n> 2^32 toasted datums.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Mon, 28 Nov 2022 23:54:53 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 23:54:53 +0300, Nikita Malakhov wrote:\n> We've already encountered this issue on large production databases, and\n> 4 billion rows is not so much for modern bases, so this issue already arises\n> from time to time and would arise more and more often.\n\nWas the issue that you exceeded 4 billion toasted datums, or that assignment\ntook a long time? How many toast datums did you actually have? Was this due to\nvery wide rows leading to even small datums getting toasted?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:00:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the first step to improve the situation is to not use a global oid\n> counter for toasted values. One way to do that would be to use the sequence\n> code to do oid assignment, but we likely can find a more efficient\n> representation.\n\nI don't particularly buy that, because the only real fix is this:\n\n> Eventually we should do the obvious thing and make toast ids 64bit wide\n\nand if we do that there'll be no need to worry about multiple counters.\n\n> - to\n> combat the space usage we likely should switch to representing the ids as\n> variable width integers or such, otherwise the space increase would likely be\n> prohibitive.\n\nAnd I don't buy that either. An extra 4 bytes with a 2K payload is not\n\"prohibitive\", it's more like \"down in the noise\".\n\nI think if we switch to int8 keys and widen the global OID counter to 8\nbytes (using just the low 4 bytes for other purposes), we'll have a\nperfectly fine solution. There is still plenty of work to be done under\nthat plan, because of the need to maintain backward compatibility for\nexisting TOAST tables --- and maybe people would want an option to keep on\nusing them, for non-enormous tables? If we add additional work on top of\nthat, it'll just mean that it will take longer to have any solution.\n\n>> Quick fix for this problem is limiting GetNewOidWithIndex loops to some\n>> reasonable amount defined by related macro and returning error if there is\n>> still no available Oid. Please check attached patch, any feedback is\n>> appreciated.\n\n> This feels like the wrong spot to tackle the issue.\n\nYeah, that is completely horrid. It does not remove the existing failure\nmode, just changes it to have worse consequences.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:04:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 16:04:12 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > - to\n> > combat the space usage we likely should switch to representing the ids as\n> > variable width integers or such, otherwise the space increase would likely be\n> > prohibitive.\n> \n> And I don't buy that either. An extra 4 bytes with a 2K payload is not\n> \"prohibitive\", it's more like \"down in the noise\".\n\nThe space usage for the toast relation + index itself is indeed\nirrelevant. Where it's not \"down in the noise\" is in struct varatt_external,\ni.e. references to external toast datums. The size of that already is an\nissue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:23:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nAndres Freund <andres@anarazel.de> writes:\n>Was the issue that you exceeded 4 billion toasted datums, or that\nassignment\n>took a long time? How many toast datums did you actually have? Was this\ndue to\n>very wide rows leading to even small datums getting toasted?\n\nYep, we had 4 billion toasted datums. I remind that currently relation has\nsingle\nTOAST table for all toastable attributes, so it is not so difficult to get\nto 4 billion\nof toasted values.\n\n>I think if we switch to int8 keys and widen the global OID counter to 8\n>bytes (using just the low 4 bytes for other purposes), we'll have a\n>perfectly fine solution. There is still plenty of work to be done under\n>that plan, because of the need to maintain backward compatibility for\n>existing TOAST tables --- and maybe people would want an option to keep on\n>using them, for non-enormous tables? If we add additional work on top of\n>that, it'll just mean that it will take longer to have any solution.\n\nI agree, but:\n1) Global OID counter is used not only for TOAST, so there would be a lot of\nplaces where the short counter (low part of 64 OID, if we go with that) is\nused;\n2) Upgrading to 64-bit id would require re-toasting old TOAST tables. Or\nthere\nis some way to distinguish old tables from new ones?\n\nBut I don't see any reason to keep an old short ID as an option.\n\n...\n>Yeah, that is completely horrid. It does not remove the existing failure\n>mode, just changes it to have worse consequences.\n\nOn Tue, Nov 29, 2022 at 12:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > I think the first step to improve the situation is to not use a global\n> oid\n> > counter for toasted values. 
One way to do that would be to use the\n> sequence\n> code to do oid assignment, but we likely can find a more efficient\n> representation.\n>\n> I don't particularly buy that, because the only real fix is this:\n>\n> > Eventually we should do the obvious thing and make toast ids 64bit wide\n>\n> and if we do that there'll be no need to worry about multiple counters.\n>\n> > - to\n> > combat the space usage we likely should switch to representing the ids as\n> > variable width integers or such, otherwise the space increase would\n> likely be\n> > prohibitive.\n>\n> And I don't buy that either. An extra 4 bytes with a 2K payload is not\n> \"prohibitive\", it's more like \"down in the noise\".\n>\n> I think if we switch to int8 keys and widen the global OID counter to 8\n> bytes (using just the low 4 bytes for other purposes), we'll have a\n> perfectly fine solution. There is still plenty of work to be done under\n> that plan, because of the need to maintain backward compatibility for\n> existing TOAST tables --- and maybe people would want an option to keep on\n> using them, for non-enormous tables? If we add additional work on top of\n> that, it'll just mean that it will take longer to have any solution.\n>\n> >> Quick fix for this problem is limiting GetNewOidWithIndex loops to some\n> >> reasonable amount defined by related macro and returning error if there\n> is\n> >> still no available Oid. Please check attached patch, any feedback is\n> >> appreciated.\n>\n> > This feels like the wrong spot to tackle the issue.\n>\n> Yeah, that is completely horrid. It does not remove the existing failure\n> mode, just changes it to have worse consequences.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 29 Nov 2022 00:24:49 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-28 16:04:12 -0500, Tom Lane wrote:\n>> And I don't buy that either. An extra 4 bytes with a 2K payload is not\n>> \"prohibitive\", it's more like \"down in the noise\".\n\n> The space usage for the the the toast relation + index itself is indeed\n> irrelevant. Where it's not \"down in the noise\" is in struct varatt_external,\n> i.e. references to external toast datums.\n\nAh, gotcha. Yes, the size of varatt_external is a problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:45:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-29 00:24:49 +0300, Nikita Malakhov wrote:\n> 2) Upgrading to 64-bit id would require re-toasting old TOAST tables. Or\n> there is some way to distinguish old tables from new ones?\n\nThe catalog / relcache entry should suffice to differentiate between the two.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 13:49:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-29 00:24:49 +0300, Nikita Malakhov wrote:\n>> 2) Upgrading to 64-bit id would require re-toasting old TOAST tables. Or\n>> there is some way to distinguish old tables from new ones?\n\n> The catalog / relcache entry should suffice to differentiate between the two.\n\nYeah, you could easily look at the datatype of the first attribute\n(in either the TOAST table or its index) to determine what to do.\n\nAs I said before, I think there's a decent argument that some people\nwill want the option to stay with 4-byte TOAST OIDs indefinitely,\nat least for smaller tables. So even without the fact that forced\nconversions would be horridly expensive, we'll need to continue\nsupport for both forms of TOAST table.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:57:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n> As I said before, I think there's a decent argument that some people\n> will want the option to stay with 4-byte TOAST OIDs indefinitely,\n> at least for smaller tables.\n\nI think we'll need to do something about the width of varatt_external to make\nthe conversion to 64bit toast oids viable - and if we do, I don't think\nthere's a decent argument for staying with 4 byte toast OIDs. I think the\nvaratt_external equivalent would end up being smaller in just about all cases.\nAnd as you said earlier, the increased overhead inside the toast table / index\nis not relevant compared to the size of toasted datums.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 28 Nov 2022 14:10:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\nI'll check that tomorrow. If it is so then there won't be a problem keeping\nold tables without re-toasting.\n\nOn Tue, Nov 29, 2022 at 1:10 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n> > As I said before, I think there's a decent argument that some people\n> > will want the option to stay with 4-byte TOAST OIDs indefinitely,\n> > at least for smaller tables.\n>\n> I think we'll need to do something about the width of varatt_external to\n> make\n> the conversion to 64bit toast oids viable - and if we do, I don't think\n> there's a decent argument for staying with 4 byte toast OIDs. I think the\n> varatt_external equivalent would end up being smaller in just about all\n> cases.\n> And as you said earlier, the increased overhead inside the toast table /\n> index\n> is not relevant compared to the size of toasted datums.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi,I'll check that tomorrow. If it is so then there won't be a problem keepingold tables without re-toasting.On Tue, Nov 29, 2022 at 1:10 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n> As I said before, I think there's a decent argument that some people\n> will want the option to stay with 4-byte TOAST OIDs indefinitely,\n> at least for smaller tables.\n\nI think we'll need to do something about the width of varatt_external to make\nthe conversion to 64bit toast oids viable - and if we do, I don't think\nthere's a decent argument for staying with 4 byte toast OIDs. 
I think the\nvaratt_external equivalent would end up being smaller in just about all cases.\nAnd as you said earlier, the increased overhead inside the toast table / index\nis not relevant compared to the size of toasted datums.\n\nGreetings,\n\nAndres Freund\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Tue, 29 Nov 2022 01:12:13 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n>> As I said before, I think there's a decent argument that some people\n>> will want the option to stay with 4-byte TOAST OIDs indefinitely,\n>> at least for smaller tables.\n\n> And as you said earlier, the increased overhead inside the toast table / index\n> is not relevant compared to the size of toasted datums.\n\nPerhaps not.\n\n> I think we'll need to do something about the width of varatt_external to make\n> the conversion to 64bit toast oids viable - and if we do, I don't think\n> there's a decent argument for staying with 4 byte toast OIDs. I think the\n> varatt_external equivalent would end up being smaller in just about all cases.\n\nI agree that we can't simply widen varatt_external to use 8 bytes for\nthe toast ID in all cases. Also, I now get the point about avoiding\nuse of globally assigned OIDs here: if the counter starts from zero\nfor each table, then a variable-width varatt_external could actually\nbe smaller than currently for many cases. However, that bit is somewhat\northogonal, and it's certainly not required for fixing the basic problem.\n\nSo it seems like the plan of attack ought to be:\n\n1. Invent a new form or forms of varatt_external to allow different\nwidths of the toast ID. Use the narrowest width possible for any\ngiven ID value.\n\n2. Allow TOAST tables/indexes to store either 4-byte or 8-byte IDs.\n(Conversion could be done as a side effect of table-rewrite\noperations, perhaps.)\n\n3. Localize ID selection so that tables can have small toast IDs\neven when other tables have many IDs. (Optional, could do later.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 17:24:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "I've missed that -\n\n>4 billion external datums\n>typically use a lot of space.\n\nNot quite so. It's 8 Tb for the minimal size of toasted data (about 2 Kb).\nIn my practice tables with more than 5 billions of rows are not of\nsomething out\nof the ordinary (highly loaded databases with large amounts of data in use).\n\nOn Tue, Nov 29, 2022 at 1:12 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi,\n>\n> I'll check that tomorrow. If it is so then there won't be a problem keeping\n> old tables without re-toasting.\n>\n> On Tue, Nov 29, 2022 at 1:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n>> > As I said before, I think there's a decent argument that some people\n>> > will want the option to stay with 4-byte TOAST OIDs indefinitely,\n>> > at least for smaller tables.\n>>\n>> I think we'll need to do something about the width of varatt_external to\n>> make\n>> the conversion to 64bit toast oids viable - and if we do, I don't think\n>> there's a decent argument for staying with 4 byte toast OIDs. I think the\n>> varatt_external equivalent would end up being smaller in just about all\n>> cases.\n>> And as you said earlier, the increased overhead inside the toast table /\n>> index\n>> is not relevant compared to the size of toasted datums.\n>>\n>> Greetings,\n>>\n>> Andres Freund\n>>\n>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nI've missed that ->4 billion external datums>typically use a lot of space.Not quite so. It's 8 Tb for the minimal size of toasted data (about 2 Kb).In my practice tables with more than 5 billions of rows are not of something outof the ordinary (highly loaded databases with large amounts of data in use).On Tue, Nov 29, 2022 at 1:12 AM Nikita Malakhov <hukutoc@gmail.com> wrote:Hi,I'll check that tomorrow. 
If it is so then there won't be a problem keepingold tables without re-toasting.On Tue, Nov 29, 2022 at 1:10 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-11-28 16:57:53 -0500, Tom Lane wrote:\n> As I said before, I think there's a decent argument that some people\n> will want the option to stay with 4-byte TOAST OIDs indefinitely,\n> at least for smaller tables.\n\nI think we'll need to do something about the width of varatt_external to make\nthe conversion to 64bit toast oids viable - and if we do, I don't think\nthere's a decent argument for staying with 4 byte toast OIDs. I think the\nvaratt_external equivalent would end up being smaller in just about all cases.\nAnd as you said earlier, the increased overhead inside the toast table / index\nis not relevant compared to the size of toasted datums.\n\nGreetings,\n\nAndres Freund\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Tue, 29 Nov 2022 01:27:24 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi!\n\nI'm working on this issue according to the plan Tom proposed above -\n\n>I agree that we can't simply widen varatt_external to use 8 bytes for\n>the toast ID in all cases. Also, I now get the point about avoiding\n>use of globally assigned OIDs here: if the counter starts from zero\n>for each table, then a variable-width varatt_external could actually\n>be smaller than currently for many cases. However, that bit is somewhat\n>orthogonal, and it's certainly not required for fixing the basic problem.\n\nHave I understood correctly that you suppose using an individual counter\nfor each TOAST table? I'm working on this approach, so we store counters\nin cache, but I see an issue with the first insert - when there is no\ncounter\nin cache so we have to loop through the table with increasing steps to find\navailable one (i.e. after restart). Or we still use single global counter,\nbut\n64-bit with a wraparound?\n\n>So it seems like the plan of attack ought to be:\n\n>1. Invent a new form or forms of varatt_external to allow different\n>widths of the toast ID. Use the narrowest width possible for any\n>given ID value.\n\nI'm using the VARTAG field - there are plenty of available values, so there\nis no problem in distinguishing regular toast pointer with 'short' value id\n(4 bytes) from long (8 bytes).\n\nvaratt_external currently is 32-bit aligned, so there is no reason in using\nnarrower type for value ids up to 16 bits.Or is it?\n\n>2. Allow TOAST tables/indexes to store either 4-byte or 8-byte IDs.\n>(Conversion could be done as a side effect of table-rewrite\n>operations, perhaps.)\n\nStill on toast/detoast part, would get to that later.\n\n>3. Localize ID selection so that tables can have small toast IDs\n>even when other tables have many IDs. 
(Optional, could do later.)\n\n>\nThank you!\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!I'm working on this issue according to the plan Tom proposed above ->I agree that we can't simply widen varatt_external to use 8 bytes for>the toast ID in all cases. Also, I now get the point about avoiding>use of globally assigned OIDs here: if the counter starts from zero>for each table, then a variable-width varatt_external could actually>be smaller than currently for many cases. However, that bit is somewhat>orthogonal, and it's certainly not required for fixing the basic problem.Have I understood correctly that you suppose using an individual counterfor each TOAST table? I'm working on this approach, so we store countersin cache, but I see an issue with the first insert - when there is no counterin cache so we have to loop through the table with increasing steps to findavailable one (i.e. after restart). Or we still use single global counter, but64-bit with a wraparound?>So it seems like the plan of attack ought to be:>1. Invent a new form or forms of varatt_external to allow different>widths of the toast ID. Use the narrowest width possible for any>given ID value.I'm using the VARTAG field - there are plenty of available values, so thereis no problem in distinguishing regular toast pointer with 'short' value id(4 bytes) from long (8 bytes).varatt_external currently is 32-bit aligned, so there is no reason in usingnarrower type for value ids up to 16 bits.Or is it?>2. Allow TOAST tables/indexes to store either 4-byte or 8-byte IDs.>(Conversion could be done as a side effect of table-rewrite>operations, perhaps.)Still on toast/detoast part, would get to that later.>3. Localize ID selection so that tables can have small toast IDs>even when other tables have many IDs. (Optional, could do later.)\nThank you!-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Tue, 29 Nov 2022 14:05:44 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi hackers!\n\nHere's some update on the subject - I've made a branch based on master from\n28/11 and introduced 64-bit TOAST value ID there. It is not complete now but\nis already working and has some features:\n- extended structure for TOAST pointer (varatt_long_external) with 64-bit\nTOAST value ID;\n- individual ID counters for each TOAST table, being cached for faster\naccess\nand lookup of maximum TOAST ID in use after server restart.\nhttps://github.com/postgrespro/postgres/tree/toast_64bit\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!Here's some update on the subject - I've made a branch based on master from28/11 and introduced 64-bit TOAST value ID there. It is not complete now butis already working and has some features:- extended structure for TOAST pointer (varatt_long_external) with 64-bitTOAST value ID;- individual ID counters for each TOAST table, being cached for faster accessand lookup of maximum TOAST ID in use after server restart.https://github.com/postgrespro/postgres/tree/toast_64bit--Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Wed, 7 Dec 2022 00:00:38 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi hackers!\n\nI've prepared a patch with a 64-bit TOAST Value ID. It is a kind of\nprototype\nand needs some further work, but it is already working and ready to play\nwith.\n\nI've introduced 64-bit value ID field in varatt_external, but to keep it\ncompatible\nwith older bases 64-bit value is stored as a structure of two 32-bit fields\nand value\nID moved to be the last in the varatt_external structure. Also, I've\nintroduced\ndifferent ID assignment function - instead of using global OID assignment\nfunction\nI use individual counters for each TOAST table, automatically cached and\nafter\nserver restart when new value is inserted into TOAST table maximum used\nvalue\nis searched and used to assign the next one.\n\n>Andres Freund:\n>I think we'll need to do something about the width of varatt_external to\nmake\n>the conversion to 64bit toast oids viable - and if we do, I don't think\n>there's a decent argument for staying with 4 byte toast OIDs. I think the\n>varatt_external equivalent would end up being smaller in just about all\ncases.\n>And as you said earlier, the increased overhead inside the toast table /\nindex\n>is not relevant compared to the size of toasted datums.\n\nThere is some small overhead due to indexing 64-bit values. Also, for\nsmaller\ntables we can use 32-bit ID instead of 64-bit, but then we would have a\nproblem\nwhen we reach the limit of 2^32.\n\nPg_upgrade seems to be not a very difficult case if we go keeping old TOAST\ntables using 32-bit values,\n\nPlease have a look. I'd be grateful for some further directions.\n\nGIT branch with this feature, rebased onto current master:\nhttps://github.com/postgrespro/postgres/tree/toast_64bit\n\nOn Wed, Dec 7, 2022 at 12:00 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> Here's some update on the subject - I've made a branch based on master from\n> 28/11 and introduced 64-bit TOAST value ID there. 
It is not complete now\n> but\n> is already working and has some features:\n> - extended structure for TOAST pointer (varatt_long_external) with 64-bit\n> TOAST value ID;\n> - individual ID counters for each TOAST table, being cached for faster\n> access\n> and lookup of maximum TOAST ID in use after server restart.\n> https://github.com/postgrespro/postgres/tree/toast_64bit\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 13 Dec 2022 13:41:01 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi hackers!\n\nAny suggestions on the previous message (64-bit toast value ID with\nindividual counters)?\n\nOn Tue, Dec 13, 2022 at 1:41 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi hackers!\n>\n> I've prepared a patch with a 64-bit TOAST Value ID. It is a kind of\n> prototype\n> and needs some further work, but it is already working and ready to play\n> with.\n>\n> I've introduced 64-bit value ID field in varatt_external, but to keep it\n> compatible\n> with older bases 64-bit value is stored as a structure of two 32-bit\n> fields and value\n> ID moved to be the last in the varatt_external structure. Also, I've\n> introduced\n> different ID assignment function - instead of using global OID assignment\n> function\n> I use individual counters for each TOAST table, automatically cached and\n> after\n> server restart when new value is inserted into TOAST table maximum used\n> value\n> is searched and used to assign the next one.\n>\n> >Andres Freund:\n> >I think we'll need to do something about the width of varatt_external to\n> make\n> >the conversion to 64bit toast oids viable - and if we do, I don't think\n> >there's a decent argument for staying with 4 byte toast OIDs. I think the\n> >varatt_external equivalent would end up being smaller in just about all\n> cases.\n> >And as you said earlier, the increased overhead inside the toast table /\n> index\n> >is not relevant compared to the size of toasted datums.\n>\n> There is some small overhead due to indexing 64-bit values. Also, for\n> smaller\n> tables we can use 32-bit ID instead of 64-bit, but then we would have a\n> problem\n> when we reach the limit of 2^32.\n>\n> Pg_upgrade seems to be not a very difficult case if we go keeping old TOAST\n> tables using 32-bit values,\n>\n> Please have a look. 
I'd be grateful for some further directions.\n>\n> GIT branch with this feature, rebased onto current master:\n> https://github.com/postgrespro/postgres/tree/toast_64bit\n>\n>\n> --\nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!Any suggestions on the previous message (64-bit toast value ID with individual counters)?On Tue, Dec 13, 2022 at 1:41 PM Nikita Malakhov <hukutoc@gmail.com> wrote:Hi hackers!I've prepared a patch with a 64-bit TOAST Value ID. It is a kind of prototypeand needs some further work, but it is already working and ready to play with.I've introduced 64-bit value ID field in varatt_external, but to keep it compatiblewith older bases 64-bit value is stored as a structure of two 32-bit fields and valueID moved to be the last in the varatt_external structure. Also, I've introduceddifferent ID assignment function - instead of using global OID assignment functionI use individual counters for each TOAST table, automatically cached and afterserver restart when new value is inserted into TOAST table maximum used valueis searched and used to assign the next one.>Andres Freund:>I think we'll need to do something about the width of varatt_external to make>the conversion to 64bit toast oids viable - and if we do, I don't think>there's a decent argument for staying with 4 byte toast OIDs. I think the>varatt_external equivalent would end up being smaller in just about all cases.>And as you said earlier, the increased overhead inside the toast table / index>is not relevant compared to the size of toasted datums.There is some small overhead due to indexing 64-bit values. Also, for smallertables we can use 32-bit ID instead of 64-bit, but then we would have a problemwhen we reach the limit of 2^32.Pg_upgrade seems to be not a very difficult case if we go keeping old TOASTtables using 32-bit values,Please have a look. 
I'd be grateful for some further directions.GIT branch with this feature, rebased onto current master:https://github.com/postgrespro/postgres/tree/toast_64bit-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Thu, 22 Dec 2022 21:07:31 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 10:07 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> Any suggestions on the previous message (64-bit toast value ID with individual counters)?\n\nWas this patch ever added to CommitFest? I don't see it in the current\nOpen Commitfest.\n\nhttps://commitfest.postgresql.org/43/\n\nBest regards,\nGurjeet http://Gurje.et\nhttp://aws.amazon.com\n\n\n",
"msg_date": "Sat, 22 Apr 2023 08:17:30 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi!\n\nNo, it wasn't. It was a proposal, I thought I'd get some feedback on it\nbefore sending it to commitfest.\n\nOn Sat, Apr 22, 2023 at 6:17 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> On Thu, Dec 22, 2022 at 10:07 AM Nikita Malakhov <hukutoc@gmail.com>\n> wrote:\n> > Any suggestions on the previous message (64-bit toast value ID with\n> individual counters)?\n>\n> Was this patch ever added to CommitFest? I don't see it in the current\n> Open Commitfest.\n>\n> https://commitfest.postgresql.org/43/\n>\n> Best regards,\n> Gurjeet http://Gurje.et\n> http://aws.amazon.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!No, it wasn't. It was a proposal, I thought I'd get some feedback on it before sending it to commitfest.On Sat, Apr 22, 2023 at 6:17 PM Gurjeet Singh <gurjeet@singh.im> wrote:On Thu, Dec 22, 2022 at 10:07 AM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> Any suggestions on the previous message (64-bit toast value ID with individual counters)?\n\nWas this patch ever added to CommitFest? I don't see it in the current\nOpen Commitfest.\n\nhttps://commitfest.postgresql.org/43/\n\nBest regards,\nGurjeet http://Gurje.et\nhttp://aws.amazon.com\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Mon, 24 Apr 2023 09:03:17 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\n> I agree that we can't simply widen varatt_external to use 8 bytes for\n> the toast ID in all cases.\n\n+1\n\nNote that the user may have a table with multiple TOASTable\nattributes. If we simply widen the TOAST pointer it may break the\nexisting tables in the edge case. Also this may be a reason why\ncertain users may prefer to continue using narrow pointers. IMO wide\nTOAST pointers should be a table option. Whether the default for new\ntables should be wide or narrow pointers is debatable.\n\nIn another discussion [1] we seem to agree that we also want to have\nan ability to include a 32-bit dictionary_id to the TOAST pointers and\nperhaps support more compression methods (ZSTD to name one). Besides\nthat it would be nice to have an ability to extend TOAST pointers in\nthe future without breaking the existing pointers. One possible\nsolution would be to add a varint feature bitmask to every pointer. So\nwe could add flags like TOAST_IS_WIDE, TOAST_HAS_DICTIONARY,\nTOAST_UNKNOWN_FEATURE_FROM_2077, etc indefinitely.\n\nI suggest we address all the current and future needs once and\ncompletely refactor TOAST pointers rather than solving one problem at\na time. I believe this will be more beneficial for the community in\nthe long term.\n\nThoughts?\n\n[1]: https://commitfest.postgresql.org/43/3626/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 26 Apr 2023 15:54:57 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi!\n\nWidening of a TOAST pointer is possible if we keep backward compatibility\nwith\nold-fashioned TOAST tables - I mean differ 'long' and 'short' TOAST\npointers and\nprocess them accordingly on insert and delete cases, and vacuum with logical\nreplication. It is not very difficult, however it takes some effort.\nRecently I've found\nout that I have not overseen all compatibility cases, so the provided patch\nis\nfunctional but limited in compatibility.\n\nWe already have a flag byte in the TOAST pointer which is responsible for\nthe type\nof the pointer - va_flag field. It was explained in the Pluggable TOAST\npatch.\nOne is enough, there is no need to add another one.\n\n\nOn Wed, Apr 26, 2023 at 3:55 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi,\n>\n> > I agree that we can't simply widen varatt_external to use 8 bytes for\n> > the toast ID in all cases.\n>\n> +1\n>\n> Note that the user may have a table with multiple TOASTable\n> attributes. If we simply widen the TOAST pointer it may break the\n> existing tables in the edge case. Also this may be a reason why\n> certain users may prefer to continue using narrow pointers. IMO wide\n> TOAST pointers should be a table option. Whether the default for new\n> tables should be wide or narrow pointers is debatable.\n>\n> In another discussion [1] we seem to agree that we also want to have\n> an ability to include a 32-bit dictionary_id to the TOAST pointers and\n> perhaps support more compression methods (ZSTD to name one). Besides\n> that it would be nice to have an ability to extend TOAST pointers in\n> the future without breaking the existing pointers. One possible\n> solution would be to add a varint feature bitmask to every pointer. 
So\n> we could add flags like TOAST_IS_WIDE, TOAST_HAS_DICTIONARY,\n> TOAST_UNKNOWN_FEATURE_FROM_2077, etc indefinitely.\n>\n> I suggest we address all the current and future needs once and\n> completely refactor TOAST pointers rather than solving one problem at\n> a time. I believe this will be more beneficial for the community in\n> the long term.\n>\n> Thoughts?\n>\n> [1]: https://commitfest.postgresql.org/43/3626/\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Widening of a TOAST pointer is possible if we keep backward compatibility withold-fashioned TOAST tables - I mean differ 'long' and 'short' TOAST pointers andprocess them accordingly on insert and delete cases, and vacuum with logicalreplication. It is not very difficult, however it takes some effort. Recently I've foundout that I have not overseen all compatibility cases, so the provided patch isfunctional but limited in compatibility.We already have a flag byte in the TOAST pointer which is responsible for the typeof the pointer - va_flag field. It was explained in the Pluggable TOAST patch.One is enough, there is no need to add another one.On Wed, Apr 26, 2023 at 3:55 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi,\n\n> I agree that we can't simply widen varatt_external to use 8 bytes for\n> the toast ID in all cases.\n\n+1\n\nNote that the user may have a table with multiple TOASTable\nattributes. If we simply widen the TOAST pointer it may break the\nexisting tables in the edge case. Also this may be a reason why\ncertain users may prefer to continue using narrow pointers. IMO wide\nTOAST pointers should be a table option. 
Whether the default for new\ntables should be wide or narrow pointers is debatable.\n\nIn another discussion [1] we seem to agree that we also want to have\nan ability to include a 32-bit dictionary_id to the TOAST pointers and\nperhaps support more compression methods (ZSTD to name one). Besides\nthat it would be nice to have an ability to extend TOAST pointers in\nthe future without breaking the existing pointers. One possible\nsolution would be to add a varint feature bitmask to every pointer. So\nwe could add flags like TOAST_IS_WIDE, TOAST_HAS_DICTIONARY,\nTOAST_UNKNOWN_FEATURE_FROM_2077, etc indefinitely.\n\nI suggest we address all the current and future needs once and\ncompletely refactor TOAST pointers rather than solving one problem at\na time. I believe this will be more beneficial for the community in\nthe long term.\n\nThoughts?\n\n[1]: https://commitfest.postgresql.org/43/3626/\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Thu, 27 Apr 2023 13:39:05 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi Nikita,\n\n> I've prepared a patch with a 64-bit TOAST Value ID. It is a kind of prototype\n> and needs some further work, but it is already working and ready to play with.\n\nUnfortunately the patch rotted a bit. Could you please submit an\nupdated/rebased patch so that it could be reviewed in the scope of\nJuly commitfest?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 10 Jul 2023 15:50:18 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi!\n\nAleksander, thank you for reminding me of this patch, try to do it in a few\ndays.\n\n--\nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/\n\nHi!Aleksander, thank you for reminding me of this patch, try to do it in a few days.--Regards,Nikita MalakhovPostgres ProfessionalThe Russian Postgres Companyhttps://postgrespro.ru/",
"msg_date": "Tue, 11 Jul 2023 15:00:38 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
},
{
"msg_contents": "Hi,\n\n> Aleksander, thank you for reminding me of this patch, try to do it in a few days.\n\nA consensus was reached [1] to mark this patch as RwF for now. There\nare many patches to be reviewed and this one doesn't seem to be in the\nbest shape, so we have to prioritise. Please feel free re-submitting\nthe patch for the next commitfest.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 4 Sep 2023 15:27:10 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Infinite loop while acquiring new TOAST Oid"
}
] |
[
{
"msg_contents": "There's an old thread that interests me but which ended without any\nresolution/solution:\n\nhttps://www.postgresql.org/messageid/CA%2BHiwqGpoKmDg%2BTJdMfoeu3k4VQnxcfBon4fwBRp8ex_CTCsnA%40mail.gmail.com\n\nSome of our basic requirements are:\n1) All data must be labeled at a specific level (using SELinux multi-level\nsecurity (MLS) policy).\n2) Data of different levels cannot be stored in the same file on disk.\n3) The Bell-LaPadula model must be applied meaning read (select) down\n(return\nrows labeled at levels dominated by the querying processes level) is\nallowed,\nupdates (insert/update/delete) can only be done to data at the same level as\nexecuting process. BLM allows for write up but in reality since processes\ndon't\nknow about levels which dominate theirs this doesn't happen.\n\nIn the past I've used RLS, sepgsql and some additional custom functions to\ncreate MLS databases but this does not satisfy #2. Partitioning looks to be\na\nway to achieve #2 and to possibly improve query performance since\npartitions\ncould be pruned based on the level of data stored in them. However I'm not\naware of a means to implement table level dominance pruning. The patch,\nin the thread noted above, proposed a hook to allow customized pruning of\npartitions which is something I think would be useful. However a number of\nquestions and concerns were raised (some beyond my ability to even\ncomprehend since I don't have intimate knowledge of the code base) but\nnever addressed.\nWhat's the best way forward in a situation like this?\n\nTed",
"msg_date": "Tue, 29 Nov 2022 07:40:45 -0600",
"msg_from": "Ted Toth <txtoth@gmail.com>",
"msg_from_op": true,
"msg_subject": "'Flexible \"partition pruning\" hook' redux?"
}
] |
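The dominance-based pruning Ted describes (Bell-LaPadula "read down": a query may only see partitions whose level its own level dominates) can be sketched outside the planner. This is a minimal illustration only; the partition names and integer levels below are hypothetical, and a real hook would compare SELinux MLS labels with a proper dominance relation rather than integers.

```python
# Hypothetical sketch of Bell-LaPadula "read down" partition pruning:
# a query running at a given level may read only partitions whose level
# it dominates. Levels here are plain integers; real MLS labels
# (e.g. SELinux sensitivity/category pairs) need a richer comparison.

def dominates(subject_level, object_level):
    """True if the subject's level dominates (here: >=) the object's level."""
    return subject_level >= object_level

def prune_partitions(partitions, query_level):
    """Keep only partitions readable under BLP: those the query level dominates."""
    return [name for name, level in partitions if dominates(query_level, level)]

partitions = [("part_unclass", 0), ("part_secret", 2), ("part_topsecret", 3)]
# A process at level 2 sees the unclassified and secret partitions only.
readable = prune_partitions(partitions, 2)
```

The point of the proposed hook is to let exactly this kind of predicate run during partition pruning, so higher-level partitions are never opened at all.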
[
{
"msg_contents": "FYI, you might wonder why so many bugs reported on pg_upgrade eventually\nare bugs in pg_dump. Well, of course, partly is it because pg_upgrade\nrelies on pg_dump, but a bigger issue is that pg_upgrade will fail if\npg_dump or its restoration generate _any_ errors. My guess is that many\npeople are using pg_dump and restore and just ignoring errors or fixing\nthem later, while this is not possible when using pg_upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Tue, 29 Nov 2022 23:39:05 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "pg_dump bugs reported as pg_upgrade bugs"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> FYI, you might wonder why so many bugs reported on pg_upgrade eventually\n> are bugs in pg_dump. Well, of course, partly is it because pg_upgrade\n> relies on pg_dump, but a bigger issue is that pg_upgrade will fail if\n> pg_dump or its restoration generate _any_ errors. My guess is that many\n> people are using pg_dump and restore and just ignoring errors or fixing\n> them later, while this is not possible when using pg_upgrade.\n\npg_dump scripts are *designed* to be tolerant of errors, mainly so\nthat you can restore into a situation that's not exactly like where\nyou dumped from, with the possible need to resolve errors or decide\nthat they're not problems. So your depiction of what happens in\ndump/restore is not showing a problem; it's about using those tools\nas they were intended to be used.\n\nIndeed, there's a bit of disconnect there with pg_upgrade, which would\nlike to present a zero-user-involvement, nothing-to-see-here facade.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 00:22:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump bugs reported as pg_upgrade bugs"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 12:22:57AM -0500, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > FYI, you might wonder why so many bugs reported on pg_upgrade eventually\n> > are bugs in pg_dump. Well, of course, partly is it because pg_upgrade\n> > relies on pg_dump, but a bigger issue is that pg_upgrade will fail if\n> > pg_dump or its restoration generate _any_ errors. My guess is that many\n> > people are using pg_dump and restore and just ignoring errors or fixing\n> > them later, while this is not possible when using pg_upgrade.\n> \n> pg_dump scripts are *designed* to be tolerant of errors, mainly so\n> that you can restore into a situation that's not exactly like where\n> you dumped from, with the possible need to resolve errors or decide\n> that they're not problems. So your depiction of what happens in\n> dump/restore is not showing a problem; it's about using those tools\n> as they were intended to be used.\n> \n> Indeed, there's a bit of disconnect there with pg_upgrade, which would\n> like to present a zero-user-involvement, nothing-to-see-here facade.\n\nAgreed, a disconnect, plus if it is a table or index restore that fails,\npg_upgrade would fail later because there would be no system catalogs to\nmove the data into.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 08:55:14 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump bugs reported as pg_upgrade bugs"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 12:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> pg_dump scripts are *designed* to be tolerant of errors, mainly so\n> that you can restore into a situation that's not exactly like where\n> you dumped from, with the possible need to resolve errors or decide\n> that they're not problems. So your depiction of what happens in\n> dump/restore is not showing a problem; it's about using those tools\n> as they were intended to be used.\n>\n> Indeed, there's a bit of disconnect there with pg_upgrade, which would\n> like to present a zero-user-involvement, nothing-to-see-here facade.\n\nYes. I think it's good that the pg_dump scripts are designed to be\ntolerant of errors, but I also think that we've got to clearly\nenvisage that error-tolerance as the backup plan. Fifteen years ago,\nit may have been acceptable to imagine that every dump-and-restore was\ngoing to be performed by a human being who could make an intelligent\njudgement about whether the errors that occurred were concerning or\nnot, but today, at the scale that PostgreSQL is being used, that's not\nrealistic. Unattended operation is common, and the number of instances\nvastly outstrips the number of people who are truly knowledgeable\nabout the internals. The goalposts have moved because the project is\nsuccessful and widely adopted. All of this is true even apart from\npg_upgrade, but the existence of pg_upgrade and the fact that\npg_upgrade is the only way to perform a quick major version upgrade\nexacerbates the problem quite a bit.\n\nI don't know what consequences this has concretely, really. I have no\nspecific change to propose. I just think that we need to wrench\nourselves out of a mind-set where we imagine that some errors are OK\nbecause the DBA will know how to fix things up. The DBA is a script.\nIf there's a knowledgeable person at all they have 10,000 instances to\nlook after and don't have time to fiddle with each one. 
The aspects of\nPostgreSQL that tend to require manual fiddling (HA, backups,\nupgrades, autovacuum) are huge barriers to wider adoption and\nlarge-scale deployment in a way that probably just wasn't true when\nthe project wasn't as successful as it now is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 30 Nov 2022 10:54:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump bugs reported as pg_upgrade bugs"
}
] |
[
{
"msg_contents": "For historical reasons, pg_dump refers to large objects as \"BLOBs\". \nThis term is not used anywhere else in PostgreSQL, and it also means \nsomething different in the SQL standard and other SQL systems.\n\nThis patch renames internal functinos, code comments, documentation, \netc. to use the \"large object\" or \"LO\" terminology instead. There is no \nfunctionality change, so the archive format still uses the name \"BLOB\" \nfor the archive entry. Additional long command-line options are added \nwith the new naming.",
"msg_date": "Wed, 30 Nov 2022 08:04:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "> On 30 Nov 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> For historical reasons, pg_dump refers to large objects as \"BLOBs\". This term is not used anywhere else in PostgreSQL, and it also means something different in the SQL standard and other SQL systems.\n> \n> This patch renames internal functinos, code comments, documentation, etc. to use the \"large object\" or \"LO\" terminology instead. There is no functionality change, so the archive format still uses the name \"BLOB\" for the archive entry. Additional long command-line options are added with the new naming.\n\n+1 on doing this. No pointy bits stood out when reading, just a few small\ncomments:\n\nThe commit message contains a typo: functinos\n\n * called for both BLOB and TABLE data; it is the responsibility of\n- * the format to manage each kind of data using StartBlob/StartData.\n+ * the format to manage each kind of data using StartLO/StartData.\n\nShould BLOB be changed to BLOBS here (and in similar comments) to make it\nclearer that it refers to the archive entry and the concept of a binary large\nobject in general?\n\nTheres an additional mention in src/test/modules/test_pg_dump/t/001_base.pl:\n\n # Tests which are considered 'full' dumps by pg_dump, but there.\n # are flags used to exclude specific items (ACLs, blobs, etc).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 30 Nov 2022 09:07:53 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "On 30.11.22 09:07, Daniel Gustafsson wrote:\n> The commit message contains a typo: functinos\n\nfixed\n\n> * called for both BLOB and TABLE data; it is the responsibility of\n> - * the format to manage each kind of data using StartBlob/StartData.\n> + * the format to manage each kind of data using StartLO/StartData.\n> \n> Should BLOB be changed to BLOBS here (and in similar comments) to make it\n> clearer that it refers to the archive entry and the concept of a binary large\n> object in general?\n\nI changed this one and went through it again to tidy it up a bit more. \n(There are both \"BLOB\" and \"BLOBS\" archive entries, so both forms still \nexist in the code now.)\n\n> Theres an additional mention in src/test/modules/test_pg_dump/t/001_base.pl:\n> \n> # Tests which are considered 'full' dumps by pg_dump, but there.\n> # are flags used to exclude specific items (ACLs, blobs, etc).\n\nfixed\n\nI also put back the old long options forms in the documentation and help \nbut marked them deprecated.",
"msg_date": "Fri, 2 Dec 2022 08:09:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "> On 2 Dec 2022, at 08:09, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> fixed\n\n+1 on this version of the patch, LGTM.\n\n> I also put back the old long options forms in the documentation and help but marked them deprecated.\n\n+ <term><option>--blobs</option> (deprecated)</term>\nWhile not in scope for this patch, I wonder if we should add a similar\n(deprecated) marker on other commandline options which are documented to be\ndeprecated. -i on bin/postgres comes to mind as one candidate.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:34:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 30.11.22 09:07, Daniel Gustafsson wrote:\n>> Should BLOB be changed to BLOBS here (and in similar comments) to make it\n>> clearer that it refers to the archive entry and the concept of a binary large\n>> object in general?\n\n> I changed this one and went through it again to tidy it up a bit more. \n> (There are both \"BLOB\" and \"BLOBS\" archive entries, so both forms still \n> exist in the code now.)\n\nI've not read this patch and don't have time right this moment, but\nI wanted to make a note about something to have in the back of your\nhead: we really need to do something about situations with $BIGNUM\nlarge objects. Currently those tend to run pg_dump or pg_restore\nout of memory because of TOC bloat, and we've seen multiple field\nreports of that actually happening.\n\nThe scheme I've vaguely thought about, but not got round to writing,\nis to merge all blobs with the same owner and ACL into one TOC entry.\nOne would hope that would get it down to a reasonable number of\nTOC entries in practical applications. (Perhaps there'd need to be\na switch to make this optional.)\n\nI'm not asking you to make that happen as part of this patch, but\nplease don't refactor things in a way that will make it harder.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 09:18:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "> On 2 Dec 2022, at 15:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'm not asking you to make that happen as part of this patch, but\n> please don't refactor things in a way that will make it harder.\n\nI have that on my TODO as well since 7da8823d83a2b66bdd917aa6cb2c5c2619d86011.camel@credativ.de,\nand having read this patch I don't think it moves the needle in any\nway which complicates that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 15:53:33 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "\nOn 2022-12-02 Fr 09:18, Tom Lane wrote:\n> we really need to do something about situations with $BIGNUM\n> large objects. Currently those tend to run pg_dump or pg_restore\n> out of memory because of TOC bloat, and we've seen multiple field\n> reports of that actually happening.\n>\n> The scheme I've vaguely thought about, but not got round to writing,\n> is to merge all blobs with the same owner and ACL into one TOC entry.\n> One would hope that would get it down to a reasonable number of\n> TOC entries in practical applications. (Perhaps there'd need to be\n> a switch to make this optional.)\n\n\n+1 for fixing this. Your scheme seems reasonable. This has been a pain\npoint for a long time. I'm not sure what we'd gain by making the fix\noptional.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 12:26:29 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-12-02 Fr 09:18, Tom Lane wrote:\n>> The scheme I've vaguely thought about, but not got round to writing,\n>> is to merge all blobs with the same owner and ACL into one TOC entry.\n>> One would hope that would get it down to a reasonable number of\n>> TOC entries in practical applications. (Perhaps there'd need to be\n>> a switch to make this optional.)\n\n> +1 for fixing this. Your scheme seems reasonable. This has been a pain\n> point for a long time. I'm not sure what we'd gain by making the fix\n> optional.\n\nWell, what this would lose is the ability to selectively restore\nindividual large objects using \"pg_restore -L\". I'm not sure who\nout there might be depending on that, but if we assume that's the\nempty set I fear we'll find out it's not. So a workaround switch\nseemed possibly worth the trouble. I don't have a position yet\non which way ought to be the default.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 12:40:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "\nOn 2022-12-02 Fr 12:40, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-12-02 Fr 09:18, Tom Lane wrote:\n>>> The scheme I've vaguely thought about, but not got round to writing,\n>>> is to merge all blobs with the same owner and ACL into one TOC entry.\n>>> One would hope that would get it down to a reasonable number of\n>>> TOC entries in practical applications. (Perhaps there'd need to be\n>>> a switch to make this optional.)\n>> +1 for fixing this. Your scheme seems reasonable. This has been a pain\n>> point for a long time. I'm not sure what we'd gain by making the fix\n>> optional.\n> Well, what this would lose is the ability to selectively restore\n> individual large objects using \"pg_restore -L\". I'm not sure who\n> out there might be depending on that, but if we assume that's the\n> empty set I fear we'll find out it's not. So a workaround switch\n> seemed possibly worth the trouble. I don't have a position yet\n> on which way ought to be the default.\n>\n> \t\t\t\n\n\nOK, fair point. I suspect making the batched mode the default would gain\nmore friends than enemies.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 3 Dec 2022 11:07:09 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "On 02.12.22 09:34, Daniel Gustafsson wrote:\n>> On 2 Dec 2022, at 08:09, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> fixed\n> \n> +1 on this version of the patch, LGTM.\n\ncommitted\n\n>> I also put back the old long options forms in the documentation and help but marked them deprecated.\n> \n> + <term><option>--blobs</option> (deprecated)</term>\n> While not in scope for this patch, I wonder if we should add a similar\n> (deprecated) marker on other commandline options which are documented to be\n> deprecated. -i on bin/postgres comes to mind as one candidate.\n\nmakes sense\n\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:04:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "On Sat, Dec 3, 2022 at 11:07 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Well, what this would lose is the ability to selectively restore\n> > individual large objects using \"pg_restore -L\". I'm not sure who\n> > out there might be depending on that, but if we assume that's the\n> > empty set I fear we'll find out it's not. So a workaround switch\n> > seemed possibly worth the trouble. I don't have a position yet\n> > on which way ought to be the default.\n>\n> OK, fair point. I suspect making the batched mode the default would gain\n> more friends than enemies.\n\nA lot of people probably don't know that selective restore even exists\nbut it is an AWESOME feature and I'd hate to see us break it, or even\njust degrade it.\n\nI wonder if we can't find a better solution than bunching TOC entries\ntogether. Perhaps we don't need every TOC entry in memory\nsimultaneously, for instance, especially things like LOBs that don't\nhave dependencies.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:08:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I wonder if we can't find a better solution than bunching TOC entries\n> together. Perhaps we don't need every TOC entry in memory\n> simultaneously, for instance, especially things like LOBs that don't\n> have dependencies.\n\nInteresting idea. We'd have to either read the TOC multiple times,\nor shove the LOB TOC entries into a temporary file, either of which\nhas downsides.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 10:56:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Interesting idea. We'd have to either read the TOC multiple times,\n> or shove the LOB TOC entries into a temporary file, either of which\n> has downsides.\n\nI wonder if we'd need to read the TOC entries repeatedly, or just incrementally.\n\nI haven't studied the pg_dump format much, but I wonder if we could\narrange things so that the \"boring\" entries without dependencies, or\nmaybe the LOB entries specifically, are grouped together in some way\nwhere we know the byte-length of that section and can just skip over\nit. Then when we need them we go back and read 'em one by one.\n\n(Of course this doesn't work if reading for standard input or if\nmultiple phases of processing need them or whatever. Not trying to say\nI've solved the problem. But in general I think we need to give more\nthought to the possibility that \"just read all the data into memory\"\nis not an adequate solution to $PROBLEM.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 11:16:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove \"blob\" terminology"
}
] |
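Tom Lane's proposed scheme for the TOC-bloat problem — merging all large objects with the same owner and ACL into one TOC entry — can be illustrated with a small grouping sketch. The entry fields and names below are hypothetical and do not reflect pg_dump's actual TOC structures; this only shows why the entry count drops from one-per-blob to one-per-distinct-(owner, ACL)-pair.

```python
from collections import defaultdict

# Hypothetical sketch of the proposed merge: large objects sharing an
# (owner, ACL) pair collapse into a single TOC entry, so a database with
# millions of blobs but few distinct owner/ACL combinations needs only a
# handful of entries in memory.

def merge_blob_entries(blobs):
    """Group blob OIDs by (owner, acl); each group becomes one TOC entry."""
    groups = defaultdict(list)
    for oid, owner, acl in blobs:
        groups[(owner, acl)].append(oid)
    return [{"owner": o, "acl": a, "oids": oids}
            for (o, a), oids in groups.items()]

blobs = [(16385, "alice", "r"), (16386, "alice", "r"), (16387, "bob", "rw")]
entries = merge_blob_entries(blobs)
# Three blobs collapse into two merged entries.
```

The trade-off raised in the thread is visible here: once OIDs are folded into a shared entry, selecting an individual large object with `pg_restore -L` is no longer possible at entry granularity, which is why a switch to disable the merge was floated.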
[
{
"msg_contents": "Hi hackers,\n\nWhile playing with pg_regress and pg_isolation_regress, I noticed that\nthere's a potential nullptr deference in both of them.\n\nHow to reproduce:\n\nSpecify the `--dbname=` option without providing any database name.\n\n<path>/<to>/pg_regress --dbname= foo\n<path>/<to>/pg_isolation_regress --dbname= foo\n\nPatch is attached.\n\n-- \nBest Regards,\nXing",
"msg_date": "Wed, 30 Nov 2022 23:02:35 +0800",
"msg_from": "Xing Guo <higuoxing@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_regress/pg_isolation_regress: Fix possible nullptr dereference."
},
{
"msg_contents": "Xing Guo <higuoxing@gmail.com> writes:\n> While playing with pg_regress and pg_isolation_regress, I noticed that\n> there's a potential nullptr deference in both of them.\n> How to reproduce:\n> Specify the `--dbname=` option without providing any database name.\n\nHmm, yeah, I see that too.\n\n> Patch is attached.\n\nThis patch seems like a band-aid, though. The reason nobody's\nnoticed this for decades is that it doesn't make a lot of sense\nto allow tests to run in your default database: the odds of them\nscrewing up something valuable are high, and the odds that they'll\nfail if started in a nonempty database are even higher.\n\nI think the right answer is to treat it as an error if we end up\nwith an empty dblist (or even a zero-length name).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 11:59:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress/pg_isolation_regress: Fix possible nullptr\n dereference."
}
] |
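The fix Tom Lane suggests — treating an empty database list (or a zero-length name) as an error instead of dereferencing a null pointer later — can be sketched in a language-neutral way. pg_regress itself is C; the function name and error wording below are illustrative only.

```python
# Hypothetical sketch of the validation proposed for pg_regress: a
# "--dbname=" option with no value yields an empty name, which should be
# rejected up front rather than left to cause a null/empty dereference
# when the first database name is later used.

def parse_dbname_option(value):
    """Split a --dbname= value on commas, rejecting empty names outright."""
    names = value.split(",")
    if any(name == "" for name in names):
        raise ValueError('invalid empty database name in "--dbname" option')
    return names
```

With this check, `--dbname=` (which splits to a single empty string) fails immediately with a clear message, matching the thread's preference for an explicit error over a band-aid null check.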
[
{
"msg_contents": "Hi, I'd like to propose a change and get advice if I should work on it.\n\nThe extension pg_stat_statements is very helpful, but the downside is that\nit will take up too much disk space when storing query stats if it's\nenabled for all statements like SELECT, INSERT, UPDATE, DELETE.\n\nFor example, deletes do not happen too frequently; so I'd like to be able\nto enable pg_stat_statements only for the DELETE statement, maybe using\nsome flags.\n\nAnother possibility is if we can limit the tables to which\npg_stat_statements logs\nresults.",
"msg_date": "Wed, 30 Nov 2022 09:57:48 -0800",
"msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Enable pg_stat_statements extension for limited statements only"
},
{
"msg_contents": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> writes:\n> Hi, I'd like to propose a change and get advice if I should work on it.\n> The extension pg_stat_statements is very helpful, but the downside is that\n> it will take up too much disk space when storing query stats if it's\n> enabled for all statements like SELECT, INSERT, UPDATE, DELETE.\n\nIt will only take up a lot of disk space if you let it, by setting\nthe pg_stat_statements.max parameter too high.\n\n> For example, deletes do not happen too frequently; so I'd like to be able\n> to enable pg_stat_statements only for the DELETE statement, maybe using\n> some flags.\n\nI'm a little skeptical of the value of that. Why would you want stats\nonly for infrequent statements?\n\nI'm not denying that there might be usefulness in filtering what\npg_stat_statements will track, but it's not clear to me that\nthis particular proposal will be useful to many people.\n\nI wonder whether there would be more use in filters expressed\nas regular expressions to match against the statement text.\nThat would allow, for example, tracking statements that mention\na particular table as well as statements with a particular\nhead keyword. I could see usefulness in both a positive filter\n(must match this to get tracked) and a negative one (must not\nmatch this to get tracked).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 13:15:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Enable pg_stat_statements extension for limited statements only"
},
{
"msg_contents": "Yes, I agree that infrequent statements don't need stats. Actually I was\ndistracted with the use case that I had in mind other than stats, maybe\nbringing that up will help.\n\nIf someone's interested how frequent are deletes being run on a particular\ntable, or what was the exact query that ran. Basically keeping track of\nqueries. Although now I'm less convinced if a considerable amount of people\nwill be interested in this, but let me know what you think.\n\n\nOn Wed, Nov 30, 2022 at 10:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> writes:\n> > Hi, I'd like to propose a change and get advice if I should work on it.\n> > The extension pg_stat_statements is very helpful, but the downside is\n> that\n> > it will take up too much disk space when storing query stats if it's\n> > enabled for all statements like SELECT, INSERT, UPDATE, DELETE.\n>\n> It will only take up a lot of disk space if you let it, by setting\n> the pg_stat_statements.max parameter too high.\n>\n> > For example, deletes do not happen too frequently; so I'd like to be able\n> > to enable pg_stat_statements only for the DELETE statement, maybe using\n> > some flags.\n>\n> I'm a little skeptical of the value of that. Why would you want stats\n> only for infrequent statements?\n>\n> I'm not denying that there might be usefulness in filtering what\n> pg_stat_statements will track, but it's not clear to me that\n> this particular proposal will be useful to many people.\n>\n> I wonder whether there would be more use in filters expressed\n> as regular expressions to match against the statement text.\n> That would allow, for example, tracking statements that mention\n> a particular table as well as statements with a particular\n> head keyword. 
I could see usefulness in both a positive filter\n> (must match this to get tracked) and a negative one (must not\n> match this to get tracked).\n>\n> regards, tom lane\n>",
"msg_date": "Wed, 30 Nov 2022 10:53:13 -0800",
"msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Enable pg_stat_statements extension for limited statements only"
}
] |
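Tom Lane's regex-filter idea — a positive pattern a statement must match to be tracked, and a negative one it must not — is easy to prototype outside the extension. This is only a sketch of the semantics; the parameter names are invented for illustration and are not pg_stat_statements settings.

```python
import re

# Hypothetical sketch of regex-based statement filtering, in the spirit
# of a "track_include" / "track_exclude" pair of settings: a query is
# tracked only if it matches the positive filter (when set) and does not
# match the negative filter (when set).

def should_track(query, include_re=None, exclude_re=None):
    """Decide whether a statement text passes both filters."""
    if include_re and not re.search(include_re, query, re.IGNORECASE):
        return False
    if exclude_re and re.search(exclude_re, query, re.IGNORECASE):
        return False
    return True

# Track only DELETEs that mention a particular table, covering both the
# head-keyword and table-name use cases raised in the thread:
tracked = should_track("DELETE FROM orders WHERE id = 1",
                       include_re=r"^\s*delete\b.*\borders\b")
```

Matching against the statement text (rather than a per-command flag) covers both use cases discussed above: filtering by head keyword and filtering by a referenced table name.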
[
{
"msg_contents": "Whenever rounding a number to a fixed number of decimal points in a\ncalculation, we need to cast the number into a numeric before using\nround((col1/100.0)::numeric, 2).\n\nIt would be convenient for everyone if round() also accepts float and\ndouble precision.\n\nIs this something I could work with? And is that feasible?",
"msg_date": "Wed, 30 Nov 2022 10:38:46 -0800",
"msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 07:39, Sayyid Ali Sajjad Rizavi\n<sasrizavi@gmail.com> wrote:\n>\n> Whenever rounding a number to a fixed number of decimal points in a calculation, we need to cast the number into a numeric before using round((col1/100.0)::numeric, 2).\n>\n> It would be convenient for everyone if round() also accepts float and double precision.\n>\n> Is this something I could work with? And is that feasible?\n\nI don't immediately see any issues with adding such a function.\n\nWe do have some weirdness in some existing overloaded functions.\npg_size_pretty() is an example.\n\nIf you run: SELECT pg_size_pretty(1000); you get:\nERROR: function pg_size_pretty(integer) is not unique\n\nThat occurs because we don't know if we should promote the INT into a\nBIGINT or into a NUMERIC. We have a pg_size_pretty() function for each\nof those. I don't think the same polymorphic type resolution problem\nexists for REAL, FLOAT8 and NUMERIC. If a literal has a decimal point,\nit's a NUMERIC, so it'll just use the numeric version of round().\n\nI'm unsure what the repercussions of the fact that REAL and FLOAT8 are\nnot represented as decimals. So I'm not quite sure what real\nguarantees there are that the number is printed out with the number of\ndecimal places that you've rounded the number to.\n\nDoing:\n\ncreate function round(n float8, d int) returns float8 as $$ begin\nreturn round(n::numeric, d)::float8; end; $$ language plpgsql;\n\nand running things like:\n\nselect round(3.333333333333333::float8,10);\n\nI'm not seeing any issues.\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Dec 2022 14:30:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> We do have some weirdness in some existing overloaded functions.\n> pg_size_pretty() is an example.\n\n> If you run: SELECT pg_size_pretty(1000); you get:\n> ERROR: function pg_size_pretty(integer) is not unique\n\nYeah, you have to be careful about that when proposing to overload\na function name.\n\n> That occurs because we don't know if we should promote the INT into a\n> BIGINT or into a NUMERIC. We have a pg_size_pretty() function for each\n> of those. I don't think the same polymorphic type resolution problem\n> exists for REAL, FLOAT8 and NUMERIC.\n\nI would counsel against bothering with a REAL version. FLOAT8 will\ncover that case just fine.\n\n> I'm unsure what the repercussions of the fact that REAL and FLOAT8 are\n> not represented as decimals.\n\nThe main thing is that I think the output will still have to be\nNUMERIC, or you're going to get complaints about \"inaccurate\"\nresults. Before we got around to inventing infinities for NUMERIC,\nthat choice would have been problematic, but now I think it's OK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 30 Nov 2022 20:45:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Wed, Nov 30, 2022 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n>\n>\n> I'm unsure what the repercussions of the fact that REAL and FLOAT8 are\n> > not represented as decimals.\n>\n> The main thing is that I think the output will still have to be\n> NUMERIC, or you're going to get complaints about \"inaccurate\"\n> results. Before we got around to inventing infinities for NUMERIC,\n> that choice would have been problematic, but now I think it's OK.\n>\n>\nI don't get the point of adding a function here (or at least one called\nround) - the type itself is inexact so, as you say, it is actually more of\na type conversion with an ability to specify precision, which is exactly\nwhat you get today when you write 1.48373::numeric(20,3) - though it is a\nbit annoying having to specify an arbitrary precision.\n\nAt present round does allow you to specify a negative position to round at\npositions to the left of the decimal point (this is undocumented though...)\nwhich the actual cast cannot do, but that seems like a marginal case.\n\nMaybe call it: make_exact(numeric, integer) ?\n\nI do get a feeling like I'm being too pedantic here though...\n\nDavid J.\n\nOn Wed, Nov 30, 2022 at 6:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:David Rowley <dgrowleyml@gmail.com> writes: \n> I'm unsure what the repercussions of the fact that REAL and FLOAT8 are\n> not represented as decimals.\n\nThe main thing is that I think the output will still have to be\nNUMERIC, or you're going to get complaints about \"inaccurate\"\nresults. 
Before we got around to inventing infinities for NUMERIC,\nthat choice would have been problematic, but now I think it's OK.I don't get the point of adding a function here (or at least one called round) - the type itself is inexact so, as you say, it is actually more of a type conversion with an ability to specify precision, which is exactly what you get today when you write 1.48373::numeric(20,3) - though it is a bit annoying having to specify an arbitrary precision.At present round does allow you to specify a negative position to round at positions to the left of the decimal point (this is undocumented though...) which the actual cast cannot do, but that seems like a marginal case.Maybe call it: make_exact(numeric, integer) ?I do get a feeling like I'm being too pedantic here though...David J.",
"msg_date": "Wed, 30 Nov 2022 19:41:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 15:41, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I don't get the point of adding a function here (or at least one called round) - the type itself is inexact so, as you say, it is actually more of a type conversion with an ability to specify precision, which is exactly what you get today when you write 1.48373::numeric(20,3) - though it is a bit annoying having to specify an arbitrary precision.\n\nAn additional problem with that which you might have missed is that\nyou'd need to know what to specify in the precision part of the\ntypemod. You might start getting errors one day if you don't select a\nvalue large enough. That problem does not exist with round(). Having\nto specify 131072 each time does not sound like a great solution, it's\nnot exactly a very memorable number.\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Dec 2022 15:58:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 02:58, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 1 Dec 2022 at 15:41, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > I don't get the point of adding a function here (or at least one called round) - the type itself is inexact so, as you say, it is actually more of a type conversion with an ability to specify precision, which is exactly what you get today when you write 1.48373::numeric(20,3) - though it is a bit annoying having to specify an arbitrary precision.\n>\n> An additional problem with that which you might have missed is that\n> you'd need to know what to specify in the precision part of the\n> typemod. You might start getting errors one day if you don't select a\n> value large enough. That problem does not exist with round(). Having\n> to specify 131072 each time does not sound like a great solution, it's\n> not exactly a very memorable number.\n>\n\nI don't really see the point of such a function either.\n\nCasting to numeric(1000, n) will work fine in all cases AFAICS (1000\nbeing the maximum allowed precision in a numeric typemod, and somewhat\nmore memorable).\n\nNote that double precision numbers range in magnitude from something\nlike 2.2e-308 to 1.8e308, so you won't ever get an error (except, I\nsuppose, if you also chose \"n\" larger than 692 or so, but that would\nbe silly, given the input).\n\n\n> > At present round does allow you to specify a negative position to round at positions to the left of the decimal point (this is undocumented though...) which the actual cast cannot do, but that seems like a marginal case.\n\nNote that, as of PG15, \"n\" can be negative in such typemods, if you\nwant to round before the decimal point.\n\nThe fact that passing a negative scale to round() isn't documented\ndoes seem like an oversight though...\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 1 Dec 2022 08:55:33 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I don't really see the point of such a function either.\n> Casting to numeric(1000, n) will work fine in all cases AFAICS (1000\n> being the maximum allowed precision in a numeric typemod, and somewhat\n> more memorable).\n\nRight, but I think what the OP wants is to not have to think about\nwhether the input is of exact or inexact type. That's easily soluble\nlocally by making your own function:\n\ncreate function round(float8, int) returns numeric\n as $$select pg_catalog.round($1::pg_catalog.numeric, $2)$$\n language sql strict immutable parallel safe;\n\nbut I'm not sure that the argument for it is strong enough to\njustify putting it into Postgres.\n\n> The fact that passing a negative scale to round() isn't documented\n> does seem like an oversight though...\n\nAgreed, will do something about that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Dec 2022 09:39:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 7:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>\n> > The fact that passing a negative scale to round() isn't documented\n> > does seem like an oversight though...\n>\n> Agreed, will do something about that.\n>\n>\nThanks. I'm a bit surprised you left \"Rounds v to s decimal places.\" alone\nthough. I feel like the prose should also make clear that positions to the\nleft of the decimal, which are not conventionally considered decimal\nplaces, can be identified.\n\nRounds v at s digits before or after the decimal place.\n\nThe examples will hopefully clear up any off-by-one concerns that someone\nmay have.\n\nDavid J.\n\nOn Thu, Dec 1, 2022 at 7:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> The fact that passing a negative scale to round() isn't documented\n> does seem like an oversight though...\n\nAgreed, will do something about that.Thanks. I'm a bit surprised you left \"Rounds v to s decimal places.\" alone though. I feel like the prose should also make clear that positions to the left of the decimal, which are not conventionally considered decimal places, can be identified.Rounds v at s digits before or after the decimal place.The examples will hopefully clear up any off-by-one concerns that someone may have.David J.",
"msg_date": "Thu, 1 Dec 2022 12:30:56 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 21:55, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Casting to numeric(1000, n) will work fine in all cases AFAICS (1000\n> being the maximum allowed precision in a numeric typemod, and somewhat\n> more memorable).\n\nI wasn't aware of the typemod limit.\n\nI don't really agree that it will work fine in all cases though. If\nthe numeric has more than 1000 digits left of the decimal point then\nthe method won't work at all.\n\n# select length(('1' || repeat('0',2000))::numeric(1000,0)::text);\nERROR: numeric field overflow\nDETAIL: A field with precision 1000, scale 0 must round to an\nabsolute value less than 10^1000.\n\nNo issue with round() with the same number.\n\n# select length(round(('1' || repeat('0',2000))::numeric,0)::text);\n length\n--------\n 2001\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:57:17 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I don't really agree that it will work fine in all cases though. If\n> the numeric has more than 1000 digits left of the decimal point then\n> the method won't work at all.\n\nBut what we're talking about is starting from a float4 or float8\ninput, so it can't be more than ~308 digits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Dec 2022 15:02:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 09:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I don't really agree that it will work fine in all cases though. If\n> > the numeric has more than 1000 digits left of the decimal point then\n> > the method won't work at all.\n>\n> But what we're talking about is starting from a float4 or float8\n> input, so it can't be more than ~308 digits.\n\nI may have misunderstood. I thought David J was proposing this as a\nuseful method for rounding numeric too. Re-reading what he wrote, I no\nlonger think he was.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Dec 2022 10:21:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 2:21 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 2 Dec 2022 at 09:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > David Rowley <dgrowleyml@gmail.com> writes:\n> > > I don't really agree that it will work fine in all cases though. If\n> > > the numeric has more than 1000 digits left of the decimal point then\n> > > the method won't work at all.\n> >\n> > But what we're talking about is starting from a float4 or float8\n> > input, so it can't be more than ~308 digits.\n>\n> I may have misunderstood. I thought David J was proposing this as a\n> useful method for rounding numeric too. Re-reading what he wrote, I no\n> longer think he was.\n>\n>\nI was not, my response was that what is being asked for is basically a cast\nfrom float to numeric, and doing that via a \"round()\" function seems odd.\nAnd we can handle the desired rounding aspect of that process already via\nthe existing numeric(1000, n) syntax.\n\nDavid J.\n\nOn Thu, Dec 1, 2022 at 2:21 PM David Rowley <dgrowleyml@gmail.com> wrote:On Fri, 2 Dec 2022 at 09:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I don't really agree that it will work fine in all cases though. If\n> > the numeric has more than 1000 digits left of the decimal point then\n> > the method won't work at all.\n>\n> But what we're talking about is starting from a float4 or float8\n> input, so it can't be more than ~308 digits.\n\nI may have misunderstood. I thought David J was proposing this as a\nuseful method for rounding numeric too. Re-reading what he wrote, I no\nlonger think he was.I was not, my response was that what is being asked for is basically a cast from float to numeric, and doing that via a \"round()\" function seems odd. And we can handle the desired rounding aspect of that process already via the existing numeric(1000, n) syntax.David J.",
"msg_date": "Thu, 1 Dec 2022 14:40:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow round() function to accept float and double precision"
}
],
[
{
"msg_contents": "Over on [1], Dean and I were discussing why both gcc and clang don't\nseem to want to optimize the multiplication that we're doing in\npg_strtoint16, pg_strtoint32 and pg_strtoint64 into something more\nefficient. It seems that this is down to the overflow checks.\nRemoving seems to allow the compiler to better optimize the compiled\ncode. See the use of LEA instead of IMUL in [2].\n\nInstead of using the pg_mul_sNN_overflow functions, we can just\naccumulate the number in an unsigned version of the type and do an\noverflow check by checking if the accumulated value has gone beyond a\n10th of the maximum *signed* value for the type. We then just need to\ndo some final overflow checks after the accumulation is done. What\nthose depend on if it's a negative or positive number.\n\nI ran a few microbenchmarks with the attached str2int.c file and I see\nabout a 10-20% performance increase:\n\n$ ./str2int -100000000 100000000\nn = -100000000, e = 100000000\npg_strtoint32 in 3.207926 seconds\npg_strtoint32_v2 in 2.763062 seconds (16.100399% faster)\n\nv2 is the updated version of the function\n\nOn Windows, the gains are generally a bit more. I think this must be\ndue to the lack of overflow intrinsic functions.\n\n>str2int.exe -100000000 100000000\nn = -100000000, e = 100000000\npg_strtoint32 in 9.416000 seconds\npg_strtoint32_v2 in 8.099000 seconds (16.261267% faster)\n\nI was thinking that we should likely apply this before doing the hex\nliterals, which is the main focus of [1]. The reason being is so that\nthat patch can immediately have faster conversions by allowing the\ncompiler to use bit shifting instead of other means of multiplying by\na power-of-2 number. 
I'm hoping this removes a barrier for Peter from\nthe small gripe I raised on that thread about the patch having slower\nthan required hex, octal and binary string parsing.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvrL6_+wKgPqRHr7gH_6xy3hXM6a3QCsZ5ForurjDFfenA@mail.gmail.com\n[2] https://godbolt.org/z/7YoMT63q1",
"msg_date": "Thu, 1 Dec 2022 12:42:01 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 6:42 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I was thinking that we should likely apply this before doing the hex\n> literals, which is the main focus of [1]. The reason being is so that\n> that patch can immediately have faster conversions by allowing the\n> compiler to use bit shifting instead of other means of multiplying by\n> a power-of-2 number. I'm hoping this removes a barrier for Peter from\n> the small gripe I raised on that thread about the patch having slower\n> than required hex, octal and binary string parsing.\n\nI don't see why the non-decimal literal patch needs to be \"immediately\"\nfaster? If doing this first leads to less code churn, that's another\nconsideration, but you haven't made that argument.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Dec 1, 2022 at 6:42 AM David Rowley <dgrowleyml@gmail.com> wrote:>> I was thinking that we should likely apply this before doing the hex> literals, which is the main focus of [1]. The reason being is so that> that patch can immediately have faster conversions by allowing the> compiler to use bit shifting instead of other means of multiplying by> a power-of-2 number. I'm hoping this removes a barrier for Peter from> the small gripe I raised on that thread about the patch having slower> than required hex, octal and binary string parsing.I don't see why the non-decimal literal patch needs to be \"immediately\" faster? If doing this first leads to less code churn, that's another consideration, but you haven't made that argument.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 1 Dec 2022 12:27:23 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 18:27, John Naylor <john.naylor@enterprisedb.com> wrote:\n> I don't see why the non-decimal literal patch needs to be \"immediately\" faster? If doing this first leads to less code churn, that's another consideration, but you haven't made that argument.\n\nMy view is that Peter wants to keep the code he's adding for the hex,\noctal and binary parsing as similar to the existing code as possible.\nI very much understand Peter's point of view on that. Consistency is\ngood. However, if we commit the hex literals patch first, people might\nask \"why don't we use bit-wise operators to make the power-of-2 bases\nfaster?\", which seems like a very legitimate question. I asked it,\nanyway... On the other hand, if Peter adds the bit-wise operators\nthen the problem of code inconsistency remains.\n\nAs an alternative to those 2 options, I'm proposing we commit this\nfirst then the above dilemma disappears completely.\n\nIf this was going to cause huge conflicts with Peter's patch then I\nmight think differently. I feel like it's a fairly trivial task to\nrebase.\n\nIf the consensus is that we should fix this afterwards, then I'm happy to delay.\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Dec 2022 18:38:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Thu, 1 Dec 2022 at 05:38, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 1 Dec 2022 at 18:27, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > I don't see why the non-decimal literal patch needs to be \"immediately\" faster? If doing this first leads to less code churn, that's another consideration, but you haven't made that argument.\n>\n> My view is that Peter wants to keep the code he's adding for the hex,\n> octal and binary parsing as similar to the existing code as possible.\n> I very much understand Peter's point of view on that. Consistency is\n> good. However, if we commit the hex literals patch first, people might\n> ask \"why don't we use bit-wise operators to make the power-of-2 bases\n> faster?\", which seems like a very legitimate question. I asked it,\n> anyway... On the other hand, if Peter adds the bit-wise operators\n> then the problem of code inconsistency remains.\n>\n> As an alternative to those 2 options, I'm proposing we commit this\n> first then the above dilemma disappears completely.\n>\n> If this was going to cause huge conflicts with Peter's patch then I\n> might think differently. I feel like it's a fairly trivial task to\n> rebase.\n>\n> If the consensus is that we should fix this afterwards, then I'm happy to delay.\n>\n\nI feel like it should be done afterwards, so that any performance\ngains can be measured for all bases. Otherwise, we won't really know,\nor have any record of, how much faster this was for other bases, or be\nable to go back and test that.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 1 Dec 2022 09:55:12 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On 01.12.22 06:38, David Rowley wrote:\n> If this was going to cause huge conflicts with Peter's patch then I\n> might think differently. I feel like it's a fairly trivial task to\n> rebase.\n> \n> If the consensus is that we should fix this afterwards, then I'm happy to delay.\n\nIf we are happy with this patch, then it's okay with me if you push this \nfirst. I'll probably need to do another pass over my patch anyway, so a \nbit more work isn't a problem.\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:35:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 20:35, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> If we are happy with this patch, then it's okay with me if you push this\n> first. I'll probably need to do another pass over my patch anyway, so a\n> bit more work isn't a problem.\n\nThanks. I'll start looking at the patch again now. If I don't find any\nproblems then I'll push it.\n\nI just did some performance tests by COPYing in 40 million INTs\nranging from -/+ 20 million.\n\ncreate table ab(a int, b int);\ninsert into ab select x,x from generate_series(-20000000,20000000)x;\ncopy ab to '/tmp/ab.csv';\n\n-- master\ntruncate ab; copy ab from '/tmp/ab.csv';\nTime: 10219.386 ms (00:10.219)\nTime: 10252.572 ms (00:10.253)\nTime: 10202.940 ms (00:10.203)\n\n-- patched\ntruncate ab; copy ab from '/tmp/ab.csv';\nTime: 9522.020 ms (00:09.522)\nTime: 9441.294 ms (00:09.441)\nTime: 9432.834 ms (00:09.433)\n\nAbout ~8% faster\n\nDavid\n\n\n",
"msg_date": "Sun, 4 Dec 2022 13:52:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Sun, 4 Dec 2022 at 13:52, David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks. I'll start looking at the patch again now. If I don't find any\n> problems then I'll push it.\n\nPushed with some small adjustments.\n\nDavid\n\n\n",
"msg_date": "Sun, 4 Dec 2022 16:19:43 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
{
"msg_contents": "On Sun, 4 Dec 2022 at 03:19, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Pushed with some small adjustments.\n>\n\nAh, I see that you changed the overflow test, and I realise that I\nforgot to answer your question about why I wrote that as 1 - INT_MIN /\n10 over on the other thread.\n\nThe reason is that we need to detect whether tmp * base will exceed\n-INT_MIN, not INT_MAX, since we're accumulating the absolute value of\na signed integer. So the right test is\n\n tmp >= 1 - INT_MIN / base\n\nor equivalently\n\n tmp > -(INT_MIN / base)\n\nI used the first form, because it didn't require extra parentheses,\nbut that doesn't really matter. The point is that, in general, that's\nnot the same as\n\n tmp > INT_MAX / base\n\nthough it happens to be the same for base = 10, because INT_MIN and\nINT_MAX aren't divisible by 10. It will break when base is a power of\n2 though, so although it's not broken now, it's morally wrong, and it\nrisks breaking when Peter commits his patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 4 Dec 2022 09:53:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
},
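To make Dean's correction concrete, here is a hypothetical, heavily simplified C sketch of the technique under discussion (not the actual pg_strtoint32 source): the absolute value is accumulated in an unsigned accumulator, the per-digit guard compares against -(INT_MIN / base) rather than INT_MAX / base, and the sign-dependent range checks happen at the end.

```c
#include <ctype.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical sketch, not PostgreSQL code.  Parses a base-10 int32,
 * accumulating the absolute value in an unsigned accumulator.  The
 * pre-multiply guard must test against -INT32_MIN / 10, not
 * INT32_MAX / 10, since a negative result can reach -INT32_MIN.
 * Returns true on success, storing the value in *result.
 */
static bool
my_strtoint32(const char *s, int32_t *result)
{
	uint32_t	tmp = 0;
	bool		neg = false;

	if (*s == '-')
	{
		neg = true;
		s++;
	}
	else if (*s == '+')
		s++;

	if (!isdigit((unsigned char) *s))
		return false;			/* need at least one digit */

	while (isdigit((unsigned char) *s))
	{
		/*
		 * If tmp * 10 could exceed -INT32_MIN (2147483648), bail out.
		 * Equivalent to Dean's "tmp > -(INT_MIN / base)" formulation;
		 * the accumulator can never wrap, since a guarded step leaves
		 * tmp at most 2147483649, which fails the next guard.
		 */
		if (tmp > (uint32_t) (-(INT32_MIN / 10)))
			return false;
		tmp = tmp * 10 + (uint32_t) (*s - '0');
		s++;
	}

	if (*s != '\0')
		return false;			/* trailing junk */

	if (neg)
	{
		if (tmp > (uint32_t) INT32_MAX + 1)
			return false;
		if (tmp == (uint32_t) INT32_MAX + 1)
			*result = INT32_MIN;	/* negating tmp here would overflow */
		else
			*result = -(int32_t) tmp;
	}
	else
	{
		if (tmp > (uint32_t) INT32_MAX)
			return false;
		*result = (int32_t) tmp;
	}
	return true;
}
```

With base 10, INT32_MAX / 10 and -(INT32_MIN / 10) happen to be equal, which is why the originally committed test worked; the distinction matters once power-of-2 bases arrive, where -INT32_MIN / base is exactly representable in the guard.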
{
"msg_contents": "On Sun, 4 Dec 2022 at 22:53, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Ah, I see that you changed the overflow test, and I realise that I\n> forgot to answer your question about why I wrote that as 1 - INT_MIN /\n> 10 over on the other thread.\n>\n> The reason is that we need to detect whether tmp * base will exceed\n> -INT_MIN, not INT_MAX, since we're accumulating the absolute value of\n> a signed integer.\n\nI think I'd been too focused on the simplicity of that expression and\nalso the base 10 part. I saw that everything worked in base 10 and\nfailed to give enough forward thought to other bases.\n\nI now see that it was wrong-headed to code it the way I had it.\nThanks for pointing this out. I've just pushed a fix.\n\nDavid\n\n\n",
"msg_date": "Mon, 5 Dec 2022 12:03:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve performance of pg_strtointNN functions"
}
],
[
{
"msg_contents": "Here are a couple of patches that clean up the internal File API and \nrelated things a bit:\n\n0001-Update-types-in-File-API.patch\n\n Make the argument types of the File API match stdio better:\n\n - Change the data buffer to void *, from char *.\n - Change FileWrite() data buffer to const on top of that.\n - Change amounts to size_t, from int.\n\n In passing, change the FilePrefetch() amount argument from int to\n off_t, to match the underlying posix_fadvise().\n\n0002-Remove-unnecessary-casts.patch\n\n Some code carefully cast all data buffer arguments for\n BufFileWrite() and BufFileRead() to void *, even though the\n arguments are already void * (and AFAICT were never anything else).\n Remove this unnecessary clutter.\n\n(I had initially thought these casts were related to the first patch, \nbut as I said the BufFile API never used char * arguments, so this \nturned out to be unrelated, but still weird.)",
"msg_date": "Thu, 1 Dec 2022 09:25:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "File API cleanup"
},
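A hypothetical miniature (not the real fd.c API) of the signature change described above: taking "const void *" data and a size_t amount, as stdio's fwrite() does, lets callers pass buffers of any pointer type without casts, and the const documents that the data is only read.

```c
#include <stddef.h>
#include <string.h>

/* Toy in-memory "file", purely for illustrating the signature shape. */
typedef struct MemFile
{
	char		data[256];
	size_t		len;
} MemFile;

/*
 * Sketch of a FileWrite-style signature after the change: const void *
 * buffer and size_t amount, matching fwrite()/write().
 */
static size_t
mem_file_write(MemFile *f, const void *buffer, size_t amount)
{
	size_t		avail = sizeof(f->data) - f->len;

	if (amount > avail)
		amount = avail;			/* truncate instead of overflowing */
	memcpy(f->data + f->len, buffer, amount);
	f->len += amount;
	return amount;
}
```

With the old char *-style signature, callers writing structs or int arrays would each need a (char *) cast at the call site; with void * the conversion is implicit, which is the clutter the patch removes.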
{
"msg_contents": "On Thu, Dec 1, 2022 at 1:55 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Here are a couple of patches that clean up the internal File API and\n> related things a bit:\n>\n> 0001-Update-types-in-File-API.patch\n>\n> Make the argument types of the File API match stdio better:\n>\n> - Change the data buffer to void *, from char *.\n> - Change FileWrite() data buffer to const on top of that.\n> - Change amounts to size_t, from int.\n>\n> In passing, change the FilePrefetch() amount argument from int to\n> off_t, to match the underlying posix_fadvise().\n>\n> 0002-Remove-unnecessary-casts.patch\n>\n> Some code carefully cast all data buffer arguments for\n> BufFileWrite() and BufFileRead() to void *, even though the\n> arguments are already void * (and AFAICT were never anything else).\n> Remove this unnecessary clutter.\n>\n> (I had initially thought these casts were related to the first patch,\n> but as I said the BufFile API never used char * arguments, so this\n> turned out to be unrelated, but still weird.)\n\nThanks. Please note that I've not looked at the patches attached.\nHowever, I'm here after reading the $subject - can we have a generic,\nsingle function file_exists() in fd.c/file_utils.c so that both\nbackend and frontend code can use it? I see there are 3 uses and\ndefinitions of it in jit.c, dfmgr.c and pg_regress.c. This will reduce\nthe code duplication. Thoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Dec 2022 14:25:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: File API cleanup"
},
{
"msg_contents": "On 01.12.22 09:55, Bharath Rupireddy wrote:\n> can we have a generic,\n> single function file_exists() in fd.c/file_utils.c so that both\n> backend and frontend code can use it? I see there are 3 uses and\n> definitions of it in jit.c, dfmgr.c and pg_regress.c. This will reduce\n> the code duplication. Thoughts?\n\nWell, the first problem with that would be that all three of those \nimplementations are slightly different. Maybe that is intentional, or \nmaybe not, in which case a common implementation might be beneficial.\n\n(Another thing to consider is that checking whether a file exists is not \noften actually useful. If you want to use the file, you should just \nopen it and then check for any errors. The cases above have special \nrequirements, so there obviously are uses, but I'm not sure how many in \nthe long run.)\n\n\n\n",
"msg_date": "Thu, 1 Dec 2022 16:10:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: File API cleanup"
},
{
"msg_contents": "On 01.12.22 09:25, Peter Eisentraut wrote:\n> Here are a couple of patches that clean up the internal File API and \n> related things a bit:\n\nHere are two follow-up patches that clean up some stuff related to the \nearlier patch set. I suspect these are all historically related.\n\n0001-Remove-unnecessary-casts.patch\n\n Some code carefully cast all data buffer arguments for data write\n and read function calls to void *, even though the respective\n arguments are already void *. Remove this unnecessary clutter.\n\n0002-Add-const-to-BufFileWrite.patch\n\n Make data buffer argument to BufFileWrite a const pointer and bubble\n this up to various callers and related APIs. This makes the APIs\n clearer and more consistent.",
"msg_date": "Fri, 23 Dec 2022 09:33:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: File API cleanup"
},
{
"msg_contents": "On 23.12.22 09:33, Peter Eisentraut wrote:\n> On 01.12.22 09:25, Peter Eisentraut wrote:\n>> Here are a couple of patches that clean up the internal File API and \n>> related things a bit:\n> \n> Here are two follow-up patches that clean up some stuff related to the \n> earlier patch set. I suspect these are all historically related.\n\nAnother patch under this theme. Here, I'm addressing the smgr API, \nwhich effectively sits one level above the previously-dealt with \"File\" API.\n\nSpecifically, I'm changing the data buffer to void *, from char *, and \nadding const where appropriate. As you can see in the patch, most \ncallers were unhappy with the previous arrangement and required casts.\n\n(I pondered whether \"Page\" might be the right data type instead, since \nthe writers all write values of that type. But the readers don't read \ninto pages directly. So \"Block\" seemed more appropriate, and Block is \nvoid * (bufmgr.h), so this makes sense.)",
"msg_date": "Mon, 20 Feb 2023 10:52:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: File API cleanup"
}
] |
[
{
"msg_contents": "Keeping the SQL commands that initdb runs in string arrays before\nfeeding them to PG_CMD_PUTS() seems unnecessarily verbose and\ninflexible. In some cases, the array only has one member. In other\ncases, one might want to use PG_CMD_PRINTF() instead, to parametrize a\ncommand, but that would require breaking up the loop or using\nworkarounds like replace_token(). This patch unwinds all that; it's \nmuch simpler that way.",
"msg_date": "Thu, 1 Dec 2022 11:02:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "initdb: Refactor PG_CMD_PUTS loops"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 5:02 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> Keeping the SQL commands that initdb runs in string arrays before\n> feeding them to PG_CMD_PUTS() seems unnecessarily verbose and\n> inflexible. In some cases, the array only has one member. In other\n> cases, one might want to use PG_CMD_PRINTF() instead, to parametrize a\n> command, but that would require breaking up the loop or using\n> workarounds like replace_token(). This patch unwinds all that; it's\n> much simpler that way.\n\n+1, I can't think of a reason to keep the current coding\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Dec 2022 14:03:40 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb: Refactor PG_CMD_PUTS loops"
},
{
"msg_contents": "\nOn 2022-12-01 Th 05:02, Peter Eisentraut wrote:\n> Keeping the SQL commands that initdb runs in string arrays before\n> feeding them to PG_CMD_PUTS() seems unnecessarily verbose and\n> inflexible. In some cases, the array only has one member. In other\n> cases, one might want to use PG_CMD_PRINTF() instead, to parametrize a\n> command, but that would require breaking up the loop or using\n> workarounds like replace_token(). This patch unwinds all that; it's\n> much simpler that way.\n\n\nLooks reasonable. (Most of this dates back to 2003/2004, the very early\ndays of initdb.c.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:07:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: initdb: Refactor PG_CMD_PUTS loops"
},
{
"msg_contents": "On 02.12.22 15:07, Andrew Dunstan wrote:\n> On 2022-12-01 Th 05:02, Peter Eisentraut wrote:\n>> Keeping the SQL commands that initdb runs in string arrays before\n>> feeding them to PG_CMD_PUTS() seems unnecessarily verbose and\n>> inflexible. In some cases, the array only has one member. In other\n>> cases, one might want to use PG_CMD_PRINTF() instead, to parametrize a\n>> command, but that would require breaking up the loop or using\n>> workarounds like replace_token(). This patch unwinds all that; it's\n>> much simpler that way.\n> \n> Looks reasonable. (Most of this dates back to 2003/2004, the very early\n> days of initdb.c.)\n\ncommitted\n\n\n\n",
"msg_date": "Mon, 5 Dec 2022 23:40:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: initdb: Refactor PG_CMD_PUTS loops"
}
] |
[
{
"msg_contents": "Hi,\n\nI believe that has room for improving generation node files.\n\nThe patch attached reduced the size of generated files by 27 kbytes.\n From 891 kbytes to 864 kbytes.\n\nAbout the patch:\n1. Avoid useless attribution when from->field is NULL, once that\nthe new node is palloc0.\n\n2. Avoid useless declaration variable Size, when it is unnecessary.\n\n3. Optimize comparison functions like memcmp and strcmp, using\n a short-cut comparison of the first element.\n\n4. Switch several copy attributions like COPY_SCALAR_FIELD or\nCOPY_LOCATION_FIELD\nby one memcpy call.\n\n5. Avoid useless attribution, ignoring the result of pg_strtok when it is\nunnecessary.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 1 Dec 2022 10:02:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Optimizing Node Files Support"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 8:02 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n>\n> I believe that has room for improving generation node files.\n>\n> The patch attached reduced the size of generated files by 27 kbytes.\n> From 891 kbytes to 864 kbytes.\n>\n> About the patch:\n> 1. Avoid useless attribution when from->field is NULL, once that\n> the new node is palloc0.\n>\n> 2. Avoid useless declaration variable Size, when it is unnecessary.\n\nNot useless -- it prevents a multiple evaluation hazard, which this patch\nintroduces.\n\n> 3. Optimize comparison functions like memcmp and strcmp, using\n> a short-cut comparison of the first element.\n\nNot sure if the juice is worth the squeeze. Profiling would tell.\n\n> 4. Switch several copy attributions like COPY_SCALAR_FIELD or\nCOPY_LOCATION_FIELD\n> by one memcpy call.\n\nMy first thought is, it would cause code churn.\n\n> 5. Avoid useless attribution, ignoring the result of pg_strtok when it is\nunnecessary.\n\nLooks worse.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Dec 2022 19:24:39 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "Hi, thanks for reviewing this.\n\nEm sex., 2 de dez. de 2022 às 09:24, John Naylor <\njohn.naylor@enterprisedb.com> escreveu:\n\n>\n> On Thu, Dec 1, 2022 at 8:02 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I believe that has room for improving generation node files.\n> >\n> > The patch attached reduced the size of generated files by 27 kbytes.\n> > From 891 kbytes to 864 kbytes.\n> >\n> > About the patch:\n> > 1. Avoid useless attribution when from->field is NULL, once that\n> > the new node is palloc0.\n> >\n> > 2. Avoid useless declaration variable Size, when it is unnecessary.\n>\n> Not useless -- it prevents a multiple evaluation hazard, which this patch\n> introduces.\n>\nIt's doubting, that patch introduces some hazard here.\nBut I think that casting size_t (typedef Size) to size_t is worse and is\nunnecessary.\nAdjusted in the v1 patch.\n\n\n> > 3. Optimize comparison functions like memcmp and strcmp, using\n> > a short-cut comparison of the first element.\n>\n> Not sure if the juice is worth the squeeze. Profiling would tell.\n>\nThis is a cheaper test and IMO can really optimize, avoiding a function\ncall.\n\n\n> > 4. Switch several copy attributions like COPY_SCALAR_FIELD or\n> COPY_LOCATION_FIELD\n> > by one memcpy call.\n>\n> My first thought is, it would cause code churn.\n>\nIt's a weak argument.\nReduced 27k from source code, really worth it.\n\n\n> > 5. Avoid useless attribution, ignoring the result of pg_strtok when it\n> is unnecessary.\n>\n> Looks worse.\n>\nBetter to inform the compiler that we really don't need the result.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 2 Dec 2022 10:35:58 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 19:06, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi, thanks for reviewing this.\n>\n> Em sex., 2 de dez. de 2022 às 09:24, John Naylor <john.naylor@enterprisedb.com> escreveu:\n>>\n>>\n>> On Thu, Dec 1, 2022 at 8:02 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > I believe that has room for improving generation node files.\n>> >\n>> > The patch attached reduced the size of generated files by 27 kbytes.\n>> > From 891 kbytes to 864 kbytes.\n>> >\n>> > About the patch:\n>> > 1. Avoid useless attribution when from->field is NULL, once that\n>> > the new node is palloc0.\n>> >\n>> > 2. Avoid useless declaration variable Size, when it is unnecessary.\n>>\n>> Not useless -- it prevents a multiple evaluation hazard, which this patch introduces.\n>\n> It's doubting, that patch introduces some hazard here.\n> But I think that casting size_t (typedef Size) to size_t is worse and is unnecessary.\n> Adjusted in the v1 patch.\n>\n>>\n>> > 3. Optimize comparison functions like memcmp and strcmp, using\n>> > a short-cut comparison of the first element.\n>>\n>> Not sure if the juice is worth the squeeze. Profiling would tell.\n>\n> This is a cheaper test and IMO can really optimize, avoiding a function call.\n>\n>>\n>> > 4. Switch several copy attributions like COPY_SCALAR_FIELD or COPY_LOCATION_FIELD\n>> > by one memcpy call.\n>>\n>> My first thought is, it would cause code churn.\n>\n> It's a weak argument.\n> Reduced 27k from source code, really worth it.\n>\n>>\n>> > 5. Avoid useless attribution, ignoring the result of pg_strtok when it is unnecessary.\n>>\n>> Looks worse.\n>\n> Better to inform the compiler that we really don't need the result.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch ./v1-optimize_gen_nodes_support.patch\npatching file src/backend/nodes/gen_node_support.pl\nHunk #2 succeeded at 680 with fuzz 2.\nHunk #3 FAILED at 709.\n...\nHunk #7 succeeded at 844 (offset 42 lines).\n1 out of 7 hunks FAILED -- saving rejects to file\nsrc/backend/nodes/gen_node_support.pl.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4034.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 18:40:09 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nYeah. The way that I'd been thinking of optimizing the copy functions\nwas more or less as attached: continue to write all the COPY_SCALAR_FIELD\nmacro calls, but just make them expand to no-ops after an initial memcpy\nof the whole node. This preserves flexibility to do something else while\nstill getting the benefit of substituting memcpy for retail field copies.\nHaving said that, I'm not very sure it's worth doing, because I do not\nsee any major reduction in code size:\n\nHEAD:\n text data bss dec hex filename\n 53601 0 0 53601 d161 copyfuncs.o\nw/patch:\n text data bss dec hex filename\n 49850 0 0 49850 c2ba copyfuncs.o\n\nI've not looked at the generated assembly code, but I suspect that\nmy compiler is converting the memcpy's into inlined code that's\nhardly any smaller than field-by-field assignment. Also, it's\nrather debatable that it'd be faster, especially for node types\nthat are mostly pointer fields, where the memcpy is going to be\nlargely wasted effort.\n\nI tend to agree with John that the rest of the changes proposed\nin the v1 patch are not useful improvements, especially with\nno evidence offered that they'd make the code smaller or faster.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 04 Jan 2023 17:39:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "Em qua., 4 de jan. de 2023 às 19:39, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> vignesh C <vignesh21@gmail.com> writes:\n> > The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n>\n> Yeah. The way that I'd been thinking of optimizing the copy functions\n> was more or less as attached: continue to write all the COPY_SCALAR_FIELD\n> macro calls, but just make them expand to no-ops after an initial memcpy\n> of the whole node. This preserves flexibility to do something else while\n> still getting the benefit of substituting memcpy for retail field copies.\n> Having said that, I'm not very sure it's worth doing, because I do not\n> see any major reduction in code size:\n>\nI think this option is worse.\nBy disabling these macros, you lose their use in other areas.\nBy putting more intelligence into gen_node_support.pl, to either use memcpy\nor the macros is better, IMO.\nIn cases of one or two macros, would it be faster than memset?\n\n\n> HEAD:\n> text data bss dec hex filename\n> 53601 0 0 53601 d161 copyfuncs.o\n> w/patch:\n> text data bss dec hex filename\n> 49850 0 0 49850 c2ba copyfuncs.o\n>\nI haven't tested it on Linux, but on Windows, there is a significant\nreduction.\n\nhead:\n8,114,688 postgres.exe\n121.281 copyfuncs.funcs.c\n\npatched:\n8,108,544 postgres.exe\n95.321 copyfuncs.funcs.c\n\n\n> I've not looked at the generated assembly code, but I suspect that\n> my compiler is converting the memcpy's into inlined code that's\n> hardly any smaller than field-by-field assignment. Also, it's\n> rather debatable that it'd be faster, especially for node types\n> that are mostly pointer fields, where the memcpy is going to be\n> largely wasted effort.\n>\nIMO, with many field assignments, memcpy would be faster.\n\n\n>\n> I tend to agree with John that the rest of the changes proposed\n> in the v1 patch are not useful improvements, especially with\n> no evidence offered that they'd make the code smaller or faster.\n>\nI tried using palloc_object, as you proposed, but several tests failed.\nSo I suspect that some fields are not filled in correctly.\nIt would be an advantage to avoid memset in the allocation (palloc0), but I\nchose to keep it because of the errors.\n\nThis way, if we use palloc0, there is no need to set NULL on\nCOPY_STRING_FIELD.\n\nRegarding COPY_POINTER_FIELD, it is wasteful to cast size_t to size_t.\n\nv3 attached.\n\nregards,\nRanier Vilela\n\n\n> regards, tom lane\n>\n>",
"msg_date": "Fri, 6 Jan 2023 11:49:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em qua., 4 de jan. de 2023 às 19:39, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> Yeah. The way that I'd been thinking of optimizing the copy functions\n>> was more or less as attached: continue to write all the COPY_SCALAR_FIELD\n>> macro calls, but just make them expand to no-ops after an initial memcpy\n>> of the whole node.\n\n> I think this option is worse.\n> By disabling these macros, you lose their use in other areas.\n\nWhat other areas? They're local to copyfuncs.c.\n\nThe bigger picture here is that as long as we have any manually-maintained\nnode copy functions, it seems best to adhere to the existing convention\nof explicitly listing each and every field in them. I'm far more\nconcerned about errors-of-omission than I am about incremental performance\ngains (which still haven't been demonstrated to exist, anyway).\n\n> v3 attached.\n\nI think you're wasting people's time if you don't provide some\nperformance measurements showing that this is worth doing from\na speed standpoint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Jan 2023 10:18:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Node Files Support"
},
{
"msg_contents": "I think it's clear we aren't going to be taking this patch in its\ncurrent form. Perhaps there are better ways to do what these files do\n(I sure think there are!) but I don't think microoptimizing the\ncopying is something people are super excited about. It sounds like\nrethinking how to make these functions more convenient for programmers\nto maintain reliably would be more valuable.\n\nI guess I'll mark it Returned with Feedback -- if there are\nsignificant performance gains to show without making the code harder\nto maintain and/or a nicer way to structure this code in general then\nwe can revisit this.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:56:04 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Node Files Support"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nThere is no error or warning when creating FOR EACH STATEMENT trigger on Logical Replication subscriber side, but it is not doing anything. Shouldn't a warning be helpful?\n\nCREATE TRIGGER set_updated_time_trig\n AFTER INSERT OR UPDATE OR DELETE ON test\n FOR EACH STATEMENT EXECUTE FUNCTION set_updated_time();\n\nThanks\n\nIMPORTANT - This email and any attachments is intended for the above named addressee(s), and may contain information which is confidential or privileged. If you are not the intended recipient, please inform the sender immediately and delete this email: you should not copy or use this e-mail for any purpose nor disclose its contents to any person.",
"msg_date": "Thu, 1 Dec 2022 13:58:53 +0000",
"msg_from": "Avi Weinberg <AviW@gilat.com>",
"msg_from_op": true,
"msg_subject": "Warning When Creating FOR EACH STATEMENT Trigger On Logical\n Replication Subscriber Side"
},
{
"msg_contents": "On Thu, Dec 1, 2022 at 7:29 PM Avi Weinberg <AviW@gilat.com> wrote:\n>\n>\n> There is no error or warning when creating FOR EACH STATEMENT trigger on Logical Replication subscriber side, but it is not doing anything. Shouldn't a warning be helpful?\n>\n>\n>\n> CREATE TRIGGER set_updated_time_trig\n>\n> AFTER INSERT OR UPDATE OR DELETE ON test\n>\n> FOR EACH STATEMENT EXECUTE FUNCTION set_updated_time();\n>\n\nI think we need to consider a few things for this. First, how will we\ndecide whether a particular node is a subscriber-side? One can create\na subscription after creating the trigger. Also, it won't be\nstraightforward to know whether the particular table is involved in\nthe replication. Second, the statement triggers do get fired during\ninitial sync, see docs [1] (The logical replication apply process\ncurrently only fires row triggers, not statement triggers. The initial\ntable synchronization, however, is implemented like a COPY command and\nthus fires both row and statement triggers for INSERT.). So,\nconsidering these points, I don't know if it is worth showing a\nWARNING here.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-architecture.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:25:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Warning When Creating FOR EACH STATEMENT Trigger On Logical\n Replication Subscriber Side"
}
] |
[
{
"msg_contents": "I wanted to test the different pg_upgrade transfer modes (--link, \n--clone), but that was not that easy, because there is more than one \nplace in the test script you have to find and manually change. So I \nwrote a little patch to make that easier. It's still manual, but it's a \nstart. (In principle, we could automatically run the tests with each \nsupported mode in a loop, but that would be very slow.)\n\nWhile doing that, I also found it strange that the default transfer mode \n(referred to as \"copy\" internally) did not have any external \nrepresentation, so it is awkward to refer to it in text, and obscure to \nsee where it is used for example in those test scripts. So I added an \noption --copy, which effectively does nothing, but it's not uncommon to \nhave options that select default behaviors explicitly. (I also thought \nabout something like a \"mode\" option with an argument, but given that we \nalready have --link and --clone, this seemed the most sensible.)\n\nThoughts?",
"msg_date": "Thu, 1 Dec 2022 16:18:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "At Thu, 1 Dec 2022 16:18:21 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> I wanted to test the different pg_upgrade transfer modes (--link,\n> --clone), but that was not that easy, because there is more than one\n> place in the test script you have to find and manually change. So I\n> wrote a little patch to make that easier. It's still manual, but it's\n> a start. (In principle, we could automatically run the tests with\n> each supported mode in a loop, but that would be very slow.)\n> \n> While doing that, I also found it strange that the default transfer\n> mode (referred to as \"copy\" internally) did not have any external\n> representation, so it is awkward to refer to it in text, and obscure\n> to see where it is used for example in those test scripts. So I added\n> an option --copy, which effectively does nothing, but it's not\n> uncommon to have options that select default behaviors explicitly. (I\n\nI don't have a clear idea of whether it is common or not, but I suppose many such commands allow choosing the default behavior by a configuration file or an environment variable, etc. But I don't mind the command having the effectively nop option only for completeness.\n\n> also thought about something like a \"mode\" option with an argument,\n> but given that we already have --link and --clone, this seemed the\n> most sensible.)\n> \n> Thoughts?\n\nWhen I read up to the point of the --copy option, what came to my mind\nwas the --mode=<blah> option. IMHO, if I was going to add an option\nto choose the copy behavior, I would add --mode option instead, like\npg_ctl does, as it implicitly signals that the suboptions are mutually\nexclusive.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 02 Dec 2022 09:56:50 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "> On 1 Dec 2022, at 16:18, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> I wanted to test the different pg_upgrade transfer modes (--link, --clone), but that was not that easy, because there is more than one place in the test script you have to find and manually change. So I wrote a little patch to make that easier. It's still manual, but it's a start. (In principle, we could automatically run the tests with each supported mode in a loop, but that would be very slow.)\n\nWouldn't it be possible, and less change-code-manual, to accept this via an\nextension to PROVE_FLAGS? Any options after :: to prove are passed to the\ntest(s) [0] so we could perhaps inspect @ARGV for the mode if we invent a new\nway to pass arguments. Something along the lines of the untested sketch\nbelow in the pg_upgrade test:\n\n+# Optionally set the file transfer mode for the tests via arguments to PROVE\n+my $mode = (@ARGV);\n+$mode = '--copy' unless defined;\n\n.. together with an extension to Makefile.global.in ..\n\n- $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n+ $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl) $(PROVE_TEST_ARGS)\n\n.. should *I think* allow for passing the mode to the tests via:\n\nmake -C src/bin/pg_upgrade check PROVE_TEST_ARGS=\":: --link\"\n\nThe '::' part should of course ideally be injected automatically but the above\nis mostly thinking out loud pseudocode so I didn't add that.\n\nThis could probably benefit other tests as well, to make it eas{y|ier} to run\nextended testing on certain buildfarm animals or in the CFBot CI on specific\npatches in the commitfest.\n\n> While doing that, I also found it strange that the default transfer mode (referred to as \"copy\" internally) did not have any external representation, so it is awkward to refer to it in text, and obscure to see where it is used for example in those test scripts. So I added an option --copy, which effectively does nothing, but it's not uncommon to have options that select default behaviors explicitly. (I also thought about something like a \"mode\" option with an argument, but given that we already have --link and --clone, this seemed the most sensible.)\n\nAgreed, +1 on adding --copy regardless of the above.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://perldoc.perl.org/prove#Arguments-to-Tests\n\n",
"msg_date": "Fri, 2 Dec 2022 13:04:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "On 02.12.22 01:56, Kyotaro Horiguchi wrote:\n>> also thought about something like a \"mode\" option with an argument,\n>> but given that we already have --link and --clone, this seemed the\n>> most sensible.)\n>>\n>> Thoughts?\n> \n> When I read up to the point of the --copy option, what came to my mind\n> was the --mode=<blah> option. IMHO, if I was going to add an option\n> to choose the copy behavior, I would add --mode option instead, like\n> pg_ctl does, as it implicitly signals that the suboptions are mutually\n> exclusive.\n\nOk, we have sort of one vote for each variant now. Let's see if there \nare other opinions.\n\n\n\n",
"msg_date": "Wed, 7 Dec 2022 17:30:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "On 02.12.22 13:04, Daniel Gustafsson wrote:\n> Wouldn't it be possible, and less change-code-manual, to accept this via an\n> extension to PROVE_FLAGS? Any options after :: to prove are passed to the\n> test(s) [0] so we could perhaps inspect @ARGV for the mode if we invent a new\n> way to pass arguments. Something along the lines of the untested sketch\n> below in the pg_upgrade test:\n> \n> +# Optionally set the file transfer mode for the tests via arguments to PROVE\n> +my $mode = (@ARGV);\n> +$mode = '--copy' unless defined;\n> \n> .. together with an extension to Makefile.global.in ..\n> \n> - $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n> + $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl) $(PROVE_TEST_ARGS)\n> \n> .. should *I think* allow for passing the mode to the tests via:\n> \n> make -C src/bin/pg_upgrade check PROVE_TEST_ARGS=\":: --link\"\n\nI think this might be a lot of complication to get working robustly and \nin the different build systems. Plus, what happens if you run all the \ntest suites and want to pass some options to pg_upgrade and some to \nanother test?\n\nI think if we want to make this configurable on the fly, and environment \nvariable would be much easier, like\n\n my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n\n\n\n",
"msg_date": "Wed, 7 Dec 2022 17:33:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "On 07.12.22 17:33, Peter Eisentraut wrote:\n> I think if we want to make this configurable on the fly, and environment \n> variable would be much easier, like\n> \n> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n\nHere is an updated patch set that incorporates this idea.",
"msg_date": "Wed, 14 Dec 2022 08:04:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "> On 14 Dec 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 07.12.22 17:33, Peter Eisentraut wrote:\n>> I think if we want to make this configurable on the fly, and environment variable would be much easier, like\n>> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> \n> Here is an updated patch set that incorporates this idea.\n\nI would prefer a small note about it in src/bin/pg_upgrade/TESTING to document\nit outside of the code, but otherwise LGTM.\n\n+\t\t$mode,\n \t\t'--check'\n \t],\n\n...\n\n-\t\t'-p', $oldnode->port, '-P', $newnode->port\n+\t\t'-p', $oldnode->port, '-P', $newnode->port,\n+\t\t$mode,\n \t],\n\nMinor nitpick, but while in there should we take the opportunity to add a\ntrailing comma on the other two array declarations which now ends with --check?\nIt's good Perl practice and will make the code consistent.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 10:40:45 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "At Wed, 14 Dec 2022 10:40:45 +0100, Daniel Gustafsson <daniel@yesql.se> wrote in \n> > On 14 Dec 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > \n> > On 07.12.22 17:33, Peter Eisentraut wrote:\n> >> I think if we want to make this configurable on the fly, and environment variable would be much easier, like\n> >> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n> > \n> > Here is an updated patch set that incorporates this idea.\n\nWe have already PG_TEST_EXTRA. Shouldn't we use it here as well?\n\n> I would prefer a small note about it in src/bin/pg_upgrade/TESTING to document\n> it outside of the code, but otherwise LGTM.\n> \n> +\t\t$mode,\n> \t\t'--check'\n> \t],\n> \n> ...\n> \n> -\t\t'-p', $oldnode->port, '-P', $newnode->port\n> +\t\t'-p', $oldnode->port, '-P', $newnode->port,\n> +\t\t$mode,\n> \t],\n> \n> Minor nitpick, but while in there should we take the opportunity to add a\n> trailing comma on the other two array declarations which now ends with --check?\n> It's good Perl practice and will make the code consistent.\n\nOtherwise looks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 15 Dec 2022 09:56:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "> On 15 Dec 2022, at 01:56, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Wed, 14 Dec 2022 10:40:45 +0100, Daniel Gustafsson <daniel@yesql.se> wrote in \n>>> On 14 Dec 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>> \n>>> On 07.12.22 17:33, Peter Eisentraut wrote:\n>>>> I think if we want to make this configurable on the fly, and environment variable would be much easier, like\n>>>> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n>>> \n>>> Here is an updated patch set that incorporates this idea.\n> \n> We have already PG_TEST_EXTRA. Shouldn't we use it here as well?\n\nI think those are two different things. PG_TEST_EXTRA adds test suites that\naren't run by default, this proposal is to be able to inject options into a\ntest suite to modify its behavior.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 09:34:17 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "On 14.12.22 10:40, Daniel Gustafsson wrote:\n>> On 14 Dec 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 07.12.22 17:33, Peter Eisentraut wrote:\n>>> I think if we want to make this configurable on the fly, and environment variable would be much easier, like\n>>> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\n>>\n>> Here is an updated patch set that incorporates this idea.\n> \n> I would prefer a small note about it in src/bin/pg_upgrade/TESTING to document\n> it outside of the code, but otherwise LGTM.\n> \n> +\t\t$mode,\n> \t\t'--check'\n> \t],\n> \n> ...\n> \n> -\t\t'-p', $oldnode->port, '-P', $newnode->port\n> +\t\t'-p', $oldnode->port, '-P', $newnode->port,\n> +\t\t$mode,\n> \t],\n> \n> Minor nitpick, but while in there should we take the opportunity to add a\n> trailing comma on the other two array declarations which now ends with --check?\n> It's good Perl practice and will make the code consistent.\n\ncommitted with these changes\n\n\n\n",
"msg_date": "Fri, 16 Dec 2022 18:43:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "Hi, \r\nWith the addition of --copy option, pg_upgrade now has three possible transfer mode options. Currently, an error does not occur even if multiple transfer modes are specified. For example, we can also run \"pg_upgrade --link --copy --clone\" command. As discussed in Horiguchi-san's previous email, options like \"--mode=link|copy|clone\" can prevent this problem.\r\nThe attached patch uses the current implementation and performs a minimum check to prevent multiple transfer modes from being specified.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Peter Eisentraut <peter.eisentraut@enterprisedb.com> \r\nSent: Saturday, December 17, 2022 2:44 AM\r\nTo: Daniel Gustafsson <daniel@yesql.se>\r\nCc: PostgreSQL Hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: pg_upgrade: Make testing different transfer modes easier\r\n\r\nOn 14.12.22 10:40, Daniel Gustafsson wrote:\r\n>> On 14 Dec 2022, at 08:04, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\r\n>>\r\n>> On 07.12.22 17:33, Peter Eisentraut wrote:\r\n>>> I think if we want to make this configurable on the fly, and environment variable would be much easier, like\r\n>>> my $mode = $ENV{PG_TEST_PG_UPGRADE_MODE} || '--copy';\r\n>>\r\n>> Here is an updated patch set that incorporates this idea.\r\n> \r\n> I would prefer a small note about it in src/bin/pg_upgrade/TESTING to \r\n> document it outside of the code, but otherwise LGTM.\r\n> \r\n> +\t\t$mode,\r\n> \t\t'--check'\r\n> \t],\r\n> \r\n> ...\r\n> \r\n> -\t\t'-p', $oldnode->port, '-P', $newnode->port\r\n> +\t\t'-p', $oldnode->port, '-P', $newnode->port,\r\n> +\t\t$mode,\r\n> \t],\r\n> \r\n> Minor nitpick, but while in there should we take the opportunity to \r\n> add a trailing comma on the other two array declarations which now ends with --check?\r\n> It's good Perl practice and will make the code consistent.\r\n\r\ncommitted with these changes",
"msg_date": "Mon, 19 Dec 2022 00:39:28 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: pg_upgrade: Make testing different transfer modes easier"
},
{
"msg_contents": "> On 19 Dec 2022, at 01:39, Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\n\n> With the addition of --copy option, pg_upgrade now has three possible transfer mode options. Currently, an error does not occur even if multiple transfer modes are specified. For example, we can also run \"pg_upgrade --link --copy --clone\" command. As discussed in Horiguchi-san's previous email, options like \"--mode=link|copy|clone\" can prevent this problem.\n> The attached patch uses the current implementation and performs a minimum check to prevent multiple transfer modes from being specified.\n\nWe typically allow multiple invocations of the same parameter with a\nlast-one-wins strategy, and only error out when competing *different*\nparameters are present. A --mode=<string> parameter can still be added as\nsyntactic sugar, but multiple-choice parameters is not a commonly used pattern\nin postgres utilities (pg_dump/restore and pg_basebackup are ones that come to\nmind).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 25 Dec 2022 23:11:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade: Make testing different transfer modes easier"
}
]
[
{
"msg_contents": "Hi,\n\nHere's a work-in-progress patch that uses WaitEventSet for the main\nevent loop in the postmaster, with a latch as the wakeup mechanism for\n\"PM signals\" (requests from backends to do things like start a\nbackground worker, etc). There are still raw signals that are part of\nthe external interface (SIGHUP etc), but those handlers just set a\nflag and set the latch, instead of doing the state machine work. Some\nadvantages I can think of:\n\n1. Inherits various slightly more efficient modern kernel APIs for\nmultiplexing.\n2. Will automatically benefit from later improvements to WaitEventSet.\n3. Looks much more like the rest of our code.\n4. Requires no obscure signal programming knowledge to understand.\n5. Removes the strange call stacks we have, where most of postgres is\nforked from inside a signal handler.\n6. Might help with weirdness and bugs in some signal implementations\n(Cygwin, NetBSD?).\n7. Removes the need to stat() PROMOTE_SIGNAL_FILE and\nLOGROTATE_SIGNAL_FILE whenever PM signals are sent, now that SIGUSR1\nis less overloaded.\n8. It's a small step towards removing the need to emulate signals on Windows.\n\nIn order to avoid adding a new dependency on the contents of shared\nmemory, I introduced SetLatchRobustly() that will always use the slow\npath kernel wakeup primitive, even in cases where SetLatch() would\nnot. The idea here is that if one backend trashes shared memory,\nothers backends can still wake the postmaster even though it may\nappear that the postmaster isn't waiting or the latch is already set.\nIt would be possible to go further and have a robust wait mode that\ndoesn't read is_set too. It was indecision here that stopped me\nproposing this sooner...\n\nOne thing that might need more work is cleanup of the PM's WES in\nchild processes. Also I noticed in passing that the Windows kernel\nevent handles for latches are probably leaked on crash-reinit, but\nthat was already true, this just adds one more. Also the way I re-add\nthe latch every time through the event loop in case there was a\ncrash-reinit is stupid, I'll tidy that up.\n\nThis is something I extracted and rejuvenated from a larger set of\npatches I was hacking on a year or two ago to try to get rid of lots\nof users of raw signals. The recent thread about mamba's struggles\nand the possibility that latches might help reminded me to dust this\npart off, and potentially avoid some duplicated effort.\n\nI'm not saying this is free of bugs, but it's passing on CI and seems\nlike enough to share and see what people think.\n\n(Some other ideas I thought about back then: we could invent\nWL_SIGNAL, and not need all those global flag variables or the\nhandlers that set the latch. For eg kqueue it's trivial, and for\nancient Unixes you could do a sort of home-made signalfd with a single\ngeneric signal handler that just does write(self_pipe, &siginfo,\nsizeof(siginfo)). But that starts to seems like refactoring for\nrefactoring's sake; that's probably how I'd write a native kqueue\nprogram, but it's not even that obvious to me that we really should be\npretending that Windows has signals, which put me off that idea.\nPerhaps in a post-fake-signals world we just have the postmaster's\nevent loop directly consume commands from pg_ctl from a control pipe?\nToo many decisions at once, I gave that line of thinking up for now.)",
"msg_date": "Fri, 2 Dec 2022 10:12:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 10:12:25 +1300, Thomas Munro wrote:\n> Here's a work-in-progress patch that uses WaitEventSet for the main\n> event loop in the postmaster\n\nWee!\n\n\n> with a latch as the wakeup mechanism for \"PM signals\" (requests from\n> backends to do things like start a background worker, etc).\n\nHm - is that directly related? ISTM that using a WES in the main loop, and\nchanging pmsignal.c to a latch are somewhat separate things?\n\nUsing a latch for pmsignal.c seems like a larger lift, because it means that\nall of latch.c needs to be robust against a corrupted struct Latch.\n\n\n> In order to avoid adding a new dependency on the contents of shared\n> memory, I introduced SetLatchRobustly() that will always use the slow\n> path kernel wakeup primitive, even in cases where SetLatch() would\n> not. The idea here is that if one backend trashes shared memory,\n> others backends can still wake the postmaster even though it may\n> appear that the postmaster isn't waiting or the latch is already set.\n\nWhy is that a concern that needs to be addressed?\n\n\nISTM that the important thing is that either a) the postmaster's latch can't\nbe corrupted, because it's not shared with backends or b) struct Latch can be\noverwritten with random contents without causing additional problems in\npostmaster.\n\nI don't think b) is the case as the patch stands. Imagine some process\noverwriting pm_latch->owner_pid. That'd then break the SetLatch() in\npostmaster's signal handler, because it wouldn't realize that itself needs to\nbe woken up, and we'd just signal some random process.\n\n\nIt doesn't seem trivial (but not impossible either) to make SetLatch() robust\nagainst arbitrary corruption. So it seems easier to me to just put the latch\nin process local memory, and do a SetLatch() in postmaster's SIGUSR1 handler.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 17:40:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 2:40 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-02 10:12:25 +1300, Thomas Munro wrote:\n> > with a latch as the wakeup mechanism for \"PM signals\" (requests from\n> > backends to do things like start a background worker, etc).\n>\n> Hm - is that directly related? ISTM that using a WES in the main loop, and\n> changing pmsignal.c to a latch are somewhat separate things?\n\nYeah, that's a good question. This comes from a larger patch set\nwhere my *goal* was to use latches everywhere possible for\ninterprocess wakeups, but it does indeed make a lot of sense to do the\npostmaster WaitEventSet retrofit completely independently of that, and\nleaving the associated robustness problems for later proposals (the\nposted patch clearly fails to solve them).\n\n> I don't think b) is the case as the patch stands. Imagine some process\n> overwriting pm_latch->owner_pid. That'd then break the SetLatch() in\n> postmaster's signal handler, because it wouldn't realize that itself needs to\n> be woken up, and we'd just signal some random process.\n\nRight. At some point I had an idea about a non-shared table of\nlatches where OS-specific things like pids and HANDLEs live, so only\nthe maybe_waiting and is_set flags are in shared memory, and even\nthose are ignored when accessing the latch in 'robust' mode (they're\nonly optimisations after all). I didn't try it though. First you\nmight have to switch to a model with a finite set of latches\nidentified by index, or something like that. But I like your idea of\nseparating that whole problem.\n\n> It doesn't seem trivial (but not impossible either) to make SetLatch() robust\n> against arbitrary corruption. So it seems easier to me to just put the latch\n> in process local memory, and do a SetLatch() in postmaster's SIGUSR1 handler.\n\nAlright, good idea, I'll do a v2 like that.\n\n\n",
"msg_date": "Fri, 2 Dec 2022 15:36:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 3:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Dec 2, 2022 at 2:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > It doesn't seem trivial (but not impossible either) to make SetLatch() robust\n> > against arbitrary corruption. So it seems easier to me to just put the latch\n> > in process local memory, and do a SetLatch() in postmaster's SIGUSR1 handler.\n>\n> Alright, good idea, I'll do a v2 like that.\n\nHere's an iteration like that. Still WIP grade. It passes, but there\nmust be something I don't understand about this computer program yet,\nbecause if I move the \"if (pending_...\" section up into the block\nwhere WL_LATCH_SET has arrived (instead of testing those variables\nevery time through the loop), a couple of tests leave zombie\n(unreaped) processes behind, indicating that something funky happened\nto the state machine that I haven't yet grokked. Will look more next\nweek.\n\nBy the way, I think if we do this and then also do\ns/select(/WaitLatchOrSocket(/ in auth.c's RADIUS code, then we could\nthen drop a chunk of newly unreachable code in\nsrc/backend/port/win32/socket.c (though maybe I missed something; it's\nquite hard to grep for \"select\" in a SQL database :-D). There's also\na bunch of suspect stuff in there about UDP that is already dead\nthanks to the pgstats work.",
"msg_date": "Sat, 3 Dec 2022 10:41:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Sat, Dec 3, 2022 at 10:41 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's an iteration like that. Still WIP grade. It passes, but there\n> must be something I don't understand about this computer program yet,\n> because if I move the \"if (pending_...\" section up into the block\n> where WL_LATCH_SET has arrived (instead of testing those variables\n> every time through the loop), a couple of tests leave zombie\n> (unreaped) processes behind, indicating that something funky happened\n> to the state machine that I haven't yet grokked. Will look more next\n> week.\n\nDuh. The reason for that was the pre-existing special case for\nPM_WAIT_DEAD_END, which used a sleep(100ms) loop to wait for children\nto exit, which I needed to change to a latch wait. Fixed in the next\niteration, attached.\n\nThe reason for the existing sleep-based approach was that we didn't\nwant to accept any more connections (or spin furiously because the\nlisten queue was non-empty). So in this version I invented a way to\nsuppress socket events temporarily with WL_SOCKET_IGNORE, and then\nreactivate them after crash reinit.\n\nStill WIP, but I hope travelling in the right direction.",
"msg_date": "Mon, 5 Dec 2022 22:45:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-05 22:45:57 +1300, Thomas Munro wrote:\n> On Sat, Dec 3, 2022 at 10:41 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Here's an iteration like that. Still WIP grade. It passes, but there\n> > must be something I don't understand about this computer program yet,\n> > because if I move the \"if (pending_...\" section up into the block\n> > where WL_LATCH_SET has arrived (instead of testing those variables\n> > every time through the loop), a couple of tests leave zombie\n> > (unreaped) processes behind, indicating that something funky happened\n> > to the state machine that I haven't yet grokked. Will look more next\n> > week.\n> \n> Duh. The reason for that was the pre-existing special case for\n> PM_WAIT_DEAD_END, which used a sleep(100ms) loop to wait for children\n> to exit, which I needed to change to a latch wait. Fixed in the next\n> iteration, attached.\n> \n> The reason for the existing sleep-based approach was that we didn't\n> want to accept any more connections (or spin furiously because the\n> listen queue was non-empty). So in this version I invented a way to\n> suppress socket events temporarily with WL_SOCKET_IGNORE, and then\n> reactivate them after crash reinit.\n\nThat seems like an odd special flag. Why do we need it? Is it just because we\nwant to have assertions ensuring that something is being queried?\n\n\n> * WL_SOCKET_ACCEPT is a new event for an incoming connection (on Unix,\n> this is just another name for WL_SOCKET_READABLE, but Window has a\n> different underlying event; this mirrors WL_SOCKET_CONNECTED on the\n> other end of a connection)\n\nPerhaps worth committing separately and soon? Seems pretty uncontroversial\nfrom here.\n\n\n> +/*\n> + * Object representing the state of a postmaster.\n> + *\n> + * XXX Lots of global variables could move in here.\n> + */\n> +typedef struct\n> +{\n> +\tWaitEventSet\t*wes;\n> +} Postmaster;\n> +\n\nSeems weird to introduce this but then basically have it be unused. I'd say\neither have a preceding patch move at least a few members into it, or just\nomit it for now.\n\n\n> +\t/* This may configure SIGURG, depending on platform. */\n> +\tInitializeLatchSupport();\n> +\tInitLocalLatch();\n\nI'm mildly preferring InitProcessLocalLatch(), but not sure why - there's not\nreally a conflicting meaning of \"local\" here.\n\n\n> +/*\n> + * Initialize the WaitEventSet we'll use in our main event loop.\n> + */\n> +static void\n> +InitializeWaitSet(Postmaster *postmaster)\n> +{\n> +\t/* Set up a WaitEventSet for our latch and listening sockets. */\n> +\tpostmaster->wes = CreateWaitEventSet(CurrentMemoryContext, 1 + MAXLISTEN);\n> +\tAddWaitEventToSet(postmaster->wes, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);\n> +\tfor (int i = 0; i < MAXLISTEN; i++)\n> +\t{\n> +\t\tint\t\t\tfd = ListenSocket[i];\n> +\n> +\t\tif (fd == PGINVALID_SOCKET)\n> +\t\t\tbreak;\n> +\t\tAddWaitEventToSet(postmaster->wes, WL_SOCKET_ACCEPT, fd, NULL, NULL);\n> +\t}\n> +}\n\nThe naming seems like it could be confused with latch.h\ninfrastructure. InitPostmasterWaitSet()?\n\n\n> +/*\n> + * Activate or deactivate the server socket events.\n> + */\n> +static void\n> +AdjustServerSocketEvents(Postmaster *postmaster, bool active)\n> +{\n> +\tfor (int pos = 1; pos < GetNumRegisteredWaitEvents(postmaster->wes); ++pos)\n> +\t\tModifyWaitEvent(postmaster->wes,\n> +\t\t\t\t\t\tpos, active ? WL_SOCKET_ACCEPT : WL_SOCKET_IGNORE,\n> +\t\t\t\t\t\tNULL);\n> +}\n\nThis seems to hardcode the specific wait events we're waiting for based on\nlatch.c infrastructure. Not really convinced that's a good idea.\n\n> +\t\t/*\n> +\t\t * Latch set by signal handler, or new connection pending on any of our\n> +\t\t * sockets? If the latter, fork a child process to deal with it.\n> +\t\t */\n> +\t\tfor (int i = 0; i < nevents; i++)\n> \t\t{\n> -\t\t\tif (errno != EINTR && errno != EWOULDBLOCK)\n> +\t\t\tif (events[i].events & WL_LATCH_SET)\n> \t\t\t{\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errcode_for_socket_access(),\n> -\t\t\t\t\t\t errmsg(\"select() failed in postmaster: %m\")));\n> -\t\t\t\treturn STATUS_ERROR;\n> +\t\t\t\tResetLatch(MyLatch);\n> +\n> +\t\t\t\t/* Process work scheduled by signal handlers. */\n> +\t\t\t\tif (pending_action_request)\n> +\t\t\t\t\tprocess_action_request(postmaster);\n> +\t\t\t\tif (pending_child_exit)\n> +\t\t\t\t\tprocess_child_exit(postmaster);\n> +\t\t\t\tif (pending_reload_request)\n> +\t\t\t\t\tprocess_reload_request();\n> +\t\t\t\tif (pending_shutdown_request)\n> +\t\t\t\t\tprocess_shutdown_request(postmaster);\n> \t\t\t}\n\nIs the order of operations here quite right? Shouldn't we process a shutdown\nrequest before the others? And a child exit before the request to start an\nautovac worker etc?\n\nISTM it should be 1) shutdown request 2) child exit 3) config reload 4) action\nrequest.\n\n\n> /*\n> - * pmdie -- signal handler for processing various postmaster signals.\n> + * pg_ctl uses SIGTERM, SIGINT and SIGQUIT to request different types of\n> + * shutdown.\n> */\n> static void\n> -pmdie(SIGNAL_ARGS)\n> +handle_shutdown_request_signal(SIGNAL_ARGS)\n> {\n> -\tint\t\t\tsave_errno = errno;\n> -\n> -\tereport(DEBUG2,\n> -\t\t\t(errmsg_internal(\"postmaster received signal %d\",\n> -\t\t\t\t\t\t\t postgres_signal_arg)));\n> +\tint save_errno = errno;\n> \n> \tswitch (postgres_signal_arg)\n> \t{\n> \t\tcase SIGTERM:\n> +\t\t\tpending_shutdown_request = SmartShutdown;\n> +\t\t\tbreak;\n> +\t\tcase SIGINT:\n> +\t\t\tpending_shutdown_request = FastShutdown;\n> +\t\t\tbreak;\n> +\t\tcase SIGQUIT:\n> +\t\t\tpending_shutdown_request = ImmediateShutdown;\n> +\t\t\tbreak;\n> +\t}\n> +\tSetLatch(MyLatch);\n> +\n> +\terrno = save_errno;\n> +}\n\nHm, not having the \"postmaster received signal\" message anymore seems like a\nloss when debugging things. I think process_shutdown_request() should emit\nsomething like it.\n\nI wonder if we should have a elog_sighand() that's written to be signal\nsafe. I've written versions of that numerous times for debugging, and it's a\nbit silly to do that over and over again.\n\n\n\n> @@ -2905,23 +2926,33 @@ pmdie(SIGNAL_ARGS)\n> \t\t\t * Now wait for backends to exit. If there are none,\n> \t\t\t * PostmasterStateMachine will take the next step.\n> \t\t\t */\n> -\t\t\tPostmasterStateMachine();\n> +\t\t\tPostmasterStateMachine(postmaster);\n> \t\t\tbreak;\n\nI'm by now fairly certain that it's a bad idea to have this change mixed in\nwith the rest of this large-ish change.\n\n\n> static void\n> -PostmasterStateMachine(void)\n> +PostmasterStateMachine(Postmaster *postmaster)\n> {\n> \t/* If we're doing a smart shutdown, try to advance that state. */\n> \tif (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n> @@ -3819,6 +3849,9 @@ PostmasterStateMachine(void)\n> \t\t\tAssert(AutoVacPID == 0);\n> \t\t\t/* syslogger is not considered here */\n> \t\t\tpmState = PM_NO_CHILDREN;\n> +\n> +\t\t\t/* re-activate server socket events */\n> +\t\t\tAdjustServerSocketEvents(postmaster, true);\n> \t\t}\n> \t}\n> \n> @@ -3905,6 +3938,9 @@ PostmasterStateMachine(void)\n> \t\tpmState = PM_STARTUP;\n> \t\t/* crash recovery started, reset SIGKILL flag */\n> \t\tAbortStartTime = 0;\n> +\n> +\t\t/* start accepting server socket connection events again */\n> +\t\treenable_server_socket_events = true;\n> \t}\n> }\n\nI don't think reenable_server_socket_events does anything as the patch stands\n- I don't see it being checked anywhere? And in the path above, you're using\nAdjustServerSocketEvents() directly.\n\n\n> @@ -4094,6 +4130,7 @@ BackendStartup(Port *port)\n> \t/* Hasn't asked to be notified about any bgworkers yet */\n> \tbn->bgworker_notify = false;\n> \n> +\tPG_SETMASK(&BlockSig);\n> #ifdef EXEC_BACKEND\n> \tpid = backend_forkexec(port);\n> #else\t\t\t\t\t\t\t/* !EXEC_BACKEND */\n\nThere are other calls to fork_process() - why don't they need the same\ntreatment?\n\nPerhaps we should add an assertion to fork_process() ensuring that signals are\nmasked?\n\n\n> diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c\n> index eb3a569aae..3bfef592eb 100644\n> --- a/src/backend/storage/ipc/latch.c\n> +++ b/src/backend/storage/ipc/latch.c\n> @@ -283,6 +283,17 @@ InitializeLatchSupport(void)\n> #ifdef WAIT_USE_SIGNALFD\n> \tsigset_t\tsignalfd_mask;\n> \n> +\tif (IsUnderPostmaster)\n> +\t{\n> +\t\tif (signal_fd != -1)\n> +\t\t{\n> +\t\t\t/* Release postmaster's signal FD; ignore any error */\n> +\t\t\t(void) close(signal_fd);\n> +\t\t\tsignal_fd = -1;\n> +\t\t\tReleaseExternalFD();\n> +\t\t}\n> +\t}\n> +\n\nHm - arguably it's a bug that we don't do this right now, correct?\n\n> @@ -1201,6 +1214,7 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)\n> \t\t event->events == WL_POSTMASTER_DEATH ||\n> \t\t (event->events & (WL_SOCKET_READABLE |\n> \t\t\t\t\t\t\t WL_SOCKET_WRITEABLE |\n> +\t\t\t\t\t\t\t WL_SOCKET_IGNORE |\n> \t\t\t\t\t\t\t WL_SOCKET_CLOSED)));\n> \n> \tif (event->events == WL_POSTMASTER_DEATH)\n> @@ -1312,6 +1326,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)\n> \t\t\tflags |= FD_WRITE;\n> \t\tif (event->events & WL_SOCKET_CONNECTED)\n> \t\t\tflags |= FD_CONNECT;\n> +\t\tif (event->events & WL_SOCKET_ACCEPT)\n> +\t\t\tflags |= FD_ACCEPT;\n> \n> \t\tif (*handle == WSA_INVALID_EVENT)\n> \t\t{\n\nI wonder if the code would end up easier to understand if we handled\nWL_SOCKET_CONNECTED, WL_SOCKET_ACCEPT explicitly in the !WIN32 cases, rather\nthan redefining it to WL_SOCKET_READABLE.\n\n\n\n> diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\n> index 3082093d1e..655e881688 100644\n> --- a/src/backend/tcop/postgres.c\n> +++ b/src/backend/tcop/postgres.c\n> @@ -24,7 +24,6 @@\n> #include <signal.h>\n> #include <unistd.h>\n> #include <sys/resource.h>\n> -#include <sys/select.h>\n> #include <sys/socket.h>\n> #include <sys/time.h>\n\nDo you know why this include even existed?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:09:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 7:09 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-05 22:45:57 +1300, Thomas Munro wrote:\n> > The reason for the existing sleep-based approach was that we didn't\n> > want to accept any more connections (or spin furiously because the\n> > listen queue was non-empty). So in this version I invented a way to\n> > suppress socket events temporarily with WL_SOCKET_IGNORE, and then\n> > reactivate them after crash reinit.\n>\n> That seems like an odd special flag. Why do we need it? Is it just because we\n> want to have assertions ensuring that something is being queried?\n\nYeah. Perhaps 0 would be a less clumsy way to say \"no events please\".\nI removed the assertions and did it that way in this next iteration.\n\nI realised that the previous approach didn't actually suppress POLLHUP\nand POLLERR in the poll and epoll implementations (even though our\ncode seems to think it needs to ask for those events, it's not\nnecessary, you get them anyway), and, being level-triggered, if those\nwere ever reported we'd finish up pegging the CPU to 100% until the\nchildren exited. Unlikely to happen with a server socket, but wrong\non principle, and maybe a problem for other potential users of this\ntemporary event suppression mode.\n\nOne way to fix that for the epoll version is to EPOLL_CTL_DEL and\nEPOLL_CTL_ADD, whenever transitioning to/from a zero event mask.\nTried like that in this version. Another approach would be to\n(finally) write DeleteWaitEvent() to do the same thing at a higher\nlevel... seems overkill for this.\n\nThe kqueue version was already doing that because of the way it was\nimplemented, and the poll and Windows versions needed only a small\nadjustment. I'm not too sure about the Windows change; my two ideas\nare passing the 0 through as shown in this version (not sure if it\nreally works the way I want, but it makes some sense and the\nWSAEventSelect() call doesn't fail...), or sticking a dummy unsignaled\nevent in the array passed to WaitForMultipleObjects().\n\nTo make sure this code is exercised, I made the state machine code\neager about silencing the socket events during PM_WAIT_DEAD_END, so\ncrash TAP tests go through the cycle. Regular non-crash shutdown also\nruns EPOLL_CTL_DEL/EV_DELETE, which stands out if you trace the\npostmaster.\n\n> > * WL_SOCKET_ACCEPT is a new event for an incoming connection (on Unix,\n> > this is just another name for WL_SOCKET_READABLE, but Window has a\n> > different underlying event; this mirrors WL_SOCKET_CONNECTED on the\n> > other end of a connection)\n>\n> Perhaps worth committing separately and soon? Seems pretty uncontroversial\n> from here.\n\nAlright, I split this into a separate patch.\n\n> > +/*\n> > + * Object representing the state of a postmaster.\n> > + *\n> > + * XXX Lots of global variables could move in here.\n> > + */\n> > +typedef struct\n> > +{\n> > + WaitEventSet *wes;\n> > +} Postmaster;\n> > +\n>\n> Seems weird to introduce this but then basically have it be unused. I'd say\n> either have a preceding patch move at least a few members into it, or just\n> omit it for now.\n\nAlright, I'll just have to make a global variable wait_set for now to\nkeep things simple.\n\n> > + /* This may configure SIGURG, depending on platform. */\n> > + InitializeLatchSupport();\n> > + InitLocalLatch();\n>\n> I'm mildly preferring InitProcessLocalLatch(), but not sure why - there's not\n> really a conflicting meaning of \"local\" here.\n\nDone.\n\n> > +/*\n> > + * Initialize the WaitEventSet we'll use in our main event loop.\n> > + */\n> > +static void\n> > +InitializeWaitSet(Postmaster *postmaster)\n> > +{\n> > + /* Set up a WaitEventSet for our latch and listening sockets. */\n> > + postmaster->wes = CreateWaitEventSet(CurrentMemoryContext, 1 + MAXLISTEN);\n> > + AddWaitEventToSet(postmaster->wes, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);\n> > + for (int i = 0; i < MAXLISTEN; i++)\n> > + {\n> > + int fd = ListenSocket[i];\n> > +\n> > + if (fd == PGINVALID_SOCKET)\n> > + break;\n> > + AddWaitEventToSet(postmaster->wes, WL_SOCKET_ACCEPT, fd, NULL, NULL);\n> > + }\n> > +}\n>\n> The naming seems like it could be confused with latch.h\n> infrastructure. InitPostmasterWaitSet()?\n\nOK.\n\n> > +/*\n> > + * Activate or deactivate the server socket events.\n> > + */\n> > +static void\n> > +AdjustServerSocketEvents(Postmaster *postmaster, bool active)\n> > +{\n> > + for (int pos = 1; pos < GetNumRegisteredWaitEvents(postmaster->wes); ++pos)\n> > + ModifyWaitEvent(postmaster->wes,\n> > + pos, active ? WL_SOCKET_ACCEPT : WL_SOCKET_IGNORE,\n> > + NULL);\n> > +}\n>\n> This seems to hardcode the specific wait events we're waiting for based on\n> latch.c infrastructure. Not really convinced that's a good idea.\n\nWhat are you objecting to? The assumption that the first socket is at\nposition 1? The use of GetNumRegisteredWaitEvents()?\n\n> > + /* Process work scheduled by signal handlers. */\n> > + if (pending_action_request)\n> > + process_action_request(postmaster);\n> > + if (pending_child_exit)\n> > + process_child_exit(postmaster);\n> > + if (pending_reload_request)\n> > + process_reload_request();\n> > + if (pending_shutdown_request)\n> > + process_shutdown_request(postmaster);\n\n> Is the order of operations here quite right? Shouldn't we process a shutdown\n> request before the others? And a child exit before the request to start an\n> autovac worker etc?\n>\n> ISTM it should be 1) shutdown request 2) child exit 3) config reload 4) action\n> request.\n\nOK, reordered like that.\n\n> > - ereport(DEBUG2,\n> > - (errmsg_internal(\"postmaster received signal %d\",\n> > - postgres_signal_arg)));\n\n> Hm, not having the \"postmaster received signal\" message anymore seems like a\n> loss when debugging things. I think process_shutdown_request() should emit\n> something like it.\n\nI added some of these.\n\n> I wonder if we should have a elog_sighand() that's written to be signal\n> safe. I've written versions of that numerous times for debugging, and it's a\n> bit silly to do that over and over again.\n\nRight, I was being dogmatic about kicking everything that doesn't have\na great big neon \"async-signal-safe\" sign on it out of the handlers.\n\n> > +\n> > + /* start accepting server socket connection events again */\n> > + reenable_server_socket_events = true;\n> > }\n> > }\n>\n> I don't think reenable_server_socket_events does anything as the patch stands\n> - I don't see it being checked anywhere? And in the path above, you're using\n> AdjustServerSocketEvents() directly.\n\nSorry, that was a left over unused variable from an earlier attempt,\nwhich I only noticed after clicking send. Removed.\n\n> > @@ -4094,6 +4130,7 @@ BackendStartup(Port *port)\n> > /* Hasn't asked to be notified about any bgworkers yet */\n> > bn->bgworker_notify = false;\n> >\n> > + PG_SETMASK(&BlockSig);\n> > #ifdef EXEC_BACKEND\n> > pid = backend_forkexec(port);\n> > #else /* !EXEC_BACKEND */\n>\n> There are other calls to fork_process() - why don't they need the same\n> treatment?\n>\n> Perhaps we should add an assertion to fork_process() ensuring that signals are\n> masked?\n\nIf we're going to put an assertion in there, we might as well consider\nsetting and restoring the mask in that wrapper. Tried like that in\nthis version.\n\n> > diff --git a/src/backend/storage/ipc/latch.c b/src/backend/storage/ipc/latch.c\n> > index eb3a569aae..3bfef592eb 100644\n> > --- a/src/backend/storage/ipc/latch.c\n> > +++ b/src/backend/storage/ipc/latch.c\n> > @@ -283,6 +283,17 @@ InitializeLatchSupport(void)\n> > #ifdef WAIT_USE_SIGNALFD\n> > sigset_t signalfd_mask;\n> >\n> > + if (IsUnderPostmaster)\n> > + {\n> > + if (signal_fd != -1)\n> > + {\n> > + /* Release postmaster's signal FD; ignore any error */\n> > + (void) close(signal_fd);\n> > + signal_fd = -1;\n> > + ReleaseExternalFD();\n> > + }\n> > + }\n> > +\n>\n> Hm - arguably it's a bug that we don't do this right now, correct?\n\nYes, I would say it's a non-live bug. A signalfd descriptor inherited\nby a child process isn't dangerous (it doesn't see the parent's\nsignals, it sees the child's signals), but it's a waste because we'd\nleak it. I guess we could re-use it instead but that seems a little\nweird. I've put this into a separate commit in case someone wants to\nargue for back-patching, but it's a pretty hypothetical concern since\nthe postmaster never initialised latch support before...\n\nOne thing that does seem a bit odd to me, though, is why we're\ncleaning up inherited descriptors in a function called\nInitializeLatchSupport(). I wonder if we should move it into\nFreeLatchSupportAfterFork()?\n\nWe should also close the postmaster's epoll fd, so I invented\nFreeWaitEventSetAfterFork(). I found that ClosePostmasterPorts() was\na good place to call that, though it doesn't really fit the name of\nthat function too well...\n\n> > @@ -1201,6 +1214,7 @@ WaitEventAdjustKqueue(WaitEventSet *set, WaitEvent *event, int old_events)\n> > event->events == WL_POSTMASTER_DEATH ||\n> > (event->events & (WL_SOCKET_READABLE |\n> > WL_SOCKET_WRITEABLE |\n> > + WL_SOCKET_IGNORE |\n> > WL_SOCKET_CLOSED)));\n> >\n> > if (event->events == WL_POSTMASTER_DEATH)\n> > @@ -1312,6 +1326,8 @@ WaitEventAdjustWin32(WaitEventSet *set, WaitEvent *event)\n> > flags |= FD_WRITE;\n> > if (event->events & WL_SOCKET_CONNECTED)\n> > flags |= FD_CONNECT;\n> > + if (event->events & WL_SOCKET_ACCEPT)\n> > + flags |= FD_ACCEPT;\n> >\n> > if (*handle == WSA_INVALID_EVENT)\n> > {\n>\n> I wonder if the code would end up easier to understand if we handled\n> WL_SOCKET_CONNECTED, WL_SOCKET_ACCEPT explicitly in the !WIN32 cases, rather\n> than redefining it to WL_SOCKET_READABLE.\n\nYeah maybe we could try that separately.\n\n> > diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\n> > index 3082093d1e..655e881688 100644\n> > --- a/src/backend/tcop/postgres.c\n> > +++ b/src/backend/tcop/postgres.c\n> > @@ -24,7 +24,6 @@\n> > #include <signal.h>\n> > #include <unistd.h>\n> > #include <sys/resource.h>\n> > -#include <sys/select.h>\n> > #include <sys/socket.h>\n> > #include <sys/time.h>\n>\n> Do you know why this include even existed?\n\nThat turned out to be a fun question to answer: apparently there used\nto be an optional 'multiplexed backend' mode, removed by commit\nd5bbe2aca5 in 1998. A single backend could be connected to multiple\nfrontends.",
"msg_date": "Wed, 7 Dec 2022 00:58:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On 2022-12-07 00:58:06 +1300, Thomas Munro wrote:\n> One way to fix that for the epoll version is to EPOLL_CTL_DEL and\n> EPOLL_CTL_ADD, whenever transitioning to/from a zero event mask.\n> Tried like that in this version. Another approach would be to\n> (finally) write DeleteWaitEvent() to do the same thing at a higher\n> level... seems overkill for this.\n\nWhat about just recreating the WES during crash restart?\n\n\n> > This seems to hardcode the specific wait events we're waiting for based on\n> > latch.c infrastructure. Not really convinced that's a good idea.\n>\n> What are you objecting to? The assumption that the first socket is at\n> position 1? The use of GetNumRegisteredWaitEvents()?\n\nThe latter.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 15:12:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "> From 61480441f67ca7fac96ca4bcfe85f27013a47aa8 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Tue, 6 Dec 2022 16:13:36 +1300\n> Subject: [PATCH v4 2/5] Don't leak a signalfd when using latches in the\n> postmaster.\n> \n> +\t\t/*\n> +\t\t * It would probably be safe to re-use the inherited signalfd since\n> +\t\t * signalfds only see the current processes pending signals, but it\n\nI think you mean \"current process's\", right ?\n\n\n",
"msg_date": "Tue, 6 Dec 2022 19:08:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 12:12 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-07 00:58:06 +1300, Thomas Munro wrote:\n> > One way to fix that for the epoll version is to EPOLL_CTL_DEL and\n> > EPOLL_CTL_ADD, whenever transitioning to/from a zero event mask.\n> > Tried like that in this version. Another approach would be to\n> > (finally) write DeleteWaitEvent() to do the same thing at a higher\n> > level... seems overkill for this.\n>\n> What about just recreating the WES during crash restart?\n\nIt seems a bit like cheating but yeah that's a super simple solution,\nand removes one patch from the stack. Done like that in this version.\n\n> > > This seems to hardcode the specific wait events we're waiting for based on\n> > > latch.c infrastructure. Not really convinced that's a good idea.\n> >\n> > What are you objecting to? The assumption that the first socket is at\n> > position 1? The use of GetNumRegisteredWaitEvents()?\n>\n> The latter.\n\nRemoved.",
"msg_date": "Wed, 7 Dec 2022 14:12:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 2:08 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > + /*\n> > + * It would probably be safe to re-use the inherited signalfd since\n> > + * signalfds only see the current processes pending signals, but it\n>\n> I think you mean \"current process's\", right ?\n\nFixed in v5, thanks.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 14:13:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 14:12:37 +1300, Thomas Munro wrote:\n> On Wed, Dec 7, 2022 at 12:12 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-12-07 00:58:06 +1300, Thomas Munro wrote:\n> > > One way to fix that for the epoll version is to EPOLL_CTL_DEL and\n> > > EPOLL_CTL_ADD, whenever transitioning to/from a zero event mask.\n> > > Tried like that in this version. Another approach would be to\n> > > (finally) write DeleteWaitEvent() to do the same thing at a higher\n> > > level... seems overkill for this.\n> >\n> > What about just recreating the WES during crash restart?\n> \n> It seems a bit like cheating but yeah that's a super simple solution,\n> and removes one patch from the stack. Done like that in this version.\n\nI somewhat wish we'd do that more aggressively during crash-restart, rather\nthan the opposite. Mostly around shared memory contents though, so perhaps\nthat's not that comparable...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 17:22:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Oops, v5 was broken as visible on cfbot (a last second typo broke it).\nHere's a better one.",
"msg_date": "Wed, 7 Dec 2022 16:16:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "I pushed the small preliminary patches. Here's a rebase of the main patch.",
"msg_date": "Fri, 23 Dec 2022 20:46:29 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 8:46 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I pushed the small preliminary patches. Here's a rebase of the main patch.\n\nHere are some questions I have considered. Anyone got an opinion\non point 3, in particular?\n\n1. Is it OK that we are now using APIs that might throw, in places\nwhere we weren't? I think so: we don't really expect WaitEventSet\nAPIs to throw unless something is pretty seriously wrong, and if you\nhack things to inject failures there then you get a FATAL error and\nthe postmaster exits and the children detect that. I think that is\nappropriate.\n\n2. Is it really OK to delete the pqsignal_pm() infrastructure? I\nthink so. The need for sa_mask to block all signals is gone: all\nsignal handlers should now be re-entrant (ie safe in case of\ninterruption while already in a signal handler), safe against stack\noverflow (pqsignal() still blocks re-entry for the *same* signal\nnumber, because we use sigaction() without SA_NODEFER, so a handler\ncan only be interrupted by a different signal, and the number of\nactions installed is finite and small), and safe to run at any time\n(ie safe to interrupt the user context because we just do known-good\nsig_atomic_t/syscall stuff and save/restore errno). The concern about\nSA_RESTART is gone, because we no longer use the underspecified\nselect() interface; the replacement implementation syscalls, even\npoll(), return with EINTR for handlers installed with SA_RESTART, but\nthat's now moot anyway because we have a latch that guarantees they\nreturn with a different event anyway. (FTR select() is nearly extinct\nin BE code, I found one other user and I plan to remove it, see RADIUS\nthread, CF #4103.)\n\n3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,\nSIGINT? If you send all of these extremely rapidly, it's\nindeterminate which one will be seen by handle_shutdown_request(). I\nthink that's probably acceptable? 
To be strict about processing only\nthe first one that is delivered, I think you'd need an sa_mask to\nblock all three signals, and then you wouldn't change\npending_shutdown_request if it's already set, which I'm willing to\ncode up if someone thinks that's important. (<vapourware>Ideally I\nwould invent WL_SIGNAL to consume signal events serially without\nhandlers or global variables</vapourware>.)\n\n4. Is anything new leaking into child processes due to this new\ninfrastructure? I don't think so; the postmaster's MemoryContext is\ndestroyed, and before that I'm releasing kernel resources on OSes that\nneed it (namely Linux, where the epoll fd and signalfd need to be\nclosed).\n\n5. Is the signal mask being correctly handled during forking? I\nthink so: I decided to push the masking logic directly into the\nroutine that forks, to make it easy to verify that all paths set the\nmask the way we want. (While thinking about that I noticed that\nsignals don't seem to be initially masked on Windows; I think that's a\npre-existing condition, and I assume we get away with it because\nnothing reaches the fake signal dispatch code soon enough to break\nanything? Not changing that in this patch.)\n\n6. Is the naming and style OK? Better ideas welcome, but basically I\ntried to avoid all unnecessary refactoring and changes, so no real\nlogic moves around, and the changes are pretty close to \"mechanical\".\nOne bikeshed decision was what to call the {handle,process}_XXX\nfunctions and associated flags. Maybe \"action\" isn't the best name;\nbut it could be a request from pg_ctl or a request from a child\nprocess. I went with newly invented names for these handlers rather\nthan \"handle_SIGUSR1\" etc because (1) the 3 different shutdown request\nsignals point to a common handler and (2) I hope to switch to latches\ninstead of SIGUSR1 for \"action\" in later work. 
But I could switch to\ngot_SIGUSR1 style variables if someone thinks it's better.\n\nHere's a new version, with small changes:\n* remove a stray reference to select() in a pqcomm.c comment\n* move PG_SETMASK(&UnBlockSig) below the bit that sets up SIGTTIN etc\n* pgindent",
"msg_date": "Sat, 7 Jan 2023 11:08:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-07 11:08:36 +1300, Thomas Munro wrote:\n> 1. Is it OK that we are now using APIs that might throw, in places\n> where we weren't? I think so: we don't really expect WaitEventSet\n> APIs to throw unless something is pretty seriously wrong, and if you\n> hack things to inject failures there then you get a FATAL error and\n> the postmaster exits and the children detect that. I think that is\n> appropriate.\n\nI think it's ok in principle. It might be that we'll find something to fix in\nthe future, but I don't see anything fundamental or obvious.\n\n\n> 2. Is it really OK to delete the pqsignal_pm() infrastructure? I\n> think so.\n\nSame.\n\n\n> 3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,\n> SIGINT? If you send all of these extremely rapidly, it's\n> indeterminate which one will be seen by handle_shutdown_request().\n\nThat doesn't seem optimal. I'm mostly worried that we can end up downgrading a\nshutdown request.\n\n\n> I think that's probably acceptable? To be strict about processing only the\n> first one that is delivered, I think you'd need an sa_mask to block all\n> three signals, and then you wouldn't change pending_shutdown_request if it's\n> already set, which I'm willing to code up if someone thinks that's\n> important. (<vapourware>Ideally I would invent WL_SIGNAL to consume signal\n> events serially without handlers or global variables</vapourware>.)\n\nHm. The need for blocking sa_mask solely comes from using one variable in\nthree signal handlers, right? It's not pretty, but to me the easiest fix here\nseems to be to have separate pending_{fast,smart,immediate}_shutdown_request\nvariables and deal with them in process_shutdown_request(). Might still make\nsense to have one pending_shutdown_request variable, to avoid unnecessary\nbranches before calling process_shutdown_request().\n\n\n> 5. Is the signal mask being correctly handled during forking? 
I\n> think so: I decided to push the masking logic directly into the\n> routine that forks, to make it easy to verify that all paths set the\n> mask the way we want.\n\nHm. If I understand correctly, you used sigprocmask() directly (vs\nPG_SETMASK()) in fork_process() because you want the old mask? But why do we\nrestore the prior mask, instead of just using PG_SETMASK(&UnBlockSig); as we\nstill do in a bunch of places in the postmaster?\n\nNot that I'm advocating for that, but would there be any real harm in just\ncontinuing to accept signals post fork? Now all the signal handlers should\njust end up pointlessly setting a local variable that's not going to be read\nany further? If true, it'd be good to add a comment explaining that this is\njust a belt-and-suspenders thing.\n\n\n> (While thinking about that I noticed that signals don't seem to be initially\n> masked on Windows; I think that's a pre-existing condition, and I assume we\n> get away with it because nothing reaches the fake signal dispatch code soon\n> enough to break anything? Not changing that in this patch.)\n\nIt's indeed a bit odd that we do pgwin32_signal_initialize() before the\ninitmask() and PG_SETMASK(&BlockSig) in InitPostmasterChild(). I guess it's\nkinda harmless though?\n\n\nI'm now somewhat weirded out by the choice to do pg_strong_random_init() in\nfork_process() rather than InitPostmasterChild(). Seems odd.\n\n\n> 6. Is the naming and style OK? Better ideas welcome, but basically I\n> tried to avoid all unnecessary refactoring and changes, so no real\n> logic moves around, and the changes are pretty close to \"mechanical\".\n> One bikeshed decision was what to call the {handle,process}_XXX\n> functions and associated flags.\n\nI wonder if it'd be good to have a _pm_ in the name.\n\n\n> Maybe \"action\" isn't the best name;\n\nYea, I don't like it. A shutdown is also an action, etc. What about just using\n_pmsignal_? 
It's a bit odd because there's two signals in the name, but it\nstill feels better than 'action' and better than the existing sigusr1_handler.\n\n\n> +\n> +/* I/O multiplexing object */\n> +static WaitEventSet *wait_set;\n\nI'd name it a bit more obviously connected to postmaster, particularly because\nit does survive into forked processes and needs to be closed there.\n\n\n> +/*\n> + * Activate or deactivate notifications of server socket events. Since we\n> + * don't currently have a way to remove events from an existing WaitEventSet,\n> + * we'll just destroy and recreate the whole thing. This is called during\n> + * shutdown so we can wait for backends to exit without accepting new\n> + * connections, and during crash reinitialization when we need to start\n> + * listening for new connections again.\n> + */\n\nI'd maybe reference that this gets cleaned up via ClosePostmasterPorts(), it's\nnot *immediately* obvious.\n\n\n> +static void\n> +ConfigurePostmasterWaitSet(bool accept_connections)\n> +{\n> +\tif (wait_set)\n> +\t\tFreeWaitEventSet(wait_set);\n> +\twait_set = NULL;\n> +\n> +\twait_set = CreateWaitEventSet(CurrentMemoryContext, 1 + MAXLISTEN);\n> +\tAddWaitEventToSet(wait_set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);\n\nIs there any reason to use MAXLISTEN here? We know how many sockets we're\nlistening on by this point, I think? 
No idea if the overhead matters anywhere,\nbut ...\n\nI guess all the other code already does so, but afaict we don't dynamically\nallocate resources there for things like ListenSocket[].\n\n\n\n> -\t\t/* Now check the select() result */\n> -\t\tif (selres < 0)\n> -\t\t{\n> -\t\t\tif (errno != EINTR && errno != EWOULDBLOCK)\n> -\t\t\t{\n> -\t\t\t\tereport(LOG,\n> -\t\t\t\t\t\t(errcode_for_socket_access(),\n> -\t\t\t\t\t\t errmsg(\"select() failed in postmaster: %m\")));\n> -\t\t\t\treturn STATUS_ERROR;\n> -\t\t\t}\n> -\t\t}\n> +\t\tnevents = WaitEventSetWait(wait_set,\n> +\t\t\t\t\t\t\t\t timeout.tv_sec * 1000 + timeout.tv_usec / 1000,\n> +\t\t\t\t\t\t\t\t events,\n> +\t\t\t\t\t\t\t\t lengthof(events),\n> +\t\t\t\t\t\t\t\t 0 /* postmaster posts no wait_events */ );\n> \n> \t\t/*\n> -\t\t * New connection pending on any of our sockets? If so, fork a child\n> -\t\t * process to deal with it.\n> +\t\t * Latch set by signal handler, or new connection pending on any of\n> +\t\t * our sockets? If the latter, fork a child process to deal with it.\n> \t\t */\n> -\t\tif (selres > 0)\n> +\t\tfor (int i = 0; i < nevents; i++)\n> \t\t{\n\nHm. This is preexisting behaviour, but now it seems somewhat odd that we might\nend up happily forking a backend for each socket without checking signals\ninbetween. Forking might take a while, and if a signal arrived since the\nWaitEventSetWait() we'll not react to it.\n\n\n> static void\n> PostmasterStateMachine(void)\n> @@ -3796,6 +3819,9 @@ PostmasterStateMachine(void)\n> \n> \tif (pmState == PM_WAIT_DEAD_END)\n> \t{\n> +\t\t/* Don't allow any new socket connection events. */\n> +\t\tConfigurePostmasterWaitSet(false);\n\nHm. Is anything actually using the wait set until we re-create it with (true)\nbelow?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Jan 2023 15:25:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-07 11:08:36 +1300, Thomas Munro wrote:\n> > 3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,\n> > SIGINT? If you send all of these extremely rapidly, it's\n> > indeterminate which one will be seen by handle_shutdown_request().\n>\n> That doesn't seem optimal. I'm mostly worried that we can end up downgrading a\n> shutdown request.\n\nI was contemplating whether I needed to do some more push-ups to\nprefer the first delivered signal (instead of the last), but you're\nsaying that it would be enough to prefer the fastest shutdown type, in\ncases where more than one signal was handled between server loops.\nWFM.\n\n> It's not pretty, but to me the easiest fix here\n> seems to be to have separate pending_{fast,smart,immediate}_shutdown_request\n> variables and deal with them in process_shutdown_request(). Might still make\n> sense to have one pending_shutdown_request variable, to avoid unnecessary\n> branches before calling process_shutdown_request().\n\nOK, tried that way.\n\n> > 5. Is the signal mask being correctly handled during forking? I\n> > think so: I decided to push the masking logic directly into the\n> > routine that forks, to make it easy to verify that all paths set the\n> > mask the way we want.\n>\n> Hm. If I understand correctly, you used sigprocmask() directly (vs\n> PG_SETMASK()) in fork_process() because you want the old mask? But why do we\n> restore the prior mask, instead of just using PG_SETMASK(&UnBlockSig); as we\n> still do in a bunch of places in the postmaster?\n\nIt makes zero difference in practice but I think it's a nicer way to\ncode it because it doesn't make an unnecessary assumption about what\nthe signal mask was on entry.\n\n> Not that I'm advocating for that, but would there be any real harm in just\n> continuing to accept signals post fork? 
Now all the signal handlers should\n> just end up pointlessly setting a local variable that's not going to be read\n> any further? If true, it'd be good to add a comment explaining that this is\n> just a belt-and-suspenders thing.\n\nSeems plausible and a nice idea to research. I think it might take\nsome analysis of important signals that children might miss before\nthey install their own handlers. Comment added.\n\n> > 6. Is the naming and style OK? Better ideas welcome, but basically I\n> > tried to avoid all unnecessary refactoring and changes, so no real\n> > logic moves around, and the changes are pretty close to \"mechanical\".\n> > One bikeshed decision was what to call the {handle,process}_XXX\n> > functions and associated flags.\n>\n> I wonder if it'd be good to have a _pm_ in the name.\n\nI dunno about this one, it's all static stuff in a file called\npostmaster.c and one (now) already has pm in it (see below).\n\n> > Maybe \"action\" isn't the best name;\n>\n> Yea, I don't like it. A shutdown is also an action, etc. What about just using\n> _pmsignal_? It's a bit odd because there's two signals in the name, but it\n> still feels better than 'action' and better than the existing sigusr1_handler.\n\nDone.\n\n> > +\n> > +/* I/O multiplexing object */\n> > +static WaitEventSet *wait_set;\n>\n> I'd name it a bit more obviously connected to postmaster, particularly because\n> it does survive into forked processes and needs to be closed there.\n\nDone, as pm_wait_set.\n\n> > +/*\n> > + * Activate or deactivate notifications of server socket events. Since we\n> > + * don't currently have a way to remove events from an existing WaitEventSet,\n> > + * we'll just destroy and recreate the whole thing. 
This is called during\n> > + * shutdown so we can wait for backends to exit without accepting new\n> > + * connections, and during crash reinitialization when we need to start\n> > + * listening for new connections again.\n> > + */\n>\n> I'd maybe reference that this gets cleaned up via ClosePostmasterPorts(), it's\n> not *immediately* obvious.\n\nDone.\n\n> > +static void\n> > +ConfigurePostmasterWaitSet(bool accept_connections)\n> > +{\n> > + if (wait_set)\n> > + FreeWaitEventSet(wait_set);\n> > + wait_set = NULL;\n> > +\n> > + wait_set = CreateWaitEventSet(CurrentMemoryContext, 1 + MAXLISTEN);\n> > + AddWaitEventToSet(wait_set, WL_LATCH_SET, PGINVALID_SOCKET, MyLatch, NULL);\n>\n> Is there any reason to use MAXLISTEN here? We know how many sockets we're\n> listening on by this point, I think? No idea if the overhead matters anywhere,\n> but ...\n\nFixed.\n\n> > /*\n> > - * New connection pending on any of our sockets? If so, fork a child\n> > - * process to deal with it.\n> > + * Latch set by signal handler, or new connection pending on any of\n> > + * our sockets? If the latter, fork a child process to deal with it.\n> > */\n> > - if (selres > 0)\n> > + for (int i = 0; i < nevents; i++)\n> > {\n>\n> Hm. This is preexisting behaviour, but now it seems somewhat odd that we might\n> end up happily forking a backend for each socket without checking signals\n> inbetween. Forking might take a while, and if a signal arrived since the\n> WaitEventSetWait() we'll not react to it.\n\nRight, so if you have 64 server sockets and they all have clients\nwaiting on their listen queue, we'll service one connection from each\nbefore getting back to checking for pmsignals or shutdown, and that's\nunchanged in this patch. 
(FWIW I also noticed that problem while\nexperimenting with the idea that you could handle multiple clients in\none go on OSes that report the listen queue depth size along with\nWL_SOCKET_ACCEPT events, but didn't pursue it...)\n\nI guess we could check every time through the nevents loop. I may\nlook into that in a later patch, but I prefer to keep the same policy\nin this patch.\n\n> > static void\n> > PostmasterStateMachine(void)\n> > @@ -3796,6 +3819,9 @@ PostmasterStateMachine(void)\n> >\n> > if (pmState == PM_WAIT_DEAD_END)\n> > {\n> > + /* Don't allow any new socket connection events. */\n> > + ConfigurePostmasterWaitSet(false);\n>\n> Hm. Is anything actually using the wait set until we re-create it with (true)\n> below?\n\nYes. While in PM_WAIT_DEAD_END state, waiting for children to exit,\nthere may be clients trying to connect. On master, we have a special\npg_usleep(100000L) instead of select() just for that state so we can\nignore incoming connections while waiting for the SIGCHLD reaper to\nadvance our state, but in this new world that's not enough. We need\nto wait for the latch to be set by handle_child_exit_signal(). So I\nused the regular WES to wait for the latch (that is, no more special\ncase for that state), but I need to ignore socket events. If I\ndidn't, then an incoming connection sitting in the listen queue would\ncause the server loop to burn 100% CPU reporting a level-triggered\nWL_SOCKET_ACCEPT for sockets that we don't want to accept (yet).",
"msg_date": "Sat, 7 Jan 2023 18:08:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-07 18:08:11 +1300, Thomas Munro wrote:\n> On Sat, Jan 7, 2023 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-07 11:08:36 +1300, Thomas Munro wrote:\n> > > 3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,\n> > > SIGINT? If you send all of these extremely rapidly, it's\n> > > indeterminate which one will be seen by handle_shutdown_request().\n> >\n> > That doesn't seem optimal. I'm mostly worried that we can end up downgrading a\n> > shutdown request.\n> \n> I was contemplating whether I needed to do some more push-ups to\n> prefer the first delivered signal (instead of the last), but you're\n> saying that it would be enough to prefer the fastest shutdown type, in\n> cases where more than one signal was handled between server loops.\n> WFM.\n\nI don't see any need for such an order requirement - in case of receiving a\n\"less severe\" shutdown request first, we'd process the more severe one soon\nafter. There's nothing to be gained by trying to follow the order of the\nincoming signals.\n\nAfaict that's also the behaviour today. pmdie() has blocks like this:\n\t\tcase SIGTERM:\n\n\t\t\t/*\n\t\t\t * Smart Shutdown:\n\t\t\t *\n\t\t\t * Wait for children to end their work, then shut down.\n\t\t\t */\n\t\t\tif (Shutdown >= SmartShutdown)\n\t\t\t\tbreak;\n\n\n\nI briefly wondered about deduplicating that code, now that we know the\nrequested mode ahead of the switch. So it could be something like:\n\n/* don't interrupt an already in-progress shutdown in a more \"severe\" mode */\nif (mode < Shutdown)\n return;\n\nbut that's again probably something for later.\n\n\n> > > 5. Is the signal mask being correctly handled during forking? 
If I understand correctly, you used sigprocmask() directly (vs\n> > PG_SETMASK()) in fork_process() because you want the old mask? But why do we\n> > restore the prior mask, instead of just using PG_SETMASK(&UnBlockSig); as we\n> > still do in a bunch of places in the postmaster?\n> \n> It makes zero difference in practice but I think it's a nicer way to\n> code it because it doesn't make an unnecessary assumption about what\n> the signal mask was on entry.\n\nHeh, to me doing something slightly different than in other places actually\nseems to make it a bit less nice :). There's also some benefit in the\ncertainty of knowing what the mask will be. But it doesn't matter much.\n\n\n> > > 6. Is the naming and style OK? Better ideas welcome, but basically I\n> > > tried to avoid all unnecessary refactoring and changes, so no real\n> > > logic moves around, and the changes are pretty close to \"mechanical\".\n> > > One bikeshed decision was what to call the {handle,process}_XXX\n> > > functions and associated flags.\n> >\n> > I wonder if it'd be good to have a _pm_ in the name.\n> \n> I dunno about this one, it's all static stuff in a file called\n> postmaster.c and one (now) already has pm in it (see below).\n\nI guess stuff like signal handlers and their state somehow seems more global\nto me than their C linkage type suggests. Hence the desire to be a bit more\n\"namespaced\" in their naming. I do find it somewhat annoying when reasonably\nimportant global variables aren't uniquely named when using a debugger...\n\nBut again, this isn't anything that should hold up the patch.\n\n\n> > Is there any reason to use MAXLISTEN here? We know how many sockets we're\n> > listening on by this point, I think? No idea if the overhead matters anywhere,\n> > but ...\n> \n> Fixed.\n\nI was thinking of determining the number once, in PostmasterMain(). But that's\nperhaps better done as a separate change... WFM.\n\n\n> > Hm. 
This is preexisting behaviour, but now it seems somewhat odd that we might\n> > end up happily forking a backend for each socket without checking signals\n> > inbetween. Forking might take a while, and if a signal arrived since the\n> > WaitEventSetWait() we'll not react to it.\n> \n> Right, so if you have 64 server sockets and they all have clients\n> waiting on their listen queue, we'll service one connection from each\n> before getting back to checking for pmsignals or shutdown, and that's\n> unchanged in this patch. (FWIW I also noticed that problem while\n> experimenting with the idea that you could handle multiple clients in\n> one go on OSes that report the listen queue depth size along with\n> WL_SOCKET_ACCEPT events, but didn't pursue it...)\n> \n> I guess we could check every time through the nevents loop. I may\n> look into that in a later patch, but I prefer to keep the same policy\n> in this patch.\n\nMakes sense. This was mainly me trying to make sure we're not changing subtle\nstuff accidentally (and trying to understand how things work currently, as a\nprerequisite).\n\n\n> > > static void\n> > > PostmasterStateMachine(void)\n> > > @@ -3796,6 +3819,9 @@ PostmasterStateMachine(void)\n> > >\n> > > if (pmState == PM_WAIT_DEAD_END)\n> > > {\n> > > + /* Don't allow any new socket connection events. */\n> > > + ConfigurePostmasterWaitSet(false);\n> >\n> > Hm. Is anything actually using the wait set until we re-create it with (true)\n> > below?\n> \n> Yes. While in PM_WAIT_DEAD_END state, waiting for children to exit,\n> there may be clients trying to connect.\n\nOh, Right. 
I had misread the diff, thinking the\nConfigurePostmasterWaitSet(false) was in the same PM_NO_CHILDREN branch that\nthe ConfigurePostmasterWaitSet(true) was in.\n\n\n> On master, we have a special pg_usleep(100000L) instead of select() just for\n> that state so we can ignore incoming connections while waiting for the\n> SIGCHLD reaper to advance our state, but in this new world that's not\n> enough. We need to wait for the latch to be set by\n> handle_child_exit_signal(). So I used the regular WES to wait for the latch\n> (that is, no more special case for that state), but I need to ignore socket\n> events. If I didn't, then an incoming connection sitting in the listen\n> queue would cause the server loop to burn 100% CPU reporting a\n> level-triggered WL_SOCKET_ACCEPT for sockets that we don't want to accept\n> (yet).\n\nYea, this is clearly the better approach.\n\n\nA few more code review comments:\n\nDetermineSleepTime() still deals with struct timeval, which we maintain at\nsome effort. Just to then convert it away from struct timeval in the\nWaitEventSetWait() call. That doesn't seem quite right, and is basically\nintroduced in this patch.\n\n\nI think ServerLoop still has an outdated comment:\n\n *\n * NB: Needs to be called with signals blocked\n\nwhich we aren't doing (nor need to be doing) anymore.\n\n\n> \t\t/*\n> -\t\t * New connection pending on any of our sockets? If so, fork a child\n> -\t\t * process to deal with it.\n> +\t\t * Latch set by signal handler, or new connection pending on any of\n> +\t\t * our sockets? 
If the latter, fork a child process to deal with it.\n> \t\t */\n> -\t\tif (selres > 0)\n> +\t\tfor (int i = 0; i < nevents; i++)\n> \t\t{\n> -\t\t\tint\t\t\ti;\n> -\n> -\t\t\tfor (i = 0; i < MAXLISTEN; i++)\n> +\t\t\tif (events[i].events & WL_LATCH_SET)\n> \t\t\t{\n> -\t\t\t\tif (ListenSocket[i] == PGINVALID_SOCKET)\n> -\t\t\t\t\tbreak;\n> -\t\t\t\tif (FD_ISSET(ListenSocket[i], &rmask))\n> +\t\t\t\tResetLatch(MyLatch);\n> +\n> +\t\t\t\t/* Process work scheduled by signal handlers. */\n\nVery minor: It feels a tad off to say that the work was scheduled by signal\nhandlers, it's either from other processes or by the OS. But ...\n\n\n> +/*\n> + * Child processes use SIGUSR1 to send 'pmsignals'. pg_ctl uses SIGUSR1 to ask\n> + * postmaster to check for logrotate and promote files.\n> + */\n\ns/send/notify us of/, since the concrete \"pmsignal\" is actually transported\noutside of the \"OS signal\" level?\n\n\nLGTM.\n\n\nI think this is a significant improvement, thanks for working on it.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 7 Jan 2023 14:55:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 11:55 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-07 18:08:11 +1300, Thomas Munro wrote:\n> > On Sat, Jan 7, 2023 at 12:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-01-07 11:08:36 +1300, Thomas Munro wrote:\n> > > > 3. Is it OK to clobber the shared pending flag for SIGQUIT, SIGTERM,\n> > > > SIGINT? If you send all of these extremely rapidly, it's\n> > > > indeterminate which one will be seen by handle_shutdown_request().\n> > >\n> > > That doesn't seem optimal. I'm mostly worried that we can end up downgrading a\n> > > shutdown request.\n> >\n> > I was contemplating whether I needed to do some more push-ups to\n> > prefer the first delivered signal (instead of the last), but you're\n> > saying that it would be enough to prefer the fastest shutdown type, in\n> > cases where more than one signal was handled between server loops.\n> > WFM.\n>\n> I don't see any need for such an order requirement - in case of receiving a\n> \"less severe\" shutdown request first, we'd process the more severe one soon\n> after. There's nothing to be gained by trying to follow the order of the\n> incoming signals.\n\nOh, I fully agree. I was working through the realisation that I might\nneed to serialise the handlers to implement the priority logic\ncorrectly (upgrades good, downgrades bad), but your suggestion\nfast-forwards to the right answer and doesn't require blocking, so I\nprefer it, and had already gone that way in v9. In this version I've\nadded a comment to explain that the outcome is the same in the end,\nand also fixed the flag clearing logic which was subtly wrong before.\n\n> > > I wonder if it'd be good to have a _pm_ in the name.\n> >\n> > I dunno about this one, it's all static stuff in a file called\n> > postmaster.c and one (now) already has pm in it (see below).\n>\n> I guess stuff like signal handlers and their state somehow seems more global\n> to me than their C linkage type suggests. 
Hence the desire to be a bit more\n> \"namespaced\" in their naming. I do find it somewhat annoying when reasonably\n> important global variables aren't uniquely named when using a debugger...\n\nAlright, renamed.\n\n> A few more code review comments:\n>\n> DetermineSleepTime() still deals with struct timeval, which we maintain at\n> some effort. Just to then convert it away from struct timeval in the\n> WaitEventSetWait() call. That doesn't seem quite right, and is basically\n> introduced in this patch.\n\nI agree, but I was trying to minimise the patch: signals and events\nstuff is a lot already. I didn't want to touch DetermineSleepTime()'s\ntime logic in the same commit. But here's a separate patch for that.\n\n> I think ServerLoop still has an outdated comment:\n>\n> *\n> * NB: Needs to be called with signals blocked\n\nFixed.\n\n> > + /* Process work scheduled by signal handlers. */\n>\n> Very minor: It feels a tad off to say that the work was scheduled by signal\n> handlers, it's either from other processes or by the OS. But ...\n\nOK, now it's \"requested via signal handlers\".\n\n> > +/*\n> > + * Child processes use SIGUSR1 to send 'pmsignals'. pg_ctl uses SIGUSR1 to ask\n> > + * postmaster to check for logrotate and promote files.\n> > + */\n>\n> s/send/notify us of/, since the concrete \"pmsignal\" is actually transported\n> outside of the \"OS signal\" level?\n\nFixed.\n\n> LGTM.\n\nThanks. Here's v10. I'll wait a bit longer to see if anyone else has feedback.",
"msg_date": "Wed, 11 Jan 2023 16:07:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 4:07 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. Here's v10. I'll wait a bit longer to see if anyone else has feedback.\n\nPushed, after a few very minor adjustments, mostly comments. Thanks\nfor the reviews and pointers. I think there are quite a lot of\nrefactoring and refinement opportunities unlocked by this change (I\nhave some draft proposals already), but for now I'll keep an eye on\nthe build farm.\n\n\n",
"msg_date": "Thu, 12 Jan 2023 17:03:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Pushed, after a few very minor adjustments, mostly comments. Thanks\n> for the reviews and pointers. I think there are quite a lot of\n> refactoring and refinement opportunities unlocked by this change (I\n> have some draft proposals already), but for now I'll keep an eye on\n> the build farm.\n\nskink seems to have found a problem:\n\n==2011873== VALGRINDERROR-BEGIN\n==2011873== Syscall param epoll_wait(events) points to unaddressable byte(s)\n==2011873== at 0x4D8DC73: epoll_wait (epoll_wait.c:30)\n==2011873== by 0x55CA49: WaitEventSetWaitBlock (latch.c:1527)\n==2011873== by 0x55D591: WaitEventSetWait (latch.c:1473)\n==2011873== by 0x4F2B28: ServerLoop (postmaster.c:1729)\n==2011873== by 0x4F3E85: PostmasterMain (postmaster.c:1452)\n==2011873== by 0x42643C: main (main.c:200)\n==2011873== Address 0x7b1e620 is 1,360 bytes inside a recently re-allocated block of size 8,192 alloc'd\n==2011873== at 0x48407B4: malloc (vg_replace_malloc.c:381)\n==2011873== by 0x6D9D30: AllocSetContextCreateInternal (aset.c:446)\n==2011873== by 0x4F2D9B: PostmasterMain (postmaster.c:614)\n==2011873== by 0x42643C: main (main.c:200)\n==2011873== \n==2011873== VALGRINDERROR-END\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Jan 2023 01:27:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 7:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> skink seems to have found a problem:\n>\n> ==2011873== VALGRINDERROR-BEGIN\n> ==2011873== Syscall param epoll_wait(events) points to unaddressable byte(s)\n> ==2011873== at 0x4D8DC73: epoll_wait (epoll_wait.c:30)\n> ==2011873== by 0x55CA49: WaitEventSetWaitBlock (latch.c:1527)\n> ==2011873== by 0x55D591: WaitEventSetWait (latch.c:1473)\n> ==2011873== by 0x4F2B28: ServerLoop (postmaster.c:1729)\n> ==2011873== by 0x4F3E85: PostmasterMain (postmaster.c:1452)\n> ==2011873== by 0x42643C: main (main.c:200)\n> ==2011873== Address 0x7b1e620 is 1,360 bytes inside a recently re-allocated block of size 8,192 alloc'd\n> ==2011873== at 0x48407B4: malloc (vg_replace_malloc.c:381)\n> ==2011873== by 0x6D9D30: AllocSetContextCreateInternal (aset.c:446)\n> ==2011873== by 0x4F2D9B: PostmasterMain (postmaster.c:614)\n> ==2011873== by 0x42643C: main (main.c:200)\n> ==2011873==\n> ==2011873== VALGRINDERROR-END\n\nRepro'd here on Valgrind. Oh, that's interesting. WaitEventSetWait()\nwants to use an internal buffer of the size given to the constructor\nfunction, but passes the size of the caller's output buffer to\nepoll_wait() and friends. Perhaps it should use Min(nevents,\nset->nevents_space). I mean, I should have noticed that, but I think\nthat's arguably a pre-existing bug in the WES code, or at least an\nunhelpful interface. Thinking...\n\n\n",
"msg_date": "Thu, 12 Jan 2023 19:57:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 7:57 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Jan 12, 2023 at 7:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > skink seems to have found a problem:\n> >\n> > ==2011873== VALGRINDERROR-BEGIN\n> > ==2011873== Syscall param epoll_wait(events) points to unaddressable byte(s)\n> > ==2011873== at 0x4D8DC73: epoll_wait (epoll_wait.c:30)\n> > ==2011873== by 0x55CA49: WaitEventSetWaitBlock (latch.c:1527)\n> > ==2011873== by 0x55D591: WaitEventSetWait (latch.c:1473)\n> > ==2011873== by 0x4F2B28: ServerLoop (postmaster.c:1729)\n> > ==2011873== by 0x4F3E85: PostmasterMain (postmaster.c:1452)\n> > ==2011873== by 0x42643C: main (main.c:200)\n> > ==2011873== Address 0x7b1e620 is 1,360 bytes inside a recently re-allocated block of size 8,192 alloc'd\n> > ==2011873== at 0x48407B4: malloc (vg_replace_malloc.c:381)\n> > ==2011873== by 0x6D9D30: AllocSetContextCreateInternal (aset.c:446)\n> > ==2011873== by 0x4F2D9B: PostmasterMain (postmaster.c:614)\n> > ==2011873== by 0x42643C: main (main.c:200)\n> > ==2011873==\n> > ==2011873== VALGRINDERROR-END\n>\n> Repro'd here on Valgrind. Oh, that's interesting. WaitEventSetWait()\n> wants to use an internal buffer of the size given to the constructor\n> function, but passes the size of the caller's output buffer to\n> epoll_wait() and friends. Perhaps it should use Min(nevents,\n> set->nevents_space). I mean, I should have noticed that, but I think\n> that's arguably a pre-existing bug in the WES code, or at least an\n> unhelpful interface. Thinking...\n\nYeah. This stops valgrind complaining here.",
"msg_date": "Thu, 12 Jan 2023 20:35:43 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-12 20:35:43 +1300, Thomas Munro wrote:\n> Subject: [PATCH] Fix WaitEventSetWait() buffer overrun.\n> \n> The WAIT_USE_EPOLL and WAIT_USE_KQUEUE implementations of\n> WaitEventSetWaitBlock() confused the size of their internal buffer with\n> the size of the caller's output buffer, and could ask the kernel for too\n> many events. In fact the set of events retrieved from the kernel needs\n> to be able to fit in both buffers, so take the minimum of the two.\n> \n> The WAIT_USE_POLL and WAIT_USE_WIN32 implementations didn't have this\n> confusion.\n\n> This probably didn't come up before because we always used the same\n> number in both places, but commit 7389aad6 calculates a dynamic size at\n> construction time, while using MAXLISTEN for its output event buffer on\n> the stack. That seems like a reasonable thing to want to do, so\n> consider this to be a pre-existing bug worth fixing.\n\n> As reported by skink, valgrind and Tom Lane.\n> \n> Discussion: https://postgr.es/m/901504.1673504836%40sss.pgh.pa.us\n\nMakes sense. We should backpatch this, I think?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Jan 2023 10:26:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 7:26 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-12 20:35:43 +1300, Thomas Munro wrote:\n> > Subject: [PATCH] Fix WaitEventSetWait() buffer overrun.\n\n> Makes sense. We should backpatch this, I think?\n\nDone.\n\n\n",
"msg_date": "Fri, 13 Jan 2023 11:07:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "The nearby thread about searching for uses of volatile reminded me: we\ncan now drop a bunch of these in postmaster.c. The patch I originally\nwrote to do that as part of this series somehow morphed into an\nexperimental patch to nuke all global variables[1], but of course we\nshould at least drop the now redundant use of volatile and\nsig_atomic_t. See attached.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKH_RPAo%3DNgPfHKj--565aL1qiVpUGdWt1_pmJehY%2Bdmw%40mail.gmail.com",
"msg_date": "Sat, 28 Jan 2023 14:25:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> The nearby thread about searching for uses of volatile reminded me: we\n> can now drop a bunch of these in postmaster.c. The patch I originally\n> wrote to do that as part of this series somehow morphed into an\n> experimental patch to nuke all global variables[1], but of course we\n> should at least drop the now redundant use of volatile and\n> sig_atomic_t. See attached.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Jan 2023 20:39:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-28 14:25:38 +1300, Thomas Munro wrote:\n> The nearby thread about searching for uses of volatile reminded me: we\n> can now drop a bunch of these in postmaster.c. The patch I originally\n> wrote to do that as part of this series somehow morphed into an\n> experimental patch to nuke all global variables[1],\n\nHah.\n\n\n> but of course we should at least drop the now redundant use of volatile and\n> sigatomic_t. See attached.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 17:59:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using WaitEventSet in the postmaster"
}
] |
[
{
"msg_contents": "Hi,\n\nCommit f5580882 established that all supported computers have AF_UNIX.\nOne of the follow-up consequences that was left unfinished is that we\ncould simplify our test harness code to make it the same on all\nplatforms. Currently we have hundreds of lines of C and perl to use\nsecure TCP connections instead for the benefit of defunct Windows\nversions. Here's a patch set for that. These patches and some\ndiscussion of them were buried in the recent\nclean-up-after-recently-dropped-obsolete-systems thread[1], and I\ndidn't want to lose track of them. I think they would need review and\ntesting from a Windows-based hacker to make progress. The patches\nare:\n\n1. Teach mkdtemp() to make a non-world-accessible directory. This is\nrequired to be able to make a socket that other processes can't\nconnect to, to match the paranoia level used on Unix. This was\nwritten just by reading documentation, because I am not a Windows\nuser, so I would be grateful for a second opinion and/or testing from\na Windows hacker, which would involve testing with two different\nusers. The idea is that Windows' mkdir() is completely ignoring the\npermissions (we can see in the mingw headers that it literally throws\naway the mode argument), so we shouldn't use that, but native\nCreateDirectory() when given a pointer to a SECURITY_ATTRIBUTES with\nlpSecurityDescriptor set to NULL should only allow the current user to\naccess the object (directory). Does this really work, and would it be\nbetter to create some more explicit private-keep-out\nSECURITY_ATTRIBUTE, and how would that look?\n\nI'm fairly sure that filesystem permissions must be enough to stop\nanother OS user from connecting, because it's clear from documentation\nthat AF_UNIX works on Windows by opening the file and reading some\nmagic \"reparse\" data from inside it, so if you can't see into the\ncontaining directory, you can't do it. 
This patch is the one the rest\nare standing on, because the tests should match Unix in their level of\nsecurity.\n\n2. Always use AF_UNIX for pg_regress. Remove a bunch of\nno-longer-needed sspi auth stuff. Remove comments that worried about\nsignal handler safety (referring here to real Windows signals, not\nfake PostgreSQL signals that are a backend-only concept). By my\nreading of the Windows documentation and our code, there is no real\nconcern here, so the remove_temp() stuff should be fine, as I have\nexplained in a new comment. But I have not tested this signal safety\nclaim, not being a Windows user. I added an assertion that should\nhold if I am right. If you run this on Windows and interrupt\npg_regress with ^C, does it hold?\n\n3. Use AF_UNIX for TAP tests too.\n\n4. In passing, remove our documentation's claim that Linux's\n\"abstract\" AF_UNIX namespace is available on Windows. It does not\nwork at all, according to all reports (IMHO it seems like an\ninherently insecure interface that other OSes would be unlikely to\nadopt).\n\nNote that this thread is not about libpq, which differs from Unix by\ndefaulting to host=localhost rather than AF_UNIX IIRC. That's a\nuser-facing policy decision I'm not touching; this thread is just\nabout cleaning up old test infrastructure of interest to hackers.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ3LHeP9w5Fgzdr4G8AnEtJ%3Dz%3Dp6hGDEm4qYGEUX5B6fQ%40mail.gmail.com",
"msg_date": "Fri, 2 Dec 2022 13:02:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Commit f5580882 established that all supported computers have AF_UNIX.\n> One of the follow-up consequences that was left unfinished is that we\n> could simplify our test harness code to make it the same on all\n> platforms. Currently we have hundreds of lines of C and perl to use\n> secure TCP connections instead for the benefit of defunct Windows\n> versions. Here's a patch set for that.\n\nIf we remove that, won't we have a whole lot of code that's not\ntested at all on any platform, ie all the TCP-socket code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Dec 2022 20:30:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Commit f5580882 established that all supported computers have AF_UNIX.\n> > One of the follow-up consequences that was left unfinished is that we\n> > could simplify our test harness code to make it the same on all\n> > platforms. Currently we have hundreds of lines of C and perl to use\n> > secure TCP connections instead for the benefit of defunct Windows\n> > versions. Here's a patch set for that.\n> \n> If we remove that, won't we have a whole lot of code that's not\n> tested at all on any platform, ie all the TCP-socket code?\n\nThere's some coverage via the auth and ssl tests. But I agree it's an\nissue. But to me the fix for that seems to be to add a dedicated test for\nthat, rather than relying on windows to test our socket code - that's quite a\nfew separate code paths from the tcp support of other platforms.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 17:42:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n>> If we remove that, won't we have a whole lot of code that's not\n>> tested at all on any platform, ie all the TCP-socket code?\n\n> There's some coverage via the auth and ssl tests. But I agree it's an\n> issue. But to me the fix for that seems to be to add a dedicated test for\n> that, rather than relying on windows to test our socket code - that's quite a\n> few separate code paths from the tcp support of other platforms.\n\nIMO that's not the best way forward, because you'll always have\nnagging questions about whether a single-purpose test covers\neverything that needs coverage. I think the best place to be in\nwould be to be able to run the whole test suite using either TCP or\nUNIX sockets, on any platform (with stuff like the SSL test\noverriding the choice as needed). I doubt ripping out the existing\nWindows-only support for TCP-based testing is a good step in that\ndirection.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Dec 2022 20:56:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-01 20:56:18 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n> >> If we remove that, won't we have a whole lot of code that's not\n> >> tested at all on any platform, ie all the TCP-socket code?\n> \n> > There's some coverage via the auth and ssl tests. But I agree it's an\n> > issue. But to me the fix for that seems to be to add a dedicated test for\n> > that, rather than relying on windows to test our socket code - that's quite a\n> > few separate code paths from the tcp support of other platforms.\n> \n> IMO that's not the best way forward, because you'll always have\n> nagging questions about whether a single-purpose test covers\n> everything that needs coverage.\n\nStill seems better than not having any coverage in our development\nenvironments...\n\n\n> I think the best place to be in would be to be able to run the whole test\n> suite using either TCP or UNIX sockets, on any platform (with stuff like the\n> SSL test overriding the choice as needed).\n\nI agree that that's useful. But it seems somewhat independent from the\nmajority of the proposed changes. To be able to test force-tcp-everywhere we\ndon't need e.g. code for setting sspi auth in pg_regress etc - it's afaik\njust needed so there's a secure way of running tests at all on windows.\n\nI think 0003 should be \"trimmed\" to only change the default for\n$use_unix_sockets on windows and to remove PG_TEST_USE_UNIX_SOCKETS. Whoever\nwants to, can then add a new environment variable to force tap tests to use\ntcp.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 18:10:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "On 2022-12-01 Th 21:10, Andres Freund wrote:\n> Hi,\n>\n> On 2022-12-01 20:56:18 -0500, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n>>>> If we remove that, won't we have a whole lot of code that's not\n>>>> tested at all on any platform, ie all the TCP-socket code?\n>>> There's some coverage via the auth and ssl tests. But I agree it's an\n>>> issue. But to me the fix for that seems to be to add a dedicated test for\n>>> that, rather than relying on windows to test our socket code - that's quite a\n>>> few separate code paths from the tcp support of other platforms.\n>> IMO that's not the best way forward, because you'll always have\n>> nagging questions about whether a single-purpose test covers\n>> everything that needs coverage.\n> Still seems better than not having any coverage in our development\n> environments...\n>\n>\n>> I think the best place to be in would be to be able to run the whole test\n>> suite using either TCP or UNIX sockets, on any platform (with stuff like the\n>> SSL test overriding the choice as needed).\n> I agree that that's useful. But it seems somewhat independent from the\n> majority of the proposed changes. To be able to test force-tcp-everywhere we\n> don't need e.g. code for setting sspi auth in pg_regress etc - it's afaik\n> just needed so there's a secure way of running tests at all on windows.\n>\n> I think 0003 should be \"trimmed\" to only change the default for\n> $use_unix_sockets on windows and to remove PG_TEST_USE_UNIX_SOCKETS. Whoever\n> wants to, can then add a new environment variable to force tap tests to use\n> tcp.\n>\n\nNot sure if it's useful here, but a few months ago I prepared patches to\nremove the config-auth option of pg_regress, which struck me as more\nthan odd, and replace it with a perl module. 
I didn't get around to\nfinishing them, but the patches as of then are attached.\n\nI agree that having some switch that says \"run everything with TCP\" or\n\"run (almost) everything with Unix sockets\" would be good.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 2 Dec 2022 07:37:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 18:08, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 2022-12-01 Th 21:10, Andres Freund wrote:\n> > Hi,\n> >\n> > On 2022-12-01 20:56:18 -0500, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> On 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n> >>>> If we remove that, won't we have a whole lot of code that's not\n> >>>> tested at all on any platform, ie all the TCP-socket code?\n> >>> There's some coverage via the auth and ssl tests. But I agree it's an\n> >>> issue. But to me the fix for that seems to be to add a dedicated test for\n> >>> that, rather than relying on windows to test our socket code - that's quite a\n> >>> few separate code paths from the tcp support of other platforms.\n> >> IMO that's not the best way forward, because you'll always have\n> >> nagging questions about whether a single-purpose test covers\n> >> everything that needs coverage.\n> > Still seems better than not having any coverage in our development\n> > environments...\n> >\n> >\n> >> I think the best place to be in would be to be able to run the whole test\n> >> suite using either TCP or UNIX sockets, on any platform (with stuff like the\n> >> SSL test overriding the choice as needed).\n> > I agree that that's useful. But it seems somewhat independent from the\n> > majority of the proposed changes. To be able to test force-tcp-everywhere we\n> > don't need e.g. code for setting sspi auth in pg_regress etc - it's afaik\n> > just needed so there's a secure way of running tests at all on windows.\n> >\n> > I think 0003 should be \"trimmed\" to only change the default for\n> > $use_unix_sockets on windows and to remove PG_TEST_USE_UNIX_SOCKETS. 
Whoever\n> > wants to, can then add a new environment variable to force tap tests to use\n> > tcp.\n> >\n>\n> Not sure if it's useful here, but a few months ago I prepared patches to\n> remove the config-auth option of pg_regress, which struck me as more\n> than odd, and replace it with a perl module. I didn't get around to\n> finishing them, but the patches as of then are attached.\n>\n> I agree that having some switch that says \"run everything with TCP\" or\n> \"run (almost) everything with Unix sockets\" would be good.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nbf03cfd162176d543da79f9398131abc251ddbb9 ===\n=== applying patch\n./0001-Do-config_auth-in-perl-code-for-TAP-tests-and-vcregr.patch\npatching file contrib/basebackup_to_shell/t/001_basic.pl\nHunk #1 FAILED at 21.\n1 out of 1 hunk FAILED -- saving rejects to file\ncontrib/basebackup_to_shell/t/001_basic.pl.rej\npatching file src/bin/pg_basebackup/t/010_pg_basebackup.pl\nHunk #1 FAILED at 29.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/bin/pg_basebackup/t/010_pg_basebackup.pl.rej\nHunk #3 FAILED at 461.\n1 out of 3 hunks FAILED -- saving rejects to file\nsrc/test/perl/PostgreSQL/Test/Cluster.pm.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4033.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 17:43:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "\nOn 2023-01-04 We 07:13, vignesh C wrote:\n> On Fri, 2 Dec 2022 at 18:08, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 2022-12-01 Th 21:10, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2022-12-01 20:56:18 -0500, Tom Lane wrote:\n>>>> Andres Freund <andres@anarazel.de> writes:\n>>>>> On 2022-12-01 20:30:36 -0500, Tom Lane wrote:\n>>>>>> If we remove that, won't we have a whole lot of code that's not\n>>>>>> tested at all on any platform, ie all the TCP-socket code?\n>>>>> There's some coverage via the auth and ssl tests. But I agree it's an\n>>>>> issue. But to me the fix for that seems to be to add a dedicated test for\n>>>>> that, rather than relying on windows to test our socket code - that's quite a\n>>>>> few separate code paths from the tcp support of other platforms.\n>>>> IMO that's not the best way forward, because you'll always have\n>>>> nagging questions about whether a single-purpose test covers\n>>>> everything that needs coverage.\n>>> Still seems better than not having any coverage in our development\n>>> environments...\n>>>\n>>>\n>>>> I think the best place to be in would be to be able to run the whole test\n>>>> suite using either TCP or UNIX sockets, on any platform (with stuff like the\n>>>> SSL test overriding the choice as needed).\n>>> I agree that that's useful. But it seems somewhat independent from the\n>>> majority of the proposed changes. To be able to test force-tcp-everywhere we\n>>> don't need e.g. code for setting sspi auth in pg_regress etc - it's afaik\n>>> just needed so there's a secure way of running tests at all on windows.\n>>>\n>>> I think 0003 should be \"trimmed\" to only change the default for\n>>> $use_unix_sockets on windows and to remove PG_TEST_USE_UNIX_SOCKETS. 
Whoever\n>>> wants to, can then add a new environment variable to force tap tests to use\n>>> tcp.\n>>>\n>> Not sure if it's useful here, but a few months ago I prepared patches to\n>> remove the config-auth option of pg_regress, which struck me as more\n>> than odd, and replace it with a perl module. I didn't get around to\n>> finishing them, but the patches as of then are attached.\n>>\n>> I agree that having some switch that says \"run everything with TCP\" or\n>> \"run (almost) everything with Unix sockets\" would be good.\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> bf03cfd162176d543da79f9398131abc251ddbb9 ===\n> === applying patch\n> ./0001-Do-config_auth-in-perl-code-for-TAP-tests-and-vcregr.patch\n> patching file contrib/basebackup_to_shell/t/001_basic.pl\n> Hunk #1 FAILED at 21.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> contrib/basebackup_to_shell/t/001_basic.pl.rej\n> patching file src/bin/pg_basebackup/t/010_pg_basebackup.pl\n> Hunk #1 FAILED at 29.\n> 1 out of 1 hunk FAILED -- saving rejects to file\n> src/bin/pg_basebackup/t/010_pg_basebackup.pl.rej\n> Hunk #3 FAILED at 461.\n> 1 out of 3 hunks FAILED -- saving rejects to file\n> src/test/perl/PostgreSQL/Test/Cluster.pm.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_4033.log\n>\n\nWhat I posted was not intended as a replacement for Thomas' patches, or\nindeed meant as a CF item at all.\n\nSo really we're waiting on Thomas to post a response to Tom's and\nAndres' comments upthread.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 5 Jan 2023 07:31:15 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Hello,\n\nOn Fri, Dec 2, 2022 at 1:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> 1. Teach mkdtemp() to make a non-world-accessible directory. This is\n> required to be able to make a socket that other processes can't\n> connect to, to match the paranoia level used on Unix. This was\n> written just by reading documentation, because I am not a Windows\n> user, so I would be grateful for a second opinion and/or testing from\n> a Windows hacker, which would involve testing with two different\n> users. The idea is that Windows' mkdir() is completely ignoring the\n> permissions (we can see in the mingw headers that it literally throws\n> away the mode argument), so we shouldn't use that, but native\n> CreateDirectory() when given a pointer to a SECURITY_ATTRIBUTES with\n> lpSecurityDesciptor set to NULL should only allow the current user to\n> access the object (directory). Does this really work, and would it be\n> better to create some more explicit private-keep-out\n> SECURITY_ATTRIBUTE, and how would that look?\n>\n\nA directory created with a NULL SECURITY_ATTRIBUTES inherits the ACL from\nits parent directory [1]. In this case, its parent is the designated\ntemporary location, which already should have a limited access.\n\nYou can create an explicit DACL for that directory, PFA a patch for so.\nThis is just an example, not something that I'm proposing as a committable\nalternative.\n\nI'm fairly sure that filesystem permissions must be enough to stop\n> another OS user from connecting, because it's clear from documentation\n> that AF_UNIX works on Windows by opening the file and reading some\n> magic \"reparse\" data from inside it, so if you can't see into the\n> containing directory, you can't do it. 
This patch is the one the rest\n> are standing on, because the tests should match Unix in their level of\n> security.\n>\n\nYes, this is correct.\n\n>\n> Only the first patch is modified, but I'm including all of them so they go\nthrough the cfbot.\n\n[1]\nhttps://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilea\n\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 16 Jan 2023 13:05:12 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "Thanks Tom, Andres, Juan José, Andrew for your feedback. I agree that\nit's a better \"OS harmonisation\" outcome if we can choose both ways on\nboth OSes. I will leave the 0003 patch aside for now due to lack of\ntime, but here's a rebase of the first two patches. Since this is\nreally just more cleanup-obsolete-stuff background work, I'm going to\nmove it to the next CF.\n\n0001 -- Teach mkdtemp() to care about permissions on Windows.\nReviewed by Juan José, who confirmed my blind-coded understanding and\nshowed an alternative version but didn't suggest doing it that way.\nIt's a little confusing that NULL \"attributes\" means something\ndifferent from NULL \"descriptor\", so I figured I should highlight that\ndifference more clearly with some new comments. I guess one question\nis \"why should we expect the calling process to have the default\naccess token?\"\n\n0002 -- Use AF_UNIX for pg_regress. This one removes a couple of\nhundred Windows-only lines that set up SSPI stuff, and some comments\nabout a signal handling hazard that -- as far as I can tell by reading\nmanuals -- was never really true.\n\n0003 -- TAP testing adjustments needs some more work based on feedback\nalready given, not included in this revision.",
"msg_date": "Wed, 22 Mar 2023 17:28:12 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "On Wed, 22 Mar 2023 at 09:59, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Thanks Tom, Andres, Juan José, Andrew for your feedback. I agree that\n> it's a better \"OS harmonisation\" outcome if we can choose both ways on\n> both OSes. I will leave the 0003 patch aside for now due to lack of\n> time, but here's a rebase of the first two patches. Since this is\n> really just more cleanup-obsolete-stuff background work, I'm going to\n> move it to the next CF.\n>\n> 0001 -- Teach mkdtemp() to care about permissions on Windows.\n> Reviewed by Juan José, who confirmed my blind-coded understanding and\n> showed an alternative version but didn't suggest doing it that way.\n> It's a little confusing that NULL \"attributes\" means something\n> different from NULL \"descriptor\", so I figured I should highlight that\n> difference more clearly with some new comments. I guess one question\n> is \"why should we expect the calling process to have the default\n> access token?\"\n>\n> 0002 -- Use AF_UNIX for pg_regress. This one removes a couple of\n> hundred Windows-only lines that set up SSPI stuff, and some comments\n> about a signal handling hazard that -- as far as I can tell by reading\n> manuals -- was never really true.\n>\n> 0003 -- TAP testing adjustments needs some more work based on feedback\n> already given, not included in this revision.\n\nThe patch does not apply anymore:\npatch -p1 < v3-0002-Always-use-AF_UNIX-sockets-in-pg_regress-on-Windo.patch\npatching file src/test/regress/pg_regress.c\n...\nHunk #6 FAILED at 781.\n...\n1 out of 10 hunks FAILED -- saving rejects to file\nsrc/test/regress/pg_regress.c.rej\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 21 Jan 2024 18:01:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
},
{
"msg_contents": "On Sun, 21 Jan 2024 at 18:01, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 22 Mar 2023 at 09:59, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > Thanks Tom, Andres, Juan José, Andrew for your feedback. I agree that\n> > it's a better \"OS harmonisation\" outcome if we can choose both ways on\n> > both OSes. I will leave the 0003 patch aside for now due to lack of\n> > time, but here's a rebase of the first two patches. Since this is\n> > really just more cleanup-obsolete-stuff background work, I'm going to\n> > move it to the next CF.\n> >\n> > 0001 -- Teach mkdtemp() to care about permissions on Windows.\n> > Reviewed by Juan José, who confirmed my blind-coded understanding and\n> > showed an alternative version but didn't suggest doing it that way.\n> > It's a little confusing that NULL \"attributes\" means something\n> > different from NULL \"descriptor\", so I figured I should highlight that\n> > difference more clearly with some new comments. I guess one question\n> > is \"why should we expect the calling process to have the default\n> > access token?\"\n> >\n> > 0002 -- Use AF_UNIX for pg_regress. This one removes a couple of\n> > hundred Windows-only lines that set up SSPI stuff, and some comments\n> > about a signal handling hazard that -- as far as I can tell by reading\n> > manuals -- was never really true.\n> >\n> > 0003 -- TAP testing adjustments needs some more work based on feedback\n> > already given, not included in this revision.\n>\n> The patch does not apply anymore:\n> patch -p1 < v3-0002-Always-use-AF_UNIX-sockets-in-pg_regress-on-Windo.patch\n> patching file src/test/regress/pg_regress.c\n> ...\n> Hunk #6 FAILED at 781.\n> ...\n> 1 out of 10 hunks FAILED -- saving rejects to file\n> src/test/regress/pg_regress.c.rej\n\nWith no update to the thread and the patch not applying I'm marking\nthis as returned with feedback. 
Please feel free to resubmit to the\nnext CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:58:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Using AF_UNIX sockets always for tests on Windows"
}
] |
[
{
"msg_contents": "Hi,\n\nHaving a query, I am trying to find out all the columns that need to be\naccessed (their varattno and vartype). I have access to a targetlist\nrepresenting a tree like this. So, I am looking for a function that\nrecursively traverses the tree and gives me the VARs. So, for SELECT\na,b,b+c from tab; I am interested in [a,b]. Is such a function currently\nimplemented in postgresql? How can I use it?\n\n:targetlist (\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 1\n> :vartype 23\n> :vartypmod -1\n> :varcollid 0\n> :varlevelsup 0\n> :varnosyn 1\n> :varattnosyn 1\n> :location 7\n> }\n> :resno 1\n> :resname l_orderkey\n> :ressortgroupref 0\n> :resorigtbl 24805\n> :resorigcol 1\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {VAR\n> :varno 1\n> :varattno 2\n> :vartype 23\n> :vartypmod -1\n> :varcollid 0\n> :varlevelsup 0\n> :varnosyn 1\n> :varattnosyn 2\n> :location 18\n> }\n> :resno 2\n> :resname l_partkey\n> :ressortgroupref 0\n> :resorigtbl 24805\n> :resorigcol 2\n> :resjunk false\n> }\n> {TARGETENTRY\n> :expr\n> {OPEXPR\n> :opno 551\n> :opfuncid 177\n> :opresulttype 23\n> :opretset false\n> :opcollid 0\n> :inputcollid 0\n> :args (\n> {OPEXPR\n> :opno 551\n> :opfuncid 177\n> :opresulttype 23\n> :opretset false\n> :opcollid 0\n> :inputcollid 0\n> :args (\n> {VAR\n> :varno 1\n> :varattno 1\n> :vartype 23\n> :vartypmod -1\n> :varcollid 0\n> :varlevelsup 0\n> :varnosyn 1\n> :varattnosyn 1\n> :location 28\n> }\n> {VAR\n> :varno 1\n> :varattno 2\n> :vartype 23\n> :vartypmod -1\n> :varcollid 0\n> :varlevelsup 0\n> :varnosyn 1\n> :varattnosyn 2\n> :location 39\n> }\n> )\n> :location 38\n> }\n> {VAR\n> :varno 1\n> :varattno 3\n> :vartype 23\n> :vartypmod -1\n> :varcollid 0\n> :varlevelsup 0\n> :varnosyn 1\n> :varattnosyn 3\n> :location 49\n> }\n> )\n> :location 48\n> }\n> :resno 3\n> :resname ?column?\n> :ressortgroupref 0\n> :resorigtbl 0\n> :resorigcol 0\n> :resjunk false\n> }\n> )\n>\n\nHi,Having a query, I am trying to find out all the 
columns that need to be accessed (their varattno and vartype). I have access to a targetlist representing a tree like this. So, I am looking for a function that recursively traverses the tree and gives me the VARs. So, for SELECT a,b,b+c from tab; I am interested in [a,b]. Is such a function currently implemented in postgresql? How can I use it?:targetlist ( {TARGETENTRY :expr {VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 7 } :resno 1 :resname l_orderkey :ressortgroupref 0 :resorigtbl 24805 :resorigcol 1 :resjunk false } {TARGETENTRY :expr {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 18 } :resno 2 :resname l_partkey :ressortgroupref 0 :resorigtbl 24805 :resorigcol 2 :resjunk false } {TARGETENTRY :expr {OPEXPR :opno 551 :opfuncid 177 :opresulttype 23 :opretset false :opcollid 0 :inputcollid 0 :args ( {OPEXPR :opno 551 :opfuncid 177 :opresulttype 23 :opretset false :opcollid 0 :inputcollid 0 :args ( {VAR :varno 1 :varattno 1 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 1 :location 28 } {VAR :varno 1 :varattno 2 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 2 :location 39 } ) :location 38 } {VAR :varno 1 :varattno 3 :vartype 23 :vartypmod -1 :varcollid 0 :varlevelsup 0 :varnosyn 1 :varattnosyn 3 :location 49 } ) :location 48 } :resno 3 :resname ?column? :ressortgroupref 0 :resorigtbl 0 :resorigcol 0 :resjunk false } )",
"msg_date": "Thu, 1 Dec 2022 18:17:31 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Traversing targetlist to find accessed columns"
},
{
"msg_contents": "Amin <amin.fallahi@gmail.com> writes:\n> Having a query, I am trying to find out all the columns that need to be\n> accessed (their varattno and vartype). I have access to a targetlist\n> representing a tree like this. So, I am looking for a function that\n> recursively traverses the tree and gives me the VARs. So, for SELECT\n> a,b,b+c from tab; I am interested in [a,b]. Is such a function currently\n> implemented in postgresql? How can I use it?\n\npull_var_clause() might help you, or one of its siblings in\nsrc/backend/optimizer/util/var.c, or you could use that as a\ntemplate to write your own --- it doesn't take much code if\nyou use expression_tree_walker to do the dirty work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Dec 2022 09:33:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Traversing targetlist to find accessed columns"
}
] |
[
{
"msg_contents": "Whilst fooling with my outer-join-aware-Vars patch, I tripped\nacross a multi-way join query that failed with\n ERROR: could not devise a query plan for the given query\nwhen enable_partitionwise_join is on.\n\nI traced that to the fact that reparameterize_path_by_child()\nomits support for MaterialPath, so that if the only surviving\npath(s) for a child join include materialization steps, we'll\nfail outright to produce a plan for the parent join.\n\nUnfortunately, I don't have an example that produces such a\nfailure against HEAD. It seems certain to me that such cases\nexist, though, so I'd like to apply and back-patch the attached.\n\nI'm suspicious now that reparameterize_path() should be\nextended likewise, but I don't really have any hard\nevidence for that.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 01 Dec 2022 21:55:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 10:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I traced that to the fact that reparameterize_path_by_child()\n> omits support for MaterialPath, so that if the only surviving\n> path(s) for a child join include materialization steps, we'll\n> fail outright to produce a plan for the parent join.\n\n\nYeah, that's true. It's weird we neglect MaterialPath here.\n\n\n> Unfortunately, I don't have an example that produces such a\n> failure against HEAD. It seems certain to me that such cases\n> exist, though, so I'd like to apply and back-patch the attached.\n\n\nI tried on HEAD and got one, which leverages sampled rel to generate the\nMaterialPath and lateral reference to make it the only available path.\n\nSET enable_partitionwise_join to true;\n\nCREATE TABLE prt (a int, b int) PARTITION BY RANGE(a);\nCREATE TABLE prt_p1 PARTITION OF prt FOR VALUES FROM (0) TO (10);\n\nCREATE EXTENSION tsm_system_time;\n\nexplain (costs off)\nselect * from prt t1 left join lateral (select t1.a as t1a, t2.a as t2a\nfrom prt t2 TABLESAMPLE system_time (10)) ss on ss.t1a = ss.t2a;\nERROR: could not devise a query plan for the given query\n\nThanks\nRichard\n\nOn Fri, Dec 2, 2022 at 10:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nI traced that to the fact that reparameterize_path_by_child()\nomits support for MaterialPath, so that if the only surviving\npath(s) for a child join include materialization steps, we'll\nfail outright to produce a plan for the parent join. Yeah, that's true. It's weird we neglect MaterialPath here. \nUnfortunately, I don't have an example that produces such a\nfailure against HEAD. It seems certain to me that such cases\nexist, though, so I'd like to apply and back-patch the attached. 
I tried on HEAD and got one, which leverages sampled rel to generate theMaterialPath and lateral reference to make it the only available path.SET enable_partitionwise_join to true;CREATE TABLE prt (a int, b int) PARTITION BY RANGE(a);CREATE TABLE prt_p1 PARTITION OF prt FOR VALUES FROM (0) TO (10);CREATE EXTENSION tsm_system_time;explain (costs off)select * from prt t1 left join lateral (select t1.a as t1a, t2.a as t2a from prt t2 TABLESAMPLE system_time (10)) ss on ss.t1a = ss.t2a;ERROR: could not devise a query plan for the given queryThanksRichard",
"msg_date": "Fri, 2 Dec 2022 15:29:28 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "Hi Tom,\n\nOn Fri, Dec 2, 2022 at 8:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Whilst fooling with my outer-join-aware-Vars patch, I tripped\n> across a multi-way join query that failed with\n> ERROR: could not devise a query plan for the given query\n> when enable_partitionwise_join is on.\n>\n> I traced that to the fact that reparameterize_path_by_child()\n> omits support for MaterialPath, so that if the only surviving\n> path(s) for a child join include materialization steps, we'll\n> fail outright to produce a plan for the parent join.\n>\n> Unfortunately, I don't have an example that produces such a\n> failure against HEAD. It seems certain to me that such cases\n> exist, though, so I'd like to apply and back-patch the attached.\n\n From this comment, that I wrote back when I implemented that function,\nI wonder if we thought MaterialPath wouldn't appear on the inner side\nof nestloop join. But that can't be the case. Or probably we didn't\nfind MaterialPath being there from our tests.\n * This function is currently only applied to the inner side of a nestloop\n * join that is being partitioned by the partitionwise-join code. Hence,\n * we need only support path types that plausibly arise in that context.\nBut I think it's good to have MaterialPath there.\n\n>\n> I'm suspicious now that reparameterize_path() should be\n> extended likewise, but I don't really have any hard\n> evidence for that.\n\nI think we need it there since the scope of paths under appendrel has\ncertainly expanded a lot because of partitioned table optimizations.\n\nThe patch looks good to me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 2 Dec 2022 16:51:27 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 7:21 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> > I'm suspicious now that reparameterize_path() should be\n> > extended likewise, but I don't really have any hard\n> > evidence for that.\n>\n> I think we need it there since the scope of paths under appendrel has\n> certainly expanded a lot because of partitioned table optimizations.\n\n\nI tried to see if the similar error can be triggered because of the lack\nof MaterialPath support in reparameterize_path but didn't succeed.\nInstead I see the optimization opportunity here if we can extend\nreparameterize_path. As an example, consider query\n\ncreate table t (a int, b int);\ninsert into t select i, i from generate_series(1,10000)i;\ncreate index on t(a);\nanalyze t;\n\nexplain (costs off)\nselect * from (select * from t t1 union all select * from t t2 TABLESAMPLE\nsystem_time (10)) s join (select * from t t3 limit 1) ss on s.a > ss.a;\n\nCurrently parameterized append path is not possible because MaterialPath\nis not supported in reparameterize_path. 
The current plan looks like\n\n QUERY PLAN\n--------------------------------------------------------------------\n Nested Loop\n Join Filter: (t1.a > t3.a)\n -> Limit\n -> Seq Scan on t t3\n -> Append\n -> Seq Scan on t t1\n -> Materialize\n -> Sample Scan on t t2\n Sampling: system_time ('10'::double precision)\n(9 rows)\n\nIf we extend reparameterize_path to support MaterialPath, we would have\nthe additional parameterized append path and generate a better plan as\nbelow\n\n QUERY PLAN\n--------------------------------------------------------------------\n Nested Loop\n -> Limit\n -> Seq Scan on t t3\n -> Append\n -> Index Scan using t_a_idx on t t1\n Index Cond: (a > t3.a)\n -> Materialize\n -> Sample Scan on t t2\n Sampling: system_time ('10'::double precision)\n Filter: (a > t3.a)\n(10 rows)\n\nSo I also agree it's worth doing.\n\nBTW, the code changes I'm using:\n\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -3979,6 +3979,17 @@ reparameterize_path(PlannerInfo *root, Path *path,\n apath->path.parallel_aware,\n -1);\n }\n+ case T_Material:\n+ {\n+ MaterialPath *matpath = (MaterialPath *) path;\n+ Path *spath = matpath->subpath;\n+\n+ spath = reparameterize_path(root, spath,\n+ required_outer,\n+ loop_count);\n+\n+ return (Path *) create_material_path(rel, spath);\n+ }\n\nThanks\nRichard",
"msg_date": "Fri, 2 Dec 2022 20:49:40 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 8:49 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> BTW, the code changes I'm using:\n>\n> --- a/src/backend/optimizer/util/pathnode.c\n> +++ b/src/backend/optimizer/util/pathnode.c\n> @@ -3979,6 +3979,17 @@ reparameterize_path(PlannerInfo *root, Path *path,\n> apath->path.parallel_aware,\n> -1);\n> }\n> + case T_Material:\n> + {\n> + MaterialPath *matpath = (MaterialPath *) path;\n> + Path *spath = matpath->subpath;\n> +\n> + spath = reparameterize_path(root, spath,\n> + required_outer,\n> + loop_count);\n> +\n> + return (Path *) create_material_path(rel, spath);\n> + }\n>\n\nBTW, the subpath needs to be checked if it is null after being\nreparameterized, since it might be a path type that is not supported\nyet.\n\n--- a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -3979,6 +3979,19 @@ reparameterize_path(PlannerInfo *root, Path *path,\n apath->path.parallel_aware,\n -1);\n }\n+ case T_Material:\n+ {\n+ MaterialPath *matpath = (MaterialPath *) path;\n+ Path *spath = matpath->subpath;\n+\n+ spath = reparameterize_path(root, spath,\n+ required_outer,\n+ loop_count);\n+ if (spath == NULL)\n+ return NULL;\n+\n+ return (Path *) create_material_path(rel, spath);\n+ }\n\nThanks\nRichard\n\nOn Fri, Dec 2, 2022 at 8:49 PM Richard Guo <guofenglinux@gmail.com> wrote:BTW, the code changes I'm using:--- a/src/backend/optimizer/util/pathnode.c+++ b/src/backend/optimizer/util/pathnode.c@@ -3979,6 +3979,17 @@ reparameterize_path(PlannerInfo *root, Path *path, apath->path.parallel_aware, -1); }+ case T_Material:+ {+ MaterialPath *matpath = (MaterialPath *) path;+ Path *spath = matpath->subpath;++ spath = reparameterize_path(root, spath,+ required_outer,+ loop_count);++ return (Path *) create_material_path(rel, spath);+ } BTW, the subpath needs to be checked if it is null after beingreparameterized, since it might be a path type that is not supportedyet.--- 
a/src/backend/optimizer/util/pathnode.c+++ b/src/backend/optimizer/util/pathnode.c@@ -3979,6 +3979,19 @@ reparameterize_path(PlannerInfo *root, Path *path, apath->path.parallel_aware, -1); }+ case T_Material:+ {+ MaterialPath *matpath = (MaterialPath *) path;+ Path *spath = matpath->subpath;++ spath = reparameterize_path(root, spath,+ required_outer,+ loop_count);+ if (spath == NULL)+ return NULL;++ return (Path *) create_material_path(rel, spath);+ }ThanksRichard",
"msg_date": "Fri, 2 Dec 2022 21:16:07 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> On Fri, Dec 2, 2022 at 8:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Unfortunately, I don't have an example that produces such a\n>> failure against HEAD. It seems certain to me that such cases\n>> exist, though, so I'd like to apply and back-patch the attached.\n\n> From this comment, that I wrote back when I implemented that function,\n> I wonder if we thought MaterialPath wouldn't appear on the inner side\n> of nestloop join. But that can't be the case. Or probably we didn't\n> find MaterialPath being there from our tests.\n> * This function is currently only applied to the inner side of a nestloop\n> * join that is being partitioned by the partitionwise-join code. Hence,\n> * we need only support path types that plausibly arise in that context.\n> But I think it's good to have MaterialPath there.\n\nSo thinking about this a bit: the reason it is okay if reparameterize_path\nfails is that it's not fatal. We just go on our way without making\na parameterized path for that appendrel. However, if\nreparameterize_path_by_child fails for every available child path,\nwe end up with \"could not devise a query plan\", because the\npartitionwise-join code is brittle and won't tolerate failure\nto build a parent-join path. Seems like we should be willing to\nfall back to a non-partitionwise join in that case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 10:43:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 9:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > On Fri, Dec 2, 2022 at 8:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Unfortunately, I don't have an example that produces such a\n> >> failure against HEAD. It seems certain to me that such cases\n> >> exist, though, so I'd like to apply and back-patch the attached.\n>\n> > From this comment, that I wrote back when I implemented that function,\n> > I wonder if we thought MaterialPath wouldn't appear on the inner side\n> > of nestloop join. But that can't be the case. Or probably we didn't\n> > find MaterialPath being there from our tests.\n> > * This function is currently only applied to the inner side of a nestloop\n> > * join that is being partitioned by the partitionwise-join code. Hence,\n> > * we need only support path types that plausibly arise in that context.\n> > But I think it's good to have MaterialPath there.\n>\n> So thinking about this a bit: the reason it is okay if reparameterize_path\n> fails is that it's not fatal. We just go on our way without making\n> a parameterized path for that appendrel. However, if\n> reparameterize_path_by_child fails for every available child path,\n> we end up with \"could not devise a query plan\", because the\n> partitionwise-join code is brittle and won't tolerate failure\n> to build a parent-join path. Seems like we should be willing to\n> fall back to a non-partitionwise join in that case.\n>\n> regards, tom lane\n\npartition-wise join should be willing to fallback to non-partitionwise\njoin in such a case. After spending a few minutes with the code, I\nthink generate_partitionwise_join_paths() should not call\nset_cheapest() is the pathlist of the child is NULL and should just\nwind up and avoid adding any path.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 5 Dec 2022 16:39:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> partition-wise join should be willing to fallback to non-partitionwise\n> join in such a case. After spending a few minutes with the code, I\n> think generate_partitionwise_join_paths() should not call\n> set_cheapest() is the pathlist of the child is NULL and should just\n> wind up and avoid adding any path.\n\nWe clearly need to not call set_cheapest(), but that's not sufficient;\nwe still fail at higher levels, as you'll see if you try the example\nRichard found. I ended up making fe12f2f8f to fix this.\n\nI don't especially like \"rel->nparts = 0\" as a way of disabling\npartitionwise join; ISTM it'd be clearer and more flexible to reset\nconsider_partitionwise_join instead of destroying the data structure.\nBut that's the way it's being done elsewhere, and I didn't want to\ntamper with it in a bug fix. I see various assertions about parent\nand child consider_partitionwise_join flags being equal, which we\nmight have to revisit if we try to make it work that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 09:43:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 8:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > partition-wise join should be willing to fallback to non-partitionwise\n> > join in such a case. After spending a few minutes with the code, I\n> > think generate_partitionwise_join_paths() should not call\n> > set_cheapest() is the pathlist of the child is NULL and should just\n> > wind up and avoid adding any path.\n>\n> We clearly need to not call set_cheapest(), but that's not sufficient;\n> we still fail at higher levels, as you'll see if you try the example\n> Richard found. I ended up making fe12f2f8f to fix this.\n\nThanks. That looks good.\n\n>\n> I don't especially like \"rel->nparts = 0\" as a way of disabling\n> partitionwise join; ISTM it'd be clearer and more flexible to reset\n> consider_partitionwise_join instead of destroying the data structure.\n> But that's the way it's being done elsewhere, and I didn't want to\n> tamper with it in a bug fix. I see various assertions about parent\n> and child consider_partitionwise_join flags being equal, which we\n> might have to revisit if we try to make it work that way.\n>\n\nAFAIR, consider_partitionwise_join tells whether a given partitioned\nrelation (join, higher or base) can be considered for partitionwise\njoin. set_append_rel_size() decides that based on some properties. But\nrel->nparts is indicator of whether the relation (join, higher or\nbase) is partitioned or not. If we can not generate AppendPath for a\njoin relation, it means there is no way to compute child join\nrelations and thus the relation is not partitioned. So setting\nrel->nparts = 0 is right. Probably we should add macros similar to\ndummy relation for marking and checking partitioned relation. I see\nIS_PARTITIONED_RELATION() is defined already. Maybe we could add\nmark_(un)partitioned_rel().\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 6 Dec 2022 20:07:57 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
},
{
"msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> On Mon, Dec 5, 2022 at 8:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't especially like \"rel->nparts = 0\" as a way of disabling\n>> partitionwise join ...\n\n> ... If we can not generate AppendPath for a\n> join relation, it means there is no way to compute child join\n> relations and thus the relation is not partitioned. So setting\n> rel->nparts = 0 is right.\n\nIf we had nparts > 0 before, then it is partitioned for some value\nof \"partitioned\", so I don't entirely buy this argument.\n\n> Probably we should add macros similar to\n> dummy relation for marking and checking partitioned relation. I see\n> IS_PARTITIONED_RELATION() is defined already. Maybe we could add\n> mark_(un)partitioned_rel().\n\nHiding it behind a macro with an explanatory name would be an\nimprovement, for sure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Dec 2022 09:52:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Missing MaterialPath support in reparameterize_path_by_child"
}
] |
[
{
"msg_contents": "Continuing the ideas in [0], this patch refactors the ExecGrant_Foo() \nfunctions and replaces many of them by a common function that is driven \nby the ObjectProperty table.\n\nIt would be nice to do more here, for example ExecGrant_Language(), \nwhich has additional non-generalizable checks, but I think this is \nalready a good start. For example, the work being discussed on \nprivileges on publications [1] would be able to take good advantage of this.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com\n[1]: https://www.postgresql.org/message-id/flat/20330.1652105397@antos",
"msg_date": "Fri, 2 Dec 2022 08:30:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "refactor ExecGrant_*() functions"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 08:30:55 +0100, Peter Eisentraut wrote:\n> From 200879e5edfc1ce93b7af3cbfafc1f618626cbe9 Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Fri, 2 Dec 2022 08:16:53 +0100\n> Subject: [PATCH] Refactor ExecGrant_*() functions\n>\n> Instead of half a dozen of mostly-duplicate ExecGrant_Foo() functions,\n> write one common function ExecGrant_generic() that can handle most of\n> them.\n\nI'd name it ExecGrant_common() or such instead - ExecGrant_generic() sounds\nlike it will handle arbitrary things, which it doesn't. And, as you mention,\nwe could implement e.g. ExecGrant_Language() as using ExecGrant_common() +\nadditional checks.\n\nPerhaps it'd be useful to add a callback to ExecGrant_generic() that can\nperform additional checks, so that e.g. ExecGrant_Language() can easily be\nimplemented using ExecGrant_generic()?\n\n\n> 1 file changed, 34 insertions(+), 628 deletions(-)\n\nVery neat.\n\n\nIt seems wrong that most (all?) ExecGrant_* functions have the\n foreach(cell, istmt->objects)\nloop. But that's a lot easier to address once the code has been\ndeduplicated radically.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:28:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> Continuing the ideas in [0], this patch refactors the ExecGrant_Foo()\n> functions and replaces many of them by a common function that is driven by the\n> ObjectProperty table.\n> \n> It would be nice to do more here, for example ExecGrant_Language(), which has\n> additional non-generalizable checks, but I think this is already a good start.\n> For example, the work being discussed on privileges on publications [1] would\n> be able to take good advantage of this.\n\nRight, I mostly copy and pasted the code when writing\nExecGrant_Publication(). I agree that your refactoring is very useful.\n\nAttached are my proposals for improvements. One is to avoid memory leak, the\nother tries to improve readability a little bit.\n\n> [0]:\n> https://www.postgresql.org/message-id/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com\n> [1]: https://www.postgresql.org/message-id/flat/20330.1652105397@antos\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Tue, 06 Dec 2022 09:41:58 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "On 02.12.22 18:28, Andres Freund wrote:\n> Hi,\n> \n> On 2022-12-02 08:30:55 +0100, Peter Eisentraut wrote:\n>> From 200879e5edfc1ce93b7af3cbfafc1f618626cbe9 Mon Sep 17 00:00:00 2001\n>> From: Peter Eisentraut <peter@eisentraut.org>\n>> Date: Fri, 2 Dec 2022 08:16:53 +0100\n>> Subject: [PATCH] Refactor ExecGrant_*() functions\n>>\n>> Instead of half a dozen of mostly-duplicate ExecGrant_Foo() functions,\n>> write one common function ExecGrant_generic() that can handle most of\n>> them.\n> \n> I'd name it ExecGrant_common() or such instead - ExecGrant_generic() sounds\n> like it will handle arbitrary things, which it doesn't. And, as you mention,\n> we could implement e.g. ExecGrant_Language() as using ExecGrant_common() +\n> additional checks.\n\nDone\n\n> Perhaps it'd be useful to add a callback to ExecGrant_generic() that can\n> perform additional checks, so that e.g. ExecGrant_Language() can easily be\n> implemented using ExecGrant_generic()?\n\nDone. This allows getting rid of ExecGrant_Language and ExecGrant_Type \nin addition to the previous patch.",
"msg_date": "Thu, 8 Dec 2022 10:26:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "On 06.12.22 09:41, Antonin Houska wrote:\n> Attached are my proposals for improvements. One is to avoid memory leak, the\n> other tries to improve readability a little bit.\n\nI added the readability improvement to my v2 patch. The pfree() calls \naren't necessary AFAICT.\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:27:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 06.12.22 09:41, Antonin Houska wrote:\n> > Attached are my proposals for improvements. One is to avoid memory leak, the\n> > other tries to improve readability a little bit.\n> \n> I added the readability improvement to my v2 patch. The pfree() calls aren't\n> necessary AFAICT.\n\nI see that memory contexts exist and that the amount of memory freed is not\nhuge, but my style is to free the memory explicitly if it's allocated in a\nloop.\n\nv2 looks good to me.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 10:44:39 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "On 12.12.22 10:44, Antonin Houska wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n>> On 06.12.22 09:41, Antonin Houska wrote:\n>>> Attached are my proposals for improvements. One is to avoid memory leak, the\n>>> other tries to improve readability a little bit.\n>>\n>> I added the readability improvement to my v2 patch. The pfree() calls aren't\n>> necessary AFAICT.\n\nIt's something to consider, but since this is a refactoring patch and \nthe old code didn't do it either, I think it's out of scope.\n\n> I see that memory contexts exist and that the amount of memory freed is not\n> huge, but my style is to free the memory explicitly if it's allocated in a\n> loop.\n> \n> v2 looks good to me.\n\nCommitted, thanks.\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 07:54:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: refactor ExecGrant_*() functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 12.12.22 10:44, Antonin Houska wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> > \n> >> On 06.12.22 09:41, Antonin Houska wrote:\n> >>> Attached are my proposals for improvements. One is to avoid memory leak, the\n> >>> other tries to improve readability a little bit.\n> >>\n> >> I added the readability improvement to my v2 patch. The pfree() calls aren't\n> >> necessary AFAICT.\n> \n> It's something to consider, but since this is a refactoring patch and the old\n> code didn't do it either, I think it's out of scope.\n\nWell, the reason I brought this topic up is that the old code didn't even\npalloc() those arrays. (Because they were located in the stack.)\n\n> > I see that memory contexts exist and that the amount of memory freed is not\n> > huge, but my style is to free the memory explicitly if it's allocated in a\n> > loop.\n> > v2 looks good to me.\n> \n> Committed, thanks.\n\nok, I'll post rebased \"USAGE privilege on PUBLICATION\" patch [1] soon.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n[1] https://commitfest.postgresql.org/41/3641/\n\n\n",
"msg_date": "Tue, 13 Dec 2022 16:03:57 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: refactor ExecGrant_*() functions"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nWhen a star(*) expands into multiple fields, our current\nimplementation is to generate multiple copies of the expression\nand do FieldSelects. This is very inefficient because the same\nexpression get evaluated multiple times but actually we only need\nto do it once. This is stated in ExpandRowReference().\n\nFor example:\nCREATE TABLE tbl(c1 int, c2 int, c3 int);\nCREATE TABLE src(v text);\nCREATE FUNCTION expensive_func(input text) RETURNS t;\nINSERT INTO tbl SELECT (expensive_func(v)).* FROM src;\n\nThis is effectively the same as:\nINSERT INTO tbl SELECT (expensive_func(v)).c1,\n(expensive_func(v)).c2, (expensive_func(v)).c3 FROM src;\n\nIn this form, expensive_func will be evaluated for every column in\ntbl. If tbl has hundreds of columns we are in trouble. To partially\nsolve this issue, when doing projection in ExecBuildProjectionInfo,\ninstead of generating normal steps for FieldSelects one by one, we\ncan group them by the expression(arg of FieldSelect node). Then\nevaluate the expression once to get a HeapTuple, deform it into\nfields, and then assign needed fields in one step. I've attached\npatch that introduce EEOP_FIELD_MULTI_SELECT_ASSIGN for this.\n\nWith this patch, the following query should generate only one\nNOTICE, instead of 3.\n\nCREATE TYPE proj_type AS (a int, b int, c text);\nCREATE OR REPLACE FUNCTION proj_type_func1(input text)\nRETURNS proj_type AS $$\nBEGIN\n    RAISE NOTICE 'proj_type_func called';\n    RETURN ROW(1, 2, input);\nEND\n$$ IMMUTABLE LANGUAGE PLPGSQL;\nCREATE TEMP TABLE stage_table(a text);\nINSERT INTO stage_table VALUES('aaaa');\nSELECT (proj_type_func1(a)).* FROM stage_table;\n\nThis patch is just proof of concept. Some unsolved questions I\ncan think of right now:\n- Carry some information in FieldSelect from ExpandRowReference\nto assist grouping?\n- This can only handle FuncExpr as the root node of FieldSelect\narg. What about a more general expression?\n- How to determine whether a common expression is safe to be\noptimized this way? Any unexpected side-effects?\n\nAny thoughts on this approach?\n\nBest regards,\nPeifeng Qiu",
"msg_date": "Fri, 2 Dec 2022 16:39:07 +0900",
"msg_from": "Peifeng Qiu <pgsql@qiupf.dev>",
"msg_from_op": true,
"msg_subject": "Optimize common expressions in projection evaluation"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 12:52 AM Peifeng Qiu <pgsql@qiupf.dev> wrote:\n\n> Hi hackers.\n>\n> When a star(*) expands into multiple fields, our current\n> implementation is to generate multiple copies of the expression\n> and do FieldSelects. This is very inefficient because the same\n> expression get evaluated multiple times but actually we only need\n> to do it once.\n\n\nAnd so we implemented the SQL Standard LATERAL and all was well.\n\nGiven both how long we didn't have lateral and didn't do something like\nthis, and how long lateral has been around and this hasn't really come up,\nthe need for this code seems not that great. But as to the code itself I'm\nunable to properly judge.\n\nDavid J.",
"msg_date": "Fri, 2 Dec 2022 06:29:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "> the need for this code seems not that great. But as to the code itself I'm unable to properly judge.\nA simplified version of my use case is like this:\nCREATE FOREIGN TABLE ft(rawdata json);\nINSERT INTO tbl SELECT (convert_func(rawdata)).* FROM ft;\nWe have a foreign data source that can emit json data in different\nformats. We need different\nconvert_func to extract the actual fields out. The client knows which\nfunction to use, but as\neach json may have hundreds of columns, the performance is very poor.\n\nBest regards,\nPeifeng Qiu\n\n\n",
"msg_date": "Mon, 5 Dec 2022 13:00:37 +0900",
"msg_from": "Peifeng Qiu <pgsql@qiupf.dev>",
"msg_from_op": true,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "On Sun, Dec 4, 2022 at 9:00 PM Peifeng Qiu <pgsql@qiupf.dev> wrote:\n\n> > the need for this code seems not that great. But as to the code itself\n> I'm unable to properly judge.\n> A simplified version of my use case is like this:\n> CREATE FOREIGN TABLE ft(rawdata json);\n> INSERT INTO tbl SELECT (convert_func(rawdata)).* FROM ft;\n>\n>\nWhich is properly written as the following, using lateral, which also\navoids the problem you describe:\n\nINSERT INTO tbl\nSELECT func_call.*\nFROM ft\nJOIN LATERAL convert_func(ft.rawdata) AS func_call ON true;\n\nDavid J.",
"msg_date": "Sun, 4 Dec 2022 21:08:20 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "Peifeng Qiu <pgsql@qiupf.dev> writes:\n>> the need for this code seems not that great. But as to the code itself I'm unable to properly judge.\n\n> A simplified version of my use case is like this:\n> CREATE FOREIGN TABLE ft(rawdata json);\n> INSERT INTO tbl SELECT (convert_func(rawdata)).* FROM ft;\n\nIt might be worth noting that the code as we got it from Berkeley\ncould do this scenario without multiple evaluations of convert_func().\nMemory is foggy, but I believe it involved essentially a two-level\ntargetlist. Unfortunately, the scheme was impossibly baroque and\nbuggy, so we eventually ripped it out altogether in favor of the\nmultiple-evaluation behavior you see today. I think that commit\n62e29fe2e might have been what ripped it out, but I'm not quite\nsure. It's about the right time-frame, anyway.\n\nI mention this because trying to reverse-engineer this situation\nin execExpr seems seriously ugly and inefficient, even assuming\nyou can make it non-buggy. The right solution has to involve never\nexpanding foo().* into duplicate function calls in the first place,\nwhich is the way it used to be. Maybe if you dug around in those\ntwenty-year-old changes you could get some inspiration.\n\nI tend to agree with David that LATERAL offers a good-enough\nsolution in most cases ... but it is annoying that we accept\nthis syntax and then pessimize it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Dec 2022 23:27:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "po 5. 12. 2022 v 5:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Peifeng Qiu <pgsql@qiupf.dev> writes:\n> >> the need for this code seems not that great. But as to the code itself\n> I'm unable to properly judge.\n>\n> > A simplified version of my use case is like this:\n> > CREATE FOREIGN TABLE ft(rawdata json);\n> > INSERT INTO tbl SELECT (convert_func(rawdata)).* FROM ft;\n>\n> It might be worth noting that the code as we got it from Berkeley\n> could do this scenario without multiple evaluations of convert_func().\n> Memory is foggy, but I believe it involved essentially a two-level\n> targetlist. Unfortunately, the scheme was impossibly baroque and\n> buggy, so we eventually ripped it out altogether in favor of the\n> multiple-evaluation behavior you see today. I think that commit\n> 62e29fe2e might have been what ripped it out, but I'm not quite\n> sure. It's about the right time-frame, anyway.\n>\n> I mention this because trying to reverse-engineer this situation\n> in execExpr seems seriously ugly and inefficient, even assuming\n> you can make it non-buggy. The right solution has to involve never\n> expanding foo().* into duplicate function calls in the first place,\n> which is the way it used to be. Maybe if you dug around in those\n> twenty-year-old changes you could get some inspiration.\n>\n> I tend to agree with David that LATERAL offers a good-enough\n> solution in most cases ... but it is annoying that we accept\n> this syntax and then pessimize it.\n>\n\nI agree, so there is a perfect solution like don't use .*, but on second\nhand any supported syntax should be optimized well or we should raise some\nwarning about negative performance impact.\n\nToday there are a lot of baroque technologies in the stack so it is hard to\nremember all good practices and it is hard for newbies to take this\nknowledge. We should reduce possible performance traps when it is possible.\n\nRegards\n\nPavel\n\n\n\n\n>\n> regards, tom lane\n>\n>\n>",
"msg_date": "Mon, 5 Dec 2022 05:36:44 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> po 5. 12. 2022 v 5:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> I tend to agree with David that LATERAL offers a good-enough\n>> solution in most cases ... but it is annoying that we accept\n>> this syntax and then pessimize it.\n\n> I agree, so there is a perfect solution like don't use .*, but on second\n> hand any supported syntax should be optimized well or we should raise some\n> warning about negative performance impact.\n\nYeah. I wonder if we could get the parser to automatically transform\n\n\tSELECT (foo(t.x)).* FROM tab t\n\ninto\n\n\tSELECT f.* FROM tab t, LATERAL foo(t.x) f\n\nThere are probably cases where this doesn't work or changes the\nsemantics subtly, but I suspect it could be made to work in\nmost cases of practical interest.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Dec 2022 23:47:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "On Sun, Dec 4, 2022 at 9:37 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n> po 5. 12. 2022 v 5:28 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Peifeng Qiu <pgsql@qiupf.dev> writes:\n>> >> the need for this code seems not that great. But as to the code\n>> itself I'm unable to properly judge.\n>>\n>> I mention this because trying to reverse-engineer this situation\n>> in execExpr seems seriously ugly and inefficient, even assuming\n>> you can make it non-buggy. The right solution has to involve never\n>> expanding foo().* into duplicate function calls in the first place,\n>> which is the way it used to be. Maybe if you dug around in those\n>> twenty-year-old changes you could get some inspiration.\n>>\n>> I tend to agree with David that LATERAL offers a good-enough\n>> solution in most cases ... but it is annoying that we accept\n>> this syntax and then pessimize it.\n>>\n>\n> I agree, so there is a perfect solution like don't use .*, but on second\n> hand any supported syntax should be optimized well or we should raise some\n> warning about negative performance impact.\n>\n>\nYeah, I phrased that poorly. I agree that having this problem solved would\nbe beneficial to the project. But, and this is probably a bit unfair to\nthe OP, if Tom decided to implement LATERAL after years of hearing\ncomplaints I doubted this patch was going to be acceptable. The\nbenefit/cost ratio of fixing this in code just doesn't seem to be high\nenough to try at this point. But I'd be happy to be proven wrong. And I\nreadily admit both a lack of knowledge, and that as time has passed maybe\nwe've improved the foundations enough to implement something here.\n\nOtherwise, we can maybe improve the documentation. There isn't any way\n(that the project would accept anyway) to actually warn the user at\nruntime. If we go that route we should probably just disallow the syntax\noutright.\n\nDavid J.",
"msg_date": "Sun, 4 Dec 2022 21:53:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
},
{
"msg_contents": "> Which is properly written as the following, using lateral, which also avoids the problem you describe:\n>\n> INSERT INTO tbl\n> SELECT func_call.*\n> FROM ft\n> JOIN LATERAL convert_func(ft.rawdata) AS func_call ON true;\n\nI didn't fully realize this syntax until you point out. Just try it out in\nour case and this works well. I think My problem is mostly resolved\nwithout the need of this patch. Thanks!\n\nIt's still good to do something about the normal (func(v)).* syntax\nif it's still considered legal. I will give a try to expanding it more\ncleverly and see if we can avoid the duplicate evaluation issue.\n\nPeifeng Qiu\n\n\n",
"msg_date": "Mon, 5 Dec 2022 17:51:49 +0900",
"msg_from": "Peifeng Qiu <pgsql@qiupf.dev>",
"msg_from_op": true,
"msg_subject": "Re: Optimize common expressions in projection evaluation"
}
] |
[
{
"msg_contents": "When reviewing other patch I noticed there might be an oversight for\nMemoizePath in reparameterize_path. In reparameterize_path we are\nsupposed to increase the path's parameterization to required_outer.\nHowever, AFAICS for MemoizePath we just re-create the same path thus its\nparameterization does not get increased.\n\nI'm not sure if this has consequences in practice. Just from reading\nthe codes, it seems this may cause assertion failure after the call of\nreparameterize_path.\n\n Assert(bms_equal(PATH_REQ_OUTER(path), required_outer));\n\nThanks\nRichard",
"msg_date": "Fri, 2 Dec 2022 22:29:47 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is this an oversight in reparameterizing Memoize path?"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> When reviewing other patch I noticed there might be an oversight for\n> MemoizePath in reparameterize_path. In reparameterize_path we are\n> supposed to increase the path's parameterization to required_outer.\n> However, AFAICS for MemoizePath we just re-create the same path thus its\n> parameterization does not get increased.\n\nYeah, that sure looks wrong. At minimum we should be recursively\nfixing the subpath. (It looks like doing that and re-calling\ncreate_memoize_path might be sufficient.)\n\nAccording to [1] our code coverage for reparameterize_path is just\nawful. MemoizePath in reparameterize_pathlist_by_child isn't\ntested either ...\n\n\t\t\tregards, tom lane\n\n[1] https://coverage.postgresql.org/src/backend/optimizer/util/pathnode.c.gcov.html\n\n\n",
"msg_date": "Fri, 02 Dec 2022 10:13:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Is this an oversight in reparameterizing Memoize path?"
}
] |
[
{
"msg_contents": "In the wake of b23cd185f (pushed just now), I tried adding Asserts\nto rewriteHandler.c that relkinds in RTEs don't change, as attached.\nThis blew up the regression tests immediately. On investigation,\nI find that\n\n(1) ON CONFLICT's EXCLUDED pseudo-relation is assigned\n rte->relkind = RELKIND_COMPOSITE, a rather horrid hack\n installed by commit ad2278379.\n\n(2) If a stored rule involves ON CONFLICT, then while loading\n the rule AcquireRewriteLocks overwrites that with the actual\n relkind, ie RELKIND_RELATION. Or it did without the\n attached Assert, anyway.\n\nIt appears to me that this means whatever safeguards are created\nby the use of RELKIND_COMPOSITE will fail to apply in a rule.\nMaybe that's okay because the relevant behaviors only occur at\nparse time not rewrite/planning/execution, but even to write that\nis to doubt how reliable and future-proof the assumption is.\n\nI'm inclined to think we'd be well advised to undo that aspect of\nad2278379 and solve it some other way. Maybe a new RTEKind would\nbe a better idea. Alternatively, we could drop rewriteHandler.c's\nattempts to update relkind. Theoretically that's safe now, but\nI hadn't quite wanted to just pull that trigger right away ...\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 02 Dec 2022 12:34:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus rte->relkind for EXCLUDED pseudo-relation"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-02 12:34:36 -0500, Tom Lane wrote:\n> In the wake of b23cd185f (pushed just now), I tried adding Asserts\n> to rewriteHandler.c that relkinds in RTEs don't change, as attached.\n> This blew up the regression tests immediately. On investigation,\n> I find that\n>\n> (1) ON CONFLICT's EXCLUDED pseudo-relation is assigned\n> rte->relkind = RELKIND_COMPOSITE, a rather horrid hack\n> installed by commit ad2278379.\n\nIs it that horrid? I guess we can add a full blown relkind for it, but that'd\nnot really change that we'd logic to force AcquireRewriteLocks() to keep it's\nhand off the relkind?\n\nWe don't really have a different way to represent something that looks like a\ntable's tuple, but without system columns, and that shouldn't affected by RLS\nrewrite magic. We could add a distinct RELKIND of course, but that'd afaict\nlook very similar to RELKIND_COMPOSITE_TYPE.\n\n\n> I'm inclined to think we'd be well advised to undo that aspect of\n> ad2278379 and solve it some other way. Maybe a new RTEKind would\n> be a better idea. Alternatively, we could drop rewriteHandler.c's\n> attempts to update relkind. Theoretically that's safe now, but\n> I hadn't quite wanted to just pull that trigger right away ...\n\nI think it'd be good to not have rewriteHandler.c update relkind, even if we\nundo the RELKIND_COMPOSITE aspect of ad2278379. Changing relkind seems fairly\ndangerous to me, particularly because we don't ever expect that to happen\nnow. I think it might make sense to make it an elog() rather than an Assert()\nthough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 2 Dec 2022 09:50:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Bogus rte->relkind for EXCLUDED pseudo-relation"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-02 12:34:36 -0500, Tom Lane wrote:\n>> I find that\n>> (1) ON CONFLICT's EXCLUDED pseudo-relation is assigned\n>> rte->relkind = RELKIND_COMPOSITE, a rather horrid hack\n>> installed by commit ad2278379.\n\n> Is it that horrid?\n\nIt's pretty bad IMO. You didn't even bother to update the comments\nfor RangeTblEntry to explain that\n\n char relkind; /* relation kind (see pg_class.relkind) */\n\nmight now not be the rel's relkind at all. Changing RTEKind\nwould likely be a better way, though of course we couldn't\ndo that in back branches.\n\nAnd I think that we do have an issue in the back branches.\nAccording to the commit message for ad2278379,\n\n 4) References to EXCLUDED were rewritten by the RLS machinery, as\n EXCLUDED was treated as if it were the underlying relation.\n\nThat rewriting would be post-rule-load, so it sure seems to me\nthat a rule containing EXCLUDED would be improperly subject to\nRLS rewriting. I don't have enough familiarity with RLS to come\nup with a test case, and I don't see any relevant examples in\nthe mailing list threads referenced by ad2278379, but I bet\nthat it is broken.\n\nThe back-branch fix could just be to teach rewriteHandler.c\nto not overwrite RELKIND_COMPOSITE_TYPE, perhaps. We can't\nremove the update completely because of the table-to-view case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Dec 2022 16:47:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Bogus rte->relkind for EXCLUDED pseudo-relation"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI've attached an attempt at porting a similar change to 05a7be9 [0] to\nlogical/worker.c. The bulk of the patch is lifted from the walreceiver\npatch, but I did need to add a hack for waking up after\nwal_retrieve_retry_interval to start sync workers. This hack involves a\nnew wakeup variable that process_syncing_tables_for_apply() sets.\n\nFor best results, this patch should be applied on top of [1], which is an\nattempt at fixing all the stuff that only runs within a reasonable\ntimeframe because logical worker processes currently wake up at least once\na second. With the attached patch applied, those periodic wakeups are\ngone, so we need to make sure we wake up the logical workers as needed.\n\n[0] https://postgr.es/m/CA%2BhUKGJGhX4r2LPUE3Oy9BX71Eum6PBcS8L3sJpScR9oKaTVaA%40mail.gmail.com\n[1] https://postgr.es/m/20221122004119.GA132961%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 2 Dec 2022 11:55:03 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Dear Nathan,\n\nThank you for making the patch! I tested your patch, and it basically worked well.\nAbout following part:\n\n```\n\t\t\tConfigReloadPending = false;\n \t\t\tProcessConfigFile(PGC_SIGHUP);\n+\t\t\tnow = GetCurrentTimestamp();\n+\t\t\tfor (int i = 0; i < NUM_LRW_WAKEUPS; i++)\n+\t\t\t\tLogRepWorkerComputeNextWakeup(i, now);\n+\n+\t\t\t/*\n+\t\t\t * If a wakeup time for starting sync workers was set, just set it\n+\t\t\t * to right now. It will be recalculated as needed.\n+\t\t\t */\n+\t\t\tif (next_sync_start != PG_INT64_MAX)\n+\t\t\t\tnext_sync_start = now;\n \t\t}\n```\n\nDo we have to recalculate the NextWakeup when subscriber receives SIGHUP signal?\nI think this may cause the unexpected change like following.\n\nAssuming that wal_receiver_timeout is 60s, and wal_sender_timeout on publisher is\n0s (or the network between nodes is disconnected).\nAnd we send SIGHUP signal per 20s to subscriber's postmaster.\n\nCurrently the last_recv_time is calculated when the worker accepts messages,\nand the value is used for deciding to send a ping. The worker will exit if the\nwalsender does not reply.\n\nBut in your patch, the apply worker calculates wakeup[LRW_WAKEUP_PING] and\nwakeup[LRW_WAKEUP_TERMINATE] again when it gets SIGHUP, so the worker never sends\nping with requestReply = true, and never exits due to the timeout.\n\nMy case seems to be crazy, but there may be other issues if it remains.\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Mon, 5 Dec 2022 13:00:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Mon, Dec 05, 2022 at 01:00:19PM +0000, Hayato Kuroda (Fujitsu) wrote:\n> But in your patch, the apply worker calcurates wakeup[LRW_WAKEUP_PING] and\n> wakeup[LRW_WAKEUP_TERMINATE] again when it gets SIGHUP, so the worker never sends\n> ping with requestReply = true, and never exits due to the timeout.\n\nThis is the case for the walreceiver patch, too. If a SIGHUP arrives just\nbefore we are due to ping the server, the ping wakeup time will be pushed\nback. To me, this seems unlikely to cause any issues in practice unless\nthe server is being constantly SIGHUP'd. If we wanted to fix this, we'd\nprobably need to recompute the wakeup times using the values currently set.\nI haven't looked into this too closely, but it doesn't sound tremendously\ndifficult. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:35:23 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 9 Jan 2023 09:42:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> [ v2-0001-suppress-unnecessary-wakeups-in-logical-worker.c.patch ]\n\nI took a look through this, and have a number of mostly-cosmetic\nissues:\n\n* It seems wrong that next_sync_start isn't handled as one of the\nwakeup[NUM_LRW_WAKEUPS] entries. I see that it needs to be accessed from\nanother module; but you could handle that without exposing either enum\nLogRepWorkerWakeupReason or the array, by providing getter/setter\nfunctions for process_syncing_tables_for_apply() to call.\n\n* This code is far too intimately familiar with the fact that TimestampTz\nis an int64 count of microseconds. (I'm picky about that because I\nremember that they were once something else, so I wonder if someday\nthey will be different again.) You could get rid of the PG_INT64_MAX\nusages by replacing those with the timestamp infinity macro DT_NOEND;\nand I'd even be on board with adding a less-opaque alternate name for\nthat to datatype/timestamp.h. The various magic-constant multipliers\ncould perhaps be made less magic by using TimestampTzPlusMilliseconds(). \n\n* I think it might be better to construct the enum like this:\n\n+typedef enum LogRepWorkerWakeupReason\n+{\n+\tLRW_WAKEUP_TERMINATE,\n+\tLRW_WAKEUP_PING,\n+\tLRW_WAKEUP_STATUS\n+#define NUM_LRW_WAKEUPS (LRW_WAKEUP_STATUS + 1)\n+} LogRepWorkerWakeupReason;\n\nso that you don't have to have a default: case in switches on the\nenum value. I'm more worried about somebody adding an enum value\nand missing updating a switch statement elsewhere than I am about \nsomebody adding an enum value and neglecting to update the\nimmediately-adjacent macro.\n\n* The updates of \"now\" in LogicalRepApplyLoop seem rather\nrandomly placed, and I'm not entirely convinced that we'll\nalways be using a reasonably up-to-date value. Can't we\njust update it right before each usage?\n\n* This special handling of next_sync_start seems mighty ugly:\n\n+ /* Also consider special wakeup time for starting sync workers. */\n+ if (next_sync_start < now)\n+ {\n+ /*\n+ * Instead of spinning while we wait for the sync worker to\n+ * start, wait a bit before retrying (unless there's an earlier\n+ * wakeup time).\n+ */\n+ nextWakeup = Min(now + INT64CONST(100000), nextWakeup);\n+ }\n+ else\n+ nextWakeup = Min(next_sync_start, nextWakeup);\n\nDo we really need the slop? If so, is there a reason why it\nshouldn't apply to all possible sources of nextWakeup? (It's\ngoing to be hard to fold next_sync_start into the wakeup[]\narray unless you can make this not a special case.)\n\n* It'd probably be worth enlarging the comment for\nLogRepWorkerComputeNextWakeup to explain why its API is like that,\nperhaps \"We ask the caller to pass in the value of \"now\" because\nthis frequently avoids multiple calls of GetCurrentTimestamp().\nIt had better be a reasonably-up-to-date value, though.\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Jan 2023 18:45:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 06:45:08PM -0500, Tom Lane wrote:\n> I took a look through this, and have a number of mostly-cosmetic\n> issues:\n\nThanks for the detailed review.\n\n> * It seems wrong that next_sync_start isn't handled as one of the\n> wakeup[NUM_LRW_WAKEUPS] entries. I see that it needs to be accessed from\n> another module; but you could handle that without exposing either enum\n> LogRepWorkerWakeupReason or the array, by providing getter/setter\n> functions for process_syncing_tables_for_apply() to call.\n> \n> * This code is far too intimately familiar with the fact that TimestampTz\n> is an int64 count of microseconds. (I'm picky about that because I\n> remember that they were once something else, so I wonder if someday\n> they will be different again.) You could get rid of the PG_INT64_MAX\n> usages by replacing those with the timestamp infinity macro DT_NOEND;\n> and I'd even be on board with adding a less-opaque alternate name for\n> that to datatype/timestamp.h. The various magic-constant multipliers\n> could perhaps be made less magic by using TimestampTzPlusMilliseconds(). \n> \n> * I think it might be better to construct the enum like this:\n> \n> +typedef enum LogRepWorkerWakeupReason\n> +{\n> +\tLRW_WAKEUP_TERMINATE,\n> +\tLRW_WAKEUP_PING,\n> +\tLRW_WAKEUP_STATUS\n> +#define NUM_LRW_WAKEUPS (LRW_WAKEUP_STATUS + 1)\n> +} LogRepWorkerWakeupReason;\n> \n> so that you don't have to have a default: case in switches on the\n> enum value. I'm more worried about somebody adding an enum value\n> and missing updating a switch statement elsewhere than I am about \n> somebody adding an enum value and neglecting to update the\n> immediately-adjacent macro.\n\nI did all of this in v3.\n\n> * The updates of \"now\" in LogicalRepApplyLoop seem rather\n> randomly placed, and I'm not entirely convinced that we'll\n> always be using a reasonably up-to-date value. 
Can't we\n> just update it right before each usage?\n\nThis came up for walreceiver.c, too. The concern is that\nGetCurrentTimestamp() might be rather expensive on systems without\nsomething like the vDSO. I don't know how common that is. If we can rule\nthat out, then I agree that we should just update it right before each use.\n\n> * This special handling of next_sync_start seems mighty ugly:\n> \n> + /* Also consider special wakeup time for starting sync workers. */\n> + if (next_sync_start < now)\n> + {\n> + /*\n> + * Instead of spinning while we wait for the sync worker to\n> + * start, wait a bit before retrying (unless there's an earlier\n> + * wakeup time).\n> + */\n> + nextWakeup = Min(now + INT64CONST(100000), nextWakeup);\n> + }\n> + else\n> + nextWakeup = Min(next_sync_start, nextWakeup);\n> \n> Do we really need the slop? If so, is there a reason why it\n> shouldn't apply to all possible sources of nextWakeup? (It's\n> going to be hard to fold next_sync_start into the wakeup[]\n> array unless you can make this not a special case.)\n\nI'm not positive it is absolutely necessary. AFAICT the function that\nupdates this particular wakeup time is conditionally called, so it at least\nseems theoretically possible that we could end up spinning in a tight loop\nuntil we attempt to start a new tablesync worker. But perhaps this is\nunlikely enough that we needn't worry about it.\n\nI noticed that this wakeup time wasn't being updated when the number of\nactive tablesync workers is >= max_sync_workers_per_subscription. In v3, I\ntried to handle this by setting the wakeup time to a second later for this\ncase. I think you could ordinarily depend on the tablesync worker's\nnotify_pid to wake up the apply worker, but that wouldn't work if the apply\nworker has restarted.\n\nUltimately, this particular wakeup time seems to be a special case, and I\nprobably need to think about it some more. 
If you have ideas, I'm all\nears.\n\n> * It'd probably be worth enlarging the comment for\n> LogRepWorkerComputeNextWakeup to explain why its API is like that,\n> perhaps \"We ask the caller to pass in the value of \"now\" because\n> this frequently avoids multiple calls of GetCurrentTimestamp().\n> It had better be a reasonably-up-to-date value, though.\"\n\nI did this in v3. I noticed that many of your comments also applied to the\nsimilar patch that was recently applied to walreceiver.c, so I created\nanother patch to fix that up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 25 Jan 2023 15:50:04 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 12:50 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> I did this in v3. I noticed that many of your comments also applied to the\n> similar patch that was recently applied to walreceiver.c, so I created\n> another patch to fix that up.\n\nCan we also use TimestampDifferenceMilliseconds()? It knows about\nrounding up for WaitLatch().\n\n\n",
"msg_date": "Thu, 26 Jan 2023 13:23:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 01:23:41PM +1300, Thomas Munro wrote:\n> Can we also use TimestampDifferenceMilliseconds()? It knows about\n> rounding up for WaitLatch().\n\nI think we might risk overflowing \"long\" when all the wakeup times are\nDT_NOEND:\n\n\t * This is typically used to calculate a wait timeout for WaitLatch()\n\t * or a related function. The choice of \"long\" as the result type\n\t * is to harmonize with that. It is caller's responsibility that the\n\t * input timestamps not be so far apart as to risk overflow of \"long\"\n\t * (which'd happen at about 25 days on machines with 32-bit \"long\").\n\nMaybe we can adjust that function or create a new one to deal with this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 25 Jan 2023 16:33:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I think we might risk overflowing \"long\" when all the wakeup times are\n> DT_NOEND:\n\n> \t * This is typically used to calculate a wait timeout for WaitLatch()\n> \t * or a related function. The choice of \"long\" as the result type\n> \t * is to harmonize with that. It is caller's responsibility that the\n> \t * input timestamps not be so far apart as to risk overflow of \"long\"\n> \t * (which'd happen at about 25 days on machines with 32-bit \"long\").\n\n> Maybe we can adjust that function or create a new one to deal with this.\n\nIt'd probably be reasonable to file down that sharp edge by instead\nspecifying that TimestampDifferenceMilliseconds will clamp overflowing\ndifferences to LONG_MAX. Maybe there should be a clamp on the underflow\nside too ... but should it be to LONG_MIN or to zero?\n\nBTW, as long as we're discussing roundoff gotchas, I noticed while\ntesting your previous patch that there's some inconsistency between\nTimestampDifferenceExceeds and TimestampDifferenceMilliseconds.\nWhat you submitted at [1] did this:\n\n+ if (TimestampDifferenceExceeds(last_start, now,\n+ wal_retrieve_retry_interval))\n+ ...\n+ else\n+ {\n+ long elapsed;\n+\n+ elapsed = TimestampDifferenceMilliseconds(last_start, now);\n+ wait_time = Min(wait_time, wal_retrieve_retry_interval - elapsed);\n+ }\n\nand I discovered that that could sometimes busy-wait by repeatedly\nfalling through to the \"else\", but then calculating elapsed ==\nwal_retrieve_retry_interval and hence setting wait_time to zero.\nI fixed it in the committed version [2] by always computing \"elapsed\"\nand then checking if that's strictly less than\nwal_retrieve_retry_interval, but I bet there's existing code with the\nsame issue. I think we need to take a closer look at making\nTimestampDifferenceMilliseconds' roundoff behavior match the outcome of\nTimestampDifferenceExceeds comparisons.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20230110174345.GA1292607%40nathanxps13\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=5a3a95385\n\n\n",
"msg_date": "Wed, 25 Jan 2023 21:27:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 3:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > I think we might risk overflowing \"long\" when all the wakeup times are\n> > DT_NOEND:\n>\n> > * This is typically used to calculate a wait timeout for WaitLatch()\n> > * or a related function. The choice of \"long\" as the result type\n> > * is to harmonize with that. It is caller's responsibility that the\n> > * input timestamps not be so far apart as to risk overflow of \"long\"\n> > * (which'd happen at about 25 days on machines with 32-bit \"long\").\n>\n> > Maybe we can adjust that function or create a new one to deal with this.\n>\n> It'd probably be reasonable to file down that sharp edge by instead\n> specifying that TimestampDifferenceMilliseconds will clamp overflowing\n> differences to LONG_MAX. Maybe there should be a clamp on the underflow\n> side too ... but should it be to LONG_MIN or to zero?\n\nThat got me curious... Why did WaitLatch() use long in the first\nplace? I see that it was in Heikki's original sketch[1], but I can't\nthink of any implementation reason for it. Note that the current\nimplementation of WaitLatch() et al will reach WaitEventSetWait()'s\nassertion that the timeout is <= INT_MAX, so a LONG_MAX clamp isn't\nright without further clamping. Then internally,\nWaitEventSetWaitBlock() takes an int, so there is an implicit cast to\nint. If I had to guess I'd say the reasons for long in the API are\nlost, and the WES rewrite used \"int\" because that's what poll() and\nepoll_wait() wanted.\n\n[1] https://www.postgresql.org/message-id/flat/4C72E85C.3000201%2540enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 16:57:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 3:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It'd probably be reasonable to file down that sharp edge by instead\n>> specifying that TimestampDifferenceMilliseconds will clamp overflowing\n>> differences to LONG_MAX. Maybe there should be a clamp on the underflow\n>> side too ... but should it be to LONG_MIN or to zero?\n\n> That got me curious... Why did WaitLatch() use long in the first\n> place?\n\nGood question. It's not a great choice, because of the inherent\nplatform specificity. OTOH, I'm not sure it's worth the pain\nto change now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Jan 2023 23:04:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "I wrote:\n>>> It'd probably be reasonable to file down that sharp edge by instead\n>>> specifying that TimestampDifferenceMilliseconds will clamp overflowing\n>>> differences to LONG_MAX. Maybe there should be a clamp on the underflow\n>>> side too ... but should it be to LONG_MIN or to zero?\n\nAfter looking closer, I see that TimestampDifferenceMilliseconds\nalready explicitly states that its output is intended for WaitLatch\nand friends, which makes it perfectly sane for it to clamp the result\nto [0, INT_MAX] rather than depending on the caller to not pass\nout-of-range values.\n\nI checked existing callers, and found one place in basebackup_copy.c\nthat had not read the memo about TimestampDifferenceMilliseconds\nnever returning a negative value, and another in postmaster.c that\nhad not read the memo about Min() and Max() being macros. There\nare none that are unhappy about clamping to INT_MAX, and at least\none that was already assuming it could just cast the result to int.\n\nHence, I propose the attached. I haven't gone as far as to change\nthe result type from long to int; that seems like a lot of code\nchurn (if we are to update WaitLatch and all callers to match)\nand it won't really buy anything semantically.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Jan 2023 13:54:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 01:54:08PM -0500, Tom Lane wrote:\n> After looking closer, I see that TimestampDifferenceMilliseconds\n> already explicitly states that its output is intended for WaitLatch\n> and friends, which makes it perfectly sane for it to clamp the result\n> to [0, INT_MAX] rather than depending on the caller to not pass\n> out-of-range values.\n\n+1\n\n> * This is typically used to calculate a wait timeout for WaitLatch()\n> * or a related function. The choice of \"long\" as the result type\n> - * is to harmonize with that. It is caller's responsibility that the\n> - * input timestamps not be so far apart as to risk overflow of \"long\"\n> - * (which'd happen at about 25 days on machines with 32-bit \"long\").\n> + * is to harmonize with that; furthermore, we clamp the result to at most\n> + * INT_MAX milliseconds, because that's all that WaitLatch() allows.\n> *\n> - * Both inputs must be ordinary finite timestamps (in current usage,\n> - * they'll be results from GetCurrentTimestamp()).\n> + * At least one input must be an ordinary finite timestamp, else the \"diff\"\n> + * calculation might overflow. We do support stop_time == TIMESTAMP_INFINITY,\n> + * which will result in INT_MAX wait time.\n\nI wonder if we should explicitly reject negative timestamps to eliminate\nany chance of int64 overflow, too. Alternatively, we could detect that the\noperation will overflow and return either 0 or INT_MAX, but I assume\nthere's minimal use of this function with negative timestamps.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 11:48:12 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 01:54:08PM -0500, Tom Lane wrote:\n>> - * Both inputs must be ordinary finite timestamps (in current usage,\n>> - * they'll be results from GetCurrentTimestamp()).\n>> + * At least one input must be an ordinary finite timestamp, else the \"diff\"\n>> + * calculation might overflow. We do support stop_time == TIMESTAMP_INFINITY,\n>> + * which will result in INT_MAX wait time.\n\n> I wonder if we should explicitly reject negative timestamps to eliminate\n> any chance of int64 overflow, too.\n\nHmm. I'm disinclined to add an assumption that the epoch is in the past,\nbut I take your point that the subtraction would overflow with\nTIMESTAMP_INFINITY and a negative finite timestamp. Maybe we should\nmake use of pg_sub_s64_overflow()?\n\nBTW, I just noticed that the adjacent function TimestampDifference\nhas a lot of callers that would be much happier using\nTimestampDifferenceMilliseconds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Jan 2023 15:04:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 03:04:30PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I wonder if we should explicitly reject negative timestamps to eliminate\n>> any chance of int64 overflow, too.\n> \n> Hmm. I'm disinclined to add an assumption that the epoch is in the past,\n> but I take your point that the subtraction would overflow with\n> TIMESTAMP_INFINITY and a negative finite timestamp. Maybe we should\n> make use of pg_sub_s64_overflow()?\n\nThat would be my vote. I think the 'diff <= 0' check might need to be\nreplaced with something like 'start_time > stop_time' so that we return 0\nfor the underflow case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 12:23:01 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 03:04:30PM -0500, Tom Lane wrote:\n>> Hmm. I'm disinclined to add an assumption that the epoch is in the past,\n>> but I take your point that the subtraction would overflow with\n>> TIMESTAMP_INFINITY and a negative finite timestamp. Maybe we should\n>> make use of pg_sub_s64_overflow()?\n\n> That would be my vote. I think the 'diff <= 0' check might need to be\n> replaced with something like 'start_time > stop_time' so that we return 0\n> for the underflow case.\n\nRight, so more like this.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 26 Jan 2023 16:09:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 04:09:51PM -0500, Tom Lane wrote:\n> Right, so more like this.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 13:22:55 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 04:09:51PM -0500, Tom Lane wrote:\n>> Right, so more like this.\n\n> LGTM\n\nThanks, pushed.\n\nReturning to the prior patch ... I don't much care for this:\n\n+ /* Maybe there will be a free slot in a second... */\n+ retry_time = TimestampTzPlusSeconds(now, 1);\n+ LogRepWorkerUpdateSyncStartWakeup(retry_time);\n\nWe're not moving the goalposts very far on unnecessary wakeups if\nwe have to do that. Do we need to get a wakeup on sync slot free?\nAlthough having to send that to every worker seems ugly. Maybe this\nis being done in the wrong place and we need to find a way to get\nthe launcher to handle it.\n\nAs for the business about process_syncing_tables being only called\nconditionally, I was already of the opinion that the way it's\ngetting called is loony. Why isn't it called from LogicalRepApplyLoop\n(and noplace else)? With more certainty about when it runs, we might\nnot need so many kluges elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Jan 2023 17:37:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 4:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thanks, pushed.\n>\n> Returning to the prior patch ... I don't much care for this:\n>\n> + /* Maybe there will be a free slot in a second... */\n> + retry_time = TimestampTzPlusSeconds(now, 1);\n> + LogRepWorkerUpdateSyncStartWakeup(retry_time);\n>\n> We're not moving the goalposts very far on unnecessary wakeups if\n> we have to do that. Do we need to get a wakeup on sync slot free?\n> Although having to send that to every worker seems ugly. Maybe this\n> is being done in the wrong place and we need to find a way to get\n> the launcher to handle it.\n>\n> As for the business about process_syncing_tables being only called\n> conditionally, I was already of the opinion that the way it's\n> getting called is loony. Why isn't it called from LogicalRepApplyLoop\n> (and noplace else)?\n\nCurrently, it seems to be called after processing transaction end\ncommands or when we are not in any transaction. As per my\nunderstanding, that is when we can ensure the sync between tablesync\nand apply worker. For example, say when tablesync worker is doing the\ninitial copy, the apply worker went ahead and processed some\nadditional xacts (WAL), now the tablesync worker needs to process all\nthose transactions after initial sync and before it can mark the state\nas SYNCDONE. So that can be checked only at transaction boundaries.\n\nHowever, it is not very clear to me why the patch needs the below code.\n@@ -3615,7 +3639,33 @@ LogicalRepApplyLoop(XLogRecPtr last_received)\n if (!dlist_is_empty(&lsn_mapping))\n wait_time = WalWriterDelay;\n else\n- wait_time = NAPTIME_PER_CYCLE;\n+ {\n+ TimestampTz nextWakeup = DT_NOEND;\n+\n+ /*\n+ * Since process_syncing_tables() is called conditionally, the\n+ * tablesync worker start wakeup time might be in the past, and we\n+ * can't know for sure when it will be updated again. Rather than\n+ * spinning in a tight loop in this case, bump this wakeup time by\n+ * a second.\n+ */\n+ now = GetCurrentTimestamp();\n+ if (wakeup[LRW_WAKEUP_SYNC_START] < now)\n+ wakeup[LRW_WAKEUP_SYNC_START] =\nTimestampTzPlusSeconds(wakeup[LRW_WAKEUP_SYNC_START], 1);\n\nDo we see unnecessary wakeups without this, or delay in sync?\n\nBTW, do we need to do something about wakeups in\nwait_for_relation_state_change()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 28 Jan 2023 10:26:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Sat, Jan 28, 2023 at 10:26:25AM +0530, Amit Kapila wrote:\n> On Fri, Jan 27, 2023 at 4:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Returning to the prior patch ... I don't much care for this:\n>>\n>> + /* Maybe there will be a free slot in a second... */\n>> + retry_time = TimestampTzPlusSeconds(now, 1);\n>> + LogRepWorkerUpdateSyncStartWakeup(retry_time);\n>>\n>> We're not moving the goalposts very far on unnecessary wakeups if\n>> we have to do that. Do we need to get a wakeup on sync slot free?\n>> Although having to send that to every worker seems ugly. Maybe this\n>> is being done in the wrong place and we need to find a way to get\n>> the launcher to handle it.\n\nIt might be feasible to set up a before_shmem_exit() callback that wakes up\nthe apply worker (like is already done for the launcher). I think the\napply worker is ordinarily notified via the tablesync worker's notify_pid,\nbut AFAICT there's no guarantee that the apply worker hasn't restarted with\na different PID.\n\n> + /*\n> + * Since process_syncing_tables() is called conditionally, the\n> + * tablesync worker start wakeup time might be in the past, and we\n> + * can't know for sure when it will be updated again. Rather than\n> + * spinning in a tight loop in this case, bump this wakeup time by\n> + * a second.\n> + */\n> + now = GetCurrentTimestamp();\n> + if (wakeup[LRW_WAKEUP_SYNC_START] < now)\n> + wakeup[LRW_WAKEUP_SYNC_START] =\n> TimestampTzPlusSeconds(wakeup[LRW_WAKEUP_SYNC_START], 1);\n> \n> Do we see unnecessary wakeups without this, or delay in sync?\n\nI haven't looked too closely to see whether busy loops are likely in\npractice.\n\n> BTW, do we need to do something about wakeups in\n> wait_for_relation_state_change()?\n\n... and wait_for_worker_state_change(), and copy_read_data(). From a quick\nglance, it looks like fixing these would be a more invasive change. TBH\nI'm beginning to wonder whether all this is really worth it to prevent\nwaking up once per second.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 31 Jan 2023 16:05:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 5:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sat, Jan 28, 2023 at 10:26:25AM +0530, Amit Kapila wrote:\n> > On Fri, Jan 27, 2023 at 4:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Returning to the prior patch ... I don't much care for this:\n> >>\n> >> + /* Maybe there will be a free slot in a second... */\n> >> + retry_time = TimestampTzPlusSeconds(now, 1);\n> >> + LogRepWorkerUpdateSyncStartWakeup(retry_time);\n> >>\n> >> We're not moving the goalposts very far on unnecessary wakeups if\n> >> we have to do that. Do we need to get a wakeup on sync slot free?\n> >> Although having to send that to every worker seems ugly. Maybe this\n> >> is being done in the wrong place and we need to find a way to get\n> >> the launcher to handle it.\n>\n> It might be feasible to set up a before_shmem_exit() callback that wakes up\n> the apply worker (like is already done for the launcher). I think the\n> apply worker is ordinarily notified via the tablesync worker's notify_pid,\n> but AFAICT there's no guarantee that the apply worker hasn't restarted with\n> a different PID.\n>\n> > + /*\n> > + * Since process_syncing_tables() is called conditionally, the\n> > + * tablesync worker start wakeup time might be in the past, and we\n> > + * can't know for sure when it will be updated again. Rather than\n> > + * spinning in a tight loop in this case, bump this wakeup time by\n> > + * a second.\n> > + */\n> > + now = GetCurrentTimestamp();\n> > + if (wakeup[LRW_WAKEUP_SYNC_START] < now)\n> > + wakeup[LRW_WAKEUP_SYNC_START] =\n> > TimestampTzPlusSeconds(wakeup[LRW_WAKEUP_SYNC_START], 1);\n> >\n> > Do we see unnecessary wakeups without this, or delay in sync?\n>\n> I haven't looked too cloesly to see whether busy loops are likely in\n> practice.\n>\n> > BTW, do we need to do something about wakeups in\n> > wait_for_relation_state_change()?\n>\n> ... and wait_for_worker_state_change(), and copy_read_data(). 
From a quick\n> glance, it looks like fixing these would be a more invasive change.\n>\n\nWhat kind of logic do you have in mind to avoid waking up once per\nsecond in those cases?\n\n> TBH\n> I'm beginning to wonder whether all this is really worth it to prevent\n> waking up once per second.\n>\n\nIf we can't do it for all cases, do you see any harm in doing it for\ncases where we can achieve it without adding much complexity? We can\nprobably add comments for others so that if someone else has better\nideas in the future we can deal with those as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 15:30:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "I've attached a minimally-updated patch that doesn't yet address the bigger\ntopics under discussion.\n\nOn Thu, Mar 16, 2023 at 03:30:37PM +0530, Amit Kapila wrote:\n> On Wed, Feb 1, 2023 at 5:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Sat, Jan 28, 2023 at 10:26:25AM +0530, Amit Kapila wrote:\n>> > BTW, do we need to do something about wakeups in\n>> > wait_for_relation_state_change()?\n>>\n>> ... and wait_for_worker_state_change(), and copy_read_data(). From a quick\n>> glance, it looks like fixing these would be a more invasive change.\n> \n> What kind of logic do you have in mind to avoid waking up once per\n> second in those cases?\n\nI haven't looked into this too much yet. I'd probably try out Tom's\nsuggestions from upthread [0] next and see if those can be applied here,\ntoo.\n\n>> TBH\n>> I'm beginning to wonder whether all this is really worth it to prevent\n>> waking up once per second.\n> \n> If we can't do it for all cases, do you see any harm in doing it for\n> cases where we can achieve it without adding much complexity? We can\n> probably add comments for others so that if someone else has better\n> ideas in the future we can deal with those as well.\n\nI don't think there's any harm, but I'm also not sure it does a whole lot\nof good. At the very least, I think we should figure out something better\nthan the process_syncing_tables() hacks before taking this patch seriously.\n\n[0] https://postgr.es/m/3220831.1674772625%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Mar 2023 17:22:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 5:52 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> I've attached a minimally-updated patch that doesn't yet address the bigger\n> topics under discussion.\n>\n> On Thu, Mar 16, 2023 at 03:30:37PM +0530, Amit Kapila wrote:\n> > On Wed, Feb 1, 2023 at 5:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> On Sat, Jan 28, 2023 at 10:26:25AM +0530, Amit Kapila wrote:\n> >> > BTW, do we need to do something about wakeups in\n> >> > wait_for_relation_state_change()?\n> >>\n> >> ... and wait_for_worker_state_change(), and copy_read_data(). From a quick\n> >> glance, it looks like fixing these would be a more invasive change.\n> >\n> > What kind of logic do you have in mind to avoid waking up once per\n> > second in those cases?\n>\n> I haven't looked into this too much yet. I'd probably try out Tom's\n> suggestions from upthread [0] next and see if those can be applied here,\n> too.\n>\n\nFor the clean exit of tablesync worker, we already wake up the apply\nworker in finish_sync_worker(). You probably want to deal with error\ncases or is there something else on your mind? BTW, for\nwait_for_worker_state_change(), one possibility is to wake all the\nsync workers when apply worker exits but not sure if that is a very\ngood idea.\n\nFew minor comments:\n=====================\n1.\n- if (wal_receiver_timeout > 0)\n+ now = GetCurrentTimestamp();\n+ if (now >= wakeup[LRW_WAKEUP_TERMINATE])\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONNECTION_FAILURE),\n+ errmsg(\"terminating logical replication worker due to timeout\")));\n+\n+ /* Check to see if it's time for a ping. */\n+ if (now >= wakeup[LRW_WAKEUP_PING])\n {\n- TimestampTz now = GetCurrentTimestamp();\n\nPreviously, we use to call GetCurrentTimestamp() only when\nwal_receiver_timeout > 0 but we ignore that part now. 
It may not\nmatter much but if possible let's avoid calling GetCurrentTimestamp()\nat additional places.\n\n2.\n+ for (int i = 0; i < NUM_LRW_WAKEUPS; i++)\n+ LogRepWorkerComputeNextWakeup(i, now);\n+\n+ /*\n+ * LogRepWorkerComputeNextWakeup() will have cleared the tablesync\n+ * worker start wakeup time, so we might not wake up to start a new\n+ * worker at the appropriate time. To deal with this, we set the\n+ * wakeup time to right now so that\n+ * process_syncing_tables_for_apply() recalculates it as soon as\n+ * possible.\n+ */\n+ if (!am_tablesync_worker())\n+ LogRepWorkerUpdateSyncStartWakeup(now);\n\nCan't we avoid clearing syncstart time in the first place?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 14:46:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "> On 17 Mar 2023, at 10:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Few minor comments:\n\nHave you had a chance to address the comments raised by Amit in this thread?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 09:48:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
},
{
"msg_contents": "On Tue, Jul 04, 2023 at 09:48:23AM +0200, Daniel Gustafsson wrote:\n>> On 17 Mar 2023, at 10:16, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n>> Few minor comments:\n> \n> Have you had a chance to address the comments raised by Amit in this thread?\n\nNot yet, sorry.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Jul 2023 11:37:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: suppressing useless wakeups in logical/worker.c"
}
]
[
{
"msg_contents": "Hello,\n\nI noticed a few places in the new foreign key code where a comment says \n\"the ON DELETE SET NULL/DELETE clause\". I believe it should say \"ON \nDELETE SET NULL/DEFAULT\".\n\nThese comments were added in d6f96ed94e7, \"Allow specifying column list \nfor foreign key ON DELETE SET actions.\" Here is a patch to correct them. \nI don't think you usually create a commitfest entry for tiny fixes like \nthis, right? But if you'd like one please let me know and I'll add it.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Fri, 2 Dec 2022 14:18:51 -0800",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Think-o in foreign key comments"
},
{
"msg_contents": "2022年12月3日(土) 7:19 Paul Jungwirth <pj@illuminatedcomputing.com>:\n>\n> Hello,\n>\n> I noticed a few places in the new foreign key code where a comment says\n> \"the ON DELETE SET NULL/DELETE clause\". I believe it should say \"ON\n> DELETE SET NULL/DEFAULT\".\n>\n> These comments were added in d6f96ed94e7, \"Allow specifying column list\n> for foreign key ON DELETE SET actions.\" Here is a patch to correct them.\n\nLGTM.\n\nI do notice the same patch adds the function \"validateFkOnDeleteSetColumns\"\nbut the name in the comment preceding it is \"validateFkActionSetColumns\",\nmight as well fix that the same time.\n\n> I don't think you usually create a commitfest entry for tiny fixes like\n> this, right? But if you'd like one please let me know and I'll add it.\n\n From experience usually a committer will pick up trivial fixes like this\nwithin a few days, but if it escapes notice for more than a couple of weeks\na reminder and/or CF entry might be useful to make sure it doesn't get\nforgotten.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Sat, 3 Dec 2022 13:59:30 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Think-o in foreign key comments"
},
{
"msg_contents": "On 03.12.22 05:59, Ian Lawrence Barwick wrote:\n> 2022年12月3日(土) 7:19 Paul Jungwirth <pj@illuminatedcomputing.com>:\n>> I noticed a few places in the new foreign key code where a comment says\n>> \"the ON DELETE SET NULL/DELETE clause\". I believe it should say \"ON\n>> DELETE SET NULL/DEFAULT\".\n>>\n>> These comments were added in d6f96ed94e7, \"Allow specifying column list\n>> for foreign key ON DELETE SET actions.\" Here is a patch to correct them.\n> \n> LGTM.\n> \n> I do notice the same patch adds the function \"validateFkOnDeleteSetColumns\"\n> but the name in the comment preceding it is \"validateFkActionSetColumns\",\n> might as well fix that the same time.\n\nCommitted with that addition and backpatched to PG15.\n\n\n\n",
"msg_date": "Wed, 7 Dec 2022 17:11:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Think-o in foreign key comments"
}
]
[
{
"msg_contents": "Hello,\n\nWe have statement_timeout, idle_in_transaction_timeout,\nidle_session_timeout and many more! But we have no\ntransaction_timeout. I've skimmed thread [0,1] about existing timeouts\nand found no contraindications to implement transaction_timeout.\n\nNikolay asked me if I can prototype the feature for testing by him,\nand it seems straightforward. Please find attached. If it's not known\nto be a bad idea - we'll work on it.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/763A0689-F189-459E-946F-F0EC4458980B%40hotmail.com",
"msg_date": "Fri, 2 Dec 2022 21:18:40 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 9:18 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n\n> Hello,\n>\n> We have statement_timeout, idle_in_transaction_timeout,\n> idle_session_timeout and many more! But we have no\n> transaction_timeout. I've skimmed thread [0,1] about existing timeouts\n> and found no contraindications to implement transaction_timeout.\n>\n> Nikolay asked me if I can prototype the feature for testing by him,\n> and it seems straightforward. Please find attached. If it's not known\n> to be a bad idea - we'll work on it.\n>\n\nThanks!! It was a super quick reaction to my proposal Honestly, I was\nthinking about it for several years, wondering why it's still not\nimplemented.\n\nThe reasons to have it should be straightforward – here are a couple of\nthem I can see.\n\nFirst one. In the OLTP context, we usually have:\n- a hard timeout set in application server\n- a hard timeout set in HTTP server\n- users not willing to wait more than several seconds – and almost never\nbeing ready to wait for more than 30-60s.\n\nIf Postgres allows longer transactions (it does since we cannot reliably\nlimit their duration now, it's always virtually unlimited), it might be\ndoing the work that nobody is waiting / is needing anymore, speeding\nresources, affecting health, etc.\n\nWhy we cannot limit transaction duration reliably? The existing timeouts\n(namely, statement_timeout + idle_session_timeout) don't protect from\nhaving transactions consisting of a series of small statements and short\npauses between them. If such behavior happens (e.g., a long series of fast\nUPDATEs in a loop). It can be dangerous, affecting general DB health (bloat\nissues). 
This is reason number two – DBAs might want to decide to minimize\nthe cases of long transactions, setting transaction limits globally (and\nallowing to set it locally for particular sessions or for some users in\nrare cases).\n\nSpeaking of the patch – I just tested it (gitpod:\nhttps://gitpod.io/#https://gitlab.com/NikolayS/postgres/tree/transaction_timeout);\nit applies, works as expected for single-statement transactions:\n\npostgres=# set transaction_timeout to '2s';\nSET\npostgres=# select pg_sleep(3);\nERROR: canceling transaction due to transaction timeout\n\nBut it fails in the \"worst\" case I've described above – a series of small\nstatements:\n\npostgres=# set transaction_timeout to '2s';\nSET\npostgres=# begin; select pg_sleep(1); select pg_sleep(1); select\npg_sleep(1); select pg_sleep(1); select pg_sleep(1); commit;\nBEGIN\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\nCOMMIT\npostgres=#\n\nOn Fri, Dec 2, 2022 at 9:18 PM Andrey Borodin <amborodin86@gmail.com> wrote:Hello,\n\nWe have statement_timeout, idle_in_transaction_timeout,\nidle_session_timeout and many more! But we have no\ntransaction_timeout. I've skimmed thread [0,1] about existing timeouts\nand found no contraindications to implement transaction_timeout.\n\nNikolay asked me if I can prototype the feature for testing by him,\nand it seems straightforward. Please find attached. If it's not known\nto be a bad idea - we'll work on it.Thanks!! It was a super quick reaction to my proposal Honestly, I was thinking about it for several years, wondering why it's still not implemented.The reasons to have it should be straightforward – here are a couple of them I can see.First one. 
In the OLTP context, we usually have:- a hard timeout set in application server- a hard timeout set in HTTP server- users not willing to wait more than several seconds – and almost never being ready to wait for more than 30-60s.If Postgres allows longer transactions (it does since we cannot reliably limit their duration now, it's always virtually unlimited), it might be doing the work that nobody is waiting / is needing anymore, speeding resources, affecting health, etc.Why we cannot limit transaction duration reliably? The existing timeouts (namely, statement_timeout + idle_session_timeout) don't protect from having transactions consisting of a series of small statements and short pauses between them. If such behavior happens (e.g., a long series of fast UPDATEs in a loop). It can be dangerous, affecting general DB health (bloat issues). This is reason number two – DBAs might want to decide to minimize the cases of long transactions, setting transaction limits globally (and allowing to set it locally for particular sessions or for some users in rare cases).Speaking of the patch – I just tested it (gitpod: https://gitpod.io/#https://gitlab.com/NikolayS/postgres/tree/transaction_timeout); it applies, works as expected for single-statement transactions:postgres=# set transaction_timeout to '2s';SETpostgres=# select pg_sleep(3);ERROR: canceling transaction due to transaction timeoutBut it fails in the \"worst\" case I've described above – a series of small statements:postgres=# set transaction_timeout to '2s';SETpostgres=# begin; select pg_sleep(1); select pg_sleep(1); select pg_sleep(1); select pg_sleep(1); select pg_sleep(1); commit;BEGIN pg_sleep ---------- (1 row) pg_sleep ---------- (1 row) pg_sleep ---------- (1 row) pg_sleep ---------- (1 row) pg_sleep ---------- (1 row)COMMITpostgres=#",
"msg_date": "Fri, 2 Dec 2022 22:59:18 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 10:59 PM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n>\n> But it fails in the \"worst\" case I've described above – a series of small statements:\n\nFixed. Added test for this.\n\nOpen questions:\n1. Docs\n2. Order of reporting if happened lock_timeout, statement_timeout, and\ntransaction_timeout simultaneously. Currently there's a lot of code\naround this...\n\nThanks!\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 3 Dec 2022 09:41:04 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Sat, Dec 3, 2022 at 9:41 AM Andrey Borodin <amborodin86@gmail.com> wrote:\n\n> Fixed. Added test for this.\n>\n\nThanks! Tested (gitpod:\nhttps://gitpod.io/#https://gitlab.com/NikolayS/postgres/tree/transaction_timeout-v2\n),\n\nworks as expected.\n\nOn Sat, Dec 3, 2022 at 9:41 AM Andrey Borodin <amborodin86@gmail.com> wrote:\nFixed. Added test for this. Thanks! Tested (gitpod: https://gitpod.io/#https://gitlab.com/NikolayS/postgres/tree/transaction_timeout-v2), works as expected.",
"msg_date": "Mon, 5 Dec 2022 14:17:29 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nTested, works as expected;\r\n\r\ndocumentation is not yet added",
"msg_date": "Mon, 05 Dec 2022 22:28:35 +0000",
"msg_from": "Nikolay Samokhvalov <nikolay@samokhvalov.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-03 09:41:04 -0800, Andrey Borodin wrote:\n> @@ -2720,6 +2723,7 @@ finish_xact_command(void)\n> \n> \tif (xact_started)\n> \t{\n> +\n> \t\tCommitTransactionCommand();\n> \n> #ifdef MEMORY_CONTEXT_CHECKING\n\nSpurious newline added.\n\n\n> @@ -4460,6 +4473,10 @@ PostgresMain(const char *dbname, const char *username)\n> \t\t\t\t\tenable_timeout_after(IDLE_SESSION_TIMEOUT,\n> \t\t\t\t\t\t\t\t\t\t IdleSessionTimeout);\n> \t\t\t\t}\n> +\n> +\n> +\t\t\t\tif (get_timeout_active(TRANSACTION_TIMEOUT))\n> +\t\t\t\t\tdisable_timeout(TRANSACTION_TIMEOUT, false);\n> \t\t\t}\n> \n> \t\t\t/* Report any recently-changed GUC options */\n\nToo many newlines added.\n\n\nI'm a bit worried about adding evermore branches and function calls for\nthe processing of single statements. We already spend a noticable\npercentage of the cycles for a single statement in PostgresMain(), this\nadds additional overhead.\n\nI'm somewhat inclined to think that we need some redesign here before we\nadd more overhead.\n\n\n> @@ -1360,6 +1363,16 @@ IdleInTransactionSessionTimeoutHandler(void)\n> \tSetLatch(MyLatch);\n> }\n> \n> +static void\n> +TransactionTimeoutHandler(void)\n> +{\n> +#ifdef HAVE_SETSID\n> +\t/* try to signal whole process group */\n> +\tkill(-MyProcPid, SIGINT);\n> +#endif\n> +\tkill(MyProcPid, SIGINT);\n> +}\n> +\n\nWhy does this use signals instead of just setting the latch like\nIdleInTransactionSessionTimeoutHandler() etc?\n\n\n\n> diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c\n> index 0081873a72..5229fe3555 100644\n> --- a/src/bin/pg_dump/pg_backup_archiver.c\n> +++ b/src/bin/pg_dump/pg_backup_archiver.c\n> @@ -3089,6 +3089,7 @@ _doSetFixedOutputState(ArchiveHandle *AH)\n> \tahprintf(AH, \"SET statement_timeout = 0;\\n\");\n> \tahprintf(AH, \"SET lock_timeout = 0;\\n\");\n> \tahprintf(AH, \"SET idle_in_transaction_session_timeout = 0;\\n\");\n> +\tahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n\nHm - 
why is that the right thing to do?\n\n\n\n> diff --git a/src/test/isolation/specs/timeouts.spec b/src/test/isolation/specs/timeouts.spec\n> index c747b4ae28..a7f27811c7 100644\n> --- a/src/test/isolation/specs/timeouts.spec\n> +++ b/src/test/isolation/specs/timeouts.spec\n> @@ -23,6 +23,9 @@ step sto\t{ SET statement_timeout = '10ms'; }\n> step lto\t{ SET lock_timeout = '10ms'; }\n> step lsto\t{ SET lock_timeout = '10ms'; SET statement_timeout = '10s'; }\n> step slto\t{ SET lock_timeout = '10s'; SET statement_timeout = '10ms'; }\n> +step tto\t{ SET transaction_timeout = '10ms'; }\n> +step sleep0\t{ SELECT pg_sleep(0.0001) }\n> +step sleep10\t{ SELECT pg_sleep(0.01) }\n> step locktbl\t{ LOCK TABLE accounts; }\n> step update\t{ DELETE FROM accounts WHERE accountid = 'checking'; }\n> teardown\t{ ABORT; }\n> @@ -47,3 +50,5 @@ permutation wrtbl lto update(*)\n> permutation wrtbl lsto update(*)\n> # statement timeout expires first, row-level lock\n> permutation wrtbl slto update(*)\n> +# transaction timeout\n> +permutation tto sleep0 sleep0 sleep10(*)\n> \\ No newline at end of file\n\nI don't think this is quite sufficient. I think the test should verify\nthat transaction timeout interacts correctly with statement timeout /\nidle in tx timeout.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 15:07:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Thanks for looking into this Andres!\n\nOn Mon, Dec 5, 2022 at 3:07 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> I'm a bit worried about adding evermore branches and function calls for\n> the processing of single statements. We already spend a noticable\n> percentage of the cycles for a single statement in PostgresMain(), this\n> adds additional overhead.\n>\n> I'm somewhat inclined to think that we need some redesign here before we\n> add more overhead.\n>\nWe can cap statement_timeout\\idle_session_timeout by the budget of\ntransaction_timeout left.\nEither way we can do batch function enable_timeouts() instead\nenable_timeout_after().\n\nDoes anything of it make sense?\n\n>\n> > @@ -1360,6 +1363,16 @@ IdleInTransactionSessionTimeoutHandler(void)\n> > SetLatch(MyLatch);\n> > }\n> >\n> > +static void\n> > +TransactionTimeoutHandler(void)\n> > +{\n> > +#ifdef HAVE_SETSID\n> > + /* try to signal whole process group */\n> > + kill(-MyProcPid, SIGINT);\n> > +#endif\n> > + kill(MyProcPid, SIGINT);\n> > +}\n> > +\n>\n> Why does this use signals instead of just setting the latch like\n> IdleInTransactionSessionTimeoutHandler() etc?\n\nI just copied statement_timeout behaviour. 
As I understand this\nimplementation is prefered if the timeout can catch the backend\nrunning at full steam.\n\n> > diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c\n> > index 0081873a72..5229fe3555 100644\n> > --- a/src/bin/pg_dump/pg_backup_archiver.c\n> > +++ b/src/bin/pg_dump/pg_backup_archiver.c\n> > @@ -3089,6 +3089,7 @@ _doSetFixedOutputState(ArchiveHandle *AH)\n> > ahprintf(AH, \"SET statement_timeout = 0;\\n\");\n> > ahprintf(AH, \"SET lock_timeout = 0;\\n\");\n> > ahprintf(AH, \"SET idle_in_transaction_session_timeout = 0;\\n\");\n> > + ahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n>\n> Hm - why is that the right thing to do?\nBecause transaction_timeout has effects of statement_timeout.\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Mon, 5 Dec 2022 15:41:29 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-05 15:41:29 -0800, Andrey Borodin wrote:\n> Thanks for looking into this Andres!\n>\n> On Mon, Dec 5, 2022 at 3:07 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > I'm a bit worried about adding evermore branches and function calls for\n> > the processing of single statements. We already spend a noticable\n> > percentage of the cycles for a single statement in PostgresMain(), this\n> > adds additional overhead.\n> >\n> > I'm somewhat inclined to think that we need some redesign here before we\n> > add more overhead.\n> >\n> We can cap statement_timeout\\idle_session_timeout by the budget of\n> transaction_timeout left.\n\nI don't know what you mean by that.\n\n\n> @@ -3277,6 +3282,7 @@ ProcessInterrupts(void)\n> \t\t */\n> \t\tlock_timeout_occurred = get_timeout_indicator(LOCK_TIMEOUT, true);\n> \t\tstmt_timeout_occurred = get_timeout_indicator(STATEMENT_TIMEOUT, true);\n> +\t\ttx_timeout_occurred = get_timeout_indicator(TRANSACTION_TIMEOUT, true);\n> \n> \t\t/*\n> \t\t * If both were set, we want to report whichever timeout completed\n\nThis doesn't update the preceding comment, btw, which now reads oddly:\n\n\t\t/*\n\t\t * If LOCK_TIMEOUT and STATEMENT_TIMEOUT indicators are both set, we\n\t\t * need to clear both, so always fetch both.\n\t\t */\n\n\n\n> > > @@ -1360,6 +1363,16 @@ IdleInTransactionSessionTimeoutHandler(void)\n> > > SetLatch(MyLatch);\n> > > }\n> > >\n> > > +static void\n> > > +TransactionTimeoutHandler(void)\n> > > +{\n> > > +#ifdef HAVE_SETSID\n> > > + /* try to signal whole process group */\n> > > + kill(-MyProcPid, SIGINT);\n> > > +#endif\n> > > + kill(MyProcPid, SIGINT);\n> > > +}\n> > > +\n> >\n> > Why does this use signals instead of just setting the latch like\n> > IdleInTransactionSessionTimeoutHandler() etc?\n>\n> I just copied statement_timeout behaviour. As I understand this\n> implementation is prefered if the timeout can catch the backend\n> running at full steam.\n\nHm. 
I'm not particularly convinced by that code. Be that as it may, I\ndon't think it's a good idea to have one more copy of this code. At\nleast the patch should wrap the signalling code in a helper.\n\n\nFWIW, the HAVE_SETSID code originates in:\n\ncommit 3ad0728c817bf8abd2c76bd11d856967509b307c\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2006-11-21 20:59:53 +0000\n\n On systems that have setsid(2) (which should be just about everything except\n Windows), arrange for each postmaster child process to be its own process\n group leader, and deliver signals SIGINT, SIGTERM, SIGQUIT to the whole\n process group not only the direct child process. This provides saner behavior\n for archive and recovery scripts; in particular, it's possible to shut down a\n warm-standby recovery server using \"pg_ctl stop -m immediate\", since delivery\n of SIGQUIT to the startup subprocess will result in killing the waiting\n recovery_command. Also, this makes Query Cancel and statement_timeout apply\n to scripts being run from backends via system(). (There is no support in the\n core backend for that, but it's widely done using untrusted PLs.) 
Per gripe\n from Stephen Harris and subsequent discussion.\n\n\n\n> > > diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c\n> > > index 0081873a72..5229fe3555 100644\n> > > --- a/src/bin/pg_dump/pg_backup_archiver.c\n> > > +++ b/src/bin/pg_dump/pg_backup_archiver.c\n> > > @@ -3089,6 +3089,7 @@ _doSetFixedOutputState(ArchiveHandle *AH)\n> > > ahprintf(AH, \"SET statement_timeout = 0;\\n\");\n> > > ahprintf(AH, \"SET lock_timeout = 0;\\n\");\n> > > ahprintf(AH, \"SET idle_in_transaction_session_timeout = 0;\\n\");\n> > > + ahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n> >\n> > Hm - why is that the right thing to do?\n> Because transaction_timeout has effects of statement_timeout.\n\nI guess it's just following precedent - but it seems a bit presumptuous\nto just disable safety settings a DBA might have set up. That makes some\nsense for e.g. idle_in_transaction_session_timeout, because I think\ne.g. parallel backup can lead to a connection being idle for a bit.\n\n\nA few more review comments:\n\n> Either way we can do batch function enable_timeouts() instead\n> enable_timeout_after().\n\n> Does anything of it make sense?\n\nI'm at least as worried about the various calls *after* the execution of\na statement.\n\n\n> +\t\tif (tx_timeout_occurred)\n> +\t\t{\n> +\t\t\tLockErrorCleanup();\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errcode(ERRCODE_TRANSACTION_TIMEOUT),\n> +\t\t\t\t\t errmsg(\"canceling transaction due to transaction timeout\")));\n> +\t\t}\n\nThe number of calls to LockErrorCleanup() here feels wrong - there's\nalready 8 calls in ProcessInterrupts(). Besides the code duplication I\nalso think it's not a sane idea to rely on having LockErrorCleanup()\nbefore all the relevant ereport(ERROR)s.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 16:22:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "At Mon, 5 Dec 2022 15:07:47 -0800, Andres Freund <andres@anarazel.de> wrote in \n> I'm a bit worried about adding evermore branches and function calls for\n> the processing of single statements. We already spend a noticable\n> percentage of the cycles for a single statement in PostgresMain(), this\n> adds additional overhead.\n> \n> I'm somewhat inclined to think that we need some redesign here before we\n> add more overhead.\n\ninsert_timeout() and remove_timeout_index() move 40*(# of several\ntimeouts) bytes at every enabling/disabling a timeout. This is far\nfrequent than actually any timeout fires. schedule_alarm() is\ninterested only in the nearest timeout.\n\nSo, we can get rid of the insertion sort in\ninsert_timeout/remove_timeout_index then let them just search for the\nnearest one and remember it. Finding the nearest should be faster than\nthe insertion sort. Instead we need to scan over the all timeouts\ninstead of the a few first ones, but that's overhead is not a matter\nwhen a timeout fires.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Dec 2022 09:44:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-06 09:44:01 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 5 Dec 2022 15:07:47 -0800, Andres Freund <andres@anarazel.de> wrote in\n> > I'm a bit worried about adding evermore branches and function calls for\n> > the processing of single statements. We already spend a noticable\n> > percentage of the cycles for a single statement in PostgresMain(), this\n> > adds additional overhead.\n> >\n> > I'm somewhat inclined to think that we need some redesign here before we\n> > add more overhead.\n>\n> insert_timeout() and remove_timeout_index() move 40*(# of several\n> timeouts) bytes at every enabling/disabling a timeout. This is far\n> frequent than actually any timeout fires. schedule_alarm() is\n> interested only in the nearest timeout.\n\n> So, we can get rid of the insertion sort in\n> insert_timeout/remove_timeout_index then let them just search for the\n> nearest one and remember it. Finding the nearest should be faster than\n> the insertion sort. Instead we need to scan over the all timeouts\n> instead of the a few first ones, but that's overhead is not a matter\n> when a timeout fires.\n\nI'm most concerned about the overhead when the timeouts are *not*\nenabled. And this adds a branch to start_xact_command() and a function\ncall for get_timeout_active(TRANSACTION_TIMEOUT) in that case. On its\nown, that's not a whole lot, but it does add up. There's 10+ function\ncalls for timeout and ps_display purposes for every single statement.\n\nBut it's definitely also worth optimizing the timeout enabled paths. And\nyou're right, it looks like there's a fair bit of optimization\npotential.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 5 Dec 2022 17:10:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "At Mon, 5 Dec 2022 17:10:50 -0800, Andres Freund <andres@anarazel.de> wrote in \n> I'm most concerned about the overhead when the timeouts are *not*\n> enabled. And this adds a branch to start_xact_command() and a function\n> call for get_timeout_active(TRANSACTION_TIMEOUT) in that case. On its\n> own, that's not a whole lot, but it does add up. There's 10+ function\n> calls for timeout and ps_display purposes for every single statement.\n\nThat path seems to exist just for robustness. I inserted\n\"Assert(0)\" just before the disable_timeout(), but make check-world\ndidn't fail [1]. Couldn't we get rid of that path, adding an assertion\ninstead? I'm not sure about other timeouts yet, though.\n\nOn the disabling side, we cannot rely on StatementTimeout.\n\n[1]\n# 032_apply_delay.pl fails for me so I don't know whether any of the later\n# tests fail.\n\n> But it's definitely also worth optimizing the timeout enabled paths. And\n> you're right, it looks like there's a fair bit of optimization\n> potential.\n\nThanks. I'll work on that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Dec 2022 11:52:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-03 09:41:04 -0800, Andrey Borodin wrote:\n> Fixed. Added test for this.\n\nThe tests don't pass: https://cirrus-ci.com/build/4811553145356288\n\n[00:54:35.337](1.251s) not ok 1 - no parameters missing from postgresql.conf.sample\n[00:54:35.338](0.000s) # Failed test 'no parameters missing from postgresql.conf.sample'\n# at /tmp/cirrus-ci-build/src/test/modules/test_misc/t/003_check_guc.pl line 81.\n[00:54:35.338](0.000s) # got: '1'\n# expected: '0'\n\n\nI am just looking through a bunch of failing CF entries, so I'm perhaps\nover-sensitized right now. But I'm a bit confused why there's so many\noccasions of the tests clearly not having been run...\n\n\nMichael, any reason 003_check_guc doesn't show the missing GUCs? It's not\nparticularly helpful to see \"0 is different from 1\". Seems that even just\nsomething like\n is_deeply(\\@missing_from_file, [], \"no parameters missing from postgresql.conf.sample\");\nwould be a decent improvement?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 10:23:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 10:23 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-12-03 09:41:04 -0800, Andrey Borodin wrote:\n> > Fixed. Added test for this.\n>\n> The tests don't pass: https://cirrus-ci.com/build/4811553145356288\n>\noops, sorry. Here's the fix. I hope to address other feedback on the\nweekend. Thank you!\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 7 Dec 2022 13:30:16 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 1:30 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n> I hope to address other feedback on the weekend.\n\nAndres, here's my progress on working with your review notes.\n\n> > @@ -3277,6 +3282,7 @@ ProcessInterrupts(void)\n> > */\n> > lock_timeout_occurred = get_timeout_indicator(LOCK_TIMEOUT, true);\n> > stmt_timeout_occurred = get_timeout_indicator(STATEMENT_TIMEOUT, true);\n> > + tx_timeout_occurred = get_timeout_indicator(TRANSACTION_TIMEOUT, true);\n> >\n> > /*\n> > * If both were set, we want to report whichever timeout completed\n>\n> This doesn't update the preceding comment, btw, which now reads oddly:\n\nI've rewritten this part to correctly report all timeouts that did\nhappen. However there's now a tricky comma-formatting code which was\ntested only manually.\n\n> > > > @@ -1360,6 +1363,16 @@ IdleInTransactionSessionTimeoutHandler(void)\n> > > > SetLatch(MyLatch);\n> > > > }\n> > > >\n> > > > +static void\n> > > > +TransactionTimeoutHandler(void)\n> > > > +{\n> > > > +#ifdef HAVE_SETSID\n> > > > + /* try to signal whole process group */\n> > > > + kill(-MyProcPid, SIGINT);\n> > > > +#endif\n> > > > + kill(MyProcPid, SIGINT);\n> > > > +}\n> > > > +\n> > >\n> > > Why does this use signals instead of just setting the latch like\n> > > IdleInTransactionSessionTimeoutHandler() etc?\n> >\n> > I just copied statement_timeout behaviour. As I understand this\n> > implementation is prefered if the timeout can catch the backend\n> > running at full steam.\n>\n> Hm. I'm not particularly convinced by that code. Be that as it may, I\n> don't think it's a good idea to have one more copy of this code. 
At\n> least the patch should wrap the signalling code in a helper.\n\nDone, now there is a single CancelOnTimeoutHandler() handler.\n\n> > > > diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c\n> > > > index 0081873a72..5229fe3555 100644\n> > > > --- a/src/bin/pg_dump/pg_backup_archiver.c\n> > > > +++ b/src/bin/pg_dump/pg_backup_archiver.c\n> > > > @@ -3089,6 +3089,7 @@ _doSetFixedOutputState(ArchiveHandle *AH)\n> > > > ahprintf(AH, \"SET statement_timeout = 0;\\n\");\n> > > > ahprintf(AH, \"SET lock_timeout = 0;\\n\");\n> > > > ahprintf(AH, \"SET idle_in_transaction_session_timeout = 0;\\n\");\n> > > > + ahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n> > >\n> > > Hm - why is that the right thing to do?\n> > Because transaction_timeout has effects of statement_timeout.\n>\n> I guess it's just following precedent - but it seems a bit presumptuous\n> to just disable safety settings a DBA might have set up. That makes some\n> sense for e.g. idle_in_transaction_session_timeout, because I think\n> e.g. parallel backup can lead to a connection being idle for a bit.\n\nI do not know. My reasoning - everywhere we turn off\nstatement_timeout, we should turn off transaction_timeout too.\nBut I have no strong opinion here. I left this code as is in the patch\nso far. For the same reason I did not change anything in\npg_backup_archiver.c.\n\n> > Either way we can do batch function enable_timeouts() instead\n> > enable_timeout_after().\n>\n> > Does anything of it make sense?\n>\n> I'm at least as worried about the various calls *after* the execution of\n> a statement.\n\nI think this code is just a one bit check\nif (get_timeout_active(TRANSACTION_TIMEOUT))\ninside of get_timeout_active(). 
With all 14 timeouts we have, I don't\nsee a good way to optimize stuff so far.\n\n> > + if (tx_timeout_occurred)\n> > + {\n> > + LockErrorCleanup();\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_TRANSACTION_TIMEOUT),\n> > + errmsg(\"canceling transaction due to transaction timeout\")));\n> > + }\n>\n> The number of calls to LockErrorCleanup() here feels wrong - there's\n> already 8 calls in ProcessInterrupts(). Besides the code duplication I\n> also think it's not a sane idea to rely on having LockErrorCleanup()\n> before all the relevant ereport(ERROR)s.\n\nI've refactored that code down to 7 calls of LockErrorCleanup() :)\nThe logic behind the various branches is not clear to me, e.g. why do we not\ncall LockErrorCleanup() when reading commands from a client?\nSo I did not risk refactoring further.\n\n> I think the test should verify\n> that transaction timeout interacts correctly with statement timeout /\n> idle in tx timeout.\n\nI've added tests that check statement_timeout vs transaction_timeout.\nHowever I could not produce stable tests with\nidle_in_transaction_timeout vs transaction_timeout so far. But I'll\nlook into this more.\nActually, stabilizing statement_timeout vs transaction_timeout was\ntricky on Windows too. I had to remove the second call to\npg_sleep(0.0001) because it was triggering the 10ms timeout from time to\ntime. Also, the test timeout was increased to 30ms, because unlike the others\nin the spec it's not supposed to happen at the very first SQL statement.\n\nThank you!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sun, 18 Dec 2022 12:53:31 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 12:53:31PM -0800, Andrey Borodin wrote:\n> I've rewritten this part to correctly report all timeouts that did\n> happen. However there's now a tricky comma-formatting code which was\n> tested only manually.\n\nI suspect this will make translation difficult.\n\n>> > > > + ahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n>> > >\n>> > > Hm - why is that the right thing to do?\n>> > Because transaction_timeout has effects of statement_timeout.\n>>\n>> I guess it's just following precedent - but it seems a bit presumptuous\n>> to just disable safety settings a DBA might have set up. That makes some\n>> sense for e.g. idle_in_transaction_session_timeout, because I think\n>> e.g. parallel backup can lead to a connection being idle for a bit.\n> \n> I do not know. My reasoning - everywhere we turn off\n> statement_timeout, we should turn off transaction_timeout too.\n> But I have no strong opinion here. I left this code as is in the patch\n> so far. For the same reason I did not change anything in\n> pg_backup_archiver.c.\n\n From 8383486's commit message:\n\n\tWe disable statement_timeout and lock_timeout during dump and restore,\n\tto prevent any global settings that might exist from breaking routine\n\tbackups.\n\nI imagine changing this could disrupt existing servers that depend on these\noverrides during backups, although I think Andres has a good point about\ndisabling safety settings. This might be a good topic for another thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:24:36 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 11:24 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Sun, Dec 18, 2022 at 12:53:31PM -0800, Andrey Borodin wrote:\n> > I've rewritten this part to correctly report all timeouts that did\n> > happen. However there's now a tricky comma-formatting code which was\n> > tested only manually.\n>\n> I suspect this will make translation difficult.\nI use special functions for this like _()\n\nchar* lock_reason = lock_timeout_occurred ? _(\"lock timeout\") : \"\";\n\nand then\nereport(ERROR, (errcode(err_code),\n errmsg(\"canceling statement due to %s%s%s%s%s\", lock_reason, comma1,\n stmt_reason, comma2, tx_reason)));\n\nI hope it will be translatable...\n\n> >> > > > + ahprintf(AH, \"SET transaction_timeout = 0;\\n\");\n> >> > >\n> >> > > Hm - why is that the right thing to do?\n> >> > Because transaction_timeout has effects of statement_timeout.\n> >>\n> >> I guess it's just following precedent - but it seems a bit presumptuous\n> >> to just disable safety settings a DBA might have set up. That makes some\n> >> sense for e.g. idle_in_transaction_session_timeout, because I think\n> >> e.g. parallel backup can lead to a connection being idle for a bit.\n> >\n> > I do not know. My reasoning - everywhere we turn off\n> > statement_timeout, we should turn off transaction_timeout too.\n> > But I have no strong opinion here. I left this code as is in the patch\n> > so far. For the same reason I did not change anything in\n> > pg_backup_archiver.c.\n>\n> From 8383486's commit message:\n>\n> We disable statement_timeout and lock_timeout during dump and restore,\n> to prevent any global settings that might exist from breaking routine\n> backups.\n>\n> I imagine changing this could disrupt existing servers that depend on these\n> overrides during backups, although I think Andres has a good point about\n> disabling safety settings. 
This might be a good topic for another thread.\n>\n+1.\n\nThanks for the review!\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:46:54 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 11:47 AM Andrey Borodin <amborodin86@gmail.com>\nwrote:\n\n> On Thu, Jan 12, 2023 at 11:24 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> >\n> > On Sun, Dec 18, 2022 at 12:53:31PM -0800, Andrey Borodin wrote:\n> > > I've rewritten this part to correctly report all timeouts that did\n> > > happen. However there's now a tricky comma-formatting code which was\n> > > tested only manually.\n>\n\nTesting it again, a couple of questions\n\n1) The current test set has only 2 simple cases – I'd suggest adding one\nmore (that one that didn't work in v1):\n\ngitpod=# set transaction_timeout to '20ms';\nSET\ngitpod=# begin; select pg_sleep(.01); select pg_sleep(.01); select\npg_sleep(.01); commit;\nBEGIN\n pg_sleep\n----------\n\n(1 row)\n\nERROR: canceling statement due to transaction timeout\n\n\ngitpod=# set statement_timeout to '20ms'; set transaction_timeout to 0; --\nto test value for statement_timeout and see that it doesn't fail\nSET\nSET\ngitpod=# begin; select pg_sleep(.01); select pg_sleep(.01); select\npg_sleep(.01); commit;\nBEGIN\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\n pg_sleep\n----------\n\n(1 row)\n\nCOMMIT\n\n\n2) Testing for a longer transaction (2 min), in a gitpod VM (everything is\nlocal, no network involved)\n\n// not sure what's happening here, maybe some overheads that are not\nrelated to the implementation,\n// but the goal was to see how precise the limiting is for longer\ntransactions\n\ngitpod=# set transaction_timeout to '2min';\nSET\ngitpod=# begin;\nBEGIN\ngitpod=*# select now(), clock_timestamp(), pg_sleep(3) \\watch 1\n Fri 13 Jan 2023 03:49:24 PM UTC (every 1s)\n\n now | clock_timestamp | pg_sleep\n-------------------------------+-------------------------------+----------\n 2023-01-13 15:49:22.906924+00 | 2023-01-13 15:49:24.088728+00 |\n(1 row)\n\n[...]\n\n Fri 13 Jan 2023 03:51:18 PM UTC (every 1s)\n\n now | clock_timestamp | 
pg_sleep\n-------------------------------+-------------------------------+----------\n 2023-01-13 15:49:22.906924+00 | 2023-01-13 15:51:18.179579+00 |\n(1 row)\n\nERROR: canceling statement due to transaction timeout\n\ngitpod=!#\ngitpod=!# rollback;\nROLLBACK\ngitpod=# select timestamptz '2023-01-13 15:51:18.179579+00' - '2023-01-13\n15:49:22.906924+00';\n   ?column?\n-----------------\n 00:01:55.272655\n(1 row)\n\ngitpod=# select interval '2min' - '00:01:55.272655';\n    ?column?\n-----------------\n 00:00:04.727345\n(1 row)\n\ngitpod=# select interval '2min' - '00:01:55.272655' - '4s';\n    ?column?\n-----------------\n 00:00:00.727345\n(1 row)\n\n– it seems we could (should) have one more successful \"1s wait, 3s sleep\"\niteration here, ~727ms somehow wasted in a loop, quite a lot.",
"msg_date": "Fri, 13 Jan 2023 08:03:00 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Thanks for the review Nikolay!\n\nOn Fri, Jan 13, 2023 at 8:03 AM Nikolay Samokhvalov\n<samokhvalov@gmail.com> wrote:\n>\n> 1) The current test set has only 2 simple cases – I'd suggest adding one more (that one that didn't work in v1):\n>\n> gitpod=# set transaction_timeout to '20ms';\n> SET\n> gitpod=# begin; select pg_sleep(.01); select pg_sleep(.01); select pg_sleep(.01); commit;\nI tried exactly these tests - they were unstable on Windows. Maybe\nthat OS has a more coarse-grained timer resolution.\nIt's a tradeoff between time spent on tests, strength of the test and\nprobability of false failure. I chose a small time that avoids false alarms.\n\n> – it seems we could (should) have one more successful \"1s wait, 3s sleep\" iteration here, ~727ms somehow wasted in a loop, quite a lot.\n\nI think a big chunk of these 727ms was spent between \"BEGIN\" and\n\"select now(), clock_timestamp(), pg_sleep(3) \\watch 1\". I doubt the patch\nreally contains arithmetic errors.\n\nMany thanks for looking into this!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 13 Jan 2023 10:15:56 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 10:16 AM Andrey Borodin <amborodin86@gmail.com>\nwrote:\n\n> > – it seems we could (should) have one more successful \"1s wait, 3s\n> sleep\" iteration here, ~727ms somehow wasted in a loop, quite a lot.\n>\n> I think big chunk from these 727ms were spent between \"BEGIN\" and\n> \"select now(), clock_timestamp(), pg_sleep(3) \\watch 1\".\n>\n\nNot really – there was indeed ~2s delay between BEGIN and the first\npg_sleep query, but those ~727ms are something else.\n\nHere we measure the remainder between the beginning of the transaction\nmeasured by \"now()\" and the beginning of the last successful pg_sleep()\nquery:\n\ngitpod=# select timestamptz '2023-01-13 15:51:18.179579+00' - '2023-01-13\n15:49:22.906924+00';\n   ?column?\n-----------------\n 00:01:55.272655\n(1 row)\n\nIt already includes all delays that we had from the beginning of our\ntransaction.\n\nThe problem with my question was that I didn't take into account that\n'2023-01-13 15:51:18.179579+00' is when the last successful query\n*started*. So the remainder of our 2-min quota – 00:00:04.727345 – includes\nthe last successful loop (3s of successful query + 1s of waiting), and then\nwe have failed after ~700ms.\n\nIn other words, there are no issues here, all good.\n\n> Many thanks for looking into this!\n\nmany thanks for implementing it",
"msg_date": "Fri, 13 Jan 2023 11:00:32 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On 12.01.23 20:46, Andrey Borodin wrote:\n>> On Sun, Dec 18, 2022 at 12:53:31PM -0800, Andrey Borodin wrote:\n>>> I've rewritten this part to correctly report all timeouts that did\n>>> happen. However there's now a tricky comma-formatting code which was\n>>> tested only manually.\n>> I suspect this will make translation difficult.\n> I use special functions for this like _()\n> \n> char* lock_reason = lock_timeout_occurred ? _(\"lock timeout\") : \"\";\n> \n> and then\n> ereport(ERROR, (errcode(err_code),\n> errmsg(\"canceling statement due to %s%s%s%s%s\", lock_reason, comma1,\n> stmt_reason, comma2, tx_reason)));\n> \n> I hope it will be translatable...\n\nNo, you can't do that. You have to write out all the strings separately.\n\n\n",
"msg_date": "Fri, 1 Sep 2023 22:23:07 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\n\nOn 2022/12/19 5:53, Andrey Borodin wrote:\n> On Wed, Dec 7, 2022 at 1:30 PM Andrey Borodin <amborodin86@gmail.com> wrote:\n>> I hope to address other feedback on the weekend.\n\nThanks for implementing this feature!\n\nWhile testing v4 patch, I noticed it doesn't handle the COMMIT AND CHAIN case correctly.\nWhen COMMIT AND CHAIN is executed, I believe the transaction timeout counter should reset\nand start from zero with the next transaction. However, it appears that the current\nv4 patch doesn't reset the counter in this scenario. Can you confirm this?\n\nWith the v4 patch, I found that timeout errors no longer occur during the idle in\ntransaction phase. Instead, they occur when the next statement is executed. Is this\nthe intended behavior? I thought some users might want to use the transaction timeout\nfeature to prevent prolonged transactions and promptly release resources (e.g., locks)\nin case of a timeout, similar to idle_in_transaction_session_timeout.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 6 Sep 2023 17:16:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Thanks for looking into this!\n\n> On 6 Sep 2023, at 13:16, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> While testing v4 patch, I noticed it doesn't handle the COMMIT AND CHAIN case correctly.\n> When COMMIT AND CHAIN is executed, I believe the transaction timeout counter should reset\n> and start from zero with the next transaction. However, it appears that the current\n> v4 patch doesn't reset the counter in this scenario. Can you confirm this?\nYes, I was not aware of this feature. I'll test and fix this.\n\n> With the v4 patch, I found that timeout errors no longer occur during the idle in\n> transaction phase. Instead, they occur when the next statement is executed. Is this\n> the intended behavior?\nAFAIR I had tested that the \"idle in transaction\" behaviour was intact. I'll check that again.\n\n> I thought some users might want to use the transaction timeout\n> feature to prevent prolonged transactions and promptly release resources (e.g., locks)\n> in case of a timeout, similar to idle_in_transaction_session_timeout.\nYes, this is exactly how I was expecting the feature to behave: free up max_connections slots held by long-hanging transactions.\n\nThanks for your findings, I'll check and post a new version!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 6 Sep 2023 16:32:33 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On 2023-09-06 20:32, Andrey M. Borodin wrote:\n> Thanks for looking into this!\n> \n>> On 6 Sep 2023, at 13:16, Fujii Masao <masao.fujii@oss.nttdata.com> \n>> wrote:\n>> \n>> While testing v4 patch, I noticed it doesn't handle the COMMIT AND \n>> CHAIN case correctly.\n>> When COMMIT AND CHAIN is executed, I believe the transaction timeout \n>> counter should reset\n>> and start from zero with the next transaction. However, it appears \n>> that the current\n>> v4 patch doesn't reset the counter in this scenario. Can you confirm \n>> this?\n> Yes, I was not aware of this feature. I'll test and fix this.\n> \n>> With the v4 patch, I found that timeout errors no longer occur during \n>> the idle in\n>> transaction phase. Instead, they occur when the next statement is \n>> executed. Is this\n>> the intended behavior?\n> AFAIR I had been testing that behaviour of \"idle in transaction\" was\n> intact. I'll check that again.\n> \n>> I thought some users might want to use the transaction timeout\n>> feature to prevent prolonged transactions and promptly release \n>> resources (e.g., locks)\n>> in case of a timeout, similar to idle_in_transaction_session_timeout.\n> Yes, this is exactly how I was expecting the feature to behave: empty\n> up max_connections slots for long-hanging transactions.\n> \n> Thanks for your findings, I'll check and post new version!\n> \n> \n> Best regards, Andrey Borodin.\nHi,\n\nThank you for implementing this nice feature!\nI tested the v4 patch in the interactive transaction mode with 3 \nfollowing cases:\n\n1. Start a transaction with transaction_timeout=0 (i.e., timeout \ndisabled), and then change the timeout value to more than 0 during the \ntransaction.\n\n=# SET transaction_timeout TO 0;\n=# BEGIN; //timeout is not enabled\n=# SELECT pg_sleep(5);\n=# SET transaction_timeout TO '1s';\n=# SELECT pg_sleep(10); //timeout is enabled with 1s\nIn this case, the transaction timeout happens during pg_sleep(10).\n\n2. 
Start a transaction with transaction_timeout>0 (i.e., timeout \nenabled), and then change the timeout value to more than 0 during the \ntransaction.\n\n=# SET transaction_timeout TO '1000s';\n=# BEGIN; //timeout is enabled with 1000s\n=# SELECT pg_sleep(5);\n=# SET transaction_timeout TO '1s';\n=# SELECT pg_sleep(10); //timeout is not restarted and still running \nwith 1000s\nIn this case, the transaction timeout does NOT happen during \npg_sleep(10).\n\n3. Start a transaction with transaction_timeout>0 (i.e., timeout \nenabled), and then change the timeout value to 0 during the transaction.\n\n=# SET transaction_timeout TO '10s';\n=# BEGIN; //timeout is enabled with 10s\n=# SELECT pg_sleep(5);\n=# SET transaction_timeout TO 0;\n=# SELECT pg_sleep(10); //timeout is NOT disabled and still running \nwith 10s\nIn this case, the transaction timeout happens during pg_sleep(10).\n\nThe first case, where transaction_timeout is disabled before the \ntransaction begins, is totally fine. However, in the second and third \ncases, where transaction_timeout is enabled before the transaction \nbegins, since the timeout has already been enabled with a certain value, it \nwill not be enabled again with a new setting value.\n\nFurthermore, let's say I want to set a transaction_timeout value for all \ntransactions in the postgresql.conf file so it would affect all sessions. 
\nThe same behavior happened but for all 3 cases, here is one example with \nthe second case:\n\n=# BEGIN; SHOW transaction_timeout; select pg_sleep(10); SHOW \ntransaction_timeout; COMMIT;\nBEGIN\n transaction_timeout\n---------------------\n 15s\n(1 row)\n\n2023-09-07 11:52:50.510 JST [23889] LOG: received SIGHUP, reloading \nconfiguration files\n2023-09-07 11:52:50.510 JST [23889] LOG: parameter \n\"transaction_timeout\" changed to \"5000\"\n pg_sleep\n----------\n\n(1 row)\n\n transaction_timeout\n---------------------\n 5s\n(1 row)\n\nCOMMIT\n\nI am of the opinion that these behaviors might lead to confusion among \nusers. Could you confirm if these are the intended behaviors?\n\nAdditionally, I think the short description should be \"Sets the maximum \nallowed time to commit a transaction.\" or \"Sets the maximum allowed time \nto wait before aborting a transaction.\" so that it could be more clear \nand consistent with other %_timeout descriptions.\n\nAlso, there is a small whitespace error here:\nsrc/backend/tcop/postgres.c:3373: space before tab in indent.\n+ \nstmt_reason, comma2, tx_reason)));\n\nOn a side note, while testing the patch with pgbench, it came to my \nattention that in scenarios involving the execution of multiple \nconcurrent transactions within a high contention environment and with \nrelatively short timeout durations, there is a potential for cascading \nblocking. This phenomenon can lead to multiple transactions exceeding \ntheir designated timeouts, consequently resulting in a degradation of \ntransaction processing performance. No?\nDo you think this feature should be co-implemented with the existing \nconcurrency control protocol to maintain the transaction performance \n(e.g. a transaction scheduling mechanism based on transaction timeout)?\n\nRegards,\nTung Nguyen\n\n\n",
"msg_date": "Thu, 07 Sep 2023 16:06:40 +0900",
"msg_from": "bt23nguyent <bt23nguyent@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Wed, Sep 6, 2023 at 1:16 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> With the v4 patch, I found that timeout errors no longer occur during the idle in\n> transaction phase. Instead, they occur when the next statement is executed. Is this\n> the intended behavior? I thought some users might want to use the transaction timeout\n> feature to prevent prolonged transactions and promptly release resources (e.g., locks)\n> in case of a timeout, similar to idle_in_transaction_session_timeout.\n\nI agree – it seems reasonable to interrupt the transaction immediately\nwhen the timeout occurs. This was the idea – to determine the maximum\ntime any transaction is allowed to take on a server, to\navoid long-lasting locking and a non-progressing xmin horizon.\n\nThat being said, there is also this wording in the docs:\n\n+ Setting <varname>transaction_timeout</varname> in\n+ <filename>postgresql.conf</filename> is not recommended\nbecause it would\n+ affect all sessions.\n\nIt was inherited from statement_timeout, where I also find this\nwording too one-sided. There are certain situations where we do want\nthe global setting to be set – actually, any large OLTP case (to be on\nthe lower-risk side; users who need a longer timeout can set it when\nneeded, but by default we do need very restrictive timeouts, usually <\n1 minute, like we do in HTTP or application servers). I propose this:\n> Setting transaction_timeout in postgresql.conf should be done with caution because it affects all sessions.\n\nLooking at v4 of the patch, a couple more comments that might\nbe helpful for v5 (which is planned, as I understand):\n\n1) it might be beneficial to add tests for more complex scenarios,\ne.g., subtransactions\n\n2) In the error message:\n\n+ errmsg(\"canceling statement due to %s%s%s%s%s\", lock_reason, comma1,\n+ stmt_reason, comma2, tx_reason)));\n\n– it seems we can have excessive commas here\n\n3) Perhaps we should say that we cancel the transaction, not the\nstatement (especially in the case when it is happening in the\nidle-in-transaction state).\n\nThanks for working on this feature!\n\n\n",
"msg_date": "Tue, 10 Oct 2023 14:07:47 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
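Point 2 in the review above (the possibly excessive commas in `"canceling statement due to %s%s%s%s%s"`) boils down to joining up to three optional reason strings. Here is a minimal Python sketch of the separator logic being discussed — illustrative only; the actual patch builds the message in C via `errmsg()`:

```python
def build_timeout_reason(lock_reason, stmt_reason, tx_reason):
    """Join only the non-empty timeout reasons with ", ", so the
    resulting message never carries leading, trailing, or doubled
    commas regardless of which timeouts actually fired."""
    parts = [r for r in (lock_reason, stmt_reason, tx_reason) if r]
    return ", ".join(parts)
```

For example, with only the lock and transaction reasons set this yields `lock timeout, transaction timeout`, with exactly one separator and none dangling.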
{
"msg_contents": "\nI test the V4 patch and found the backend doesn't process SIGINT while it's in secure_read.\nAnd it seems not a good choice to report ERROR during secure_read, which will turn into\nFATAL \"terminating connection because protocol synchronization was lost\".\n\nIt might be much easier to terminate the backend rather than cancel the backend just like\nidle_in_transaction_session_timeout and idle_session_timeout did. But the name of the GUC\nmight be transaction_session_timeout.\n\nAnd what about 2PC transactions? A hanging 2PC transaction also hurts the server a lot. It’s\nan active transaction but not an active backend. Can we cancel the 2PC transaction, and how would we\ncancel it?\n\n--\nYuhang Qiu\n\n\n",
"msg_date": "Mon, 20 Nov 2023 11:33:22 +0800",
"msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <iamqyh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 20 Nov 2023, at 06:33, 邱宇航 <iamqyh@gmail.com> wrote:\n\nNikolay, Peter, Fujii, Tung, Yuhang, thank you for reviewing this.\nI'll address feedback soon, this patch has been for a long time on my TODO list.\nI've started with fixing problem of COMMIT AND CHAIN by restarting timeout counter.\nTomorrow I plan to fix raising of the timeout when the transaction is idle.\nRenaming transaction_timeout to something else (to avoid confusion with prepared xacts) also seems correct to me.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 30 Nov 2023 20:06:11 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 30 Nov 2023, at 20:06, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> Tomorrow I plan to fix raising of the timeout when the transaction is idle.\n> Renaming transaction_timeout to something else (to avoid confusion with prepared xacts) also seems correct to me.\n\n\nHere's a v6 version of the feature. Changes:\n1. Now transaction_timeout will break connection with FATAL instead of hanging in \"idle in transaction (aborted)\"\n2. It will kill equally idle and active transactions\n3. New isolation tests are slightly more complex: isolation tester does not like when the connection is forcibly killed, thus there must be only 1 permutation with killed connection.\n\nTODO: as Yuhang pointed out prepared transactions must not be killed, thus name \"transaction_timeout\" is not correct. I think the name must be like \"session_transaction_timeout\", but I'd like to have an opinion of someone more experienced in giving names to GUCs than me. Or, perhaps, a native speaker?\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 6 Dec 2023 16:05:48 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Wed, 06 Dec 2023 at 21:05, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 30 Nov 2023, at 20:06, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>>\n>> Tomorrow I plan to fix raising of the timeout when the transaction is idle.\n>> Renaming transaction_timeout to something else (to avoid confusion with prepared xacts) also seems correct to me.\n>\n>\n> Here's a v6 version of the feature. Changes:\n> 1. Now transaction_timeout will break connection with FATAL instead of hanging in \"idle in transaction (aborted)\"\n> 2. It will kill equally idle and active transactions\n> 3. New isolation tests are slightly more complex: isolation tester does not like when the connection is forcibly killed, thus there must be only 1 permutation with killed connection.\n>\n\nGreat. If idle_in_transaction_timeout is bigger than transaction_timeout,\nthe idle-in-transaction timeout isn't needed, right?\n\n> TODO: as Yuhang pointed out prepared transactions must not be killed, thus name \"transaction_timeout\" is not correct. I think the name must be like \"session_transaction_timeout\", but I'd like to have an opinion of someone more experienced in giving names to GUCs than me. Or, perhaps, a native speaker?\n>\nHow about transaction_session_timeout? Similar to idle_session_timeout.\n\n--\nRegards,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Thu, 07 Dec 2023 11:25:16 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nI read the V6 patch and found something needs to be improved.\n\nPrepared transactions should also be documented.\n A value of zero (the default) disables the timeout.\n+ This timeout is not applied to prepared transactions. Only transactions\n+ with user connections are affected.\n\nMissing 'time'.\n- gettext_noop(\"Sets the maximum allowed in a transaction.\"),\n+ gettext_noop(\"Sets the maximum allowed time in a transaction.\"),\n\n16 is already released. It's 17 now.\n- if (AH->remoteVersion >= 160000)\n+ if (AH->remoteVersion >= 170000)\n ExecuteSqlStatement(AH, \"SET transaction_timeout = 0\");\n\nAnd I test the V6 patch and it works as expected.\n\n--\nYuhang Qiu\n\n\n",
"msg_date": "Thu, 7 Dec 2023 18:39:55 +0800",
"msg_from": "=?utf-8?B?6YKx5a6H6Iiq?= <iamqyh@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Thanks Yuhang!\n\n> On 7 Dec 2023, at 13:39, 邱宇航 <iamqyh@gmail.com> wrote:\n> \n> I read the V6 patch and found something needs to be improved.\n\nFixed. PFA v7.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 7 Dec 2023 15:35:23 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 7 Dec 2023, at 06:25, Japin Li <japinli@hotmail.com> wrote:\n> \n> If idle_in_transaction_timeout is bigger than transaction_timeout,\n> the idle-in-transaction timeout don't needed, right?\nYes, I think so.\n\n> \n>> TODO: as Yuhang pointed out prepared transactions must not be killed, thus name \"transaction_timeout\" is not correct. I think the name must be like \"session_transaction_timeout\", but I'd like to have an opinion of someone more experienced in giving names to GUCs than me. Or, perhaps, a native speaker?\n>> \n> How about transaction_session_timeout? Similar to idle_session_timeout.\n\nWell, Yuhang also suggested this name...\n\nHonestly, I still have a gut feeling that transaction_timeout is a good name, despite being not exactly precise.\n\nThanks!\n\n\nBest regards, Andrey Borodin.\nPS Sorry for posting twice to the same thread, i noticed your message only after answering to Yuhang's review.",
"msg_date": "Thu, 7 Dec 2023 15:40:34 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Thu, 07 Dec 2023 at 20:40, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 7 Dec 2023, at 06:25, Japin Li <japinli@hotmail.com> wrote:\n>>\n>> If idle_in_transaction_timeout is bigger than transaction_timeout,\n>> the idle-in-transaction timeout don't needed, right?\n> Yes, I think so.\n>\n\nShould we disable the idle_in_transaction_timeout in this case? Of course, I don't\nstrongly insist on this.\n\nI think you forgot to disable transaction_timeout in AutoVacWorkerMain().\nIf not, can you elaborate on why you don't disable it?\n\n--\nRegards,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Fri, 08 Dec 2023 15:59:34 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 8 Dec 2023, at 12:59, Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> On Thu, 07 Dec 2023 at 20:40, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> On 7 Dec 2023, at 06:25, Japin Li <japinli@hotmail.com> wrote:\n>>> \n>>> If idle_in_transaction_timeout is bigger than transaction_timeout,\n>>> the idle-in-transaction timeout don't needed, right?\n>> Yes, I think so.\n>> \n> \n> Should we disable the idle_in_transaction_timeout in this case? Of cursor, I'm\n> not strongly insist on this.\nGood idea!\n\n> I think you forget disable transaction_timeout in AutoVacWorkerMain().\n> If not, can you elaborate on why you don't disable it?\n\nSeems like code in autovacuum.c was copied, but patch was not updated. I’ve fixed this oversight.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 8 Dec 2023 15:08:12 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 08 Dec 2023 at 18:08, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 8 Dec 2023, at 12:59, Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Thu, 07 Dec 2023 at 20:40, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>>> On 7 Dec 2023, at 06:25, Japin Li <japinli@hotmail.com> wrote:\n>>>>\n>>>> If idle_in_transaction_timeout is bigger than transaction_timeout,\n>>>> the idle-in-transaction timeout don't needed, right?\n>>> Yes, I think so.\n>>>\n>>\n>> Should we disable the idle_in_transaction_timeout in this case? Of cursor, I'm\n>> not strongly insist on this.\n> Good idea!\n>\n>> I think you forget disable transaction_timeout in AutoVacWorkerMain().\n>> If not, can you elaborate on why you don't disable it?\n>\n> Seems like code in autovacuum.c was copied, but patch was not updated. I’ve fixed this oversight.\n>\n\nThanks for updating the patch. LGTM.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Fri, 08 Dec 2023 18:29:17 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 8 Dec 2023, at 15:29, Japin Li <japinli@hotmail.com> wrote:\n> \n> Thanks for updating the patch. LGTM.\n\nPFA v9. Changes:\n1. Added tests for idle_in_transaction_timeout\n2. Suppress statement_timeout if it’s shorter than transaction_timeout\n\nConsider changing status of the commitfest entry if you think it’s ready for committer.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 15 Dec 2023 14:51:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 15 Dec 2023 at 17:51, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 8 Dec 2023, at 15:29, Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Thanks for updating the patch. LGTM.\n>\n> PFA v9. Changes:\n> 1. Added tests for idle_in_transaction_timeout\n> 2. Suppress statement_timeout if it’s shorter than transaction_timeout\n>\n+ if (StatementTimeout > 0\n+ && IdleInTransactionSessionTimeout < TransactionTimeout)\n ^\n\nShould be StatementTimeout?\n\n\nMaybe we should add documentation to describe this behavior.\n\n> Consider changing status of the commitfest entry if you think it’s ready for committer.\n>\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Sat, 16 Dec 2023 08:58:53 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
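The slip Japin spotted just above (an `IdleInTransactionSessionTimeout` comparison sitting under a `StatementTimeout > 0` guard) is easy to make because the same suppression rule applies to both GUCs: a secondary timeout that is enabled but equal to or longer than a nonzero transaction_timeout can never fire first. A small Python sketch of that intended rule — the names mirror the GUCs, but this is not the patch's actual C code:

```python
def suppressed_by_transaction_timeout(transaction_timeout, other_timeout):
    """True when other_timeout is redundant: it is enabled but could
    never expire before a nonzero transaction_timeout does."""
    return (transaction_timeout > 0
            and other_timeout > 0
            and other_timeout >= transaction_timeout)

def effective_timeouts(transaction_timeout, statement_timeout,
                       idle_in_transaction_timeout):
    """Apply the suppression rule to both secondary timeouts,
    returning 0 (i.e. disabled) for any that became redundant."""
    def keep(t):
        if suppressed_by_transaction_timeout(transaction_timeout, t):
            return 0
        return t
    return (transaction_timeout,
            keep(statement_timeout),
            keep(idle_in_transaction_timeout))
```

Applying one helper to both GUCs makes the guard-variable mismatch structurally impossible, which is the class of bug the review caught.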
{
"msg_contents": "> On 16 Dec 2023, at 05:58, Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> On Fri, 15 Dec 2023 at 17:51, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> On 8 Dec 2023, at 15:29, Japin Li <japinli@hotmail.com> wrote:\n>>> \n>>> Thanks for updating the patch. LGTM.\n>> \n>> PFA v9. Changes:\n>> 1. Added tests for idle_in_transaction_timeout\n>> 2. Suppress statement_timeout if it’s shorter than transaction_timeout\n>> \n> + if (StatementTimeout > 0\n> + && IdleInTransactionSessionTimeout < TransactionTimeout)\n> ^\n> \n> Should be StatementTimeout?\nYes, that’s an oversight. I’ve adjusted tests so they catch this problem.\n\n> Maybe we should add documentation to describe this behavior.\n\nI've added a paragraph about it to config.sgml, but I'm not sure about the comprehensiveness of the wording.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 18 Dec 2023 10:49:23 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Mon, 18 Dec 2023 at 13:49, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 16 Dec 2023, at 05:58, Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Fri, 15 Dec 2023 at 17:51, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>>> On 8 Dec 2023, at 15:29, Japin Li <japinli@hotmail.com> wrote:\n>>>>\n>>>> Thanks for updating the patch. LGTM.\n>>>\n>>> PFA v9. Changes:\n>>> 1. Added tests for idle_in_transaction_timeout\n>>> 2. Suppress statement_timeout if it’s shorter than transaction_timeout\n>>>\n>> + if (StatementTimeout > 0\n>> + && IdleInTransactionSessionTimeout < TransactionTimeout)\n>> ^\n>>\n>> Should be StatementTimeout?\n> Yes, that’s an oversight. I’ve adjusted tests so they catch this problem.\n>\n>> Maybe we should add documentation to describe this behavior.\n>\n> I've added a paragraph about it to config.sgml, but I'm not sure about the comprehensiveness of the wording.\n>\n\nThanks for updating the patch, no objections.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Mon, 18 Dec 2023 17:32:22 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 18 Dec 2023, at 14:32, Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> Thanks for updating the patch\n\nSorry for the noise, but commitfest bot found one more bug in handling statement timeout. PFA v11.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 18 Dec 2023 14:40:31 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Mon, 18 Dec 2023 at 17:40, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 18 Dec 2023, at 14:32, Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> Thanks for updating the patch\n>\n> Sorry for the noise, but commitfest bot found one more bug in handling statement timeout. PFA v11.\n>\n\nOn Windows, there still have an error:\n\ndiff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n--- C:/cirrus/src/test/isolation/expected/timeouts.out\t2023-12-18 10:22:21.772537200 +0000\n+++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\t2023-12-18 10:26:08.039831800 +0000\n@@ -103,24 +103,7 @@\n 0\n (1 row)\n\n-step stt2_check: SELECT 1;\n-FATAL: terminating connection due to transaction timeout\n-server closed the connection unexpectedly\n+PQconsumeInput failed: server closed the connection unexpectedly\n \tThis probably means the server terminated abnormally\n \tbefore or while processing the request.\n\n-step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n-step itt4_begin: BEGIN ISOLATION LEVEL READ COMMITTED;\n-step sleep_there: SELECT pg_sleep(0.01);\n-pg_sleep\n---------\n-\n-(1 row)\n-\n-step stt3_check_itt4: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/itt4' <waiting ...>\n-step stt3_check_itt4: <... completed>\n-count\n------\n- 0\n-(1 row)\n-\n\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 09:25:27 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 19 Dec 2023, at 06:25, Japin Li <japinli@hotmail.com> wrote:\n> \n> On Windows, there still have an error:\n\nUhhmm, yes. Connection termination looks different on a Windows machine.\nI’ve checked how this looks in replication slot tests and removed the select that was observing the connection failure.\nI don’t have a Windows machine, so I hope the CF bot will pick this up.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 19 Dec 2023 13:26:15 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I don’t have Windows machine, so I hope CF bot will pick this.\n\nI used Github CI to produce a version of the tests that seems to be stable on Windows.\nSorry for the noise.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 19 Dec 2023 15:27:23 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Tue, 19 Dec 2023 at 18:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> I don’t have Windows machine, so I hope CF bot will pick this.\n>\n> I used Github CI to produce version of tests that seems to be is stable on Windows.\n\nIt still failed on Windows Server 2019 [1].\n\ndiff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n--- C:/cirrus/src/test/isolation/expected/timeouts.out\t2023-12-19 10:34:30.354721100 +0000\n+++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\t2023-12-19 10:38:25.877981600 +0000\n@@ -100,7 +100,7 @@\n step stt3_check_stt2: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/stt2'\n count\n -----\n- 0\n+ 1\n (1 row)\n\n step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n\n[1] https://api.cirrus-ci.com/v1/artifact/task/4707530400595968/testrun/build/testrun/isolation/isolation/regression.diffs\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Tue, 19 Dec 2023 22:06:21 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > I don’t have Windows machine, so I hope CF bot will pick this.\n>\n> I used Github CI to produce version of tests that seems to be is stable on Windows.\n> Sorry for the noise.\n>\n>\n> Best regards, Andrey Borodin.\n\n+ <para>\n+ If <varname>transaction_timeout</varname> is shorter than\n+ <varname>idle_in_transaction_session_timeout</varname> or\n<varname>statement_timeout</varname>\n+ <varname>transaction_timeout</varname> will invalidate longer timeout.\n+ </para>\n\nWhen transaction_timeout is *equal* to idle_in_transaction_session_timeout\nor statement_timeout, idle_in_transaction_session_timeout and statement_timeout\nwill also be invalidated, the logic in the code seems right, though\nthis document\nis a little bit inaccurate.\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 19 Dec 2023 22:51:03 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Tue, Dec 19, 2023 at 10:51 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> >\n> >\n> > > On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > >\n> > > I don’t have Windows machine, so I hope CF bot will pick this.\n> >\n> > I used Github CI to produce version of tests that seems to be is stable on Windows.\n> > Sorry for the noise.\n> >\n> >\n> > Best regards, Andrey Borodin.\n>\n> + <para>\n> + If <varname>transaction_timeout</varname> is shorter than\n> + <varname>idle_in_transaction_session_timeout</varname> or\n> <varname>statement_timeout</varname>\n> + <varname>transaction_timeout</varname> will invalidate longer timeout.\n> + </para>\n>\n> When transaction_timeout is *equal* to idle_in_transaction_session_timeout\n> or statement_timeout, idle_in_transaction_session_timeout and statement_timeout\n> will also be invalidated, the logic in the code seems right, though\n> this document\n> is a little bit inaccurate.\n>\n <para>\n Unlike <varname>statement_timeout</varname>, this timeout can only occur\n while waiting for locks. Note that if\n<varname>statement_timeout</varname>\n is nonzero, it is rather pointless to set\n<varname>lock_timeout</varname> to\n the same or larger value, since the statement timeout would always\n trigger first. If <varname>log_min_error_statement</varname> is set to\n <literal>ERROR</literal> or lower, the statement that timed out will be\n logged.\n </para>\n\nThere is a note about statement_timeout and lock_timeout, set both\nand lock_timeout >= statement_timeout is pointless, but this logic seems not\nimplemented in the code. I am wondering if lock_timeout >= transaction_timeout,\nshould we invalidate lock_timeout? Or maybe just document this.\n\n> --\n> Regards\n> Junwang Zhao\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 20 Dec 2023 09:48:59 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
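Junwang's question above — whether `lock_timeout >= transaction_timeout` is similarly pointless — can be reasoned about by comparing absolute deadlines: each timer measures from a start point no later than the transaction start, so a lock_timeout at least as long as a nonzero transaction_timeout can never fire first. A rough Python sketch of that comparison (illustrative only; the server arms real timers in C rather than computing a table like this):

```python
def first_timeout_to_fire(tx_start, stmt_start, lock_wait_start,
                          transaction_timeout, statement_timeout,
                          lock_timeout):
    """Return the name of the enabled timeout whose deadline comes
    first (0 disables a timeout; times are in arbitrary units).

    Since tx_start <= stmt_start <= lock_wait_start, a lock_timeout
    >= transaction_timeout always loses to the transaction timer.
    """
    deadlines = {}
    if transaction_timeout:
        deadlines["transaction_timeout"] = tx_start + transaction_timeout
    if statement_timeout:
        deadlines["statement_timeout"] = stmt_start + statement_timeout
    if lock_timeout:
        deadlines["lock_timeout"] = lock_wait_start + lock_timeout
    if not deadlines:
        return None
    return min(deadlines, key=deadlines.get)
```

For example, with the transaction started at t=0, a lock wait started at t=3, and both timeouts set to 10, the transaction deadline (10) precedes the lock deadline (13) — which is the argument for either invalidating such a lock_timeout or at least documenting the interaction.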
{
"msg_contents": "Hi Junwang Zhao\r\n #should we invalidate lock_timeout? Or maybe just document this.\r\nI think you mean when lock_time is greater than trasaction-time invalidate lock_timeout or needs to be logged ?\r\n\r\n\r\n\r\n\r\nBest whish\r\n________________________________\r\n发件人: Junwang Zhao <zhjwpku@gmail.com>\r\n发送时间: 2023年12月20日 9:48\r\n收件人: Andrey M. Borodin <x4mmm@yandex-team.ru>\r\n抄送: Japin Li <japinli@hotmail.com>; 邱宇航 <iamqyh@gmail.com>; Fujii Masao <masao.fujii@oss.nttdata.com>; Andrey Borodin <amborodin86@gmail.com>; Andres Freund <andres@anarazel.de>; Michael Paquier <michael.paquier@gmail.com>; Nikolay Samokhvalov <samokhvalov@gmail.com>; pgsql-hackers <pgsql-hackers@postgresql.org>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\n主题: Re: Transaction timeout\r\n\r\nOn Tue, Dec 19, 2023 at 10:51 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\r\n>\r\n> On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\r\n> >\r\n> >\r\n> >\r\n> > > On 19 Dec 2023, at 13:26, Andrey M. 
Borodin <x4mmm@yandex-team.ru> wrote:\r\n> > >\r\n> > > I don’t have Windows machine, so I hope CF bot will pick this.\r\n> >\r\n> > I used Github CI to produce version of tests that seems to be is stable on Windows.\r\n> > Sorry for the noise.\r\n> >\r\n> >\r\n> > Best regards, Andrey Borodin.\r\n>\r\n> + <para>\r\n> + If <varname>transaction_timeout</varname> is shorter than\r\n> + <varname>idle_in_transaction_session_timeout</varname> or\r\n> <varname>statement_timeout</varname>\r\n> + <varname>transaction_timeout</varname> will invalidate longer timeout.\r\n> + </para>\r\n>\r\n> When transaction_timeout is *equal* to idle_in_transaction_session_timeout\r\n> or statement_timeout, idle_in_transaction_session_timeout and statement_timeout\r\n> will also be invalidated, the logic in the code seems right, though\r\n> this document\r\n> is a little bit inaccurate.\r\n>\r\n <para>\r\n Unlike <varname>statement_timeout</varname>, this timeout can only occur\r\n while waiting for locks. Note that if\r\n<varname>statement_timeout</varname>\r\n is nonzero, it is rather pointless to set\r\n<varname>lock_timeout</varname> to\r\n the same or larger value, since the statement timeout would always\r\n trigger first. If <varname>log_min_error_statement</varname> is set to\r\n <literal>ERROR</literal> or lower, the statement that timed out will be\r\n logged.\r\n </para>\r\n\r\nThere is a note about statement_timeout and lock_timeout, set both\r\nand lock_timeout >= statement_timeout is pointless, but this logic seems not\r\nimplemented in the code. I am wondering if lock_timeout >= transaction_timeout,\r\nshould we invalidate lock_timeout? Or maybe just document this.\r\n\r\n> --\r\n> Regards\r\n> Junwang Zhao\r\n\r\n\r\n\r\n--\r\nRegards\r\nJunwang Zhao",
"msg_date": "Wed, 20 Dec 2023 01:58:47 +0000",
"msg_from": "Thomas wen <Thomas_valentine_365@outlook.com>",
"msg_from_op": false,
"msg_subject": "=?utf-8?B?5Zue5aSNOiBUcmFuc2FjdGlvbiB0aW1lb3V0?="
},
{
"msg_contents": "On Wed, Dec 20, 2023 at 9:58 AM Thomas wen\n<Thomas_valentine_365@outlook.com> wrote:\n>\n> Hi Junwang Zhao\n> #should we invalidate lock_timeout? Or maybe just document this.\n> I think you mean when lock_time is greater than trasaction-time invalidate lock_timeout or needs to be logged ?\n>\nI mean the interleaving of the gucs, which is lock_timeout and the new\nintroduced transaction_timeout,\nif lock_timeout >= transaction_timeout, seems no need to enable lock_timeout.\n>\n>\n>\n> Best whish\n> ________________________________\n> 发件人: Junwang Zhao <zhjwpku@gmail.com>\n> 发送时间: 2023年12月20日 9:48\n> 收件人: Andrey M. Borodin <x4mmm@yandex-team.ru>\n> 抄送: Japin Li <japinli@hotmail.com>; 邱宇航 <iamqyh@gmail.com>; Fujii Masao <masao.fujii@oss.nttdata.com>; Andrey Borodin <amborodin86@gmail.com>; Andres Freund <andres@anarazel.de>; Michael Paquier <michael.paquier@gmail.com>; Nikolay Samokhvalov <samokhvalov@gmail.com>; pgsql-hackers <pgsql-hackers@postgresql.org>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\n> 主题: Re: Transaction timeout\n>\n> On Tue, Dec 19, 2023 at 10:51 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > >\n> > >\n> > >\n> > > > On 19 Dec 2023, at 13:26, Andrey M. 
Borodin <x4mmm@yandex-team.ru> wrote:\n> > > >\n> > > > I don’t have Windows machine, so I hope CF bot will pick this.\n> > >\n> > > I used Github CI to produce version of tests that seems to be is stable on Windows.\n> > > Sorry for the noise.\n> > >\n> > >\n> > > Best regards, Andrey Borodin.\n> >\n> > + <para>\n> > + If <varname>transaction_timeout</varname> is shorter than\n> > + <varname>idle_in_transaction_session_timeout</varname> or\n> > <varname>statement_timeout</varname>\n> > + <varname>transaction_timeout</varname> will invalidate longer timeout.\n> > + </para>\n> >\n> > When transaction_timeout is *equal* to idle_in_transaction_session_timeout\n> > or statement_timeout, idle_in_transaction_session_timeout and statement_timeout\n> > will also be invalidated, the logic in the code seems right, though\n> > this document\n> > is a little bit inaccurate.\n> >\n> <para>\n> Unlike <varname>statement_timeout</varname>, this timeout can only occur\n> while waiting for locks. Note that if\n> <varname>statement_timeout</varname>\n> is nonzero, it is rather pointless to set\n> <varname>lock_timeout</varname> to\n> the same or larger value, since the statement timeout would always\n> trigger first. If <varname>log_min_error_statement</varname> is set to\n> <literal>ERROR</literal> or lower, the statement that timed out will be\n> logged.\n> </para>\n>\n> There is a note about statement_timeout and lock_timeout, set both\n> and lock_timeout >= statement_timeout is pointless, but this logic seems not\n> implemented in the code. I am wondering if lock_timeout >= transaction_timeout,\n> should we invalidate lock_timeout? Or maybe just document this.\n>\n> > --\n> > Regards\n> > Junwang Zhao\n>\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 20 Dec 2023 10:35:16 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi Junwang Zhao\n Agree +1\n\nBest whish\n\nJunwang Zhao <zhjwpku@gmail.com> 于2023年12月20日周三 10:35写道:\n\n> On Wed, Dec 20, 2023 at 9:58 AM Thomas wen\n> <Thomas_valentine_365@outlook.com> wrote:\n> >\n> > Hi Junwang Zhao\n> > #should we invalidate lock_timeout? Or maybe just document this.\n> > I think you mean when lock_time is greater than trasaction-time\n> invalidate lock_timeout or needs to be logged ?\n> >\n> I mean the interleaving of the gucs, which is lock_timeout and the new\n> introduced transaction_timeout,\n> if lock_timeout >= transaction_timeout, seems no need to enable\n> lock_timeout.\n> >\n> >\n> >\n> > Best whish\n> > ________________________________\n> > 发件人: Junwang Zhao <zhjwpku@gmail.com>\n> > 发送时间: 2023年12月20日 9:48\n> > 收件人: Andrey M. Borodin <x4mmm@yandex-team.ru>\n> > 抄送: Japin Li <japinli@hotmail.com>; 邱宇航 <iamqyh@gmail.com>; Fujii Masao\n> <masao.fujii@oss.nttdata.com>; Andrey Borodin <amborodin86@gmail.com>;\n> Andres Freund <andres@anarazel.de>; Michael Paquier <\n> michael.paquier@gmail.com>; Nikolay Samokhvalov <samokhvalov@gmail.com>;\n> pgsql-hackers <pgsql-hackers@postgresql.org>;\n> pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\n> > 主题: Re: Transaction timeout\n> >\n> > On Tue, Dec 19, 2023 at 10:51 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 19, 2023 at 6:27 PM Andrey M. Borodin <\n> x4mmm@yandex-team.ru> wrote:\n> > > >\n> > > >\n> > > >\n> > > > > On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru>\n> wrote:\n> > > > >\n> > > > > I don’t have Windows machine, so I hope CF bot will pick this.\n> > > >\n> > > > I used Github CI to produce version of tests that seems to be is\n> stable on Windows.\n> > > > Sorry for the noise.\n> > > >\n> > > >\n> > > > Best regards, Andrey Borodin.\n> > >\n> > > + <para>\n> > > + If <varname>transaction_timeout</varname> is shorter than\n> > > + <varname>idle_in_transaction_session_timeout</varname> or\n> > > <varname>statement_timeout</varname>\n> > > + <varname>transaction_timeout</varname> will invalidate longer\n> timeout.\n> > > + </para>\n> > >\n> > > When transaction_timeout is *equal* to\n> idle_in_transaction_session_timeout\n> > > or statement_timeout, idle_in_transaction_session_timeout and\n> statement_timeout\n> > > will also be invalidated, the logic in the code seems right, though\n> > > this document\n> > > is a little bit inaccurate.\n> > >\n> > <para>\n> > Unlike <varname>statement_timeout</varname>, this timeout can\n> only occur\n> > while waiting for locks. Note that if\n> > <varname>statement_timeout</varname>\n> > is nonzero, it is rather pointless to set\n> > <varname>lock_timeout</varname> to\n> > the same or larger value, since the statement timeout would\n> always\n> > trigger first. If <varname>log_min_error_statement</varname> is\n> set to\n> > <literal>ERROR</literal> or lower, the statement that timed out\n> will be\n> > logged.\n> > </para>\n> >\n> > There is a note about statement_timeout and lock_timeout, set both\n> > and lock_timeout >= statement_timeout is pointless, but this logic seems\n> not\n> > implemented in the code. I am wondering if lock_timeout >=\n> transaction_timeout,\n> > should we invalidate lock_timeout? Or maybe just document this.\n> >\n> > > --\n> > > Regards\n> > > Junwang Zhao\n> >\n> >\n> >\n> > --\n> > Regards\n> > Junwang Zhao\n> >\n> >\n>\n>\n> --\n> Regards\n> Junwang Zhao\n>\n>\n>",
"msg_date": "Wed, 20 Dec 2023 20:23:13 +0800",
"msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Tue, 19 Dec 2023 at 22:06, Japin Li <japinli@hotmail.com> wrote:\n> On Tue, 19 Dec 2023 at 18:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>>\n>>> I don’t have Windows machine, so I hope CF bot will pick this.\n>>\n>> I used Github CI to produce version of tests that seems to be is stable on Windows.\n>\n> It still failed on Windows Server 2019 [1].\n>\n> diff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n> --- C:/cirrus/src/test/isolation/expected/timeouts.out\t2023-12-19 10:34:30.354721100 +0000\n> +++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\t2023-12-19 10:38:25.877981600 +0000\n> @@ -100,7 +100,7 @@\n> step stt3_check_stt2: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/stt2'\n> count\n> -----\n> - 0\n> + 1\n> (1 row)\n>\n> step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n>\n> [1] https://api.cirrus-ci.com/v1/artifact/task/4707530400595968/testrun/build/testrun/isolation/isolation/regression.diffs\n\nHi,\n\nI try to split the test for transaction timeout, and all passed on my CI [1].\n\nOTOH, I find if I set transaction_timeout in a transaction, it will not take\neffect immediately. For example:\n\n[local]:2049802 postgres=# BEGIN;\nBEGIN\n[local]:2049802 postgres=*# SET transaction_timeout TO '1s';\nSET\n[local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1; -- wait 10s\n relname\n--------------\n pg_statistic\n(1 row)\n\n[local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1;\nFATAL: terminating connection due to transaction timeout\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\n\nIt looks odd. Does this is expected? I'm not read all the threads,\nam I missing something?\n\n[1] https://cirrus-ci.com/build/6574686130143232\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.",
"msg_date": "Fri, 22 Dec 2023 13:39:27 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 1:39 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Tue, 19 Dec 2023 at 22:06, Japin Li <japinli@hotmail.com> wrote:\n> > On Tue, 19 Dec 2023 at 18:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >>> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >>>\n> >>> I don’t have Windows machine, so I hope CF bot will pick this.\n> >>\n> >> I used Github CI to produce version of tests that seems to be is stable on Windows.\n> >\n> > It still failed on Windows Server 2019 [1].\n> >\n> > diff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n> > --- C:/cirrus/src/test/isolation/expected/timeouts.out 2023-12-19 10:34:30.354721100 +0000\n> > +++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out 2023-12-19 10:38:25.877981600 +0000\n> > @@ -100,7 +100,7 @@\n> > step stt3_check_stt2: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/stt2'\n> > count\n> > -----\n> > - 0\n> > + 1\n> > (1 row)\n> >\n> > step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n> >\n> > [1] https://api.cirrus-ci.com/v1/artifact/task/4707530400595968/testrun/build/testrun/isolation/isolation/regression.diffs\n>\n> Hi,\n>\n> I try to split the test for transaction timeout, and all passed on my CI [1].\n>\n> OTOH, I find if I set transaction_timeout in a transaction, it will not take\n> effect immediately. For example:\n>\n> [local]:2049802 postgres=# BEGIN;\n> BEGIN\n> [local]:2049802 postgres=*# SET transaction_timeout TO '1s';\nwhen this execute, TransactionTimeout is still 0, this command will\nnot set timeout\n> SET\n> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1; -- wait 10s\nwhen this command get execute, start_xact_command will enable the timer\n> relname\n> --------------\n> pg_statistic\n> (1 row)\n>\n> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1;\n> FATAL: terminating connection due to transaction timeout\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Succeeded.\n>\n> It looks odd. Does this is expected? I'm not read all the threads,\n> am I missing something?\n\nI think this is by design, if you debug statement_timeout, it's the same\nbehaviour, the timeout will be set for each command after the second\ncommand was called, you just aren't aware of this.\n\nI doubt people will set this in a transaction.\n>\n> [1] https://cirrus-ci.com/build/6574686130143232\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 22 Dec 2023 20:29:58 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 22 Dec 2023 at 20:29, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> On Fri, Dec 22, 2023 at 1:39 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Tue, 19 Dec 2023 at 22:06, Japin Li <japinli@hotmail.com> wrote:\n>> > On Tue, 19 Dec 2023 at 18:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> >>> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> >>>\n>> >>> I don’t have Windows machine, so I hope CF bot will pick this.\n>> >>\n>> >> I used Github CI to produce version of tests that seems to be is stable on Windows.\n>> >\n>> > It still failed on Windows Server 2019 [1].\n>> >\n>> > diff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n>> > --- C:/cirrus/src/test/isolation/expected/timeouts.out 2023-12-19 10:34:30.354721100 +0000\n>> > +++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out 2023-12-19 10:38:25.877981600 +0000\n>> > @@ -100,7 +100,7 @@\n>> > step stt3_check_stt2: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/stt2'\n>> > count\n>> > -----\n>> > - 0\n>> > + 1\n>> > (1 row)\n>> >\n>> > step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n>> >\n>> > [1] https://api.cirrus-ci.com/v1/artifact/task/4707530400595968/testrun/build/testrun/isolation/isolation/regression.diffs\n>>\n>> Hi,\n>>\n>> I try to split the test for transaction timeout, and all passed on my CI [1].\n>>\n>> OTOH, I find if I set transaction_timeout in a transaction, it will not take\n>> effect immediately. For example:\n>>\n>> [local]:2049802 postgres=# BEGIN;\n>> BEGIN\n>> [local]:2049802 postgres=*# SET transaction_timeout TO '1s';\n> when this execute, TransactionTimeout is still 0, this command will\n> not set timeout\n>> SET\n>> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1; -- wait 10s\n> when this command get execute, start_xact_command will enable the timer\n\nThanks for your exaplantion, got it.\n\n>> relname\n>> --------------\n>> pg_statistic\n>> (1 row)\n>>\n>> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1;\n>> FATAL: terminating connection due to transaction timeout\n>> server closed the connection unexpectedly\n>> This probably means the server terminated abnormally\n>> before or while processing the request.\n>> The connection to the server was lost. Attempting reset: Succeeded.\n>>\n>> It looks odd. Does this is expected? I'm not read all the threads,\n>> am I missing something?\n>\n> I think this is by design, if you debug statement_timeout, it's the same\n> behaviour, the timeout will be set for each command after the second\n> command was called, you just aren't aware of this.\n>\n\nI try to set idle_in_transaction_session_timeout after begin transaction,\nit changes immediately, so I think transaction_timeout should also be take\nimmediately.\n\n> I doubt people will set this in a transaction.\n\nMaybe not,\n\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Fri, 22 Dec 2023 22:25:37 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Fri, 22 Dec 2023 at 20:29, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > On Fri, Dec 22, 2023 at 1:39 PM Japin Li <japinli@hotmail.com> wrote:\n> >>\n> >>\n> >> On Tue, 19 Dec 2023 at 22:06, Japin Li <japinli@hotmail.com> wrote:\n> >> > On Tue, 19 Dec 2023 at 18:27, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >> >>> On 19 Dec 2023, at 13:26, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >> >>>\n> >> >>> I don’t have Windows machine, so I hope CF bot will pick this.\n> >> >>\n> >> >> I used Github CI to produce version of tests that seems to be is stable on Windows.\n> >> >\n> >> > It still failed on Windows Server 2019 [1].\n> >> >\n> >> > diff -w -U3 C:/cirrus/src/test/isolation/expected/timeouts.out C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out\n> >> > --- C:/cirrus/src/test/isolation/expected/timeouts.out 2023-12-19 10:34:30.354721100 +0000\n> >> > +++ C:/cirrus/build/testrun/isolation/isolation/results/timeouts.out 2023-12-19 10:38:25.877981600 +0000\n> >> > @@ -100,7 +100,7 @@\n> >> > step stt3_check_stt2: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/stt2'\n> >> > count\n> >> > -----\n> >> > - 0\n> >> > + 1\n> >> > (1 row)\n> >> >\n> >> > step itt4_set: SET idle_in_transaction_session_timeout = '1ms'; SET statement_timeout = '10s'; SET lock_timeout = '10s'; SET transaction_timeout = '10s';\n> >> >\n> >> > [1] https://api.cirrus-ci.com/v1/artifact/task/4707530400595968/testrun/build/testrun/isolation/isolation/regression.diffs\n> >>\n> >> Hi,\n> >>\n> >> I try to split the test for transaction timeout, and all passed on my CI [1].\n> >>\n> >> OTOH, I find if I set transaction_timeout in a transaction, it will not take\n> >> effect immediately. For example:\n> >>\n> >> [local]:2049802 postgres=# BEGIN;\n> >> BEGIN\n> >> [local]:2049802 postgres=*# SET transaction_timeout TO '1s';\n> > when this execute, TransactionTimeout is still 0, this command will\n> > not set timeout\n> >> SET\n> >> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1; -- wait 10s\n> > when this command get execute, start_xact_command will enable the timer\n>\n> Thanks for your exaplantion, got it.\n>\n> >> relname\n> >> --------------\n> >> pg_statistic\n> >> (1 row)\n> >>\n> >> [local]:2049802 postgres=*# SELECT relname FROM pg_class LIMIT 1;\n> >> FATAL: terminating connection due to transaction timeout\n> >> server closed the connection unexpectedly\n> >> This probably means the server terminated abnormally\n> >> before or while processing the request.\n> >> The connection to the server was lost. Attempting reset: Succeeded.\n> >>\n> >> It looks odd. Does this is expected? I'm not read all the threads,\n> >> am I missing something?\n> >\n> > I think this is by design, if you debug statement_timeout, it's the same\n> > behaviour, the timeout will be set for each command after the second\n> > command was called, you just aren't aware of this.\n> >\n>\n> I try to set idle_in_transaction_session_timeout after begin transaction,\n> it changes immediately, so I think transaction_timeout should also be take\n> immediately.\n\nAh, right, idle_in_transaction_session_timeout is set after the set\ncommand finishes and before the backend send *ready for query*\nto the client, so the value of the GUC is already set before\nnext command.\n\nI bet you must have checked this ;)\n\n>\n> > I doubt people will set this in a transaction.\n>\n> Maybe not,\n>\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 22 Dec 2023 22:37:40 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n>> I try to set idle_in_transaction_session_timeout after begin transaction,\n>> it changes immediately, so I think transaction_timeout should also be take\n>> immediately.\n>\n> Ah, right, idle_in_transaction_session_timeout is set after the set\n> command finishes and before the backend send *ready for query*\n> to the client, so the value of the GUC is already set before\n> next command.\n>\n\nI mean, is it possible to set transaction_timeout before next comand?\n\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Fri, 22 Dec 2023 22:44:27 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n> >> it changes immediately, so I think transaction_timeout should also be take\n> >> immediately.\n> >\n> > Ah, right, idle_in_transaction_session_timeout is set after the set\n> > command finishes and before the backend send *ready for query*\n> > to the client, so the value of the GUC is already set before\n> > next command.\n> >\n>\n> I mean, is it possible to set transaction_timeout before next comand?\n>\nYeah, it's possible, set transaction_timeout in the when it first\ngoes into *idle in transaction* mode, see the attached files.\n\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 22 Dec 2023 23:30:56 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n>> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n>> >> it changes immediately, so I think transaction_timeout should also be take\n>> >> immediately.\n>> >\n>> > Ah, right, idle_in_transaction_session_timeout is set after the set\n>> > command finishes and before the backend send *ready for query*\n>> > to the client, so the value of the GUC is already set before\n>> > next command.\n>> >\n>>\n>> I mean, is it possible to set transaction_timeout before next comand?\n>>\n> Yeah, it's possible, set transaction_timeout in the when it first\n> goes into *idle in transaction* mode, see the attached files.\n>\n\nThanks for updating the patch, LGTM.\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Sat, 23 Dec 2023 08:32:26 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Sat, 23 Dec 2023 at 08:32, Japin Li <japinli@hotmail.com> wrote:\n> On Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n>>>\n>>>\n>>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n>>> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n>>> >> it changes immediately, so I think transaction_timeout should also be take\n>>> >> immediately.\n>>> >\n>>> > Ah, right, idle_in_transaction_session_timeout is set after the set\n>>> > command finishes and before the backend send *ready for query*\n>>> > to the client, so the value of the GUC is already set before\n>>> > next command.\n>>> >\n>>>\n>>> I mean, is it possible to set transaction_timeout before next comand?\n>>>\n>> Yeah, it's possible, set transaction_timeout in the when it first\n>> goes into *idle in transaction* mode, see the attached files.\n>>\n>\n> Thanks for updating the patch, LGTM.\n\nSorry for the noise!\n\nRead the previous threads, I find why the author enable transaction_timeout\nin start_xact_command().\n\nThe v15 patch cannot handle COMMIT AND CHAIN, see [1]. For example:\n\nSET transaction_timeout TO '2s'; BEGIN; SELECT 1, pg_sleep(1); COMMIT AND CHAIN; SELECT 2, pg_sleep(1); COMMIT;\n\nThe transaction_timeout do not reset when executing COMMIT AND CHAIN.\n\n[1] https://www.postgresql.org/message-id/a906dea1-76a1-4f26-76c5-a7efad3ef5b8%40oss.nttdata.com\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Sat, 23 Dec 2023 10:40:32 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 10:40 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> On Sat, 23 Dec 2023 at 08:32, Japin Li <japinli@hotmail.com> wrote:\n> > On Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n> >>>\n> >>>\n> >>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >>> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n> >>> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n> >>> >> it changes immediately, so I think transaction_timeout should also be take\n> >>> >> immediately.\n> >>> >\n> >>> > Ah, right, idle_in_transaction_session_timeout is set after the set\n> >>> > command finishes and before the backend send *ready for query*\n> >>> > to the client, so the value of the GUC is already set before\n> >>> > next command.\n> >>> >\n> >>>\n> >>> I mean, is it possible to set transaction_timeout before next comand?\n> >>>\n> >> Yeah, it's possible, set transaction_timeout in the when it first\n> >> goes into *idle in transaction* mode, see the attached files.\n> >>\n> >\n> > Thanks for updating the patch, LGTM.\n>\n> Sorry for the noise!\n>\n> Read the previous threads, I find why the author enable transaction_timeout\n> in start_xact_command().\n>\n> The v15 patch cannot handle COMMIT AND CHAIN, see [1]. For example:\n\nI didn't read the previous threads, sorry for that, let's stick to v14.\n\n>\n> SET transaction_timeout TO '2s'; BEGIN; SELECT 1, pg_sleep(1); COMMIT AND CHAIN; SELECT 2, pg_sleep(1); COMMIT;\n>\n> The transaction_timeout do not reset when executing COMMIT AND CHAIN.\n>\n> [1] https://www.postgresql.org/message-id/a906dea1-76a1-4f26-76c5-a7efad3ef5b8%40oss.nttdata.com\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sat, 23 Dec 2023 11:08:33 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "a\nOn Sat, 23 Dec 2023 at 10:40, Japin Li <japinli@hotmail.com> wrote:\n> On Sat, 23 Dec 2023 at 08:32, Japin Li <japinli@hotmail.com> wrote:\n>> On Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n>>>>\n>>>>\n>>>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>>> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n>>>> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n>>>> >> it changes immediately, so I think transaction_timeout should also be take\n>>>> >> immediately.\n>>>> >\n>>>> > Ah, right, idle_in_transaction_session_timeout is set after the set\n>>>> > command finishes and before the backend send *ready for query*\n>>>> > to the client, so the value of the GUC is already set before\n>>>> > next command.\n>>>> >\n>>>>\n>>>> I mean, is it possible to set transaction_timeout before next comand?\n>>>>\n>>> Yeah, it's possible, set transaction_timeout in the when it first\n>>> goes into *idle in transaction* mode, see the attached files.\n>>>\n>>\n>> Thanks for updating the patch, LGTM.\n>\n> Sorry for the noise!\n>\n> Read the previous threads, I find why the author enable transaction_timeout\n> in start_xact_command().\n>\n> The v15 patch cannot handle COMMIT AND CHAIN, see [1]. For example:\n>\n> SET transaction_timeout TO '2s'; BEGIN; SELECT 1, pg_sleep(1); COMMIT AND CHAIN; SELECT 2, pg_sleep(1); COMMIT;\n>\n> The transaction_timeout do not reset when executing COMMIT AND CHAIN.\n>\n> [1] https://www.postgresql.org/message-id/a906dea1-76a1-4f26-76c5-a7efad3ef5b8%40oss.nttdata.com\n\nAttach v16 to solve this. Any suggestions?\n\n--\nRegrads,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.",
"msg_date": "Sat, 23 Dec 2023 11:17:14 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Sat, Dec 23, 2023 at 11:17 AM Japin Li <japinli@hotmail.com> wrote:\n>\n> a\n> On Sat, 23 Dec 2023 at 10:40, Japin Li <japinli@hotmail.com> wrote:\n> > On Sat, 23 Dec 2023 at 08:32, Japin Li <japinli@hotmail.com> wrote:\n> >> On Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >>> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\n> >>>>\n> >>>>\n> >>>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >>>> > On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\n> >>>> >> I try to set idle_in_transaction_session_timeout after begin transaction,\n> >>>> >> it changes immediately, so I think transaction_timeout should also be take\n> >>>> >> immediately.\n> >>>> >\n> >>>> > Ah, right, idle_in_transaction_session_timeout is set after the set\n> >>>> > command finishes and before the backend send *ready for query*\n> >>>> > to the client, so the value of the GUC is already set before\n> >>>> > next command.\n> >>>> >\n> >>>>\n> >>>> I mean, is it possible to set transaction_timeout before next comand?\n> >>>>\n> >>> Yeah, it's possible, set transaction_timeout in the when it first\n> >>> goes into *idle in transaction* mode, see the attached files.\n> >>>\n> >>\n> >> Thanks for updating the patch, LGTM.\n> >\n> > Sorry for the noise!\n> >\n> > Read the previous threads, I find why the author enable transaction_timeout\n> > in start_xact_command().\n> >\n> > The v15 patch cannot handle COMMIT AND CHAIN, see [1]. For example:\n> >\n> > SET transaction_timeout TO '2s'; BEGIN; SELECT 1, pg_sleep(1); COMMIT AND CHAIN; SELECT 2, pg_sleep(1); COMMIT;\n> >\n> > The transaction_timeout do not reset when executing COMMIT AND CHAIN.\n> >\n> > [1] https://www.postgresql.org/message-id/a906dea1-76a1-4f26-76c5-a7efad3ef5b8%40oss.nttdata.com\n>\n> Attach v16 to solve this. Any suggestions?\n\nI've checked this with *COMMIT AND CHAIN* and *ABORT AND CHAIN*,\nboth work as expected. Thanks for the update.\n\n>\n> --\n> Regrads,\n> Japin Li\n> ChengDu WenWu Information Technology Co., Ltd.\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sat, 23 Dec 2023 11:35:17 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\r\n\r\n> 在 2023年12月23日,11:35,Junwang Zhao <zhjwpku@gmail.com> 写道:\r\n> \r\n> On Sat, Dec 23, 2023 at 11:17 AM Japin Li <japinli@hotmail.com> wrote:\r\n>> \r\n>> a\r\n>>> On Sat, 23 Dec 2023 at 10:40, Japin Li <japinli@hotmail.com> wrote:\r\n>>> On Sat, 23 Dec 2023 at 08:32, Japin Li <japinli@hotmail.com> wrote:\r\n>>>> On Fri, 22 Dec 2023 at 23:30, Junwang Zhao <zhjwpku@gmail.com> wrote:\r\n>>>>> On Fri, Dec 22, 2023 at 10:44 PM Japin Li <japinli@hotmail.com> wrote:\r\n>>>>>> \r\n>>>>>> \r\n>>>>>> On Fri, 22 Dec 2023 at 22:37, Junwang Zhao <zhjwpku@gmail.com> wrote:\r\n>>>>>>> On Fri, Dec 22, 2023 at 10:25 PM Japin Li <japinli@hotmail.com> wrote:\r\n>>>>>>>> I try to set idle_in_transaction_session_timeout after begin transaction,\r\n>>>>>>>> it changes immediately, so I think transaction_timeout should also be take\r\n>>>>>>>> immediately.\r\n>>>>>>> \r\n>>>>>>> Ah, right, idle_in_transaction_session_timeout is set after the set\r\n>>>>>>> command finishes and before the backend send *ready for query*\r\n>>>>>>> to the client, so the value of the GUC is already set before\r\n>>>>>>> next command.\r\n>>>>>>> \r\n>>>>>> \r\n>>>>>> I mean, is it possible to set transaction_timeout before next comand?\r\n>>>>>> \r\n>>>>> Yeah, it's possible, set transaction_timeout in the when it first\r\n>>>>> goes into *idle in transaction* mode, see the attached files.\r\n>>>>> \r\n>>>> \r\n>>>> Thanks for updating the patch, LGTM.\r\n>>> \r\n>>> Sorry for the noise!\r\n>>> \r\n>>> Read the previous threads, I find why the author enable transaction_timeout\r\n>>> in start_xact_command().\r\n>>> \r\n>>> The v15 patch cannot handle COMMIT AND CHAIN, see [1]. For example:\r\n>>> \r\n>>> SET transaction_timeout TO '2s'; BEGIN; SELECT 1, pg_sleep(1); COMMIT AND CHAIN; SELECT 2, pg_sleep(1); COMMIT;\r\n>>> \r\n>>> The transaction_timeout do not reset when executing COMMIT AND CHAIN.\r\n>>> \r\n>>> [1] https://www.postgresql.org/message-id/a906dea1-76a1-4f26-76c5-a7efad3ef5b8%40oss.nttdata.com\r\n>> \r\n>> Attach v16 to solve this. Any suggestions?\r\n> \r\n> I've checked this with *COMMIT AND CHAIN* and *ABORT AND CHAIN*,\r\n> both work as expected. Thanks for the update.\r\n> \r\n\r\nThanks for your testing and reviewing!",
"msg_date": "Sat, 23 Dec 2023 06:48:14 +0000",
"msg_from": "Li Japin <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 22 Dec 2023, at 10:39, Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> I try to split the test for transaction timeout, and all passed on my CI [1].\n\n\nI like the refactoring you did in timeout.spec. I thought it is impossible, because permutations would try to reinitialize FATALed sessions. But, obviously, tests work the way you refactored it.\nHowever I don't think ignoring test failures on Windows without understanding root cause is a good idea.\nLet's get back to v13 version of tests, understand why it failed, apply your test refactorings afterwards. BTW are you sure that v14 refactorings are functional equivalent of v13 tests?\n\nTo go with this plan I attach slightly modified version of v13 tests in v16 patchset. The only change is timing in \"sleep_there\" step. I suspect that failure was induced by more coarse timer granularity on Windows. Tests were giving only 9 milliseconds for a timeout to entirely wipe away backend from pg_stat_activity. This saves testing time, but might induce false positive test flaps. So I've raised wait times to 100ms. This seems too much, but I do not have other ideas how to ensure tests stability. Maybe 50ms would be enough, I do not know. Isolation runs ~50 seconds now. I'm tempted to say that 200ms for timeouts worth it.\n\n\nAs to 2nd step \"Try to enable transaction_timeout during transaction\", I think this makes sense. But if we are doing so, shouldn't we also allow to enable idle_in_transaction timeout in a same manner? Currently we only allow to disable other timeouts... Also, if we are already in transaction, shouldn't we also subtract current transaction span from timeout?\nI think making this functionality as another step of the patchset was a good idea.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Sat, 23 Dec 2023 22:14:35 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi Andrey,\n\nOn Sun, Dec 24, 2023 at 1:14 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > On 22 Dec 2023, at 10:39, Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > I try to split the test for transaction timeout, and all passed on my CI [1].\n>\n>\n> I like the refactoring you did in timeout.spec. I thought it is impossible, because permutations would try to reinitialize FATALed sessions. But, obviously, tests work the way you refactored it.\n> However I don't think ignoring test failures on Windows without understanding root cause is a good idea.\n> Let's get back to v13 version of tests, understand why it failed, apply your test refactorings afterwards. BTW are you sure that v14 refactorings are functional equivalent of v13 tests?\n>\n> To go with this plan I attach slightly modified version of v13 tests in v16 patchset. The only change is timing in \"sleep_there\" step. I suspect that failure was induced by more coarse timer granularity on Windows. Tests were giving only 9 milliseconds for a timeout to entirely wipe away backend from pg_stat_activity. This saves testing time, but might induce false positive test flaps. So I've raised wait times to 100ms. This seems too much, but I do not have other ideas how to ensure tests stability. Maybe 50ms would be enough, I do not know. Isolation runs ~50 seconds now. I'm tempted to say that 200ms for timeouts worth it.\n>\n>\n> As to 2nd step \"Try to enable transaction_timeout during transaction\", I think this makes sense. But if we are doing so, shouldn't we also allow to enable idle_in_transaction timeout in a same manner? Currently we only allow to disable other timeouts... Also, if we are already in transaction, shouldn't we also subtract current transaction span from timeout?\nidle_in_transaction_session_timeout is already the behavior Japin suggested,\nit is enabled before the backend sends *ready for query* to the client.\n\n> I think making this functionality as another step of the patchset was a good idea.\n>\n> Thanks!\n>\n>\n> Best regards, Andrey Borodin.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Mon, 25 Dec 2023 10:17:50 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Sun, 24 Dec 2023 at 01:14, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 22 Dec 2023, at 10:39, Japin Li <japinli@hotmail.com> wrote:\n>>\n>>\n>> I try to split the test for transaction timeout, and all passed on my CI [1].\n>\n>\n> I like the refactoring you did in timeout.spec. I thought it is impossible, because permutations would try to reinitialize FATALed sessions. But, obviously, tests work the way you refactored it.\n> However I don't think ignoring test failures on Windows without understanding root cause is a good idea.\n\nYeah.\n\n> Let's get back to v13 version of tests, understand why it failed, apply your test refactorings afterwards. BTW are you sure that v14 refactorings are functional equivalent of v13 tests?\n>\nI think it is equivalent. Maybe I'm missing something. Please let me know\nif they are not equivalent.\n\n> To go with this plan I attach slightly modified version of v13 tests in v16 patchset. The only change is timing in \"sleep_there\" step. I suspect that failure was induced by more coarse timer granularity on Windows. Tests were giving only 9 milliseconds for a timeout to entirely wipe away backend from pg_stat_activity. This saves testing time, but might induce false positive test flaps. So I've raised wait times to 100ms. This seems too much, but I do not have other ideas how to ensure tests stability. Maybe 50ms would be enough, I do not know. Isolation runs ~50 seconds now. I'm tempted to say that 200ms for timeouts worth it.\n>\nSo this is caused by Windows timer granularity?\n\n> As to 2nd step \"Try to enable transaction_timeout during transaction\", I think this makes sense. But if we are doing so, shouldn't we also allow to enable idle_in_transaction timeout in a same manner?\n\nI think the current idle_in_transaction_session_timeout works correctly.\n\n> Currently we only allow to disable other timeouts... Also, if we are already in transaction, shouldn't we also subtract current transaction span from timeout?\n\nAgreed.\n\n> I think making this functionality as another step of the patchset was a good idea.\n>\n\n--\nRegards,\nJapin Li\nChengDu WenWu Information Technology Co., Ltd.\n\n\n",
"msg_date": "Mon, 25 Dec 2023 10:27:32 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hey Andrey,\n\nOn Sun, Dec 24, 2023 at 1:14 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > On 22 Dec 2023, at 10:39, Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > I try to split the test for transaction timeout, and all passed on my CI [1].\n>\n>\n> I like the refactoring you did in timeout.spec. I thought it is impossible, because permutations would try to reinitialize FATALed sessions. But, obviously, tests work the way you refactored it.\n> However I don't think ignoring test failures on Windows without understanding root cause is a good idea.\n> Let's get back to v13 version of tests, understand why it failed, apply your test refactorings afterwards. BTW are you sure that v14 refactorings are functional equivalent of v13 tests?\n>\n> To go with this plan I attach slightly modified version of v13 tests in v16 patchset. The only change is timing in \"sleep_there\" step. I suspect that failure was induced by more coarse timer granularity on Windows. Tests were giving only 9 milliseconds for a timeout to entirely wipe away backend from pg_stat_activity. This saves testing time, but might induce false positive test flaps. So I've raised wait times to 100ms. This seems too much, but I do not have other ideas how to ensure tests stability. Maybe 50ms would be enough, I do not know. Isolation runs ~50 seconds now. I'm tempted to say that 200ms for timeouts worth it.\n>\n>\n> As to 2nd step \"Try to enable transaction_timeout during transaction\", I think this makes sense. But if we are doing so, shouldn't we also allow to enable idle_in_transaction timeout in a same manner? Currently we only allow to disable other timeouts... Also, if we are already in transaction, shouldn't we also subtract current transaction span from timeout?\n> I think making this functionality as another step of the patchset was a good idea.\n>\n> Thanks!\nSeems V5~V17 doesn't work as expected for Nikolay's case:\n\npostgres=# set transaction_timeout to '2s';\nSET\npostgres=# begin; select pg_sleep(1); select pg_sleep(1); select\npg_sleep(1); select pg_sleep(1); select pg_sleep(1); commit;\nBEGIN\n\nThe reason for this seems to be that the timer is refreshed for each\ncommand; xact_started alone cannot indicate whether it's a new\ntransaction or not, since there is a TransactionState that contains some\ninfo.\n\nSo I propose the following change, what do you think?\n\ndiff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c\nindex a2611cf8e6..cffd2c44d0 100644\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -2746,7 +2746,7 @@ start_xact_command(void)\n StartTransactionCommand();\n\n /* Schedule or reschedule transaction timeout */\n- if (TransactionTimeout > 0)\n+ if (TransactionTimeout > 0 &&\n!get_timeout_active(TRANSACTION_TIMEOUT))\n enable_timeout_after(TRANSACTION_TIMEOUT,\nTransactionTimeout);\n\n xact_started = true;\n\n>\n>\n> Best regards, Andrey Borodin.\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Fri, 29 Dec 2023 00:02:36 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 28 Dec 2023, at 21:02, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> Seems V5~V17 doesn't work as expected for Nikolay's case:\n> \n\nYeah, that's a problem.\n> So I propose the following change, what do you think?\nThis breaks COMMIT AND CHAIN.\n\nPFA v18: I've added a test for Nik's case and for COMMIT AND CHAIN. Now we need to fix stuff to pass these tests (I've crafted output).\nWe also need a test for patchset step \"Try to enable transaction_timeout before next command\".\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 29 Dec 2023 15:00:12 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Fri, Dec 29, 2023 at 6:00 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>\n>\n>\n> > On 28 Dec 2023, at 21:02, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > Seems V5~V17 doesn't work as expected for Nikolay's case:\n> >\n>\n> Yeah, that's a problem.\n> > So I propose the following change, what do you think?\n> This breaks COMMIT AND CHAIN.\n>\n> PFA v18: I've added a test for Nik's case and for COMMIT AND CHAIN. Now we need to fix stuff to pass these tests (I've crafted output).\n> We also need a test for patchset step \"Try to enable transaction_timeout before next command\".\n>\n> Thanks!\n\nAfter exploring the code, I found scheduling the timeout in\n`StartTransaction` might be a reasonable idea, all the chain\ncommands will call this function.\n\nWhat concerns me is that it is also called by StartParallelWorkerTransaction,\nI'm not sure if we should enable this timeout for parallel execution.\n\nThoughts?\n\n>\n>\n> Best regards, Andrey Borodin.\n\n\n\n-- \nRegards\nJunwang Zhao",
"msg_date": "Fri, 29 Dec 2023 19:00:10 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\n\n> On 29 Dec 2023, at 16:00, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> After exploring the code, I found scheduling the timeout in\n> `StartTransaction` might be a reasonable idea, all the chain\n> commands will call this function.\n> \n> What concerns me is that it is also called by StartParallelWorkerTransaction,\n> I'm not sure if we should enable this timeout for parallel execution.\n\nI think for parallel workers we should mimic statement_timeout, because these workers have per-statement lifetime.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 29 Dec 2023 16:15:00 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 29 Dec 2023, at 16:15, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n\nPFA v20. Code steps are intact.\n\nFurther refactored tests:\n 1. Check termination of active and idle queries (previously tests from Li were testing only termination of idle query)\n 2. Check timeout reschedule (even when last active query was 'SET transaction_timeout')\n 3. Check that timeout is not rescheduled by new queries (Nik's case)\n\n\nDo we have any other open items?\nI've left 'make check-timeouts' in isolation directory, it's for development purposes. I think we should remove this before committing. Obviously, all patch steps are expected to be squashed before commit.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 1 Jan 2024 19:28:33 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 1 Jan 2024, at 19:28, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> 3. Check that timeout is not rescheduled by new queries (Nik's case)\n\nThe test of Nik's case was not stable enough together with COMMIT AND CHAIN. So I've separated these cases into different permutations.\nLooking through CI logs it seems variation in sleeps and actual timeouts easily reach 30+ms. I'm not entirely sure we can reach 100% stable tests without too big timeouts.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 3 Jan 2024 11:39:44 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 3 Jan 2024, at 11:39, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> \n>> On 1 Jan 2024, at 19:28, Andrey M. Borodin <x4mmm@yandex-team.ru <mailto:x4mmm@yandex-team.ru>> wrote:\n>> \n>> 3. Check that timeout is not rescheduled by new queries (Nik's case)\n> \n> The test of Nik's case was not stable enough together with COMMIT AND CHAIN. So I've separated these cases into different permutations.\n> Looking through CI logs it seems variation in sleeps and actual timeouts easily reach 30+ms. I'm not entirely sure we can reach 100% stable tests without too big timeouts.\n> \n> \n> Best regards, Andrey Borodin.\n> \n> <v21-0001-Introduce-transaction_timeout.patch>\n> <v21-0002-Add-better-tests-for-transaction_timeout.patch>\n> <v21-0003-Try-to-enable-transaction_timeout-before-next-co.patch>\n> <v21-0004-fix-reschedule-timeout-for-each-commmand.patch>\n I do not understand why, but mailing list did not pick patches that I sent. I'll retry.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 3 Jan 2024 16:46:36 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 3 Jan 2024, at 16:46, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> I do not understand why, but mailing list did not pick patches that I sent. I'll retry.\n\n\nSorry for the noise. Seems like Apple updated something in Mail.App couple of days ago and it started to use strange \"Apple-Mail\" stuff by default.\nI see patches were attached, but were not recognized by mailing list archives and CFbot.\nNow I've flipped everything to \"plain text by default\" everywhere. Hope that helps.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 3 Jan 2024 17:04:41 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Wed, 03 Jan 2024 at 20:04, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 3 Jan 2024, at 16:46, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> I do not understand why, but mailing list did not pick patches that I sent. I'll retry.\n>\n>\n> Sorry for the noise. Seems like Apple updated something in Mail.App couple of days ago and it started to use strange \"Apple-Mail\" stuff by default.\n> I see patches were attached, but were not recognized by mailing list archives and CFbot.\n> Now I've flipped everything to \"plain text by default\" everywhere. Hope that helps.\n>\n\nThanks for updating the patch, I find the test on Debian with meson failed [1].\n\nIs the timeout too short for testing? I see the timeouts for lock_timeout\nand statement_timeout are much bigger than transaction_timeout.\n\n[1] https://api.cirrus-ci.com/v1/artifact/task/5490718928535552/testrun/build-32/testrun/isolation/isolation/regression.diffs\n\n\n",
"msg_date": "Thu, 04 Jan 2024 10:14:34 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 4 Jan 2024, at 07:14, Japin Li <japinli@hotmail.com> wrote:\n> \n> Is the timeout too short for testing? I see the timeouts for lock_timeout\n> and statement_timeout are much bigger than transaction_timeout.\n\nMakes sense. Done. I've also put some effort into fine-tuning the timeouts in Nik's case tests. To have a 100ms gap between the check, a false positive, and the actual bug we had, I had to use transaction_timeout = 300ms. Currently all tests take more than 1000ms!\nBut I do not see a way to make these tests both stable and short.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Thu, 4 Jan 2024 13:41:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere was a CFbot test failure last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4040/\n[2] https://cirrus-ci.com/task/4721191139672064\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 17:23:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 22 Jan 2024, at 11:23, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> there was a CFbot test failure last time it was run [2]. Please have a\n> look and post an updated version if necessary.\nThanks Peter!\n\nI’ve inspected CI fails and they were caused by two different problems:\n1. It’s unsafe for isolation tester to await transaction_timeout within a query. Usually it gets\nFATAL: terminating connection due to transaction timeout\nBut if VM is a bit slow it can get occasional\nPQconsumeInput failed: server closed the connection unexpectedly\nSo, currently all tests use “passive waiting”, in a session that will not timeout.\n\n2. In some cases pg_sleep(0.1) was sleeping up to 200 ms. That was making s7 and s8 fail, because they rely on this margin.\nI’ve separated these tests into different test timeouts-long and increased margin to 300ms. Now tests run a horrible 2431 ms. Moreover I’m afraid that on buildfarm we can have much randomly-slower machines so this test might be excluded.\nThis test checks COMMIT AND CHAIN and flow of small queries (Nik’s case).\n\nAlso I’ve verified that every \"enable_timeout_after(TRANSACTION_TIMEOUT)” and “disable_timeout(TRANSACTION_TIMEOUT)” is necessary and found that case of aborting \"idle in transaction (aborted)” is not covered by tests. I’m not sure we need a test for this.\nJapin, Junwang, what do you think?\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 26 Jan 2024 11:44:02 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 26 Jan 2024, at 11:44, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> \n> 1. It’s unsafe for isaoltion tester to await transaction_timeout within a query. Usually it gets\n> FATAL: terminating connection due to transaction timeout\n> But if VM is a bit slow it can get occasional\n> PQconsumeInput failed: server closed the connection unexpectedly\n> So, currently all tests use “passive waiting”, in a session that will not timeout.\n\n\nOops, sorry, I’ve accidentally sent version without this fix.\nHere it is.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 26 Jan 2024 11:46:43 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Fri, 26 Jan 2024 at 14:44, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 22 Jan 2024, at 11:23, Peter Smith <smithpb2250@gmail.com> wrote:\n>>\n>> Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n>> there was a CFbot test failure last time it was run [2]. Please have a\n>> look and post an updated version if necessary.\n> Thanks Peter!\n>\n\nThanks for updating the patch. Here are some comments for v24.\n\n+ <para>\n+ Terminate any session that spans longer than the specified amount of\n+ time in transaction. The limit applies both to explicit transactions\n+ (started with <command>BEGIN</command>) and to implicitly started\n+ transaction corresponding to single statement. But this limit is not\n+ applied to prepared transactions.\n+ If this value is specified without units, it is taken as milliseconds.\n+ A value of zero (the default) disables the timeout.\n+ </para>\nThe sentence \"But this limit is not applied to prepared transactions\" is redundant,\nsince we have a paragraph to describe this later.\n\n+\n+ <para>\n+ If <varname>transaction_timeout</varname> is shorter than\n+ <varname>idle_in_transaction_session_timeout</varname> or <varname>statement_timeout</varname>\n+ <varname>transaction_timeout</varname> will invalidate longer timeout.\n+ </para>\n+\n\nSince we already try to disable the timeouts, should we try to disable\nthem even if they are equal?\n\n+\n+ <para>\n+ Prepared transactions are not subject for this timeout.\n+ </para>\n\nMaybe wrapping this with <note> is a good idea.\n\n> I’ve inspected CI fails and they were caused by two different problems:\n> 1. It’s unsafe for isolation tester to await transaction_timeout within a query. Usually it gets\n> FATAL: terminating connection due to transaction timeout\n> But if VM is a bit slow it can get occasional\n> PQconsumeInput failed: server closed the connection unexpectedly\n> So, currently all tests use “passive waiting”, in a session that will not timeout.\n>\n> 2. In some cases pg_sleep(0.1) was sleeping up to 200 ms. That was making s7 and s8 fail, because they rely on this margin.\n\nI'm curious why this happened.\n\n> I’ve separated these tests into different test timeouts-long and increased margin to 300ms. Now tests run a horrible 2431 ms. Moreover I’m afraid that on buildfarm we can have much randomly-slower machines so this test might be excluded.\n> This test checks COMMIT AND CHAIN and flow of small queries (Nik’s case).\n>\n> Also I’ve verified that every \"enable_timeout_after(TRANSACTION_TIMEOUT)” and “disable_timeout(TRANSACTION_TIMEOUT)” is necessary and found that case of aborting \"idle in transaction (aborted)” is not covered by tests. I’m not sure we need a test for this.\n\nI see there is a test about idle_in_transaction_timeout and transaction_timeout.\n\nBoth of them only check the session, but don't check the reason, so we cannot\ndistinguish the reason they are terminated. Right?\n\n> Japin, Junwang, what do you think?\n\nHowever, checking the reason on the timeout session may cause the regression test\nto fail (as you pointed out in 1), so I don't strongly insist on it.\n\n--\nBest regards,\nJapin Li.\n\n\n",
"msg_date": "Fri, 26 Jan 2024 22:58:34 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 26 Jan 2024, at 19:58, Japin Li <japinli@hotmail.com> wrote:\n> \n> Thanks for updating the patch. Here are some comments for v24.\n> \n> + <para>\n> + Terminate any session that spans longer than the specified amount of\n> + time in transaction. The limit applies both to explicit transactions\n> + (started with <command>BEGIN</command>) and to implicitly started\n> + transaction corresponding to single statement. But this limit is not\n> + applied to prepared transactions.\n> + If this value is specified without units, it is taken as milliseconds.\n> + A value of zero (the default) disables the timeout.\n> + </para>\n> The sentence \"But this limit is not applied to prepared transactions\" is redundant,\n> since we have a paragraph to describe this later.\nFixed.\n> \n> +\n> + <para>\n> + If <varname>transaction_timeout</varname> is shorter than\n> + <varname>idle_in_transaction_session_timeout</varname> or <varname>statement_timeout</varname>\n> + <varname>transaction_timeout</varname> will invalidate longer timeout.\n> + </para>\n> +\n> \n> Since we already try to disable the timeouts, should we try to disable\n> them even if they are equal?\n\nWell, we disable timeouts on equality. Fixed docs.\n\n> \n> +\n> + <para>\n> + Prepared transactions are not subject for this timeout.\n> + </para>\n> \n> Maybe wrapping this with <note> is a good idea.\nDone.\n\n> \n>> I’ve inspected CI fails and they were caused by two different problems:\n>> 1. It’s unsafe for isolation tester to await transaction_timeout within a query. Usually it gets\n>> FATAL: terminating connection due to transaction timeout\n>> But if VM is a bit slow it can get occasional\n>> PQconsumeInput failed: server closed the connection unexpectedly\n>> So, currently all tests use “passive waiting”, in a session that will not timeout.\n>> \n>> 2. In some cases pg_sleep(0.1) was sleeping up to 200 ms. That was making s7 and s8 fail, because they rely on this margin.\n> \n> I'm curious why this happened.\nI think pg_sleep() cannot provide guarantees on when the next query will be executed. In our case we need the isolation tester to see that sleep is over and continue in the other session...\n\n>> I’ve separated these tests into different test timeouts-long and increased margin to 300ms. Now tests run a horrible 2431 ms. Moreover I’m afraid that on buildfarm we can have much randomly-slower machines so this test might be excluded.\n>> This test checks COMMIT AND CHAIN and flow of small queries (Nik’s case).\n>> \n>> Also I’ve verified that every \"enable_timeout_after(TRANSACTION_TIMEOUT)” and “disable_timeout(TRANSACTION_TIMEOUT)” is necessary and found that case of aborting \"idle in transaction (aborted)” is not covered by tests. I’m not sure we need a test for this.\n> \n> I see there is a test about idle_in_transaction_timeout and transaction_timeout.\n> \n> Both of them only check the session, but don't check the reason, so we cannot\n> distinguish the reason they are terminated. Right?\nYes.\n> \n>> Japin, Junwang, what do you think?\n> \n> However, checking the reason on the timeout session may cause the regression test\n> to fail (as you pointed out in 1), so I don't strongly insist on it.\n\nIndeed, if we check the reason of FATAL timeouts, we get flaky tests.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 30 Jan 2024 11:22:51 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Tue, 30 Jan 2024 at 14:22, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 26 Jan 2024, at 19:58, Japin Li <japinli@hotmail.com> wrote:\n>>\n>> Thanks for updating the patch. Here are some comments for v24.\n>>\n>> + <para>\n>> + Terminate any session that spans longer than the specified amount of\n>> + time in transaction. The limit applies both to explicit transactions\n>> + (started with <command>BEGIN</command>) and to implicitly started\n>> + transaction corresponding to single statement. But this limit is not\n>> + applied to prepared transactions.\n>> + If this value is specified without units, it is taken as milliseconds.\n>> + A value of zero (the default) disables the timeout.\n>> + </para>\n>> The sentence \"But this limit is not applied to prepared transactions\" is redundant,\n>> since we have a paragraph to describe this later.\n> Fixed.\n>>\n>> +\n>> + <para>\n>> + If <varname>transaction_timeout</varname> is shorter than\n>> + <varname>idle_in_transaction_session_timeout</varname> or <varname>statement_timeout</varname>\n>> + <varname>transaction_timeout</varname> will invalidate longer timeout.\n>> + </para>\n>> +\n>>\n>> Since we already try to disable the timeouts, should we try to disable\n>> them even if they are equal?\n>\n> Well, we disable timeouts on equality. Fixed docs.\n>\n>>\n>> +\n>> + <para>\n>> + Prepared transactions are not subject for this timeout.\n>> + </para>\n>>\n>> Maybe wrapping this with <note> is a good idea.\n> Done.\n>\n\nThanks for updating the patch. LGTM.\n\nIf there are no other objections, I'll change it to ready for committer\nnext Monday.\n\n\n",
"msg_date": "Wed, 31 Jan 2024 17:27:06 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\n\n> On 31 Jan 2024, at 14:27, Japin Li <japinli@hotmail.com> wrote:\n> \n> LGTM.\n> \n> If there are no other objections, I'll change it to ready for committer\n> next Monday.\n\nI think we have a quorum, so I decided to go ahead and flip the status to RfC. Thanks!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 31 Jan 2024 14:57:32 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi!\n\nOn Wed, Jan 31, 2024 at 11:57 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 31 Jan 2024, at 14:27, Japin Li <japinli@hotmail.com> wrote:\n> >\n> > LGTM.\n> >\n> > If there are no other objections, I'll change it to ready for committer\n> > next Monday.\n>\n> I think we have a quorum, so I decided to go ahead and flipped status to RfC. Thanks!\n\nI checked this patch. Generally it looks good. I've slightly revised it.\n\nI think there is one unaddressed concern by Andres Freund [1] about\nthe overhead of this patch by adding extra branches and function calls\nin the case transaction_timeout is disabled. I tried to measure the\noverhead of this patch using a pgbench script containing 20 semicolons\n(20 empty statements in 20 empty transactions). I didn't manage to\nfind measurable overhead or change of performance profile (I used\nXCode Instruments on my x86 MacBook). One thing I still found\npossible to do is to avoid unconditional calls to\nget_timeout_active(TRANSACTION_TIMEOUT). Instead, I put the responsibility\nfor disabling the timeout after the GUC is disabled on the\ntransaction_timeout assign hook.\n\nI removed the TODO comment from _doSetFixedOutputState(). I think\nbackup restore is the operation where slow commands and slow\ntransactions are expected, and it's natural to disable\ntransaction_timeout among other timeouts there. And the existing\ncomment clarifies that.\n\nAlso I made some grammar fixes to docs and comments.\n\nI'm going to push this if there are no objections.\n\nLinks.\n1. https://www.postgresql.org/message-id/20221206011050.s6hapukjqha35hud%40alap3.anarazel.de\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 13 Feb 2024 23:42:35 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-13 23:42:35 +0200, Alexander Korotkov wrote:\n> diff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\n> index 464858117e0..a124ba59330 100644\n> --- a/src/backend/access/transam/xact.c\n> +++ b/src/backend/access/transam/xact.c\n> @@ -2139,6 +2139,10 @@ StartTransaction(void)\n> \t */\n> \ts->state = TRANS_INPROGRESS;\n> \n> +\t/* Schedule transaction timeout */\n> +\tif (TransactionTimeout > 0)\n> +\t\tenable_timeout_after(TRANSACTION_TIMEOUT, TransactionTimeout);\n> +\n> \tShowTransactionState(\"StartTransaction\");\n> }\n\nIsn't it a problem that all uses of StartTransaction() trigger a timeout, but\ntransaction commit/abort don't? What if e.g. logical replication apply starts\na transaction, commits it, and then goes idle? The timer would still be\nactive, afaict?\n\nI don't think it works well to enable timeouts in xact.c and to disable them\nin PostgresMain().\n\n\n> @@ -4491,12 +4511,18 @@ PostgresMain(const char *dbname, const char *username)\n> \t\t\t\tpgstat_report_activity(STATE_IDLEINTRANSACTION_ABORTED, NULL);\n> \n> \t\t\t\t/* Start the idle-in-transaction timer */\n> -\t\t\t\tif (IdleInTransactionSessionTimeout > 0)\n> +\t\t\t\tif (IdleInTransactionSessionTimeout > 0\n> +\t\t\t\t\t&& (IdleInTransactionSessionTimeout < TransactionTimeout || TransactionTimeout == 0))\n> \t\t\t\t{\n> \t\t\t\t\tidle_in_transaction_timeout_enabled = true;\n> \t\t\t\t\tenable_timeout_after(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,\n> \t\t\t\t\t\t\t\t\t\t IdleInTransactionSessionTimeout);\n> \t\t\t\t}\n> +\n> +\t\t\t\t/* Schedule or reschedule transaction timeout */\n> +\t\t\t\tif (TransactionTimeout > 0 && !get_timeout_active(TRANSACTION_TIMEOUT))\n> +\t\t\t\t\tenable_timeout_after(TRANSACTION_TIMEOUT,\n> +\t\t\t\t\t\t\t\t\t\t TransactionTimeout);\n> \t\t\t}\n> \t\t\telse if (IsTransactionOrTransactionBlock())\n> \t\t\t{\n> @@ -4504,12 +4530,18 @@ PostgresMain(const char *dbname, const char *username)\n> 
\t\t\t\tpgstat_report_activity(STATE_IDLEINTRANSACTION, NULL);\n> \n> \t\t\t\t/* Start the idle-in-transaction timer */\n> -\t\t\t\tif (IdleInTransactionSessionTimeout > 0)\n> +\t\t\t\tif (IdleInTransactionSessionTimeout > 0\n> +\t\t\t\t\t&& (IdleInTransactionSessionTimeout < TransactionTimeout || TransactionTimeout == 0))\n> \t\t\t\t{\n> \t\t\t\t\tidle_in_transaction_timeout_enabled = true;\n> \t\t\t\t\tenable_timeout_after(IDLE_IN_TRANSACTION_SESSION_TIMEOUT,\n> \t\t\t\t\t\t\t\t\t\t IdleInTransactionSessionTimeout);\n> \t\t\t\t}\n> +\n> +\t\t\t\t/* Schedule or reschedule transaction timeout */\n> +\t\t\t\tif (TransactionTimeout > 0 && !get_timeout_active(TRANSACTION_TIMEOUT))\n> +\t\t\t\t\tenable_timeout_after(TRANSACTION_TIMEOUT,\n> +\t\t\t\t\t\t\t\t\t\t TransactionTimeout);\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n\nWhy do we need to do anything in these cases if the timer is started in\nStartTransaction()?\n\n\n> new file mode 100644\n> index 00000000000..ce2c9a43011\n> --- /dev/null\n> +++ b/src/test/isolation/specs/timeouts-long.spec\n> @@ -0,0 +1,35 @@\n> +# Tests for transaction timeout that require long wait times\n> +\n> +session s7\n> +step s7_begin\n> +{\n> + BEGIN ISOLATION LEVEL READ COMMITTED;\n> + SET transaction_timeout = '1s';\n> +}\n> +step s7_commit_and_chain { COMMIT AND CHAIN; }\n> +step s7_sleep\t{ SELECT pg_sleep(0.6); }\n> +step s7_abort\t{ ABORT; }\n> +\n> +session s8\n> +step s8_begin\n> +{\n> + BEGIN ISOLATION LEVEL READ COMMITTED;\n> + SET transaction_timeout = '900ms';\n> +}\n> +# to test that quick query does not restart transaction_timeout\n> +step s8_select_1 { SELECT 1; }\n> +step s8_sleep\t{ SELECT pg_sleep(0.6); }\n> +\n> +session checker\n> +step checker_sleep\t{ SELECT pg_sleep(0.3); }\n\nIsn't this test going to be very fragile on busy / slow machines? What if the\npg_sleep() takes one second, because there were other tasks to schedule? 
I'd\nbe surprised if this didn't fail under valgrind, for example.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Feb 2024 15:08:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Alexander, thanks for pushing this! This is small but very awaited feature.\n\n> On 16 Feb 2024, at 02:08, Andres Freund <andres@anarazel.de> wrote:\n> \n> Isn't this test going to be very fragile on busy / slow machines? What if the\n> pg_sleep() takes one second, because there were other tasks to schedule? I'd\n> be surprised if this didn't fail under valgrind, for example.\n\nEven more robust tests that were bullet-proof in CI previously exhibited some failures on buildfarm. Currently there are 5 failures through this weekend.\nFailing tests are testing interaction of idle_in_transaction_session_timeout vs transaction_timeout(5), and rescheduling transaction_timeout(6).\nSymptoms:\n\n[0] transaction timeout occurs when it is being scheduled. Seems like SET was running to long.\n step s6_begin: BEGIN ISOLATION LEVEL READ COMMITTED;\n step s6_tt: SET statement_timeout = '1s'; SET transaction_timeout = '10ms';\n+s6: FATAL: terminating connection due to transaction timeout\n step checker_sleep: SELECT pg_sleep(0.1);\n\n[1] transaction timeout 10ms is not detected after 1s\nstep s6_check: SELECT count(*) FROM pg_stat_activity WHERE application_name = 'isolation/timeouts/s6';\n count\n -----\n- 0\n+ 1\n\n[2] transaction timeout is not detected in both session 5 and session 6.\n\nSo far not signle animal reported failures twice, so it's hard to say anything about frequency. But it seems to be significant source of failures.\n\nSo far I have these ideas:\n\n1. Remove test sessions 5 and 6. But it seems a little strange that session 3 did not fail at all (it is testing interaction of statement_timeout and transaction_timeout). This test is very similar to test sessiont 5...\n2. Increase wait times.\nstep checker_sleep\t{ SELECT pg_sleep(0.1); }\nSeems not enough to observe backend timed out from pg_stat_activity. But this won't help from [0].\n3. Reuse waiting INJECTION_POINT from [3] to make timeout tests deterministic and safe from race conditions. 
With waiting injection points we can wait as much as needed in current environment.\n\nAny advices are welcome.\n\n\nBest regards, Andrey Borodin.\n\n\n[0] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-02-16%2020%3A06%3A51\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kestrel&dt=2024-02-16%2001%3A45%3A10\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fairywren&dt=2024-02-17%2001%3A55%3A45\n[3] https://www.postgresql.org/message-id/0925F9A9-4D53-4B27-A87E-3D83A757B0E0@yandex-team.ru\n\n",
"msg_date": "Sun, 18 Feb 2024 22:16:02 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\n\n> On 18 Feb 2024, at 22:16, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> But it seems a little strange that session 3 did not fail at all\nIt was only coincidence. Any test that verifies FATALing out in 100ms can fail, see new failure here [0].\n\nIn a nearby thread Michael is proposing injections points that can wait and be awaken. So I propose following course of action:\n1. Remove all tests that involve pg_stat_activity test of FATALed session (any permutation with checker_sleep step)\n2. Add idle_in_transaction_session_timeout, statement_timeout and transaction_timeout tests when injection points features get committed.\n\nAlexander, what do you think?\n\nBest regards, Andrey Borodin.\n\n\n[0] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2024-02-18%2022%3A23%3A45\n\n",
"msg_date": "Mon, 19 Feb 2024 12:14:05 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "\nOn Mon, 19 Feb 2024 at 17:14, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> On 18 Feb 2024, at 22:16, Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>\n>> But it seems a little strange that session 3 did not fail at all\n> It was only coincidence. Any test that verifies FATALing out in 100ms can fail, see new failure here [0].\n>\n> In a nearby thread Michael is proposing injections points that can wait and be awaken. So I propose following course of action:\n> 1. Remove all tests that involve pg_stat_activity test of FATALed session (any permutation with checker_sleep step)\n> 2. Add idle_in_transaction_session_timeout, statement_timeout and transaction_timeout tests when injection points features get committed.\n>\n\n+1\n\n> Alexander, what do you think?\n>\n\n\n",
"msg_date": "Mon, 19 Feb 2024 18:17:16 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 19 Feb 2024, at 15:17, Japin Li <japinli@hotmail.com> wrote:\n> \n> \n> +1\n\nPFA patch set of 4 patches:\n1. remove all potential flaky tests. BTW recently we had a bingo when 3 of them failed together [0]\n2-3. waiting injection points patchset by Michael Paquier, intact v2 from nearby thread.\n4. prototype of simple TAP tests for timeouts.\n\nI did not add a test for statement_timeout, because it still have good coverage in isolation tests. But added test for idle_sessoin_timeout.\nMaybe these tests could be implemented with NOTICE injection points (not requiring steps 2-3), but I'm afraid that they might be flaky too: FATALed connection might not send information necesary for test success (we will see something like \"PQconsumeInput failed: server closed the connection unexpectedly\" as in [1]).\n\n\nBest regards, Andrey Borodin.\n\n[0] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tamandua&dt=2024-02-20%2010%3A20%3A13\n[1] https://www.postgresql.org/message-id/flat/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com",
"msg_date": "Thu, 22 Feb 2024 22:23:24 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "Hi, Andrey!\n\nOn Thu, Feb 22, 2024 at 7:23 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 19 Feb 2024, at 15:17, Japin Li <japinli@hotmail.com> wrote:\n> >\n> >\n> > +1\n>\n> PFA patch set of 4 patches:\n> 1. remove all potential flaky tests. BTW recently we had a bingo when 3 of them failed together [0]\n> 2-3. waiting injection points patchset by Michael Paquier, intact v2 from nearby thread.\n> 4. prototype of simple TAP tests for timeouts.\n>\n> I did not add a test for statement_timeout, because it still have good coverage in isolation tests. But added test for idle_sessoin_timeout.\n> Maybe these tests could be implemented with NOTICE injection points (not requiring steps 2-3), but I'm afraid that they might be flaky too: FATALed connection might not send information necesary for test success (we will see something like \"PQconsumeInput failed: server closed the connection unexpectedly\" as in [1]).\n\nThank you for the patches. I've pushed the 0001 patch to avoid\nfurther failures on buildfarm. Let 0004 wait till injections points\nby Mechael are committed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 25 Feb 2024 20:50:48 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 25 Feb 2024, at 21:50, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> Thank you for the patches. I've pushed the 0001 patch to avoid\n> further failures on buildfarm. Let 0004 wait till injections points\n> by Mechael are committed.\n\nThanks!\n\nAll prerequisites are committed. I propose something in a line with this patch.\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 6 Mar 2024 11:22:00 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Wed, Mar 6, 2024 at 10:22 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 25 Feb 2024, at 21:50, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > Thank you for the patches. I've pushed the 0001 patch to avoid\n> > further failures on buildfarm. Let 0004 wait till injections points\n> > by Mechael are committed.\n>\n> Thanks!\n>\n> All prerequisites are committed. I propose something in a line with this patch.\n\nThank you. I took a look at the patch. Should we also check the\nrelevant message after the timeout is fired? We could check it in\npsql stderr or log for that.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 6 Mar 2024 21:55:46 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 7 Mar 2024, at 00:55, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> On Wed, Mar 6, 2024 at 10:22 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> On 25 Feb 2024, at 21:50, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>> \n>>> Thank you for the patches. I've pushed the 0001 patch to avoid\n>>> further failures on buildfarm. Let 0004 wait till injections points\n>>> by Mechael are committed.\n>> \n>> Thanks!\n>> \n>> All prerequisites are committed. I propose something in a line with this patch.\n> \n> Thank you. I took a look at the patch. Should we also check the\n> relevant message after the timeout is fired? We could check it in\n> psql stderr or log for that.\n\nPFA version which checks log output.\nBut I could not come up with a proper use of BackgroundPsql->query_until() to check outputs. And there are multiple possible errors.\n\nWe can copy test from src/bin/psql/t/001_basic.pl:\n\n# test behavior and output on server crash\nmy ($ret, $out, $err) = $node->psql('postgres',\n\"SELECT 'before' AS running;\\n\"\n. \"SELECT pg_terminate_backend(pg_backend_pid());\\n\"\n. \"SELECT 'AFTER' AS not_running;\\n\");\n\nis($ret, 2, 'server crash: psql exit code');\nlike($out, qr/before/, 'server crash: output before crash');\nok($out !~ qr/AFTER/, 'server crash: no output after crash');\nis( $err,\n'psql:<stdin>:2: FATAL: terminating connection due to administrator command\npsql:<stdin>:2: server closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\npsql:<stdin>:2: error: connection to server was lost',\n'server crash: error message’);\n\nBut I do not see much value in this.\nWhat do you think?\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Mon, 11 Mar 2024 15:52:57 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 12:53 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 7 Mar 2024, at 00:55, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Wed, Mar 6, 2024 at 10:22 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >>> On 25 Feb 2024, at 21:50, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >>>\n> >>> Thank you for the patches. I've pushed the 0001 patch to avoid\n> >>> further failures on buildfarm. Let 0004 wait till injections points\n> >>> by Mechael are committed.\n> >>\n> >> Thanks!\n> >>\n> >> All prerequisites are committed. I propose something in a line with this patch.\n> >\n> > Thank you. I took a look at the patch. Should we also check the\n> > relevant message after the timeout is fired? We could check it in\n> > psql stderr or log for that.\n>\n> PFA version which checks log output.\n> But I could not come up with a proper use of BackgroundPsql->query_until() to check outputs. And there are multiple possible errors.\n>\n> We can copy test from src/bin/psql/t/001_basic.pl:\n>\n> # test behavior and output on server crash\n> my ($ret, $out, $err) = $node->psql('postgres',\n> \"SELECT 'before' AS running;\\n\"\n> . \"SELECT pg_terminate_backend(pg_backend_pid());\\n\"\n> . \"SELECT 'AFTER' AS not_running;\\n\");\n>\n> is($ret, 2, 'server crash: psql exit code');\n> like($out, qr/before/, 'server crash: output before crash');\n> ok($out !~ qr/AFTER/, 'server crash: no output after crash');\n> is( $err,\n> 'psql:<stdin>:2: FATAL: terminating connection due to administrator command\n> psql:<stdin>:2: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> psql:<stdin>:2: error: connection to server was lost',\n> 'server crash: error message’);\n>\n> But I do not see much value in this.\n> What do you think?\n\nI think if checking psql stderr is problematic, checking just logs is\nfine. 
Could we wait for the relevant log messages one by one with\n$node->wait_for_log() just like 040_standby_failover_slots_sync.pl does?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 11 Mar 2024 13:18:01 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 11 Mar 2024, at 16:18, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> I think if checking psql stderr is problematic, checking just logs is\n> fine. Could we wait for the relevant log messages one by one with\n> $node->wait_for_log() just like 040_standby_failover_slots_sync.pl do?\n\nPFA version with $node->wait_for_log()\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 12 Mar 2024 13:28:19 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Tue, Mar 12, 2024 at 10:28 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 11 Mar 2024, at 16:18, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > I think if checking psql stderr is problematic, checking just logs is\n> > fine. Could we wait for the relevant log messages one by one with\n> > $node->wait_for_log() just like 040_standby_failover_slots_sync.pl do?\n>\n> PFA version with $node->wait_for_log()\n\nI've slightly revised the patch. I'm going to push it if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 13 Mar 2024 02:23:27 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "> On 13 Mar 2024, at 05:23, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> \n> On Tue, Mar 12, 2024 at 10:28 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>>> On 11 Mar 2024, at 16:18, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>> \n>>> I think if checking psql stderr is problematic, checking just logs is\n>>> fine. Could we wait for the relevant log messages one by one with\n>>> $node->wait_for_log() just like 040_standby_failover_slots_sync.pl do?\n>> \n>> PFA version with $node->wait_for_log()\n> \n> I've slightly revised the patch. I'm going to push it if no objections.\n\nOne small note: log_offset was updated, but was not used.\n\nThanks!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 13 Mar 2024 10:56:06 +0500",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
},
{
"msg_contents": "On Wed, Mar 13, 2024 at 7:56 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> > On 13 Mar 2024, at 05:23, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Tue, Mar 12, 2024 at 10:28 AM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> >>> On 11 Mar 2024, at 16:18, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >>>\n> >>> I think if checking psql stderr is problematic, checking just logs is\n> >>> fine. Could we wait for the relevant log messages one by one with\n> >>> $node->wait_for_log() just like 040_standby_failover_slots_sync.pl do?\n> >>\n> >> PFA version with $node->wait_for_log()\n> >\n> > I've slightly revised the patch. I'm going to push it if no objections.\n>\n> One small note: log_offset was updated, but was not used.\n\nThank you. This is the updated version.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 13 Mar 2024 10:17:37 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Transaction timeout"
}
] |
[
{
"msg_contents": "Hi\n\nA few of the developer option GUCs were missing the \"id\" attribute\nin their markup, making it impossible to link to them directly.\n\nSpecifically the entries from \"trace_locks\" to \"log_btree_build_stats\" here:\n\n https://www.postgresql.org/docs/current/runtime-config-developer.html\n\nPatch applies cleanly back to REL_11_STABLE.\n\nRegards\n\nIan Barwick",
"msg_date": "Sat, 3 Dec 2022 15:58:19 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "docs: add missing <varlistentry> id elements for developer GUCs"
},
{
"msg_contents": "On Sat, Dec 03, 2022 at 03:58:19PM +0900, Ian Lawrence Barwick wrote:\n> A few of the developer option GUCs were missing the \"id\" attribute\n> in their markup, making it impossible to link to them directly.\n\nTrue enough that the other developer GUCs do that, so applied and\nbackpatched down to 11.\n--\nMichael",
"msg_date": "Mon, 5 Dec 2022 11:24:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: docs: add missing <varlistentry> id elements for developer GUCs"
},
{
"msg_contents": "2022年12月5日(月) 11:25 Michael Paquier <michael@paquier.xyz>:\n>\n> On Sat, Dec 03, 2022 at 03:58:19PM +0900, Ian Lawrence Barwick wrote:\n> > A few of the developer option GUCs were missing the \"id\" attribute\n> > in their markup, making it impossible to link to them directly.\n>\n> True enough that the other developer GUCs do that, so applied and\n> backpatched down to 11.\n\nThanks :).\n\nIt has since been brought to my attention that there's a general patch covering\nmissing \"id\" attributes, including those ones (see [1]), albeit\nwithout a CF entry.\nI will see if we can move that forward, before I end up sending in more\npiecemeal patches as I encounter them...\n\n[1] https://www.postgresql.org/message-id/flat/3bac458c-b121-1b20-8dea-0665986faa40%40gmx.de#f52d07e7782b893a54e6e31b5a20b4db\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:59:34 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: docs: add missing <varlistentry> id elements for developer GUCs"
}
] |
[
{
"msg_contents": "From 6d10dafdd7c7789eddd7fd72ca22dfde74febe23 Mon Sep 17 00:00:00 2001\nFrom: Ali Sajjad <sasrizavi@gmail.com>\nDate: Sun, 4 Dec 2022 06:03:11 -0800\nSubject: [PATCH] Add .idea to gitignore for JetBrains CLion\n\n---\n .gitignore | 1 +\n 1 file changed, 1 insertion(+)\n\ndiff --git a/.gitignore b/.gitignore\nindex 1c0f3e5e35..7118b90f25 100644\n--- a/.gitignore\n+++ b/.gitignore\n@@ -31,6 +31,7 @@ win32ver.rc\n *.exe\n lib*dll.def\n lib*.pc\n+**/.idea\n\n # Local excludes in root directory\n /GNUmakefile\n-- \n2.34.1\n\nProbably I'm the first one building PostgreSQL on CLion.\n\nFrom 6d10dafdd7c7789eddd7fd72ca22dfde74febe23 Mon Sep 17 00:00:00 2001From: Ali Sajjad <sasrizavi@gmail.com>Date: Sun, 4 Dec 2022 06:03:11 -0800Subject: [PATCH] Add .idea to gitignore for JetBrains CLion--- .gitignore | 1 + 1 file changed, 1 insertion(+)diff --git a/.gitignore b/.gitignoreindex 1c0f3e5e35..7118b90f25 100644--- a/.gitignore+++ b/.gitignore@@ -31,6 +31,7 @@ win32ver.rc *.exe lib*dll.def lib*.pc+**/.idea # Local excludes in root directory /GNUmakefile-- 2.34.1Probably I'm the first one building PostgreSQL on CLion.",
"msg_date": "Sun, 4 Dec 2022 07:28:30 -0800",
"msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add .idea to gitignore for JetBrains CLion"
},
{
"msg_contents": "I searched the commit fest app and there's already someone who has made\nthis.\n\nOn Sun, Dec 4, 2022 at 7:28 AM Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>\nwrote:\n\n> From 6d10dafdd7c7789eddd7fd72ca22dfde74febe23 Mon Sep 17 00:00:00 2001\n> From: Ali Sajjad <sasrizavi@gmail.com>\n> Date: Sun, 4 Dec 2022 06:03:11 -0800\n> Subject: [PATCH] Add .idea to gitignore for JetBrains CLion\n>\n> ---\n> .gitignore | 1 +\n> 1 file changed, 1 insertion(+)\n>\n> diff --git a/.gitignore b/.gitignore\n> index 1c0f3e5e35..7118b90f25 100644\n> --- a/.gitignore\n> +++ b/.gitignore\n> @@ -31,6 +31,7 @@ win32ver.rc\n> *.exe\n> lib*dll.def\n> lib*.pc\n> +**/.idea\n>\n> # Local excludes in root directory\n> /GNUmakefile\n> --\n> 2.34.1\n>\n> Probably I'm the first one building PostgreSQL on CLion.\n>\n\nI searched the commit fest app and there's already someone who has made this.On Sun, Dec 4, 2022 at 7:28 AM Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> wrote:From 6d10dafdd7c7789eddd7fd72ca22dfde74febe23 Mon Sep 17 00:00:00 2001From: Ali Sajjad <sasrizavi@gmail.com>Date: Sun, 4 Dec 2022 06:03:11 -0800Subject: [PATCH] Add .idea to gitignore for JetBrains CLion--- .gitignore | 1 + 1 file changed, 1 insertion(+)diff --git a/.gitignore b/.gitignoreindex 1c0f3e5e35..7118b90f25 100644--- a/.gitignore+++ b/.gitignore@@ -31,6 +31,7 @@ win32ver.rc *.exe lib*dll.def lib*.pc+**/.idea # Local excludes in root directory /GNUmakefile-- 2.34.1Probably I'm the first one building PostgreSQL on CLion.",
"msg_date": "Sun, 4 Dec 2022 07:33:43 -0800",
"msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add .idea to gitignore for JetBrains CLion"
},
{
"msg_contents": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> writes:\n> +**/.idea\n\nOur policy is that the in-tree .gitignore files should only hide\nfiles that are build artifacts of standard build processes.\nSomething like this belongs in your personal ~/.gitexclude,\ninstead.\n\n(BTW, perhaps we should remove the entries targeting \".sl\"\nextensions? AFAIK that was only for HP-UX, which is now\ndesupported.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Dec 2022 10:35:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add .idea to gitignore for JetBrains CLion"
},
{
"msg_contents": "> On 4 Dec 2022, at 16:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> writes:\n>> +**/.idea\n> \n> Our policy is that the in-tree .gitignore files should only hide\n> files that are build artifacts of standard build processes.\n> Something like this belongs in your personal ~/.gitexclude,\n> instead.\n\nSince this comes up every now and again, I wonder if it's worth documenting\nthis in our .gitignore along the lines of:\n\n--- a/.gitignore\n+++ b/.gitignore\n@@ -1,3 +1,7 @@\n+# This contains ignores for build artifacts from standard builds,\n+# auxiliary files from local workflows should be ignored locally\n+# with $GIT_DIR/info/exclude\n+\n\n> (BTW, perhaps we should remove the entries targeting \".sl\"\n> extensions? AFAIK that was only for HP-UX, which is now\n> desupported.)\n\n+1. Grepping through the .gitignores in the tree didn't reveal anything else\nthat seemed to have outlived its usefulness.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 4 Dec 2022 21:02:04 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add .idea to gitignore for JetBrains CLion"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 4 Dec 2022, at 16:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Our policy is that the in-tree .gitignore files should only hide\n>> files that are build artifacts of standard build processes.\n>> Something like this belongs in your personal ~/.gitexclude,\n>> instead.\n\n> Since this comes up every now and again, I wonder if it's worth documenting\n> this in our .gitignore along the lines of:\n\nGood idea.\n\n>> (BTW, perhaps we should remove the entries targeting \".sl\"\n>> extensions? AFAIK that was only for HP-UX, which is now\n>> desupported.)\n\n> +1. Grepping through the .gitignores in the tree didn't reveal anything else\n> that seemed to have outlived its usefulness.\n\nI'll make it so. Thanks for checking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 04 Dec 2022 15:14:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add .idea to gitignore for JetBrains CLion"
}
] |
[
{
"msg_contents": "Hi\n\nOn this page:\n\n https://www.postgresql.org/docs/current/extend-extensions.html\n\nthree of the <sect2> sections are missing an \"id\" attribute; patch adds\nthese. Noticed when trying to create a stable link to one of the affected\nsections.\n\nRegards\n\nIan Barwick",
"msg_date": "Mon, 5 Dec 2022 10:50:16 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 2022-Dec-05, Ian Lawrence Barwick wrote:\n\n> On this page:\n> \n> https://www.postgresql.org/docs/current/extend-extensions.html\n> \n> three of the <sect2> sections are missing an \"id\" attribute; patch adds\n> these. Noticed when trying to create a stable link to one of the affected\n> sections.\n\nHm, I was reminded of this patch here that adds IDs in a lot of places\nhttps://postgr.es/m/3bac458c-b121-1b20-8dea-0665986faa40@gmx.de\nand this other one \nhttps://postgr.es/m/76287ac6-f415-8562-fdaa-5876380c05f3@gmx.de\nwhich adds XSL stuff for adding selectable anchors next to each\nid-carrying item.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:56:31 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "2022年12月5日(月) 18:56 Alvaro Herrera <alvherre@alvh.no-ip.org>:\n>\n> On 2022-Dec-05, Ian Lawrence Barwick wrote:\n>\n> > On this page:\n> >\n> > https://www.postgresql.org/docs/current/extend-extensions.html\n> >\n> > three of the <sect2> sections are missing an \"id\" attribute; patch adds\n> > these. Noticed when trying to create a stable link to one of the affected\n> > sections.\n>\n> Hm, I was reminded of this patch here that adds IDs in a lot of places\n> https://postgr.es/m/3bac458c-b121-1b20-8dea-0665986faa40@gmx.de\n> and this other one\n> https://postgr.es/m/76287ac6-f415-8562-fdaa-5876380c05f3@gmx.de\n> which adds XSL stuff for adding selectable anchors next to each\n> id-carrying item.\n\nOh, now you mention it, I vaguely recall seeing those. However the thread\nstalled back in March and the patches don't seem to have made it to a\nCommitFest entry. Brar, would you like to add an entry so they don't get\nlost? See: https://commitfest.postgresql.org/41/\n\nThe items in my patch are covered by the above so disregard that.\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:55:18 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.12.2022 at 01:55, Ian Lawrence Barwick wrote:\n\n> Oh, now you mention it, I vaguely recall seeing those. However the thread\n> stalled back in March and the patches don't seem to have made it to a\n> CommitFest entry.\n\nYes, my patches added quite a few ids and also some xsl/css logic to\nmake them more discoverable in the browser but I had gotten the\nimpression that nobody besides me cares about this, so I didn't push it\nany further.\n\n> Brar, would you like to add an entry so they don't get\n> lost? See: https://commitfest.postgresql.org/41/\n\nYes. I can certainly add them to the commitfest although I'm not sure if\nthey still apply cleanly.\n\nI can also rebase or extend them if somebody cares.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:25:34 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 2022-Dec-06, Brar Piening wrote:\n\n> On 06.12.2022 at 01:55, Ian Lawrence Barwick wrote:\n> \n> > Oh, now you mention it, I vaguely recall seeing those. However the thread\n> > stalled back in March and the patches don't seem to have made it to a\n> > CommitFest entry.\n> \n> Yes, my patches added quite a few ids and also some xsl/css logic to\n> make them more discoverable in the browser but I had gotten the\n> impression that nobody besides me cares about this, so I didn't push it\n> any further.\n\nI care. The problem last time was that we were in the middle of the last\ncommitfest, so we were (or at least I was) distracted by other stuff.\n\nLooking at the resulting psql page,\nhttps://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTIONS-EXPANDED\nI note that the ID for the -x option is called \"options-blah\". I\nunderstand where this comes from: it's the \"expanded\" bit in the\n\"options\" section. However, put together it's a bit silly to have\n\"options\" in plural there; it would make more sense to have it be\nhttps://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTION-EXPANDED\n(where you can read more naturally \"the expanded option for psql\").\nHow laborious would it be to make it so?\n\n> Yes. I can certainly add them to the commitfest although I'm not sure if\n> they still apply cleanly.\n\nIt'll probably have some conflicts, yeah.\n\n> I can also rebase or extend them if somebody cares.\n\nI would welcome separate patches: one to add the IDs, another for the\nXSL/CSS stuff. That allows us to discuss them separately.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:38:09 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
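For readers following along, the id naming being discussed maps onto the DocBook source roughly like this. This is a sketch: only the relationship between the lower-case `id` attribute and the upper-cased `#APP-PSQL-OPTION-EXPANDED` fragment in the rendered HTML is the point; the surrounding term/listitem content is illustrative, not copied from psql-ref.sgml.

```xml
<!-- Sketch: a varlistentry id of "app-psql-option-expanded" is what the
     #APP-PSQL-OPTION-EXPANDED anchor in the rendered HTML resolves to. -->
<varlistentry id="app-psql-option-expanded">
 <term><option>-x</option></term>
 <term><option>--expanded</option></term>
 <listitem>
  <para>Turn on the expanded table formatting mode.</para>
 </listitem>
</varlistentry>
```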
{
"msg_contents": "On 06.12.2022 at 09:38, Alvaro Herrera wrote:\n> I care. The problem last time is that we were in the middle of the last\n> commitfest, so we were (or at least I was) distracted by other stuff.\n\nOk, thanks. That's appreciated and understood.\n\n\n> Looking at the resulting psql page,\n> https://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTIONS-EXPANDED\n> I note that the ID for the -x option is called \"options-blah\". I\n> understand where does this come from: it's the \"expanded\" bit in the\n> \"options\" section. However, put together it's a bit silly to have\n> \"options\" in plural there; it would make more sense to have it be\n> https://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTION-EXPANDED\n> (where you can read more naturally \"the expanded option for psql\").\n> How laborious would it be to make it so?\n\nNo problem. I've already done it. Your second link should work now.\n\n\n> It'll probably have some conflicts, yeah.\n\nI've updated and rebased my branch on current master now.\n\n\n> I would welcome separate patches: one to add the IDs, another for the\n> XSL/CSS stuff. That allows us to discuss them separately.\n\nI'll send two patches in two separate e-mails in a moment.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 18:59:59 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.12.2022 at 18:59, Brar Piening wrote:\n> On 06.12.2022 at 09:38, Alvaro Herrera wrote:\n>> I would welcome separate patches: one to add the IDs, another for the\n>> XSL/CSS stuff. That allows us to discuss them separately.\n>\n> I'll send two patches in two separate e-mails in a moment.\n\nThis is patch no. 1, which adds ids to various elements without making them\nvisible on the HTML surface.\nIt is an updated and rebased version of the work discussed in the\nfollowing thread:\n\nhttps://www.postgresql.org/message-id/f900c5e1-a18a-84cc-6536-e85ec655c7d7%40gmx.de\n\nThe current statistics for docbook elements with/without ids after\napplying the patch are the following:\n\nname | with_id | without_id | id_coverage | min_id_len | max_id_len\n---------------+---------+------------+-------------+------------+------------\nsect2 | 870 | 0 | 100.00 | 7 | 57\nsect1 | 739 | 0 | 100.00 | 4 | 46\nrefentry | 307 | 0 | 100.00 | 8 | 37\nsect3 | 206 | 0 | 100.00 | 11 | 57\nchapter | 77 | 0 | 100.00 | 5 | 24\nsect4 | 28 | 0 | 100.00 | 16 | 47\nbiblioentry | 23 | 0 | 100.00 | 6 | 15\nsimplesect | 20 | 0 | 100.00 | 24 | 39\nappendix | 15 | 0 | 100.00 | 7 | 23\npart | 8 | 0 | 100.00 | 5 | 20\nco | 4 | 0 | 100.00 | 18 | 30\nfigure | 3 | 0 | 100.00 | 13 | 28\nreference | 3 | 0 | 100.00 | 14 | 18\nanchor | 1 | 0 | 100.00 | 21 | 21\nbibliography | 1 | 0 | 100.00 | 8 | 8\nbook | 1 | 0 | 100.00 | 10 | 10\nindex | 1 | 0 | 100.00 | 11 | 11\nlegalnotice | 1 | 0 | 100.00 | 13 | 13\npreface | 1 | 0 | 100.00 | 9 | 9\nglossentry | 119 | 14 | 89.47 | 13 | 32\ntable | 285 | 162 | 63.76 | 12 | 56\nexample | 27 | 16 | 62.79 | 12 | 42\nrefsect3 | 5 | 3 | 62.50 | 19 | 24\nrefsect2 | 41 | 56 | 42.27 | 10 | 36\nvarlistentry | 1701 | 3172 | 34.91 | 9 | 64\nfootnote | 5 | 18 | 21.74 | 17 | 32\nstep | 28 | 130 | 17.72 | 7 | 28\nrefsect1 | 151 | 1333 | 10.18 | 15 | 40\ninformaltable | 1 | 15 | 6.25 | 25 | 25\nphrase | 2 | 94 | 2.08 | 20 | 26\nindexterm | 5 | 3262 | 0.15 | 17 | 26\nvariablelist | 1 | 813 | 0.12 | 21 | 21\nfunction | 4 | 4011 | 0.10 | 12 | 28\nentry | 11 | 17740 | 0.06 | 21 | 40\npara | 3 | 25734 | 0.01 | 21 | 27\n\nRegards,\n\nBrar",
"msg_date": "Tue, 6 Dec 2022 19:11:57 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.12.2022 at 18:59, Brar Piening wrote:\n> On 06.12.2022 at 09:38, Alvaro Herrera wrote:\n>> I would welcome separate patches: one to add the IDs, another for the\n>> XSL/CSS stuff. That allows us to discuss them separately.\n>\n> I'll send two patches in two separate e-mails in a moment.\n\nThis is patch no. 2, which adds links to html elements with ids to make\nthem visible on the HTML surface when hovering over the element.\n\nRegards,\n\nBrar",
"msg_date": "Tue, 6 Dec 2022 19:16:51 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.12.2022 at 19:11, Brar Piening wrote:\r\n> The current statistics for docbook elements with/without ids after \r\n> applying the patch are the following: \r\n\r\nSomehow my e-mail client destroyed the table. That's how it was supposed \r\nto look like:\r\n\r\n name | with_id | without_id | id_coverage | min_id_len | \r\nmax_id_len\r\n---------------+---------+------------+-------------+------------+------------\r\n sect2 | 870 | 0 | 100.00 | 7 \r\n| 57\r\n sect1 | 739 | 0 | 100.00 | 4 \r\n| 46\r\n refentry | 307 | 0 | 100.00 | 8 \r\n| 37\r\n sect3 | 206 | 0 | 100.00 | 11 \r\n| 57\r\n chapter | 77 | 0 | 100.00 | 5 \r\n| 24\r\n sect4 | 28 | 0 | 100.00 | 16 \r\n| 47\r\n biblioentry | 23 | 0 | 100.00 | 6 \r\n| 15\r\n simplesect | 20 | 0 | 100.00 | 24 \r\n| 39\r\n appendix | 15 | 0 | 100.00 | 7 \r\n| 23\r\n part | 8 | 0 | 100.00 | 5 \r\n| 20\r\n co | 4 | 0 | 100.00 | 18 \r\n| 30\r\n figure | 3 | 0 | 100.00 | 13 \r\n| 28\r\n reference | 3 | 0 | 100.00 | 14 \r\n| 18\r\n anchor | 1 | 0 | 100.00 | 21 \r\n| 21\r\n bibliography | 1 | 0 | 100.00 | 8 \r\n| 8\r\n book | 1 | 0 | 100.00 | 10 \r\n| 10\r\n index | 1 | 0 | 100.00 | 11 \r\n| 11\r\n legalnotice | 1 | 0 | 100.00 | 13 \r\n| 13\r\n preface | 1 | 0 | 100.00 | 9 \r\n| 9\r\n glossentry | 119 | 14 | 89.47 | 13 \r\n| 32\r\n table | 285 | 162 | 63.76 | 12 \r\n| 56\r\n example | 27 | 16 | 62.79 | 12 \r\n| 42\r\n refsect3 | 5 | 3 | 62.50 | 19 \r\n| 24\r\n refsect2 | 41 | 56 | 42.27 | 10 \r\n| 36\r\n varlistentry | 1701 | 3172 | 34.91 | 9 \r\n| 64\r\n footnote | 5 | 18 | 21.74 | 17 \r\n| 32\r\n step | 28 | 130 | 17.72 | 7 \r\n| 28\r\n refsect1 | 151 | 1333 | 10.18 | 15 \r\n| 40\r\n informaltable | 1 | 15 | 6.25 | 25 \r\n| 25\r\n phrase | 2 | 94 | 2.08 | 20 \r\n| 26\r\n indexterm | 5 | 3262 | 0.15 | 17 \r\n| 26\r\n variablelist | 1 | 813 | 0.12 | 21 \r\n| 21\r\n function | 4 | 4011 | 0.10 | 12 \r\n| 28\r\n entry | 11 | 17740 | 0.06 | 21 \r\n| 40\r\n para | 3 | 25734 | 0.01 | 21 \r\n| 27\r\n\r\n\r\n",
"msg_date": "Tue, 6 Dec 2022 19:35:01 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hello,\n\nAttached is a new patch (add_html_ids_v2.patch), almost identical to\nthe old but modified so that it applies. There were 2 changes (made\nto sgml/plpgsql.sgml and sgml/ddl.sgml) that prevented the patch from\napplying. (In ddl.sgml the VACUUM and ANALYZE privileges were\ncombined into MAINTAIN. In plpgsql.sgml an id attribute was added.)\n\nIf the author will look over my version of the patch I believe it can\nbe approved and sent on to the committers.\n\n\nWhat follows are my notes for the committers:\n\nThe ids look reasonably consistent, with \"nesting\" so that ids of\nsub-sections mostly have (at least some of) the id of the parent\nsection as a prefix. There are a few inconsistencies. A sect3 has\nid=\"collation-managing-standard\" and sect4 has\nid=\"collation-managing-predefined\". There is a slight possibility of\nconflict: since in this case sect4 ids omit the last word of the sect3\nids, it is possible to have conflicts with the ids of the sect4s in\nother sect3s of the same file. I don't have a problem with this.\n\n(I see establishing strict standards for id values as excessive.)\n\nThe above was the only case I noticed. I also tried counting words,\n\"-\" delimited, in ids and found no cases with fewer words than the\nnumber of section levels. Here's the hack:\n\negrep '^\\+ *<sect' /tmp/add_html_ids.patch \\\n | gawk '{if (int(substr($2, length($2), 1)) < split($2, dummy, \"-\"))\n print $0;}'\n\nAs far as I know the ids are consistent with the rest of the\ndocumentation. They are not entirely consistent in construction.\nMostly they copy the section title, but sometimes words are omitted.\nE.g. in sgml/charset.sgml where sect2 is \"Managing Collations\" with\nid=\"collation-managing\" and sect3 is \"Standard Collations\" with\nid=\"collation-managing-standard\". Also there is at least one\nabbreviation in the id of a word in the title.\n(id=\"installation-notes-aix-mem-management\" vs. a title of \"Memory\nManagement\") All this seems fine to me.\n\nThe ids are sometimes very long. This also seems ok.\n\nI did not take a particularly close look at the id values for\nvarlistentrys. Scanning the patch they seem fine.\n\nI can confirm that all the patch does is add ids.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Mon, 2 Jan 2023 14:53:54 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
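The word-count check in the review above can be restated as a small self-contained shell sketch. The patch filename and the exact diff-line format are assumptions based on the hack in the mail (lines of the form `+ <sect3 id="collation-managing">`); the function name is made up for illustration.

```shell
# Sketch of the review's consistency check: flag added <sectN> ids whose
# hyphen-delimited word count is below the section nesting level N.
# Assumes diff lines of the form:  + <sect3 id="collation-managing">
check_sect_ids() {
  grep -E '^\+ *<sect[0-9]' "$1" | awk '
    { if (match($0, /<sect[0-9]/)) lvl = substr($0, RSTART + 5, 1) + 0
      if (match($0, /id="[^"]*"/)) id = substr($0, RSTART + 4, RLENGTH - 5)
      if (split(id, w, "-") < lvl) print lvl, id }'
}
```

Usage would be something like `check_sect_ids /tmp/add_html_ids.patch`; an empty result means no id has fewer words than its nesting level.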
{
"msg_contents": "On 02.01.2023 at 21:53, Karl O. Pinc wrote:\n> If the author will look over my version of the patch I believe it can\n> be approved and sent on to the committers.\n\nLGTM.\n\nThanks!\n\nBrar\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 21:35:09 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, 3 Jan 2023 21:35:09 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 02.01.2023 at 21:53, Karl O. Pinc wrote:\n> > If the author will look over my version of the patch I believe it\n> > can be approved and sent on to the committers. \n> \n> LGTM.\n\nApproved for committer!\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:43:34 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 04:13, Karl O. Pinc <kop@karlpinc.com> wrote:\n>\n> On Tue, 3 Jan 2023 21:35:09 +0100\n> Brar Piening <brar@gmx.de> wrote:\n>\n> > On 02.01.2023 at 21:53, Karl O. Pinc wrote:\n> > > If the author will look over my version of the patch I believe it\n> > > can be approved and sent on to the committers.\n> >\n> > LGTM.\n>\n> Approved for committer!\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ncd9479af2af25d7fa9bfd24dd4dcf976b360f077 ===\n=== applying patch ./add_html_ids_v2.patch\n....\npatching file doc/src/sgml/ref/pgbench.sgml\npatching file doc/src/sgml/ref/psql-ref.sgml\nHunk #73 FAILED at 1824.\n....\n2 out of 208 hunks FAILED -- saving rejects to file\ndoc/src/sgml/ref/psql-ref.sgml.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4041.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 9 Jan 2023 08:01:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 08:01, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 4 Jan 2023 at 04:13, Karl O. Pinc <kop@karlpinc.com> wrote:\n> >\n> > On Tue, 3 Jan 2023 21:35:09 +0100\n> > Brar Piening <brar@gmx.de> wrote:\n> >\n> > > On 02.01.2023 at 21:53, Karl O. Pinc wrote:\n> > > > If the author will look over my version of the patch I believe it\n> > > > can be approved and sent on to the committers.\n> > >\n> > > LGTM.\n> >\n> > Approved for committer!\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> cd9479af2af25d7fa9bfd24dd4dcf976b360f077 ===\n> === applying patch ./add_html_ids_v2.patch\n> ....\n> patching file doc/src/sgml/ref/pgbench.sgml\n> patching file doc/src/sgml/ref/psql-ref.sgml\n> Hunk #73 FAILED at 1824.\n> ....\n> 2 out of 208 hunks FAILED -- saving rejects to file\n> doc/src/sgml/ref/psql-ref.sgml.rej\n>\n> [1] - http://cfbot.cputube.org/patch_41_4041.log\n\nThere are couple of commitfest entries for this:\nhttps://commitfest.postgresql.org/41/4041/\nhttps://commitfest.postgresql.org/41/4042/\n\nCan one of them be closed?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 9 Jan 2023 08:08:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 09.01.2023 at 03:31, vignesh C wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nVoilà\n\nThis one applies on top of 3c569049b7b502bb4952483d19ce622ff0af5fd6 and\nthe documentation build succeeds. Beyond rebasing I've added a few more\nids (to make the other patch (make_html_ids_discoverable.patch) build\nwithout warnings again) but nothing that would justify another review.\n\nWe probably have to move quickly with this patch since it touches pretty\nmuch any file in the documentation and will be outdated in a minute.\n\nRegards,\n\nBrar",
"msg_date": "Mon, 9 Jan 2023 08:09:02 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 09.01.2023 at 03:38, vignesh C wrote:\n> There are couple of commitfest entries for this:\n> https://commitfest.postgresql.org/41/4041/\n> https://commitfest.postgresql.org/41/4042/ Can one of them be closed?\nI've split the initial patch into two parts upon Álvaro's request in [1]\nso that we can discuss them separately.\n\nhttps://commitfest.postgresql.org/41/4041/ is tracking the patch you've\nbeen trying to apply and that I've just sent a rebased version for. It\nonly adds (invisible) ids to the HTML documentation and can be closed\nonce you've applied the patch.\nhttps://commitfest.postgresql.org/41/4042/ is tracking a different patch\nthat makes the ids and the corresponding links discoverable at the HTML\nsurface. Hover one of the psql options in [2] to see the behavior. This\none still needs reviewing and there is no discussion around it yet.\n\nRegards,\nBrar\n\n[1]\nhttps://www.postgresql.org/message-id/20221206083809.3kaygnh2xswoxslj%40alvherre.pgsql\n[2] https://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTION-PORT\n\n\n",
"msg_date": "Mon, 9 Jan 2023 08:31:22 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Mon, 9 Jan 2023 08:09:02 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 09.01.2023 at 03:31, vignesh C wrote:\n> > The patch does not apply on top of HEAD as in [1], please post a\n> > rebased patch: \n\n> This one applies on top of 3c569049b7b502bb4952483d19ce622ff0af5fd6\n> and the documentation build succeeds. Beyond rebasing I've added a\n> few more ids (to make the other patch\n> (make_html_ids_discoverable.patch) build without warnings again) but\n> nothing that would justify another review.\n\nAgreed. I believe that as long as your system has xmllint installed\nand the documentation builds there's not a lot that can go wrong.\nThis patch only adds lots of id attributes.\n\n> We probably have to move quickly with this patch since it touches\n> pretty much any file in the documentation and will be outdated in a\n> minute.\n\n+1\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Jan 2023 09:53:38 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Brar Piening <brar@gmx.de> writes:\n> On 09.01.2023 at 03:38, vignesh C wrote:\n>> There are couple of commitfest entries for this:\n>> https://commitfest.postgresql.org/41/4041/\n>> https://commitfest.postgresql.org/41/4042/ Can one of them be closed?\n\n> I've split the initial patch into two parts upon Álvaro's request in [1]\n> so that we can discuss them separately\n\nIt's not great to have multiple CF entries pointing at the same email\nthread --- it confuses both people and bots. Next time please split\noff a thread for each distinct patch.\n\nI pushed the ID-addition patch, with a few fixes:\n\n* AFAIK our practice is to use \"-\" never \"_\" in XML ID attributes.\nYou weren't very consistent about that even within this patch, and\nthe overall effect would have been to have no standard about that\nat all, which doesn't seem great. I changed them all to \"-\".\n\n* I got rid of a couple of \"-et-al\" additions, because it did not\nseem like a good precedent. That would tempt people to modify\nexisting ID tags when adding variables to an entry, which'd defeat\nthe purpose I think.\n\n* I fixed a couple of things that looked like typos or unnecessary\ninconsistencies. I have to admit that my eyes glazed over after\nawhile, so there might be remaining infelicities.\n\nIt's probably going to be necessary to have follow-on patches,\nbecause I'm sure there is stuff in the pipeline that adds more\nID-less tags. Or do we have a way to create warnings about that?\n\nI'm unqualified to review CSS stuff, so you'll need to get somebody\nelse to review that patch. But I'd suggest reposting it, else\nthe cfbot is going to start whining that the patch-of-record in\nthis thread no longer applies.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Jan 2023 15:18:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
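The "-"-only convention Tom settled on here is easy to check mechanically. A sketch (the default directory path assumes the usual source-tree layout under `doc/src/sgml`; the function name is made up for illustration):

```shell
# Sketch: list id attributes that use "_" rather than "-", the convention
# settled on in this commit. Pass the sgml directory to scan.
find_underscore_ids() {
  grep -rnE 'id="[^"]*_[^"]*"' "${1:-doc/src/sgml}"
}
```

Run from the source tree root, any output lines would be candidates for renaming to hyphens.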
{
"msg_contents": "On Mon, 09 Jan 2023 15:18:18 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I pushed the ID-addition patch, with a few fixes:\n> \n> * AFAIK our practice is to use \"-\" never \"_\" in XML ID attributes.\n> You weren't very consistent about that even within this patch, and\n> the overall effect would have been to have no standard about that\n> at all, which doesn't seem great. I changed them all to \"-\".\n\nApologies for not catching this.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Jan 2023 14:45:57 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Mon, 09 Jan 2023 15:18:18 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Brar Piening <brar@gmx.de> writes:\n> > On 09.01.2023 at 03:38, vignesh C wrote: \n> >> There are couple of commitfest entries for this:\n> >> https://commitfest.postgresql.org/41/4041/\n> >> https://commitfest.postgresql.org/41/4042/ Can one of them be\n> >> closed? \n> \n> > I've split the initial patch into two parts upon Álvaro's request\n> > in [1] so that we can discuss them separately \n\n> I pushed the ID-addition patch, with a few fixes:\n\n> It's probably going to be necessary to have follow-on patches,\n> because I'm sure there is stuff in the pipeline that adds more\n> ID-less tags. Or do we have a way to create warnings about that?\n\nI am unclear on how to make warnings with xslt. You can make\na listing of problems, but who would read it if the build\ncompleted successfully? You can make errors and abort.\n\nBut my xslt and docbook and pg-docs-fu are a bit stale.\n\nI think there's more to comment on regards the xslt in the\nother patch, but I'll wait for the new thread for that patch.\nThat might be where there should be warnings raised, etc.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 9 Jan 2023 16:28:21 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "\"Karl O. Pinc\" <kop@karlpinc.com> writes:\n> I think there's more to comment on regards the xslt in the\n> other patch, but I'll wait for the new thread for that patch.\n> That might be where there should be warnings raised, etc.\n\nWe can continue using this thread, now that the other commit is in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Jan 2023 17:30:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 09.01.2023 at 21:18, Tom Lane wrote:\n> It's not great to have multiple CF entries pointing at the same email\n> thread --- it confuses both people and bots. Next time please split\n> off a thread for each distinct patch.\n\nI agree. I had overestimated the cfbot's ability to handle branched\nthreads. I'll create separate threads next time.\n\n\n> * AFAIK our practice is to use \"-\" never \"_\" in XML ID attributes.\n> You weren't very consistent about that even within this patch, and\n> the overall effect would have been to have no standard about that\n> at all, which doesn't seem great. I changed them all to \"-\".\n\nNoted. Maybe it's worth writing a short paragraph about Ids and their\nstyle somewhere in the docs (e.g. Appendix J.5).\n\n\n> * I got rid of a couple of \"-et-al\" additions, because it did not\n> seem like a good precedent. That would tempt people to modify\n> existing ID tags when adding variables to an entry, which'd defeat\n> the purpose I think.\n\nI tried to use it sparingly, mostly where a varlistentry had multiple\nchild items and I had to arbitrarily pick one of them. It's not important,\nthough. I'm curious how you solved this.\n\n\n> * I fixed a couple of things that looked like typos or unnecessary\n> inconsistencies. I have to admit that my eyes glazed over after\n> awhile, so there might be remaining infelicities.\n\nI'm all for consistency. The only places where I intentionally refrained\nfrom being consistent were where I felt Ids would get too long or where\nthere were already ids in place that didn't match my naming scheme.\n\n\n> It's probably going to be necessary to have follow-on patches,\n> because I'm sure there is stuff in the pipeline that adds more\n> ID-less tags. Or do we have a way to create warnings about that?\n\nAgreed. 
And yes, we do have a limited way to create warnings (that's\npart of the other patch).\n\n\n> I'm unqualified to review CSS stuff, so you'll need to get somebody\n> else to review that patch. But I'd suggest reposting it, else\n> the cfbot is going to start whining that the patch-of-record in\n> this thread no longer applies.\n\nI will do that. Thanks for your feedback!\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 06:12:44 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Brar Piening <brar@gmx.de> writes:\n> On 09.01.2023 at 21:18, Tom Lane wrote:\n>> * I got rid of a couple of \"-et-al\" additions, because it did not\n>> seem like a good precedent. That would tempt people to modify\n>> existing ID tags when adding variables to an entry, which'd defeat\n>> the purpose I think.\n\n> I tried to use it sparingly, mostly where a varlistentry had multiple\n> child items and I had to arbitrarily pick one of them. It's not important,\n> though. I'm curious how you solved this.\n\nI just removed \"-et-al\", I didn't question your choice of the principal\nvariable name. As you say, it didn't seem to matter that much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 00:22:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 09.01.2023 at 23:28, Karl O. Pinc wrote:\n> On Mon, 09 Jan 2023 15:18:18 -0500\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It's probably going to be necessary to have follow-on patches,\n>> because I'm sure there is stuff in the pipeline that adds more\n>> ID-less tags. Or do we have a way to create warnings about that?\n> I am unclear on how to make warnings with xslt. You can make\n> a listing of problems, but who would read it if the build\n> completed successfully? You can make errors and abort.\n\nYou can emit warnings to the command line or you can abort with an\nerror. I've opted for warnings + comments in the output in the styling\npatch.\n\nThe biggest issue with errors and warnings is the fact that xslt does\nnot process files in a line-based way, which makes it pretty much\nimpossible to give hints about where the problem causing the warning is\nlocated. Since everything is bound together via XML entities, you can't\neven tell the source file.\n\nI've worked around this by also emitting an HTML comment to the output\nso that I can find a somewhat unique string next to it and then grep the\ndocumentation sources for this string. It's a bit ugly but the best I\ncould come up with.\n\nI'll repost a rebased version of the styling patch in a minute.\n\nRegards,\n\nBrar",
"msg_date": "Tue, 10 Jan 2023 06:28:10 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
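The warning-plus-comment mechanism Brar describes can be sketched in XSLT 1.0 terms. This is a sketch, not the actual patch code: the match pattern and the message wording are assumptions; only the `xsl:message`/`xsl:comment` pairing is the point.

```xml
<!-- Sketch: xsl:message prints to the build's command line; xsl:comment
     leaves a greppable marker next to the offending element's HTML output,
     which is how the source location can be tracked down afterwards. -->
<xsl:template match="sect2[not(@id)]">
  <xsl:message>element without id: <xsl:value-of select="title"/></xsl:message>
  <xsl:comment> missing id attribute </xsl:comment>
  <xsl:apply-imports/>
</xsl:template>
```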
{
"msg_contents": "On 10.01.2023 at 06:28, Brar Piening wrote:\n>\n> I'll repost a rebased version of the styling patch in a minute.\n\nAfter checking that there's no need for rebasing I'm reposting the\noriginal patch here, to make cfbot pick it up as the latest one in a\nsomewhat screwed up thread mixing two patches (sorry for that - won't\nhappen again).\n\nAlthough the patch is pretty compact you probably need some understanding\nof both XSLT and CSS to understand and judge the changes it introduces.\n\nIt pretty much does two things:\n\n 1. Make html ids discoverable in the browser by adding a link with a\n hover effect to sections and varlistentries that have an id.\n Hover one of the psql options and click on the hash mark in [1] to\n see the behavior.\n 2. Emit a warning to the command line and a comment to the HTML output\n when the docs build runs into a section without id or a varlistentry\n without id where at least one entry in the varlist already has an id.\n\nThe original mail for the patch is at [2], the commitfest entry is at\n[3] and the initial discussion leading to this patch starts at [4].\n\nRegards,\n\nBrar\n\n[1] https://pgdocs.piening.info/app-psql.html#APP-PSQL-OPTION-PORT\n\n[2]\nhttps://www.postgresql.org/message-id/d6695820-af71-5e84-58b0-ff9f1c189603%40gmx.de\n\n[3] https://commitfest.postgresql.org/41/4042/\n\n[4]\nhttps://www.postgresql.org/message-id/4364ab38-a475-a1fc-b104-ecd6c72010d0%40enterprisedb.com",
"msg_date": "Tue, 10 Jan 2023 07:08:08 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
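Point 1 of the repost above comes down to a small amount of CSS. A sketch of the idea (the class name and the selectors are assumptions for illustration; the patch's actual rules may differ):

```css
/* Sketch: the "#" anchor link is present in the markup but kept hidden
   until the reader hovers the heading or varlistentry term. */
a.id_link {
    visibility: hidden;
}
h2:hover > a.id_link,
h3:hover > a.id_link,
dt:hover > a.id_link {
    visibility: visible;
}
```

Using `visibility` rather than `display` keeps the link's space reserved, so the heading does not shift when the anchor appears.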
{
"msg_contents": "On 09.01.23 21:18, Tom Lane wrote:\n> * AFAIK our practice is to use \"-\" never \"_\" in XML ID attributes.\n> You weren't very consistent about that even within this patch, and\n> the overall effect would have been to have no standard about that\n> at all, which doesn't seem great. I changed them all to \"-\".\n\nIn the olden days, \"_\" was invalid in ID attribute values. This is no \nlonger the case. But of course it's good to stay consistent with \nexisting practice where reasonable.\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:22:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hi Brar,\n\nHere's my first review of the make_html_ids_discoverable.patch.\n\n\nOverall:\n\nTo start with, I'd like to say I like the goal and how everything\nworks when the patch is applied. I was a bit skeptical, thinking that\nthe whole thing was going to be distracting when reading the docs or\notherwise odd, but it really works.\n\nThe thought comes to mind to take accessibility into account. I am\nunclear on what that would mean in this case.\n\n\nRegards CSS:\n\nThe CSS looks good. Also, it passes the W3C CSS validator (\nhttps://jigsaw.w3.org/css-validator/) for CSS level 3.\n\n\nRegards XSLT:\n\nI believe the XSLT needs work.\n\nWithout really knowing the style of PG's XSLT, I'm going to suggest\nputting the comment\n<!-- Make HTML ids discoverable (by hover) by adding links to the ids -->\nabove the new XSLT. Precede the comment with 2 blank lines and follow\nit by one blank line. The idea is to make the code you're adding an\nidentifiable \"separate section\".\n\nI have a question: is it the docbook html stylesheets for XML that are\nbeing overridden? It looks like it, and what else would it be, but....\n(Sorry, I'm a bit stale on this stuff.)\n\nI believe that overriding the XSLT by copying the original and making\nmodifications is the \"wrong way\" (TM). I think that the right way is\nto declare an xsl:default-mode somewhere in the stylesheets. There\ndoes not seem to be one, so the default mode for PG doc processing\ncould be something like \"postgres-mode\". And I'd expect it to be\ndeclared somewhere right at the top of the xml hierarchy. (I forget\nwhat that is, probably something like \"document\" or \"book\" or\nsomething.) Then, you'd write your templates for the <xsl:template\nmatch=\"varlistentry\"> and <xsl:template name=\"section.heading\"> with a\nmode of \"postgres-mode\", and have your templates call the \"traditional\ndefault\", \"modeless\", templates. 
That way your not copying and\npatching upstream code, but are instead, in effect, calling it as a\nsubroutine.\n\nThis should work especially well since, I think, you're just adding\nnew output to what the upstream templates do. You're not trying to\ninsert new output into the middle of the stock output or otherwise\nmodify the stock output.\n\nYou're adding only about 6 lines of XSLT to the upstream templates,\nand copying 100+ lines. There must be a better way.\n\nSee: https://www.w3.org/TR/xslt-30/#modes\n\nI've never tried this, although I do recall doing something or another\nwith modes in the past. And I've not gone so far as to figure out\n(again?) how to call a \"modeless template\", so you can invoke the\noriginal, upstream, templates. And setting a default mode seems like\nsomething of a \"big hammer\", so should probably be checked over by\nsomeone who's more steeped in Postgres docs than myself. (Like a\ncommitter. :) But I believe it is the way to go. To make it work\nyou'll need to figure out the XSLT mode selection process and make\nsure that it first selects the \"postgres-mode\", and then the modeless\ntemplates, and also still works right when a template calls another\nand explicitly sets a mode.\n\n\nRegards visual presentation:\n\nHere's the fun question, what to have \"appear\" when a section or\nvarlistentry with an id is hovered over?\n\nI kind of like your choice of the \"#\" character as the screen element\nthat becomes visible when you hover over a section or varlistentry\nwith the mouse, which then shows you the URL of the thing over which\nyou are hovering. That's what's in the patch now. But I wonder why\nyou chose it. Is there some sort of standard? I've seen the anchor\nUnicode character before, (⚓) \\u2693. I don't find it particularly\nhelpful. The \"place of interest\" symbol,(⌘) \\u2318, might be nice.\nThere is (◎), \\u25ce, the \"bullseye\" symbol. There is the link symbol\n(🔗), \\U0001f517. 
Like the anchor, it has generally been confusing\nwhen I come across it. The link symbol with the \"text variant form\",\n(🔗︎) \\U0001f517\\ufe0e, looks more like an actual chain and is somewhat\nbetter. (The opposite of \"text variant\" is \"emoji variant\".) There\nis also the paperclip, (📎) \\U0001f4ce. And the paperclip text\nvariant, (📎︎) \\U0001f4ce\\ufe0e.\n\nOf all the Unicode choices above, I think I prefer the last. The text\npaperclip. 📎︎\n\n(The hex representations above are Python 3 string literals. So\nprint(\"\\U0001f4ce\\ufe0e\") prints the text paperclip.)\n\nThe actual representation of the various Unicode characters is going\nto depend on the font. (Things change for me when looked at in an \nemail vs. in a browser.)\n\nThe question I have is should we use \"Unicode icons\" at all or is\nplain old UTF-8 good enough for us (because it was good enough for our\nancestors ;) ? Of course an image is also possible.... For now I'm\nnot going to advocate for a change from the \"#\" already used in the\npatch.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 15 Jan 2023 18:01:50 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Sun, 15 Jan 2023 18:01:50 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Regards XSLT:\n> \n> I believe the XSLT needs work.\n\nI also think that the XSLT should error and halt\nwhen there's no id (in the expected places). \nInstead of just giving a warning and keeping going. Otherwise\nthere'll constantly be ignored warnings and periodically\nthere will have to be patches to supply missing ids.\n\nTo solve the \"which id is missing where so I can fix it\"\nproblem, I propose the error text show the chapter title,\nall the enclosing sub-section titles, and any previous existing\nvarlistentry ids occurring before the tag with the\nmissing attribute. At least for varlistentry-s. For\nsections you could do chapter and enclosing sub-section\ntitles and the title of the section with the problem.\nThat should be enough for an author to find the place\nin the source sgml that needs fixing.\n\nMaybe, possibly, you can see how this is done by looking\nat whatever XSLT there is that automatically generates\nids for sections without ids, so that the table of contents\nhave something to link to. In any case, XSLT is really\ngood at \"looking at\" parent/enclosing XML, so producing\na useful error message shouldn't be _that_ hard. I've\ndefinitely done this sort of thing before so I can tell you\nit's readily doable.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 15 Jan 2023 18:28:24 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
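[Editor's note: the error-and-halt approach suggested in the message above can be sketched in XSLT 1.0 roughly as follows. This is an illustration only, not the patch under review: the match pattern, the ancestor-title loop, and the message wording are all assumptions.]

```xml
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0">

  <!-- Hypothetical sketch: abort the build when a varlistentry lacks an
       id, printing the enclosing titles so the author can find the spot
       in the source sgml. -->
  <xsl:template match="varlistentry[not(@id)]">
    <xsl:message terminate="yes">
      <xsl:text>id missing under: </xsl:text>
      <!-- walk the enclosing chapter/section titles, outermost first -->
      <xsl:for-each select="ancestor::*[title]">
        <xsl:value-of select="title"/>
        <xsl:text> / </xsl:text>
      </xsl:for-each>
    </xsl:message>
  </xsl:template>

</xsl:stylesheet>
```

With terminate="yes", xsltproc exits with an error, so a missing id stops the docs build instead of accumulating as an ignored warning.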
{
"msg_contents": "On Sun, 15 Jan 2023 18:01:50 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Regards XSLT:\n> \n> I believe the XSLT needs work.\n\n> I believe that overriding the XSLT by copying the original and making\n> modifications is the \"wrong way\" (TM). I think that the right way is\n> to declare a xsl:default-mode somewhere in the stylesheets. There\n> does not seem to be one, so the default mode for PG doc processing\n> could be something like \"postgres-mode\". And I'd expect it to be\n> declared somewhere right at the top of the xml hierarchy. (I forget\n> what that is, probably something like \"document\" or \"book\" or\n> something.) Then, you'd write your for the <xsl:template\n> match=\"varlistentry\"> and <xsl:template name=\"section.heading\"> with\n> a mode of \"postgres-mode\", and have your templates call the\n> \"traditional default\", \"modeless\", templates. That way your not\n> copying and patching upstream code, but are instead, in effect,\n> calling it as a subroutine.\n> \n> This should work especially well since, I think, you're just adding\n> new output to what the upstream templates do. You're not trying to\n> insert new output into the middle of the stock output or otherwise\n> modify the stock output.\n> \n> You're adding only about 6 lines of XSLT to the upstream templates,\n> and copying 100+ lines. There must be a better way.\n> \n> See: https://www.w3.org/TR/xslt-30/#modes\n> \n> I've never tried this, although I do recall doing something or another\n> with modes in the past. And I've not gone so far as to figure out\n> (again?) how to call a \"modeless template\", so you can invoke the\n> original, upstream, templates. And setting a default mode seems like\n> something of a \"big hammer\", so should probably be checked over by\n> someone who's more steeped in Postgres docs than myself. (Like a\n> committer. :) But I believe it is the way to go. 
To make it work\n> you'll need to figure out the XSLT mode selection process and make\n> sure that it first selects the \"postgres-mode\", and then the modeless\n> templates, and also still works right when a template calls another\n> and explicitly sets a mode.\n\nDrat. I forgot. We're using xsltproc which is XSLT 1.0.\n\nSo this is the relevant spec:\n\nhttps://www.w3.org/TR/1999/REC-xslt-19991116#modes\n\nIn XSLT 1.0 there is no xml:default-mode. So I _think_ what you do then\nis modify the built-in template rules so that the (default) template\n(mode='') is invoked when there is no 'postgres-mode' version of the\ntemplate, but otherwise the 'postgres-mode' version of the template\nis invoked. Your 'postgres-mode' templates will xsl:call-template\nthe default template, adding whatever they want to the output produced\nby the default template.\n\nSee: https://www.w3.org/TR/1999/REC-xslt-19991116#built-in-rule\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 16 Jan 2023 11:14:35 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Mon, 16 Jan 2023 11:14:35 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> On Sun, 15 Jan 2023 18:01:50 -0600\n> \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> \n> > Regards XSLT:\n> > \n> > I believe the XSLT needs work. \n\n> In XSLT 1.0 there is no xml:default-mode. So I _think_ what you do\n> then is modify the built-in template rules so that the (default)\n> template (mode='') is invoked when there is no 'postgres-mode'\n> version of the template, but otherwise the 'postgres-mode' version of\n> the template is invoked. Your 'postgres-mode' templates will\n> xsl:call-template the default template, adding whatever they want to\n> the output produced by the default template.\n\nOr maybe the right way is to set a mode at the very top,\nthe first apply-templates call, and not mess with the\nbuilt-in templates at all. (You'd write your own\n\"postgres-mode\" templates the same way, to \"wrap\"\nand call the default templates.)\n\nThink of the mode as an implicit argument that's preserved and\npassed down through each template invocation without having to\nbe explicitly specified by the calling code.\n\nRegards\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 16 Jan 2023 19:05:50 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 17.01.2023 at 02:05, Karl O. Pinc wrote:\n> Or maybe the right way is to set a mode at the very top,\n> the first apply-templates call, and not mess with the\n> built-in templates at all. (You'd write your own\n> \"postgres-mode\" templates the same way, to \"wrap\"\n> and call the default templates.)\n>\n> Think of the mode as an implicit argument that's preserved and\n> passed down through each template invocation without having to\n> be explicitly specified by the calling code.\n\nI think the document you're missing is [1].\n\nThere are multiple ways to customize DocBook XSL output and it sounds\nlike you want me to write a customization layer which I didn't do\nbecause there is precedent that the typical \"way to do it\" (TM) in the\nPostgreSQL project is [2].\n\nRegards,\n\nBrar\n\n[1] http://www.sagehill.net/docbookxsl/CustomizingPart.html\n[2] http://www.sagehill.net/docbookxsl/ReplaceTemplate.html\n\n\n\n",
"msg_date": "Tue, 17 Jan 2023 06:57:23 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, 17 Jan 2023 06:57:23 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 17.01.2023 at 02:05, Karl O. Pinc wrote:\n> > Or maybe the right way is to set a mode at the very top,\n> > the first apply-templates call, and not mess with the\n> > built-in templates at all. (You'd write your own\n> > \"postgres-mode\" templates the same way, to \"wrap\"\n> > and call the default templates.)\n> >\n> > Think of the mode as an implicit argument that's preserved and\n> > passed down through each template invocation without having to\n> > be explicitly specified by the calling code. \n> \n> I think the document you're missing is [1].\n> \n> There are multiple ways to customize DocBook XSL output and it sounds\n> like you want me to write a customization layer which I didn't do\n> because there is precedent that the typical \"way to do it\" (TM) in the\n> PostgreSQL project is [2].\n> \n> Regards,\n> \n> Brar\n> \n> [1] http://www.sagehill.net/docbookxsl/CustomizingPart.html\n> [2] http://www.sagehill.net/docbookxsl/ReplaceTemplate.html\n> \n\nSagehill is normally very good. But in this case [2] does not\napply. Or rather it applies but it is overkill because you\ndo not want to replace what a template is producing. You\nwant to add to what a template is producing. So you want to\nwrap the template, with your new code adding output before/\nafter what the original produces.\n\n[1] does not contain this technique.\n\nIf you're not willing to try I am willing to see if I can\nproduce an example to work from. My XSLT is starting to\ncome back.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 17 Jan 2023 07:12:42 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 17.01.2023 at 14:12, Karl O. Pinc wrote:\n> If you're not willing to try I am willing to see if I can\n> produce an example to work from. My XSLT is starting to\n> come back.\n\nI'm certainly willing to try but I'd appreciate an example in any case.\n\nMy XSLT skills are mostly learning by doing combined with trial and error.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Tue, 17 Jan 2023 19:13:38 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, 17 Jan 2023 19:13:38 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 17.01.2023 at 14:12, Karl O. Pinc wrote:\n> > If you're not willing to try I am willing to see if I can\n> > produce an example to work from. My XSLT is starting to\n> > come back. \n> \n> I'm certainly willing to try but I'd appreciate an example in any\n> case.\n\nIt's good you asked. After poking about the XSLT 1.0 spec to refresh\nmy memory I abandoned the \"mode\" approach entirely. The real \"right\nway\" is to use the import mechanism.\n\nI've attached a patch that \"wraps\" the section.heading template\nand adds extra stuff to the HTML output generated by the stock\ntemplate. (example_section_heading_override.patch)\n\nThere's a bug. All that goes into the html is a comment, not\na hoverable link. But the technique is clear.\n\nOn my system (Debian 11, bullseye) I found the URI to import\nby looking at:\n/usr/share/xml/docbook/stylesheet/docbook-xsl/catalog.xml\n(Which is probably the right place to look.)\nUltimately, that file is findable via: /etc/xml/catalog\nThe \"best way\" on\nDebian is: /usr/share/doc/docbook-xsl/README.gz\nIn other words, the README that comes with the style sheets.\n\nSupposedly, the href=URLs are really URIs and will be good\nno matter what/when. The XSLT processor should know to look\nat the system catalogs and read the imported style sheet\nfrom the local file system.\n\nIt might be useful to add --nonet to the xsltproc invocation(s)\nin the Makefile(s). Just in case; to keep from retrieving\nstylesheets from the net. (If the option is not already there.\nI didn't look.)\n\nIf this is the first time that PG uses the XSLT import mechanism\nI imagine that \"things could go wrong\" depending on what sort\nof system is being used to build the docs. 
I'm not worried,\nbut it is something to note for the committer.\n\n> My XSLT skills are mostly learning by doing combined with trial and\n> error.\n\nI think of XSLT as a functional programming language. Recursion is\na big deal, and data directed programming can be a powerful technique\nbecause XSLT is so good with data structures.\n(https://mitp-content-server.mit.edu/books/content/sectbyfn/books_pres_0/6515/sicp.zip/full-text/book/book-Z-H-17.html#%_sec_2.4.3)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Tue, 17 Jan 2023 16:43:13 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 17.01.2023 at 23:43, Karl O. Pinc wrote:\n> It's good you asked. After poking about the XSLT 1.0 spec to refresh\n> my memory I abandoned the \"mode\" approach entirely. The real \"right\n> way\" is to use the import mechanism.\n>\n> I've attached a patch that \"wraps\" the section.heading template\n> and adds extra stuff to the HTML output generated by the stock\n> template. (example_section_heading_override.patch)\n\nThanks!\n\nI'll give it a proper look this weekend.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 06:50:40 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, 17 Jan 2023 16:43:13 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> It might be useful to add --nonet to the xsltproc invocation(s)\n> in the Makefile(s). Just in case; to keep from retrieving\n> stylesheets from the net. (If the option is not already there.\n> I didn't look.)\n> \n> If this is the first time that PG uses the XSLT import mechanism\n> I imagine that \"things could go wrong\" depending on what sort\n> of system is being used to build the docs. I'm not worried,\n> but it is something to note for the committer.\n\nLooks like doc/src/sgml/stylesheet-fo.xsl already uses\nxsl:import, although it is unclear to me whether the import\nis applied.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 18 Jan 2023 22:05:33 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 18.01.2023 at 06:50, Brar Piening wrote:\n\n> I'll give it a proper look this weekend.\n\nIt turns out I didn't get a round tuit.\n\n... and I'm afraid I probably will not be able to work on this until\nmid/end February so we'll have to move this to the next commitfest until\nsomebody wants to take it over and push it through.\n\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 21:48:54 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 21:48:54 +0100, Brar Piening wrote:\n> On 18.01.2023 at 06:50, Brar Piening wrote:\n> \n> > I'll give it a proper look this weekend.\n> \n> It turns out I didn't get a round tuit.\n> \n> ... and I'm afraid I probably will not be able to work on this until\n> mid/end February so we'll have to move this to the next commitfest until\n> somebody wants to take it over and push it through.\n\nA small note: As-is this fails on CI, because we don't allow network access\nduring the docs build anymore (since it always fails these days):\n\nhttps://cirrus-ci.com/task/5474029402849280?logs=docs_build#L297\n\n[17:02:03.114] time make -s -j${BUILD_JOBS} -C doc\n[17:02:04.092] I/O error : Attempt to load network entity http://cdn.docbook.org/release/xsl/current/html/sections.xsl\n[17:02:04.092] warning: failed to load external entity \"http://cdn.docbook.org/release/xsl/current/html/sections.xsl\"\n[17:02:04.092] compilation error: file stylesheet-html-common.xsl line 17 element import\n[17:02:04.092] xsl:import : unable to load http://cdn.docbook.org/release/xsl/current/html/sections.xsl\n\nI think this is just due to the common URL in docbook packages being\nhttp://docbook.sourceforge.net/release/xsl/current/*\nBecause of that the docbook catalog matching logic won't work for that file:\n\nE.g. I have the following in /etc/xml/docbook-xsl.xml, on debian unstable:\n<delegateURI uriStartString=\"http://docbook.sourceforge.net/release/xsl/\" catalog=\"file:///usr/share/xml/docbook/stylesheet/docbook-xsl/catalog.xml\"/>\n\nAs all our other references use the sourceforge address, this should too.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Feb 2023 12:13:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
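[Editor's note: the catalog behavior Andres describes above means the import must use the URI the system XML catalog actually delegates. A minimal sketch of the corrected import follows; the surrounding stylesheet boilerplate is illustrative, not the committed patch.]

```xml
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0">

  <!-- Use the canonical sourceforge URI: the system XML catalog
       (reached via /etc/xml/catalog) delegates it to the locally
       installed docbook-xsl files, so the build never touches the
       network. The cdn.docbook.org URI has no matching delegateURI
       entry and therefore fails once network access is disabled. -->
  <xsl:import
      href="http://docbook.sourceforge.net/release/xsl/current/html/sections.xsl"/>

</xsl:stylesheet>
```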
{
"msg_contents": "On Tue, 14 Feb 2023 12:13:18 -0800\nAndres Freund <andres@anarazel.de> wrote:\n\n> A small note: As-is this fails on CI, because we don't allow network\n> access during the docs build anymore (since it always fails these\n> days):\n> \n> https://cirrus-ci.com/task/5474029402849280?logs=docs_build#L297\n> \n> [17:02:03.114] time make -s -j${BUILD_JOBS} -C doc\n> [17:02:04.092] I/O error : Attempt to load network entity\n> http://cdn.docbook.org/release/xsl/current/html/sections.xsl\n> [17:02:04.092] warning: failed to load external entity\n> \"http://cdn.docbook.org/release/xsl/current/html/sections.xsl\"\n> [17:02:04.092] compilation error: file stylesheet-html-common.xsl\n> line 17 element import [17:02:04.092] xsl:import : unable to load\n> http://cdn.docbook.org/release/xsl/current/html/sections.xsl\n\nThis makes me think that it would be useful to add --nonet to the\nxsltproc invocations. That would catch this error before it goes to\nCI.\n\n> I think this is just due to the common URL in docbook packages being\n> http://docbook.sourceforge.net/release/xsl/current/*\n> Because of that the docbook catalog matching logic won't work for\n> that file:\n> \n> E.g. I have the following in /etc/xml/docbook-xsl.xml, on debian\n> unstable: <delegateURI\n> uriStartString=\"http://docbook.sourceforge.net/release/xsl/\"\n> catalog=\"file:///usr/share/xml/docbook/stylesheet/docbook-xsl/catalog.xml\"/>\n> \n> As all our other references use the sourceforge address, this should\n> too.\n\nAgreed.\n\nI'm also noticing that the existing xsl:import-s all import entire\ndocbook stylesheets. It does not hurt to do this; the output is\nunaffected, although I can't say what it means for build performance.\nIt does keep it simple. Only one import is needed no matter which\ntemplates we use the import mechanism to extend. 
And by importing\n\"everything\" there's no concern about any (unlikely) changes to\nthe \"internals\" of the catalog.\n\nShould we import only what we need or all of docbook? I don't know.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 15 Feb 2023 13:34:37 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-15 13:34:37 -0600, Karl O. Pinc wrote:\n> This makes me think that it would be useful to add --nonet to the\n> xsltproc invocations. That would catch this error before it goes to\n> CI.\n\nWe are doing that now :)\n\ncommit 969509c3f2e3b4c32dcf264f9d642b5ef01319f3\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2023-02-08 17:15:23 -0500\n \n Stop recommending auto-download of DTD files, and indeed disable it.\n\n\n\n> I'm also noticing that the existing xsl:import-s all import entire\n> docbook stylesheets. It does not hurt to do this; the output is\n> unaffected, although I can't say what it means for build performance.\n> It does keep it simple. Only one import is needed no matter which\n> templates we use the import mechanism to extend. And by importing\n> \"everything\" there's no concern about any (unlikely) changes to\n> the the \"internals\" of the catalog.\n> \n> Should we import only what we need or all of docbook? I don't know.\n\nIt couldn't hurt to check if performance improves when you avoid doing so. I\nsuspect it won't make much of a difference, because the time is actually spent\nevaluating xslt rather than parsing it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Feb 2023 12:49:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 26 Jan 2023 at 15:55, Brar Piening <brar@gmx.de> wrote:\n>\n> On 18.01.2023 at 06:50, Brar Piening wrote:\n>\n> > I'll give it a proper look this weekend.\n>\n> It turns out I didn't get a round tuit.\n>\n> ... and I'm afraid I probably will not be able to work on this until\n> mid/end February so we'll have to move this to the next commitfest until\n> somebody wants to take it over and push it through.\n\n\nLooks like a lot of good work was happening on this patch right up\nuntil mid-February. Is there a lot of work left? Do you think you'll\nhave a chance to wrap it up this commitfest for this release?\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 14:47:24 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 20.03.2023 at 19:47, Gregory Stark (as CFM) wrote:\n> Looks like a lot of good work was happening on this patch right up\n> until mid-February. Is there a lot of work left? Do you think you'll\n> have a chance to wrap it up this commitfest for this release?\n\nThanks for the ping.\n\nI had another look this morning and I think I can probably finish this\nby the end of the week.\n\n\n\n",
"msg_date": "Tue, 21 Mar 2023 14:57:28 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 17.01.2023 at 23:43, Karl O. Pinc wrote:\n> It's good you asked. After poking about the XSLT 1.0 spec to refresh\n> my memory I abandoned the \"mode\" approach entirely. The real \"right\n> way\" is to use the import mechanism.\n\nIt actually is not.\n\nAfter quite some time trying to figure out why things don't work as\nintended, I ended up reading the XSLT 1.0 spec.\n\nAs the name already suggests, <xsl:apply-imports> is closely related to\n<xsl:apply-templates> with the difference that it *applies* a *template\nrule* from an imported style sheet instead of applying a template rule\nfrom the current style sheet\n(https://www.w3.org/TR/1999/REC-xslt-19991116#apply-imports). What it\ndoes not do is *calling* a *named template*\n(https://www.w3.org/TR/1999/REC-xslt-19991116#named-templates).\n\nWhat this basically means is that in XSLT 1.0 you can use\n<xsl:apply-imports> to override template rules (<xsl:template\nmatch=\"this-pattern-inside-match-makes-it-a-template-rule\">) but you\ncannot use it to override named templates (<xsl:template\nname=\"this-id-inside-name-makes-it-a-named-template\">). If you want to\noverride named templates you basically have to duplicate and change them.\n\nWhile there are mechanisms to call overriden named templates in XSLT 3,\nthis is out of scope here, since we're bound to XSLT 1.0\n\nAs a consequence, there was little I could change in the initial patch\nto avoid the code duplication and all attempts to do so, ultimately led\nto even longer and more complex code without really reducing the amount\nof duplication.\n\nThe <xsl:apply-imports> approach actually does work in the varlistentry\ncase, although this doesn't really change a lot regarding the length of\nthe patch (it's a bit nicer though since in this case it really avoids\nduplication). 
I've also taken the advice to terminate the build and\nprint the xpath if a required id is missing.\n\nThe attached patch is my best-effort approach to implement discoverable\nlinks.\n\nBest regards,\n\nBrar",
"msg_date": "Tue, 21 Mar 2023 23:16:25 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
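[Editor's note: the distinction Brar lays out above can be condensed into a short sketch. Illustrative only; the comment placeholder stands in for the hover-link markup in the actual patch.]

```xml
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="1.0">

  <xsl:import
      href="http://docbook.sourceforge.net/release/xsl/current/html/docbook.xsl"/>

  <!-- Works: varlistentry is handled by a template RULE (match=...),
       so the stock DocBook rule can be wrapped via xsl:apply-imports. -->
  <xsl:template match="varlistentry">
    <xsl:apply-imports/>   <!-- runs the imported DocBook rule -->
    <!-- ...extra hover-link output would go here... -->
  </xsl:template>

  <!-- Does not work: section.heading is a NAMED template (name=...);
       XSLT 1.0 offers no way to call the imported original from an
       override, so the stock template must be copied and edited. -->

</xsl:stylesheet>
```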
{
"msg_contents": "On Tue, 21 Mar 2023 23:16:25 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 17.01.2023 at 23:43, Karl O. Pinc wrote:\n> > It's good you asked. After poking about the XSLT 1.0 spec to\n> > refresh my memory I abandoned the \"mode\" approach entirely. The\n> > real \"right way\" is to use the import mechanism. \n\n> After quite some time trying to figure out why things don't work as\n> intended, I ended up reading the XSLT 1.0 spec.\n> \n> As the name already suggests, <xsl:apply-imports> is closely related\n> to <xsl:apply-templates> with the difference that it *applies* a\n> *template rule* from an imported style sheet instead of applying a\n> template rule from the current style sheet\n> (https://www.w3.org/TR/1999/REC-xslt-19991116#apply-imports). What it\n> does not do is *calling* a *named template*\n> (https://www.w3.org/TR/1999/REC-xslt-19991116#named-templates).\n> \n> What this basically means is that in XSLT 1.0 you can use\n> <xsl:apply-imports> to override template rules (<xsl:template\n> match=\"this-pattern-inside-match-makes-it-a-template-rule\">) but you\n> cannot use it to override named templates (<xsl:template\n> name=\"this-id-inside-name-makes-it-a-named-template\">). If you want to\n> override named templates you basically have to duplicate and change\n> them.\n> \n> While there are mechanisms to call overriden named templates in XSLT\n> 3, this is out of scope here, since we're bound to XSLT 1.0\n\n(It was reassuring to see you take the steps above; I once did exactly\nthe same with and had the same excitements and disappointments. I\nfeel validated. ;-)\n\n(One of my disappointments is that xsltproc supports only XSLT 1.0,\nand nothing later. 
IIRC, apparently one big reason is not the amount\nwork needed to develop the program but the work required to develop a\ntest suite to validate conformance.)\n\n> As a consequence, there was little I could change in the initial patch\n> to avoid the code duplication and all attempts to do so, ultimately\n> led to even longer and more complex code without really reducing the\n> amount of duplication.\n\nYou're quite right. I clearly didn't have my XSLT turned on. Importing\nonly works when templates are matched, not called by name.\n\nSorry for the extra work I've put you through.\n\n> The <xsl:apply-imports> approach actually does work in the\n> varlistentry case, although this doesn't really change a lot\n> regarding the length of the patch (it's a bit nicer though since in\n> this case it really avoids duplication).\n\n\nYou've put in a lot of good work. I'm attaching 2 patches\nwith only minor changes.\n\n\n001-add-needed-ids_v1.patch\n\nThis separates out the addition of ids from the XSLT changes, just\nto keep things tidy. Content is from your patch.\n\n\n002-make_html_ids_discoverable_v4.patch\n\nI changed the linked text, the #, so that the leading space\nis not linked. This is arguable, as the extra space makes\nit easier to put the mouse on the region. But it seems\ntidy.\n\nI've tided up so the lines are no longer than 80 chars.\n\n> I've also taken the advice\n> to terminate the build and print the xpath if a required id is\n> missing.\n\nThis looks awesome. I love the xpath! I've changed the format of the\nerror message. What do you think? (Try it out by _not_ applying\n001-add-needed-ids_v1.patch.)\n\nAlso, the error message now has leading and trailing newlines to make\nit stand out. 
I'm normally against this sort of thing but thought I'd\nadd it anyway for others to review.\n\nI'm ready to send these on to a committer but if you don't\nlike what I did please send more patches for me to review.\n\n\nOutstanding questions (for committer?):\n\nThe 002-make_html_ids_discoverable_v4.patch generates xhtml <h1>,\n<h2>, etc. attributes using a XSLT <element> element with a\n\"namespace\" attribute. I'm unclear on the relationship PG has with\nxhtml and namespaces. Looks right to me, since the generated html has\nthe same namespace name appearing in the xmlns attribute of the html\ntag, but somebody who knows more than me might want to review this.\n\nUsing the namespace attribute does not seem to have affected the\ngenerated html, as far as my random sampling of output can tell.\n\n\nWhat character should be used to represent a link anchor?\n\nI've left your choice of \"#\" in the patch.\n\nIf we go to unicode, My preference is the text paperclip 📎︎\n\nHere's a table of all the choices I came up with, there may be others\nthat are suitable. (The hex representations are Python 3 string\nliterals. So print(\"\\U0001f4ce\\ufe0e\") prints the text paperclip.)\n\nHash mark # (ASCII, used in the patch, \\u0023)\nAnchor ⚓ \\u2693\nPlace of interest ⌘ \\u2318\nBullseye ◎ \\u25ce\nLink (emoji variant) 🔗 \\U0001f517\nLink (text variant) 🔗︎ \\U0001f517\\ufe0e\nPaperclip (emoji variant) 📎 \\U0001f4ce\nPaperclip (text variant) 📎︎ \\U0001f4ce\\ufe0e\n\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Wed, 22 Mar 2023 22:09:19 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 23.03.2023 at 04:09, Karl O. Pinc wrote:\n> You're quite right. I clearly didn't have my XSLT turned on. Importing\n> only works when templates are matched, not called by name.\n>\n> Sorry for the extra work I've put you through.\n\nNo problem. As always I've learnt something which may help me in the future.\n\n\n> You've put in a lot of good work. I'm attaching 2 patches\n> with only minor changes.\n\nThanks. When comparing things I also realized that I had accidentally\ncreated a reversed patch. Thanks for fixing this.\n\n\n> 001-add-needed-ids_v1.patch\n>\n> This separates out the addition of ids from the XSLT changes, just\n> to keep things tidy. Content is from your patch.\n\n+1\n\n\n> 002-make_html_ids_discoverable_v4.patch\n>\n> I changed the linked text, the #, so that the leading space\n> is not linked. This is arguable, as the extra space makes\n> it easier to put the mouse on the region. But it seems\n> tidy.\n\nI tend to prefer a slightly bigger mouseover-region but I don't really mind.\n\n\n> I've tidied up so the lines are no longer than 80 chars.\n\n+1\n\n\n> This looks awesome. I love the xpath! I've changed the format of the\n> error message. What do you think? (Try it out by _not_ applying\n> 001-add-needed-ids_v1.patch.)\n>\n> Also, the error message now has leading and trailing newlines to make\n> it stand out. I'm normally against this sort of thing but thought I'd\n> add it anyway for others to review.\n\n+1\n\n\n> I'm ready to send these on to a committer but if you don't\n> like what I did please send more patches for me to review.\n\nI like it and think it's ready for committer.\n\n\n> Outstanding questions (for committer?):\n>\n> The 002-make_html_ids_discoverable_v4.patch generates xhtml <h1>,\n> <h2>, etc. attributes using an XSLT <element> element with a\n> \"namespace\" attribute.\n\nI'm not sure I follow. 
I cannot see any namespacing weirdness in my output.\n\nAre you using the v1.79.2 stylesheets?\n\n\n> What character should be used to represent a link anchor?\n\nIt's not the first time this is coming up. See my response in the old\nthread:\nhttps://www.postgresql.org/message-id/e50193ea-ca5c-e178-026a-f3fd8942252d%40gmx.de\n\nPersonally I'd advise to stick with ASCII for now.\n\nIn any case changing the symbol at some point would be a very minor\neffort if we deem it necessary.\n\nMaybe this could be part of some general overhaul of the visual\nappearance and website styling by a person with more talent for this than\nI have.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 08:24:48 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Thanks, Brar and Karl, I hope we can get this done soon.\n\nAs with the <simplelist> patch, we'll need to patch the CSS used in the\nwebsite for the docs too, as that's the most important place where docs\nare visited. See this commit for an example:\nhttps://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=0b89ea0fff28d29ed177c82a274144453e3c7f82\n\nIn order to test locally that your patched stylesheet works correctly,\nyou'd have to compile the docs with \"make html STYLE=website\" in the doc\nsubdir, and tweak one of the CSS files there (I think it's\ndocs-complete.css) so that it references your local copy instead of\nfetching it from the website.\n\n> diff --git a/doc/src/sgml/stylesheet.css b/doc/src/sgml/stylesheet.css\n> index cc14efa1ca..15bcc95d41 100644\n> --- a/doc/src/sgml/stylesheet.css\n> +++ b/doc/src/sgml/stylesheet.css\n> @@ -169,3 +169,13 @@ acronym\t\t{ font-style: inherit; }\n> width: 75%;\n> }\n> }\n> +\n> +/* Links to ids of headers and definition terms */\n> +a.id_link {\n> + color: inherit;\n> + visibility: hidden;\n> +}\n> +\n> +*:hover > a.id_link {\n> + visibility: visible;\n> +}\n\nI'm not clear on what exactly becomes visible when one hovers over what.\nCan you please share a screenshot?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:35:55 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
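For illustration, the CSS rules quoted above pair with markup along these lines — a minimal, self-contained sketch of the hover-reveal pattern (the `id_link` class name comes from the quoted diff; the heading id and text are made-up examples):

```html
<!-- Sketch: the "#" anchor is always present in the markup, but the
     first rule hides it and makes it inherit the heading's color; the
     second rule reveals it whenever the pointer hovers anywhere over
     the parent heading element. -->
<style>
  a.id_link { color: inherit; visibility: hidden; }
  *:hover > a.id_link { visibility: visible; }
</style>
<h2 id="example-section">
  Example Section
  <a class="id_link" href="#example-section">#</a>
</h2>
```

Because the anchor only toggles `visibility` rather than being added or removed, it keeps its layout space and the heading does not shift on hover.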
{
"msg_contents": "On 23.03.2023 at 10:35, Alvaro Herrera wrote:\n> As with the <simplelist> patch, we'll need to patch the CSS used in the\n> website for the docs too, as that's the most important place where docs\n> are visited. See this commit for an example:\n> https://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=0b89ea0fff28d29ed177c82a274144453e3c7f82\n>\n> In order to test locally that your patched stylesheet works correctly,\n> you'd have to compile the docs with \"make html STYLE=website\" in the doc\n> subdir, and tweak one of the CSS files there (I think it's\n> docs-complete.css) so that it references your local copy instead of\n> fetching it from the website.\n\nThanks, I'll take care of this tonight.\n\n\n> I'm not clear on what exactly becomes visible when one hovers over what.\n> Can you please share a screenshot?\n\nI could, but since hover effects don't really come across in screenshots\nI've posted a build of the docs including the patch to\nhttps://pgdocs.piening.info\n\nSee https://pgdocs.piening.info/app-psql.html as an example.\n\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 11:13:49 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 23 Mar 2023 08:24:48 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 23.03.2023 at 04:09, Karl O. Pinc wrote:\n\n> > Sorry for the extra work I've put you through. \n> \n> No problem. As always I've learnt something which may help me in the\n> future.\n\nI don't know about you, but sadly, my brain eventually leaks. ;-)\n\n> > I'm attaching 2 patches\n> > with only minor changes. \n\n> > 001-add-needed-ids_v1.patch\n> >\n> > This separates out the addition of ids from the XSLT changes, just\n> > to keep things tidy. \n\n> > 002-make_html_ids_discoverable_v4.patch\n> >\n> > I changed the linked text, the #, so that the leading space\n> > is not linked. This is arguable, as the extra space makes\n> > it easier to put the mouse on the region.\n\n> I tend to prefer a slightly bigger mouseover-region but I don't\n> really mind.\n\nI'm leaving it for the committer to review.\n\n> I've changed the format of\n> > the error message. What do you think? (Try it out by _not_\n> > applying 001-add-needed-ids_v1.patch.)\n> >\n> > Also, the error message now has leading and trailing newlines to\n> > make it stand out.\n\nIncluding the error message/make output here, so everyone can see\neasily.\n\n--------------<snip>------------\n/usr/bin/xsltproc --nonet --path . 
--stringparam pg.version '16devel' stylesheet.xsl postgres-full.xml\n\nIds are required in order to provide the public HTML documentation with stable URLs for <varlistentry> element content; id missing at: /book[@id = 'postgres']/part[@id = 'appendixes']/appendix[@id = 'contrib']/sect1[@id = 'pgwalinspect']/sect2[@id = 'pgwalinspect-funcs']/variablelist\n \nno result for postgres-full.xml\nmake: *** [Makefile:146: html-stamp] Error 10\n--------------<snip>------------\n\n> I like it and think it's ready for committer.\n\nI've marked it ready for the committer in the commitfest.\n\n> > Outstanding questions (for committer?):\n> >\n> > The 002-make_html_ids_discoverable_v4.patch generates xhtml <h1>,\n> > <h2>, etc. attributes using an XSLT <element> element with a\n> > \"namespace\" attribute. \n> \n> I'm not sure I follow. I cannot see any namespacing weirdness in my\n> output.\n\nThere's nothing weird in the output, it's all about how\nyou're generating it in the xslt with\n\n <xsl:element name=\"h$level\" namespace=\"...\n\nOutput looks right to me.\n\n> Are you using the v1.79.2 stylesheets?\n\nYes. But I've got both the ones with namespaces and without\ninstalled.\n\nI've just never had to look at what PG is doing with namespaces\nbefore. What you've done looks right to me, but I'm pretty\nclueless so somebody else should double check.\n\n> > What character should be used to represent a link anchor? \n\n> Personally I'd advise to stick with ASCII for now.\n\n+1\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 23 Mar 2023 07:51:44 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
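The `<xsl:element>`-with-namespace construct under discussion can be sketched roughly as follows. This is an illustrative fragment in the spirit of the patch, not its exact code; the `pg.id.link` template and `$section`/`$title` names are taken from the diff quoted in this thread, and `$level` is assumed to hold the computed heading level:

```xml
<!-- Emit a heading element at a computed level in the XHTML namespace,
     then append the self-link anchor. The name attribute is an
     attribute value template, so "h{$level}" becomes h1..h6 at
     stylesheet run time. -->
<xsl:element name="h{$level}" namespace="http://www.w3.org/1999/xhtml">
  <xsl:copy-of select="$title"/>
  <xsl:call-template name="pg.id.link">
    <xsl:with-param name="object" select="$section"/>
  </xsl:call-template>
</xsl:element>
```

Specifying the `namespace` attribute matters with the namespaced DocBook stylesheets: without it, the generated heading could land in no namespace while its siblings are in the XHTML one.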
{
"msg_contents": "On Thu, 23 Mar 2023 10:35:55 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> As with the <simplelist> patch, we'll need to patch the CSS used in\n> the website for the docs too, as that's the most important place\n> where docs are visited. See this commit for an example:\n> https://git.postgresql.org/gitweb/?p=pgweb.git;a=commitdiff;h=0b89ea0fff28d29ed177c82a274144453e3c7f82\n> \n> In order to test locally that your patched stylesheet works correctly,\n> you'd have to compile the docs with \"make html STYLE=website\" in the\n> doc subdir, and tweak one of the CSS files there (I think it's\n> docs-complete.css) so that it references your local copy instead of\n> fetching it from the website.\n\nI should have known to put the css in a separate patch.\n\n> I'm not clear on what exactly becomes visible when one hovers over\n> what. Can you please share a screenshot?\n\nAttached are 2 screenshots. I don't know why, but for\nsome reason you can't see the mouse pointer.\n\n\"#\" shows up when the mouse is anywhere over the html heading element.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Thu, 23 Mar 2023 08:02:07 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 23.03.2023 at 10:35, Alvaro Herrera wrote:\n> As with the <simplelist> patch, we'll need to patch the CSS used in the\n> website for the docs too, as that's the most important place where docs\n> are visited.\n\nOk, I've created and tested a patch for this too.\n\nSince the need for ids is starting to grow again (ecb696527c added an id\nto a varlistentry in doc/src/sgml/ref/create_subscription.sgml) I've\nalso amended the add-needed-ids patch once again so that the build does\nnot fail after applying the make_html_ids_discoverable patch.\n\nI've also attached the (unchanged) make_html_ids_discoverable patch for\nconvenience so this email now contains two patches for postgresql\n(ending with .postgresql.patch) and one patch for pgweb (ending with\n.pgweb.patch).\n\nTBH I'm a bit afraid that people will immediately start complaining\nabout the failing docs builds after this got applied since it forces\nthem to add ids to all varlistentries in a variablelist if they add one,\nwhich can be perceived as quite a burden (also committers and reviewers\nwill have to get used to always watch out for failing docs builds\nbecause of this).\n\nSince breaking the build on missing ids was an intentional decision we\ncan theoretically soften this by only issuing a warning or removing the\ncheck for missing id's altogether but this would probably defeat the\npurpose since it would lead to an increasing number of entries that lack\nan id after a while.\n\nRegards,\n\nBrar",
"msg_date": "Thu, 23 Mar 2023 20:08:52 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
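The enforcement mechanism being debated — failing the build when a required id is missing, and printing an XPath to the offending element — could look roughly like this in XSLT 1.0. This is a sketch, not the patch's actual code; the match condition and message wording are illustrative, and the emitted path is simplified (no quoting of id values):

```xml
<!-- Sketch: abort the transformation with a location report when a
     varlistentry lacks an id. xsl:message terminate="yes" makes
     xsltproc exit with an error, which in turn fails the make target. -->
<xsl:template match="varlistentry[not(@id)]">
  <xsl:message terminate="yes">
    <xsl:text>id missing at: </xsl:text>
    <xsl:for-each select="ancestor-or-self::*">
      <xsl:value-of select="concat('/', name())"/>
      <xsl:if test="@id">
        <xsl:value-of select="concat('[@id=', @id, ']')"/>
      </xsl:if>
    </xsl:for-each>
  </xsl:message>
</xsl:template>
```

Walking `ancestor-or-self::*` and appending each element's id predicate is what yields paths like the `/book[@id = 'postgres']/part[...]/...` location shown in the build output earlier in the thread.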
{
"msg_contents": ">\n> TBH I'm a bit afraid that people will immediately start complaining\n> about the failing docs builds after this got applied since it forces\n> them to add ids to all varlistentries in a variablelist if they add one,\n> which can be perceived as quite a burden (also committers and reviewers\n> will have to get used to always watch out for failing docs builds\n> because of this).\n>\n\nAs a person who had to re-rebase a patch because I discovered that id tags\nhad been added to one of the files I was working on, I can confidently say\n\"don't worry\". It wasn't that big of a deal, I wasn't even following this\nthread at the time and I immediately figured out what had happened and what\nwas expected of me. So it isn't that much of an inconvenience. If there is\na negative consequence to this change, it would be that it might\nincentivize patch writers to omit documentation completely at the early\nstages. But I don't think that's a problem because people generally see a\nlack of documentation as a clue that maybe the patch isn't ready to be\nreviewed, and this change would only reinforce that litmus test.\n\nI had suggested we do something like this a few years back (the ids, that\nis. the idea that we could check for compliance was beyond my imagination\nat the time), and I'm glad to see both finally happening.\n\nWhile I can foresee people overlooking the docs build, such oversights\nwon't go long before being caught, and the fix is simple. Now if we can\njust get a threaded version of xsltproc to make the builds faster...\n\np.s. 
I'm \"Team Paperclip\" when it comes to the link hint, but let's get the\nfeature in first and worry about the right character later.",
"msg_date": "Thu, 23 Mar 2023 15:40:45 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hi Brar,\n\nIt occurs to me that I had not actually tested the\nway the anchor is put only after the last term in a\nvarlistentry. (The code looked obviously right\nand should work if any varlistentry terms have anchors,\nwhich they do, but....)\n\nHave you tested this? If not, catalog.sgml, the\nDEPENDENCY_PARTITION_SEC term is a 2nd term and usable\nfor a test case. But, at the moment there are no ids\nfor any of these varlistentries so you'd have to hack\nthem in.\n\nApologies for missing this.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 23 Mar 2023 17:31:23 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "This is for the committer, as an FYI.\n\nI cut out the <xsl:template name=\"section.heading\"> portion\nof the docbook XSLT and diffed it with the code for the\nsame template in the patch. The diff looks like:\n\n--- /tmp/sections.xsl\t2023-03-22 13:00:33.432968357 -0500\n+++ /tmp/make_html_ids_discoverable_v3.patch\t2023-03-22 13:03:39.776930603 -0500\n@@ -52,5 +52,8 @@\n </xsl:call-template>\n </xsl:if>\n <xsl:copy-of select=\"$title\"/>\n+ <xsl:call-template name=\"pg.id.link\">\n+ <xsl:with-param name=\"object\" select=\"$section\"/>\n+ </xsl:call-template>\n </xsl:element>\n </xsl:template>\n\n(So, this output would start with line 52 of the template,\nnot from the top of the stock sections.xsl file.)\n\nHowever, I am not really familiar with exactly what flavor\nof docbook, version, namespace-d or not, etc., that PG\nuses. So I could be diffing with the wrong thing.\n\nHope this helps and is not just noise.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 23 Mar 2023 17:55:30 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Hi Brar,\n\nAn observation: The # that shows up when hovering\nover section-level headings is styled as the\nsection-level heading is. But the # that shows\nup when hovering over varlistentrys has the default\ntext style.\n\nThis works for me. It's nice to have the \"section #\"s\nlook like the section heading. But the varlistentry's\nterms are smaller than the normal font, and their\nline width is less heavy than normal. I'm not really\ninvested one way or the other, but I find it kind of\nnice that the varlistentry's #s are easier to click\non and more noticeable because they're slightly larger\nthan might be expected.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 23 Mar 2023 23:09:25 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 23.03.2023 at 23:31, Karl O. Pinc wrote:\n> Hi Brar,\n>\n> It occurs to me that I had not actually tested the\n> way the anchor is put only after the last term in a\n> varlistentry. (The code looked obviously right\n> and should work if any varlistentry terms have anchors,\n> which they do, but....)\n>\n> Have you tested this?\n\nYes, I have. See\nhttps://pgdocs.piening.info/app-psql.html#APP-PSQL-META-COMMAND-DE for\nan extreme case.\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 06:05:06 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 24.03.2023 at 05:09, Karl O. Pinc wrote:\n> Hi Brar,\n>\n> An observation: The # that shows up when hovering\n> over section-level headings is styled as the\n> section-level heading is. But the # that shows\n> up when hovering over varlistentrys has the default\n> text style.\n>\n> This works for me. It's nice to have the \"section #\"s\n> look like the section heading. But the varlistentry's\n> terms are smaller than the normal font, and their\n> line width is less heavy than normal. I'm not really\n> invested one way or the other, but I find it kind of\n> nice that the varlistentry's #s are easier to click\n> on and more noticeable because they're slightly larger\n> than might be expected.\n\nTBH I didn't bother a lot with this.\n\nMost of the time it's actually not the font size but rather the\nfont-family which gets inherited from the parent element if you don't\nset it explicitly.\n\nThe link just inherits everything (including the color, which I have set\nto inherit explicitly since links don't inherit the parent's color by\ndefault) from its parent, which is the HTML <dt> element (ultimately\nthe inheritance probably goes up to the <body> element style in pretty\nmuch all cases).\n\nIn some instances the input <term> element contains elements that are\nstyled differently in the output (e.g.: <literal> which translates to\nHTML <code> which has \"font-family: monospace;\") which makes the # from\nthe link appear differently than the visible element it appears after.\n\nSince (after tweaking the color) the general visual appearance looked ok\nto me, I didn't bother with this any further.\n\nRegards,\n\nBrar\n\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 06:48:17 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 24.03.2023 at 06:48, Brar Piening wrote:\n>\n> Since (after tweaking the color) the general visual appearance looked\n> ok to me, I didn't bother with this any further.\n\nAlso, if we go on with this we'll probably end up in an almost\nprototypical bikeshedding scenario where PostgreSQL itself is the\nnuclear power plant and the visual appearance of the hover links on the\ndocumentation website is the color of the bikeshed.\n\n;-)\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 06:59:33 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 2023-Mar-24, Brar Piening wrote:\n\n> On 23.03.2023 at 23:31, Karl O. Pinc wrote:\n> > Hi Brar,\n> > \n> > It occurs to me that I had not actually tested the\n> > way the anchor is put only after the last term in a\n> > varlistentry. (The code looked obviously right\n> > and should work if any varlistentry terms have anchors,\n> > which they do, but....)\n> > \n> > Have you tested this?\n> \n> Yes, I have. See\n> https://pgdocs.piening.info/app-psql.html#APP-PSQL-META-COMMAND-DE for\n> an extreme case.\n\nBut why are there no anchors next to <h3> items on that page? For\nexample, how do I get the link for the \"Meta Commands\" subsection?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Fri, 24 Mar 2023 10:45:03 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 24.03.2023 at 10:45, Alvaro Herrera wrote:\n> But why are there no anchors next to <h3> items on that page? For\n> example, how do I get the link for the \"Meta Commands\" subsection?\n\nI somehow knew that responding from a crappy mobile phone e-mail client\nwill mess up the format and the thread...\n\nFor those trying to follow the thread in the archives: my response (it's\nprobably a refsect which isn't supported yet) ended up here:\nhttps://www.postgresql.org/message-id/1N1fn0-1qd4xd1MyG-011ype%40mail.gmx.net\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Sun, 26 Mar 2023 09:57:17 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 23.03.2023 at 20:08, Brar Piening wrote:\n> Since the need for ids is starting to grow again (ecb696527c added an\n> id to a varlistentry in doc/src/sgml/ref/create_subscription.sgml)\n> I've also amended the add-needed-ids patch once again so that the\n> build does not fail after applying the make_html_ids_discoverable patch.\n\nNew add-needed-ids patch since it was outdated again.\n\nRegards,\n\nBrar",
"msg_date": "Mon, 27 Mar 2023 19:05:52 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 4:06 AM Brar Piening <brar@gmx.de> wrote:\n>\n> On 23.03.2023 at 20:08, Brar Piening wrote:\n> > Since the need for ids is starting to grow again (ecb696527c added an\n> > id to a varlistentry in doc/src/sgml/ref/create_subscription.sgml)\n> > I've also amended the add-needed-ids patch once again so that the\n> > build does not fail after applying the make_html_ids_discoverable patch.\n>\n> New add-needed-ids patch since it was outdated again.\n>\n\nFYI, there is a lot of overlap between this last attachment and the\npatches of Kuroda-san's current thread here [1] which is also adding\nids to create_subscription.sgml.\n\n(Anyway, I guess you might already be aware of that other thread\nbecause your new ids look like they have the same names as those\nchosen by Kuroda-san)\n\n------\n[1] https://www.postgresql.org/message-id/flat/CAHut%2BPvzo6%3DKKLqMR6-mAQdM%2Bj_dse0eUreGmrFouL7gbLbv2w%40mail.gmail.com#7da8d0e3b73096375847c16c856b4aed\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Mar 2023 09:11:57 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 28.03.2023 at 00:11, Peter Smith wrote:\n> FYI, there is a lot of overlap between this last attachment and the\n> patches of Kuroda-san's current thread here [1] which is also adding\n> ids to create_subscription.sgml.\n>\n> (Anyway, I guess you might already be aware of that other thread\n> because your new ids look like they have the same names as those\n> chosen by Kuroda-san)\n>\n> ------\n> [1] https://www.postgresql.org/message-id/flat/CAHut%2BPvzo6%3DKKLqMR6-mAQdM%2Bj_dse0eUreGmrFouL7gbLbv2w%40mail.gmail.com#7da8d0e3b73096375847c16c856b4aed\n\nThanks, I actually was not aware of this.\n\nAlso, kudos for capturing the missing id and advocating for consistency\nregarding ids even before this is actively enforced. This nurtures my\noptimism that consistency might actually be achievable without\neverybody getting angry at me because my patch will enforce it.\n\nRegarding the overlap, I currently try to make it as easy as possible\nfor a potential committer and I'm happy to rebase my patch upon request\nor if Kuroda-san's patch gets applied first.\n\nRegards,\n\nBrar\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:17:28 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Dear my comrade Brar,\r\n\r\n> Thanks, I actually was not aware of this.\r\n> \r\n> Also, kudos for capturing the missing id and advocating for consistency\r\n> regarding ids even before this is actively enforced. This nurtures my\r\n> optimism that consistency might actually be achieveable without\r\n> everybody getting angry at me because my patch will enforce it.\r\n> \r\n> Regarding the overlap, I currently try to make it as easy as possible\r\n> for a potential committer and I'm happy to rebase my patch upon request\r\n> or if Kuroda-san's patch gets applied first.\r\n\r\nFYI - my patch is pushed [1]. Could you please rebase your patch?\r\nI think it's ok to just remove changes from logical-replication.sgml, ref/alter_subscription.sgml,\r\nand ref/create_subscription.sgml.\r\n\r\n[1]: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=de5a47af2d8003dee123815bb7e58913be9a03f3\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 29 Mar 2023 04:52:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 29.03.2023 at 06:52, Hayato Kuroda (Fujitsu) wrote:\n> FYI - my patch is pushed\n\nThanks!\n\n\n> Could you please rebase your patch?\n\nDone and tested. Patch is attached.\n\nRegards,\n\nBrar",
"msg_date": "Wed, 29 Mar 2023 18:03:48 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Dear Brar,\r\n\r\nThank you for updating the patch. The patch looks good to me.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 30 Mar 2023 08:11:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 29.03.23 18:03, Brar Piening wrote:\n> On 29.03.2023 at 06:52, Hayato Kuroda (Fujitsu) wrote:\n>> FYI - my patch is pushed\n> \n> Thanks!\n> \n> \n>> Could you please rebase your patch?\n> \n> Done and tested. Patch is attached.\n\nI have committed the most recent patch that added some missing IDs. (I \nalso added a missing xreflabel in passing.)\n\nI'll look at the XSLT patch next.\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 16:22:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 23.03.23 20:08, Brar Piening wrote:\n> I've also attached the (unchanged) make_html_ids_discoverable patch for\n> convenience so this email now contains two patches for postgresql\n> (ending with .postgresql.patch) and one patch for pgweb (ending with\n> .pgweb.patch).\n\nHere is my view on this:\n\nFirst of all, it works very nicely and is very useful. Very welcome.\n\nThe XSLT implementation looks sound to me. It would be a touch better \nif it had some comments about which parts of the templates were copied \nfrom upstream stylesheets and which were changed. There are examples of \nsuch commenting in the existing customization layer. Also, avoid \nintroducing whitespace differences during said copying.\n\nHowever, I wonder if this is the right way to approach this. I don't \nthink we should put these link markers directly into the HTML. It feels \nlike this is the wrong layer. For example, if you have CSS turned off, \nthen all these # marks show up by default.\n\nIt seems to me that the correct way to do this is to hook in some \nJavaScript that does this transformation directly on the DOM. Then we \ndon't need to carry this presentation detail in the HTML. Moreover, it \nwould avoid tight coupling between the website and the documentation \nsources. You can produce the exact same DOM, that part seems okay, just \ndo it elsewhere. Was this approach considered? I didn't see it in the \nthread.\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 16:54:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
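A DOM-based variant of the suggestion above could look roughly like the following. This is a sketch, not the project's code: the selector list is an assumption, and only the `id_link` class name is taken from the CSS discussed earlier in the thread.

```javascript
// Build the markup for a self-link anchor. Kept as a pure function so
// the string logic can be exercised outside a browser; "id_link" is the
// class the stylesheet's hover rules target.
function idLinkMarkup(id) {
  return ' <a class="id_link" href="#' + id + '">#</a>';
}

// In a browser, append the anchors after the page loads instead of
// baking them into the generated HTML. The guard lets the helper above
// be tested in a non-DOM environment.
if (typeof document !== 'undefined') {
  document.addEventListener('DOMContentLoaded', function () {
    var targets = document.querySelectorAll('h1[id], h2[id], h3[id], dt[id]');
    targets.forEach(function (el) {
      el.insertAdjacentHTML('beforeend', idLinkMarkup(el.id));
    });
  });
}
```

With this approach the shipped HTML stays free of presentation-only markup, at the cost of the anchors disappearing for the minority of users who browse with JavaScript disabled.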
{
"msg_contents": "On 04.04.2023 at 16:54, Peter Eisentraut wrote:\n>\n> First of all, it works very nicely and is very useful. Very welcome.\n\nThank you!\n\n\n> The XSLT implementation looks sound to me. It would be a touch better\n> if it had some comments about which parts of the templates were copied\n> from upstream stylesheets and which were changed. There are examples\n> of such commenting in the existing customization layer. Also, avoid\n> introducing whitespace differences during said copying.\n\nI will amend the patch if we agree that this is the way forward.\n\n\n> However, I wonder if this is the right way to approach this. I don't\n> think we should put these link markers directly into the HTML. It\n> feels like this is the wrong layer. For example, if you have CSS\n> turned off, then all these # marks show up by default.\n\nI'd consider this a feature rather than a problem but this is certainly\ndebatable. I cannot reliably predict what expectations a user who is\nbrowsing the docs with CSS disabled might have. The opposite is true\ntoo. If we'd move the id links feature to javascript, a user who has\njavascript disabled will not see them. Is this what they'd want? I don't\nknow.\n\nAlso, while about 1-2% of users have Javascript disabled, I haven't\nheard of disabling CSS except for debugging purposes.\n\nIn general I'd consider the fact that CSS or Javascript might be\ndisabled a niche problem that isn't really worth much debating but there\nis definitely something to consider regarding people using screen\nreaders who might suffer from one or the other behavior and I'd\ndefinitely be interested what behavior these users would expect. Would\nthey want to use the id link feature or would the links rather disrupt\ntheir reading experience - I have no idea TBH and I hate speculating\nabout other people's preferences.\n\n\n> It seems to me that the correct way to do this is to hook in some\n> JavaScript that does this transformation directly on the DOM. 
Then we\n> don't need to carry this presentation detail in the HTML. Moreover, it\n> would avoid tight coupling between the website and the documentation\n> sources. You can produce the exact same DOM, that part seems okay,\n> just do it elsewhere. Was this approach considered? I didn't see it\n> in the thread.\n\nI briefly touched the topic in [1] and [2] but we didn't really follow\nup on the best approach.\n\n\nRegards,\n\nBrar\n\n\n[1]\nhttps://www.postgresql.org/message-id/68b9c435-d017-93cc-775a-c604db9ec683%40gmx.de\n\n[2]\nhttps://www.postgresql.org/message-id/a75b6d7c-3fa4-d6a8-cf23-6b5180237392%40gmx.de\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 21:52:31 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Tue, 4 Apr 2023 21:52:31 +0200\nBrar Piening <brar@gmx.de> wrote:\n\n> On 04.04.2023 at 16:54, Peter Eisentraut wrote:\n\n> > The XSLT implementation looks sound to me. It would be a touch\n> > better if it had some comments about which parts of the templates\n> > were copied from upstream stylesheets and which were changed.\n\nI like this idea. A lot. (Including which stylesheets were copied\nfrom.)\n\n> > However, I wonder if this is the right way to approach this. I\n> > don't think we should put these link markers directly into the\n> > HTML. It feels like this is the wrong layer. For example, if you\n> > have CSS turned off, then all these # marks show up by default. \n> \n> I'd consider this a feature rather than a problem but this is\n> certainly debatable.\n\nI too would consider this a feature. If you don't style your\nhtml presentation, you see everything. The \"#\" links to content\nare, well, content.\n\n> > It seems to me that the correct way to do this is to hook in some\n> > JavaScript that does this transformation directly on the DOM. Then\n> > we don't need to carry this presentation detail in the HTML.\n\nI would argue the opposite. The HTML/CSS is delivered to the browser\nwhich is then free to present the content to the user in the\nform preferred by the user. This puts control of presentation\nin the hands of the end-user, where, IMO, it belongs.\n\nUsing JavaScript to manipulate the DOM is all well and good when\nusing AJAX to interact with the server to produce dynamic content.\nBut otherwise manipulating the DOM with JavaScript seems overly\nheavy-handed, even though popular. It seems like JavaScript is\nused more because CSS is difficult and an \"extra technology\" when\ninstead JavaScript can \"do everything\". So CSS is put aside.\n\nI may be biased, not being a JavaScript fan. 
(I tend to be one\nof those cranky individuals who keep JavaScript turned off.)\nI'd rather not have code executing when such overhead/complication\ncan be avoided. (Insert here exciting argument about \"what is code\nand what is data\".)\n\nGlancing at the documentation source, I don't see JavaScript used\nat all. Introducing it would be adding something else to the mix.\nNot that this would be bad if it provides value.\n\nIn the end, I don't _really_ care. And don't do web development\nall day either so my fundamentals could be just wrong. But\nthis is my take.\n\n> > Moreover, it would avoid tight coupling between the website and the\n> > documentation sources. \n\nI don't really understand this statement. Are you saying that\nthe documentation's source CSS needn't/shouldn't be the CSS used\non the website? That seems counter-intuitive. But then I don't\nunderstand why the default CSS used when developing the documentation\nis not the CSS used on the website.\n\nI can imagine administrative arguments around server maintenance for\nwanting to keep the website decoupled from the PG source code.\n(I think.) But I can't speak to any of that.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Tue, 4 Apr 2023 16:21:42 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "For what it's worth, I think having the anchors be always-visible when\nCSS disabled is a feature. The content is still perfectly readable,\nand the core feature from this patch is available. Introducing\nJavaScript to lose that functionality seems like a step backwards.\n\nBy the way, the latest patch attachment was not the full patch series,\nwhich I think confused cfbot: [1] (unless I'm misunderstanding the\nstate of the patch series).\n\nAnd thanks for working on this. I've hunted in the page source for ids\nto link to a number of times. I look forward to not doing that\nanymore.\n\nThanks,\nMaciek\n\n[1]: https://commitfest.postgresql.org/42/4042/\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:00:45 -0700",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 04.04.23 21:52, Brar Piening wrote:\n>> The XSLT implementation looks sound to me. It would be a touch better\n>> if it had some comments about which parts of the templates were copied\n>> from upstream stylesheets and which were changed. There are examples\n>> of such commenting in the existing customization layer. Also, avoid\n>> introducing whitespace differences during said copying.\n> \n> I will amend the patch if we agree that this is the way forward.\n\nOk, it appears everyone agrees that this is the correct approach. \nPlease post an updated patch. There have been so many partial patches \nposted recently, I'm not sure which one is the most current one and who \nis really the author.\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 11:06:18 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.04.2023 at 11:06, Peter Eisentraut wrote:\n> On 04.04.23 21:52, Brar Piening wrote:\n>>> The XSLT implementation looks sound to me. It would be a touch better\n>>> if it had some comments about which parts of the templates were copied\n>>> from upstream stylesheets and which were changed. There are examples\n>>> of such commenting in the existing customization layer. Also, avoid\n>>> introducing whitespace differences during said copying.\n>>\n>> I will amend the patch if we agree that this is the way forward.\n>\n> Ok, it appears everyone agrees that this is the correct approach.\n> Please post an updated patch. There have been so many partial patches\n> posted recently, I'm not sure which one is the most current one and\n> who is really the author.\n>\nAttached are two patches:\n\n001-make_html_ids_discoverable_v5.postgresql.patch which needs to be\napplied to the postgresql repository. It adds the XSLT to generate the\nid links and the CSS to hide/display them. I've added comments as\nsuggested above.\n\n002-add-discoverable-id-style_v1.pgweb.patch which needs to be applied\nto the pgweb repository. It adds the CSS to the offical documentation site.\n\nAt the moment (commit 983ec23007) there are no missing ids, so the build\nshould just work after applying the patch but, as we already know, this\nmay change with every commit that gets added.\n\nReviewer is Karl O. Pink\n\nAuthor is Brar Piening (with some additions from Karl O. Pink)\n\nBest regards,\n\nBrar",
"msg_date": "Thu, 6 Apr 2023 16:19:30 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 6 Apr 2023 16:19:30 +0200\nBrar Piening <brar@gmx.de> wrote:\n\n> Reviewer is Karl O. Pink\n\n\"Karl O. Pinc\" actually, with a \"c\".\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 6 Apr 2023 10:05:52 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 06.04.23 16:19, Brar Piening wrote:\n> 001-make_html_ids_discoverable_v5.postgresql.patch which needs to be\n> applied to the postgresql repository. It adds the XSLT to generate the\n> id links and the CSS to hide/display them. I've added comments as\n> suggested above.\n> \n> 002-add-discoverable-id-style_v1.pgweb.patch which needs to be applied\n> to the pgweb repository. It adds the CSS to the offical documentation site.\n\nThe first patch has been committed.\n\nThe second patch should be sent to pgsql-www for integrating into the \nweb site.\n\nSide project: I noticed that these new hover links don't appear in the \nsingle-page HTML output (make postgres.html), even though the generated \nHTML source code looks correct. Maybe someone has an idea there.\n\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:31:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On 13.04.2023 at 10:31, Peter Eisentraut wrote:\n> The first patch has been committed.\n\nYay - thank you!\n\n> The second patch should be sent to pgsql-www for integrating into the\n> web site.\nDone via [1]. Thanks for the hint.\n\n> Side project: I noticed that these new hover links don't appear in the\n> single-page HTML output (make postgres.html), even though the\n> generated HTML source code looks correct. Maybe someone has an idea\n> there.\nI feel responsible for the feature to work for all use cases where it\nmakes sense. I'll investigate this and post back.\n\nRegards,\nBrar\n\n[1]\nhttps://www.postgresql.org/message-id/d987a4a7-62c3-7e0c-860f-1c96fc2117d9%40gmx.de\n\n\n\n\n\n",
"msg_date": "Thu, 13 Apr 2023 16:01:35 +0200",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> On 06.04.23 16:19, Brar Piening wrote:\n>> 001-make_html_ids_discoverable_v5.postgresql.patch which needs to be\n>> applied to the postgresql repository. It adds the XSLT to generate the\n>> id links and the CSS to hide/display them. I've added comments as\n>> suggested above.\n>> 002-add-discoverable-id-style_v1.pgweb.patch which needs to be applied\n>> to the pgweb repository. It adds the CSS to the offical documentation site.\n>\n> The first patch has been committed.\n>\n> The second patch should be sent to pgsql-www for integrating into the\n> web site.\n>\n> Side project: I noticed that these new hover links don't appear in the\n> single-page HTML output (make postgres.html), even though the generated \n> HTML source code looks correct. Maybe someone has an idea there.\n\nAnother side note: I notice the links don't appear on <refsectN> elements\n(e.g. https://www.postgresql.org/docs/devel/sql-select.html#SQL-WITH),\nonly <sectN>.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 13 Apr 2023 15:58:03 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 13 Apr 2023 15:58:03 +0100\nDagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n\n> > The first patch has been committed.\n\n> Another side note: I notice the links don't appear on <refsectN>\n> elements (e.g.\n> https://www.postgresql.org/docs/devel/sql-select.html#SQL-WITH), only\n> <sectN>.\n\nThis we know. Working with <refsectN> elements is a different\ndive into the XSLT which was deliberately put off for future\nwork.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:40:09 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 13 Apr 2023 16:01:35 +0200\nBrar Piening <brar@gmx.de> wrote:\n\n> On 13.04.2023 at 10:31, Peter Eisentraut wrote:\n> > The first patch has been committed. \n> \n> Yay - thank you!\n> \n> > The second patch should be sent to pgsql-www for integrating into\n> > the web site. \n> Done via [1]. Thanks for the hint.\n> \n> > Side project: I noticed that these new hover links don't appear in\n> > the single-page HTML output (make postgres.html), even though the\n> > generated HTML source code looks correct. Maybe someone has an idea\n> > there. \n> I feel responsible for the feature to work for all use cases where it\n> makes sense. I'll investigate this and post back.\n\nLooks to me like the \">\" in the CSS was transformed into the >\nHTML entity when the stylesheet was included into the single-file\nHTML.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 13 Apr 2023 10:53:31 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
},
{
"msg_contents": "On Thu, 13 Apr 2023 10:53:31 -0500\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> On Thu, 13 Apr 2023 16:01:35 +0200\n> Brar Piening <brar@gmx.de> wrote:\n> \n> > On 13.04.2023 at 10:31, Peter Eisentraut wrote: \n\n> > > Side project: I noticed that these new hover links don't appear in\n> > > the single-page HTML output (make postgres.html), even though the\n> > > generated HTML source code looks correct. Maybe someone has an\n> > > idea there. \n> > I feel responsible for the feature to work for all use cases where\n> > it makes sense. I'll investigate this and post back. \n> \n> Looks to me like the \">\" in the CSS was transformed into the >\n> HTML entity when the stylesheet was included into the single-file\n> HTML.\n\nThe XSLT 1.0 spec says that characters in <style> elements should\nnot be escaped when outputting HTML. [4] But (I think) the\ngenerate.css.header parameter method [1][2] of \ninserting the CSS into the HTML expands the CSS content\nin an XML context, not a HTML context. \n\nI've played around with it, going so far as to make stylesheet.css\nlook like:\n\n--------------<snip>--------\n<!DOCTYPE xsl:stylesheet [\n<!ENTITY css SYSTEM \"stylesheet.css\">\n]>\n<xsl:stylesheet\n xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n &css;\n</xsl:stylesheet>\n--------------<snip>--------\n\nand even substituted the actual text of stylesheet.css in\nplace of the &css; entity reference. In these cases the\n\"<\" character is still entity expanded, resulting in broken CSS.\n\nMy conclusion is that this method is broken.\n\n(The other possibility, I suppose, is that xsltproc is broken.)\n\nI think the thing to try is the sagehill.net approach [4].\nThis overrides the user.head.content template. 
My hope is\nthat because the &css; entity is seen in a <style> element\nin the template, that xsltproc then recognizes style element\ncontent in a HTML output context.\n\n(I don't know how xsltproc is supposed to know that it is\nin a HTML output context. I suppose exploring this would\nbe another avenue should the above fail.)\n\n\n1\nhttps://docbook.sourceforge.net/release/xsl/current/doc/html/custom.css.source.html\n\n2\nhttps://docbook.sourceforge.net/release/xsl/current/doc/html/generate.css.header.html\n\n3 http://sagehill.net/docbookxsl/HtmlHead.html#EmbedCSS\n\n4 https://www.w3.org/TR/xslt-10/#section-HTML-Output-Method\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Fri, 14 Apr 2023 12:41:23 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: add missing \"id\" attributes to extension packaging page"
}
] |
[
{
"msg_contents": "Hi,\n\nI see that in the archiver code, in the function pgarch_MainLoop,\nthe archiver sleeps for a certain time or until there's a signal. The time\nit sleeps for is represented by:\n\ntimeout = PGARCH_AUTOWAKE_INTERVAL - (curtime - last_copy_time);\nIt so happens that last_copy_time and curtime are always set at the same\ntime which always makes timeout equal (actually roughly equal) to\nPGARCH_AUTOWAKE_INTERVAL.\n\nI see that this behaviour was introduced as a part of the commit:\nd75288fb27b8fe0a926aaab7d75816f091ecdc27. The discussion thread is:\nhttps://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp\n\nThe change was introduced in v31, with the following comment in the\ndiscussion thread:\n\n- pgarch_MainLoop start the loop with wakened = true when both\nnotified or timed out. Otherwise time_to_stop is set and exits from\nthe loop immediately. So the variable wakened is actually\nuseless. Removed it.\n\nThis behaviour was different before the commit:\nd75288fb27b8fe0a926aaab7d75816f091ecdc27,\nin which the archiver keeps track of how much time has elapsed since\nlast_copy_time\nin case there was a signal, and it results in a smaller subsequent value of\ntimeout, until timeout is zero. This also avoids calling\npgarch_ArchiverCopyLoop\nbefore PGARCH_AUTOWAKE_INTERVAL in case there's an intermittent signal.\n\nWith the current changes it may be okay to always sleep for\nPGARCH_AUTOWAKE_INTERVAL,\nbut that means curtime and last_copy_time are no more needed.\n\nI would like to validate if my understanding is correct, and which of the\nbehaviours we would like to retain.\n\n\nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nHi, I see that in the archiver code, in the function pgarch_MainLoop, the archiver sleeps for a certain time or until there's a signal. 
The time it sleeps for is represented by: timeout = PGARCH_AUTOWAKE_INTERVAL - (curtime - last_copy_time); It so happens that last_copy_time and curtime are always set at the same time which always makes timeout equal (actually roughly equal) to PGARCH_AUTOWAKE_INTERVAL. I see that this behaviour was introduced as a part of the commit: d75288fb27b8fe0a926aaab7d75816f091ecdc27. The discussion thread is: https://postgr.es/m/20180629.173418.190173462.horiguchi.kyotaro@lab.ntt.co.jp The change was introduced in v31, with the following comment in the discussion thread: - pgarch_MainLoop start the loop with wakened = true when both notified or timed out. Otherwise time_to_stop is set and exits from the loop immediately. So the variable wakened is actually useless. Removed it. This behaviour was different before the commit: d75288fb27b8fe0a926aaab7d75816f091ecdc27, in which the archiver keeps track of how much time has elapsed since last_copy_time in case there was a signal, and it results in a smaller subsequent value of timeout, until timeout is zero. This also avoids calling pgarch_ArchiverCopyLoop before PGARCH_AUTOWAKE_INTERVAL in case there's an intermittent signal. With the current changes it may be okay to always sleep for PGARCH_AUTOWAKE_INTERVAL, but that means curtime and last_copy_time are no more needed. I would like to validate if my understanding is correct, and which of the behaviours we would like to retain.Thanks & Regards,Sravan VelagandulaEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 5 Dec 2022 12:06:11 +0530",
"msg_from": "Sravan Kumar <sravanvcybage@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "At Mon, 5 Dec 2022 12:06:11 +0530, Sravan Kumar <sravanvcybage@gmail.com> wrote in \n> timeout = PGARCH_AUTOWAKE_INTERVAL - (curtime - last_copy_time);\n> It so happens that last_copy_time and curtime are always set at the same\n> time which always makes timeout equal (actually roughly equal) to\n> PGARCH_AUTOWAKE_INTERVAL.\n\nOooo *^^*.\n\n> This behaviour was different before the commit:\n> d75288fb27b8fe0a926aaab7d75816f091ecdc27,\n> in which the archiver keeps track of how much time has elapsed since\n> last_copy_time\n> in case there was a signal, and it results in a smaller subsequent value of\n> timeout, until timeout is zero. This also avoids calling\n> pgarch_ArchiverCopyLoop\n> before PGARCH_AUTOWAKE_INTERVAL in case there's an intermittent signal.\n\nYes, WaitLatch() (I believe) no longer makes a spurious wakeup.\n\n> With the current changes it may be okay to always sleep for\n> PGARCH_AUTOWAKE_INTERVAL,\n> but that means curtime and last_copy_time are no more needed.\n\nI think you're right.\n\n> I would like to validate if my understanding is correct, and which of the\n> behaviours we would like to retain.\n\nAs my understanding the patch didn't change the copying behavior of\nthe function. I think we should simplify the loop by removing\nlast_copy_time and curtime in the \"if (!time_to_stop)\" block. Then we\ncan remove the variable \"timeout\" and the \"if (timeout > 0)\"\nbranch. Are you willing to work on this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 06 Dec 2022 17:24:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary\n process. commit\""
},
{
"msg_contents": "Thank you for the feedback.\n\nI'll be glad to help with the fix. Here's the patch for review.\n\n\nOn Tue, Dec 6, 2022 at 1:54 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Mon, 5 Dec 2022 12:06:11 +0530, Sravan Kumar <sravanvcybage@gmail.com>\n> wrote in\n> > timeout = PGARCH_AUTOWAKE_INTERVAL - (curtime - last_copy_time);\n> > It so happens that last_copy_time and curtime are always set at the same\n> > time which always makes timeout equal (actually roughly equal) to\n> > PGARCH_AUTOWAKE_INTERVAL.\n>\n> Oooo *^^*.\n>\n> > This behaviour was different before the commit:\n> > d75288fb27b8fe0a926aaab7d75816f091ecdc27,\n> > in which the archiver keeps track of how much time has elapsed since\n> > last_copy_time\n> > in case there was a signal, and it results in a smaller subsequent value\n> of\n> > timeout, until timeout is zero. This also avoids calling\n> > pgarch_ArchiverCopyLoop\n> > before PGARCH_AUTOWAKE_INTERVAL in case there's an intermittent signal.\n>\n> Yes, WaitLatch() (I believe) no longer makes a spurious wakeup.\n>\n> > With the current changes it may be okay to always sleep for\n> > PGARCH_AUTOWAKE_INTERVAL,\n> > but that means curtime and last_copy_time are no more needed.\n>\n> I think you're right.\n>\n> > I would like to validate if my understanding is correct, and which of the\n> > behaviours we would like to retain.\n>\n> As my understanding the patch didn't change the copying behavior of\n> the function. I think we should simplify the loop by removing\n> last_copy_time and curtime in the \"if (!time_to_stop)\" block. Then we\n> can remove the variable \"timeout\" and the \"if (timeout > 0)\"\n> branch. Are you willing to work on this?\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\n\n-- \nThanks And Regards,\nSravan\n\nTake life one day at a time.",
"msg_date": "Tue, 6 Dec 2022 16:57:11 +0530",
"msg_from": "Sravan Kumar <sravanvcybage@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 4:57 PM Sravan Kumar <sravanvcybage@gmail.com> wrote:\n>\n> Thank you for the feedback.\n>\n> I'll be glad to help with the fix. Here's the patch for review.\n\nThanks. +1 for fixing this.\n\nI would like to quote recent discussions on reducing the useless\nwakeups or increasing the sleep/hibernation times in various processes\nto reduce the power savings [1] [2] [3] [4] [5]. With that in context,\ndoes the archiver need to wake up every 60 sec at all when its latch\ngets set (PgArchWakeup()) whenever the server switches to a new WAL\nfile? What happens if we get rid of PGARCH_AUTOWAKE_INTERVAL and rely\non its latch being set? If required, we can spread PgArchWakeup() to\nmore places, no?\n\nBefore even answering the above questions, I think we need to see if\nthere're any cases where a process can miss SetLatch() calls (I don't\nhave an answer for that).\n\nOr do we want to stick to what the below comment says?\n\n /*\n * There shouldn't be anything for the archiver to do except to wait for a\n * signal ... however, the archiver exists to protect our data, so she\n * wakes up occasionally to allow herself to be proactive.\n */\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJCbcv8AtujLw3kEO2wRB7Ffzo1fmwaGG-tQLuMOjf6qQ%40mail.gmail.com\n[2]\ncommit cd4329d9393f84dce34f0bd2dd936adc8ffaa213\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Tue Nov 29 11:28:08 2022 +1300\n\n Remove promote_trigger_file.\n\n Previously, an idle startup (recovery) process would wake up every 5\n seconds to have a chance to poll for promote_trigger_file, even if that\n GUC was not configured. That promotion triggering mechanism was\n effectively superseded by pg_ctl promote and pg_promote() a long time\n ago. 
There probably aren't many users left and it's very easy to change\n to the modern mechanisms, so we agreed to remove the feature.\n\n This is part of a campaign to reduce wakeups on idle systems.\n\n[3] https://commitfest.postgresql.org/41/4035/\n[4] https://commitfest.postgresql.org/41/4020/\n[5]\ncommit 05a7be93558c614ab89c794cb1d301ea9ff33199\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: Tue Nov 8 20:36:36 2022 +1300\n\n Suppress useless wakeups in walreceiver.\n\n Instead of waking up 10 times per second to check for various timeout\n conditions, keep track of when we next have periodic work to do.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Dec 2022 17:23:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:24 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Tue, Dec 6, 2022 at 4:57 PM Sravan Kumar <sravanvcybage@gmail.com>\n> wrote:\n> >\n> > Thank you for the feedback.\n> >\n> > I'll be glad to help with the fix. Here's the patch for review.\n>\n> Thanks. +1 for fixing this.\n>> I would like to quote recent discussions on reducing the useless\n>> wakeups or increasing the sleep/hibernation times in various processes\n>> to reduce the power savings [1] [2] [3] [4] [5]. With that in context,\n>> does the archiver need to wake up every 60 sec at all when its latch\n>> gets set (PgArchWakeup()) whenever the server switches to a new WAL\n>> file? What happens if we get rid of PGARCH_AUTOWAKE_INTERVAL and rely\n>> on its latch being set? If required, we can spread PgArchWakeup() to\n>> more places, no?\n>\n>\nI like the idea of not having to wake up intermittently and probably we\nshould start a discussion about it.\n\nI see the following comment in PgArchWakeup().\n\n /*\n* We don't acquire ProcArrayLock here. It's actually fine because\n* procLatch isn't ever freed, so we just can potentially set the wrong\n* process' (or no process') latch. Even in that case the archiver will\n* be relaunched shortly and will start archiving.\n*/\n\nWhile I do not fully understand the comment, it gives me an impression that\nthe SetLatch() done here is counting on the timeout to wake up the archiver\nin some cases where the latch is wrongly set.\n\nThe proposed idea is a behaviour change while this thread intends to clean\nup some code that's\na result of the mentioned commit. 
So probably the proposed idea can be\ndiscussed as a separate thread.\n\n\nBefore even answering the above questions, I think we need to see if\n> there're any cases where a process can miss SetLatch() calls (I don't\n> have an answer for that).\n>\n> Or do we want to stick to what the below comment says?\n>\n> /*\n> * There shouldn't be anything for the archiver to do except to wait\n> for a\n> * signal ... however, the archiver exists to protect our data, so she\n> * wakes up occasionally to allow herself to be proactive.\n> */\n>\n> [1]\n> https://www.postgresql.org/message-id/CA%2BhUKGJCbcv8AtujLw3kEO2wRB7Ffzo1fmwaGG-tQLuMOjf6qQ%40mail.gmail.com\n> [2]\n> commit cd4329d9393f84dce34f0bd2dd936adc8ffaa213\n> Author: Thomas Munro <tmunro@postgresql.org>\n> Date: Tue Nov 29 11:28:08 2022 +1300\n>\n> Remove promote_trigger_file.\n>\n> Previously, an idle startup (recovery) process would wake up every 5\n> seconds to have a chance to poll for promote_trigger_file, even if that\n> GUC was not configured. That promotion triggering mechanism was\n> effectively superseded by pg_ctl promote and pg_promote() a long time\n> ago. 
There probably aren't many users left and it's very easy to\n> change\n> to the modern mechanisms, so we agreed to remove the feature.\n>\n> This is part of a campaign to reduce wakeups on idle systems.\n>\n> [3] https://commitfest.postgresql.org/41/4035/\n> [4] https://commitfest.postgresql.org/41/4020/\n> [5]\n> commit 05a7be93558c614ab89c794cb1d301ea9ff33199\n> Author: Thomas Munro <tmunro@postgresql.org>\n> Date: Tue Nov 8 20:36:36 2022 +1300\n>\n> Suppress useless wakeups in walreceiver.\n>\n> Instead of waking up 10 times per second to check for various timeout\n> conditions, keep track of when we next have periodic work to do.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\n\n-- \nThanks And Regards,\nSravan\n\nTake life one day at a time.",
"msg_date": "Wed, 7 Dec 2022 11:28:23 +0530",
"msg_from": "Sravan Kumar <sravanvcybage@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "At Tue, 6 Dec 2022 17:23:50 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Thanks. +1 for fixing this.\n> \n> I would like to quote recent discussions on reducing the useless\n> wakeups or increasing the sleep/hibernation times in various processes\n> to reduce the power savings [1] [2] [3] [4] [5]. With that in context,\n> does the archiver need to wake up every 60 sec at all when its latch\n> gets set (PgArchWakeup()) whenever the server switches to a new WAL\n> file? What happens if we get rid of PGARCH_AUTOWAKE_INTERVAL and rely\n> on its latch being set? If required, we can spread PgArchWakeup() to\n> more places, no?\n\nI thought so first, but archiving may be interrupted for various\nreasons (disk full I think is the most common one). So, only\nintentional wakeups aren't sufficient.\n\n> Before even answering the above questions, I think we need to see if\n> there're any cases where a process can miss SetLatch() calls (I don't\n> have an answer for that).\n\nI read a recent Thomas' mail that says something like \"should we\nconsider the case latch sets are missed?\". It is triggered by SIGURG\nor SetEvent(). I'm not sure but I believe the former is now reliable\nand the latter was born reliable.\n\n> Or do we want to stick to what the below comment says?\n> \n> /*\n> * There shouldn't be anything for the archiver to do except to wait for a\n> * signal ... however, the archiver exists to protect our data, so she\n> * wakes up occasionally to allow herself to be proactive.\n> */\n\nSo I think this is still valid. If we want to eliminate useless\nwakeups, archiver needs to remember whether the last iteration was\nfully done or not. But it seems to be a race condition is involved.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Dec 2022 15:01:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary\n process. commit\""
},
{
"msg_contents": "At Wed, 7 Dec 2022 11:28:23 +0530, Sravan Kumar <sravanvcybage@gmail.com> wrote in \n> On Tue, Dec 6, 2022 at 5:24 PM Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> > On Tue, Dec 6, 2022 at 4:57 PM Sravan Kumar <sravanvcybage@gmail.com>\n> > wrote:\n> > >\n> > > Thank you for the feedback.\n> > >\n> > > I'll be glad to help with the fix. Here's the patch for review.\n> >\n> > Thanks. +1 for fixing this.\n> >> I would like to quote recent discussions on reducing the useless\n> >> wakeups or increasing the sleep/hibernation times in various processes\n> >> to reduce the power savings [1] [2] [3] [4] [5]. With that in context,\n> >> does the archiver need to wake up every 60 sec at all when its latch\n> >> gets set (PgArchWakeup()) whenever the server switches to a new WAL\n> >> file? What happens if we get rid of PGARCH_AUTOWAKE_INTERVAL and rely\n> >> on its latch being set? If required, we can spread PgArchWakeup() to\n> >> more places, no?\n> >\n> >\n> I like the idea of not having to wake up intermittently and probably we\n> should start a discussion about it.\n> \n> I see the following comment in PgArchWakeup().\n> \n> /*\n> * We don't acquire ProcArrayLock here. It's actually fine because\n> * procLatch isn't ever freed, so we just can potentially set the wrong\n> * process' (or no process') latch. Even in that case the archiver will\n> * be relaunched shortly and will start archiving.\n> */\n> \n> While I do not fully understand the comment, it gives me an impression that\n> the SetLatch() done here is counting on the timeout to wake up the archiver\n> in some cases where the latch is wrongly set.\n\nIt is telling about the first iteration of archive process, not\nperiodical wakeups. So I think it is doable by considering how to\nhandle incomplete archiving iterations.\n\n> The proposed idea is a behaviour change while this thread intends to clean\n> up some code that's\n> a result of the mentioned commit. 
So probably the proposed idea can be\n> discussed as a separate thread.\n\nAgreed.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Dec 2022 15:19:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary\n process. commit\""
},
{
"msg_contents": "I have added the thread to the commitfest: https://commitfest.postgresql.org/42/\nDid you get a chance to review the patch? Please let me know if you\nneed anything from my end.\n\nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\nOn Wed, Dec 7, 2022 at 11:49 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 7 Dec 2022 11:28:23 +0530, Sravan Kumar <sravanvcybage@gmail.com> wrote in\n> > On Tue, Dec 6, 2022 at 5:24 PM Bharath Rupireddy <\n> > bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > On Tue, Dec 6, 2022 at 4:57 PM Sravan Kumar <sravanvcybage@gmail.com>\n> > > wrote:\n> > > >\n> > > > Thank you for the feedback.\n> > > >\n> > > > I'll be glad to help with the fix. Here's the patch for review.\n> > >\n> > > Thanks. +1 for fixing this.\n> > >> I would like to quote recent discussions on reducing the useless\n> > >> wakeups or increasing the sleep/hibernation times in various processes\n> > >> to reduce the power savings [1] [2] [3] [4] [5]. With that in context,\n> > >> does the archiver need to wake up every 60 sec at all when its latch\n> > >> gets set (PgArchWakeup()) whenever the server switches to a new WAL\n> > >> file? What happens if we get rid of PGARCH_AUTOWAKE_INTERVAL and rely\n> > >> on its latch being set? If required, we can spread PgArchWakeup() to\n> > >> more places, no?\n> > >\n> > >\n> > I like the idea of not having to wake up intermittently and probably we\n> > should start a discussion about it.\n> >\n> > I see the following comment in PgArchWakeup().\n> >\n> > /*\n> > * We don't acquire ProcArrayLock here. It's actually fine because\n> > * procLatch isn't ever freed, so we just can potentially set the wrong\n> > * process' (or no process') latch. 
Even in that case the archiver will\n> > * be relaunched shortly and will start archiving.\n> > */\n> >\n> > While I do not fully understand the comment, it gives me an impression that\n> > the SetLatch() done here is counting on the timeout to wake up the archiver\n> > in some cases where the latch is wrongly set.\n>\n> It is telling about the first iteration of archive process, not\n> periodical wakeups. So I think it is doable by considering how to\n> handle incomplete archiving iterations.\n>\n> > The proposed idea is a behaviour change while this thread intends to clean\n> > up some code that's\n> > a result of the mentioned commit. So probably the proposed idea can be\n> > discussed as a separate thread.\n>\n> Agreed.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n--\nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 4 Jan 2023 11:35:33 +0530",
"msg_from": "Sravan Kumar <sravanvcybage@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "On Wed, Jan 04, 2023 at 11:35:33AM +0530, Sravan Kumar wrote:\n> I have added the thread to the commitfest: https://commitfest.postgresql.org/42/\n> Did you get a chance to review the patch? Please let me know if you\n> need anything from my end.\n\nThis seems like worthwhile simplification to me. Ultimately, your patch\nshouldn't result in any sort of significant behavior change, and I don't see\nany reason to further complicate the timeout calculation. The copy loop\nwill run any time the archiver's latch is set, and it'll wait up to 60\nseconds otherwise. As discussed upthread, it might be possible to remove\nthe timeout completely, but that probably deserves its own thread.\n\nI noticed that time.h is no longer needed by the archiver, so I removed\nthat and fixed an indentation nitpick in the attached v2. I'm going to set\nthe commitfest entry to ready-for-committer shortly after sending this\nmessage.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 20 Jan 2023 11:39:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 11:39:56AM -0800, Nathan Bossart wrote:\n> I noticed that time.h is no longer needed by the archiver, so I removed\n> that and fixed an indentation nitpick in the attached v2. I'm going to set\n> the commitfest entry to ready-for-committer shortly after sending this\n> message.\n\nI'm not sure why I thought time.h was no longer needed. time() is clearly\nused elsewhere in this file. Here's a new version with that added back.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 31 Jan 2023 20:30:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 08:30:13PM -0800, Nathan Bossart wrote:\n> I'm not sure why I thought time.h was no longer needed. time() is clearly\n> used elsewhere in this file. Here's a new version with that added back.\n\nAh, I see. The key point is that curtime and last_copy_time will most\nlikely be the same value as time() is second-based, so timeout is\nbasically always PGARCH_AUTOWAKE_INTERVAL. There is no need to care\nabout time_to_stop, as we just go through and exit if it happens to be\nswitched to true. Applied v3, keeping time_to_stop as it is in v2 and\nv3 so as we don't loop again on a postmaster death.\n--\nMichael",
"msg_date": "Wed, 1 Feb 2023 15:51:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Question regarding \"Make archiver process an auxiliary process.\n commit\""
}
] |
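Michael's closing observation in the thread above — that `time()` is second-granular, so `curtime` and `last_copy_time` are almost always equal and the "computed" timeout degenerates to the full `PGARCH_AUTOWAKE_INTERVAL` — can be checked with a tiny model. This is a hypothetical simplification for illustration, not the actual pgarch.c code:

```c
#include <assert.h>

#define PGARCH_AUTOWAKE_INTERVAL 60

/* Model of the pre-cleanup calculation: sleep for the autowake interval
 * minus the time already elapsed since the last successful copy. Both
 * timestamps come from time(), which only has one-second resolution. */
static int
old_style_timeout(long curtime, long last_copy_time)
{
    return PGARCH_AUTOWAKE_INTERVAL - (int) (curtime - last_copy_time);
}
```

Right after a copy the two timestamps match, so the subtraction contributes nothing and the result is simply the constant interval — which is why the cleanup can drop the calculation without changing behavior.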
[
{
"msg_contents": "I had noticed that most getopt() or getopt_long() calls had their letter \nlists in pretty crazy orders. There might have been occasional attempts \nat grouping, but those then haven't been maintained as new options were \nadded. To restore some sanity to this, I went through and ordered them \nalphabetically.",
"msg_date": "Mon, 5 Dec 2022 09:29:53 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Order getopt arguments"
},
{
"msg_contents": "\nHello Peter,\n\n> I had noticed that most getopt() or getopt_long() calls had their letter \n> lists in pretty crazy orders. There might have been occasional attempts \n> at grouping, but those then haven't been maintained as new options were \n> added. To restore some sanity to this, I went through and ordered them \n> alphabetically.\n\nI agree that a more or less random historical order does not make much \nsense.\n\nFor pgbench, ISTM that sorting per functionality then alphabetical would \nbe better than pure alphabetical because it has 2 modes. Such sections \nmight be (1) general (2) connection (3) common/shared (4) initialization \nand (5) benchmarking, with some comments on each.\n\nWhat do you think? If okay, I'll send you a patch for that.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:42:41 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 3:42 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> > I had noticed that most getopt() or getopt_long() calls had their letter\n> > lists in pretty crazy orders. There might have been occasional attempts\n> > at grouping, but those then haven't been maintained as new options were\n> > added. To restore some sanity to this, I went through and ordered them\n> > alphabetically.\n>\n> I agree that a more or less random historical order does not make much\n> sense.\n>\n> For pgbench, ISTM that sorting per functionality then alphabetical would\n> be better than pure alphabetical because it has 2 modes. Such sections\n> might be (1) general (2) connection (3) common/shared (4) initialization\n> and (5) benchmarking, we some comments on each.\n\nI don't see the value in this. Grouping related options often makes\nsense, but it seems more confusing than helpful in the case of a\ngetopt string.\n\n+1 for Peter's proposal to just alphabetize. That's easy to maintain,\nat least in theory.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 10:57:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> +1 for Peter's proposal to just alphabetize. That's easy to maintain,\n> at least in theory.\n\nAgreed for single-letter options. Long options complicate matters:\nare we going to order their code stanzas by the actual long name, or\nby the character/number returned by getopt? Or are we going to be\nwilling to repeatedly renumber the assigned codes to keep those the\nsame? I don't think I want to go that far.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 11:13:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 11:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > +1 for Peter's proposal to just alphabetize. That's easy to maintain,\n> > at least in theory.\n>\n> Agreed for single-letter options. Long options complicate matters:\n> are we going to order their code stanzas by the actual long name, or\n> by the character/number returned by getopt? Or are we going to be\n> willing to repeatedly renumber the assigned codes to keep those the\n> same? I don't think I want to go that far.\n\nI was only talking about the actual argument to getopt(), not the\norder of the code stanzas. I'm not sure what we ought to do about the\nlatter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 11:21:20 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I was only talking about the actual argument to getopt(), not the\n> order of the code stanzas. I'm not sure what we ought to do about the\n> latter.\n\n100% agreed that the getopt argument should just be alphabetical.\nBut the bulk of Peter's patch is rearranging switch cases to agree\nwith that, and if you want to do that then you have to also think\nabout long options, which are not in the getopt argument. I'm\nnot entirely convinced that reordering the switch cases is worth\ntroubling over.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 11:51:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I was only talking about the actual argument to getopt(), not the\n> > order of the code stanzas. I'm not sure what we ought to do about the\n> > latter.\n>\n> 100% agreed that the getopt argument should just be alphabetical.\n> But the bulk of Peter's patch is rearranging switch cases to agree\n> with that, and if you want to do that then you have to also think\n> about long options, which are not in the getopt argument. I'm\n> not entirely convinced that reordering the switch cases is worth\n> troubling over.\n\nI'm not particularly sold on that either, but neither am I\nparticularly opposed to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 12:04:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Order getopt arguments"
},
{
"msg_contents": "On 05.12.22 18:04, Robert Haas wrote:\n> On Mon, Dec 5, 2022 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> I was only talking about the actual argument to getopt(), not the\n>>> order of the code stanzas. I'm not sure what we ought to do about the\n>>> latter.\n>>\n>> 100% agreed that the getopt argument should just be alphabetical.\n>> But the bulk of Peter's patch is rearranging switch cases to agree\n>> with that, and if you want to do that then you have to also think\n>> about long options, which are not in the getopt argument. I'm\n>> not entirely convinced that reordering the switch cases is worth\n>> troubling over.\n> \n> I'm not particularly sold on that either, but neither am I\n> particularly opposed to it.\n\nI have committed it as posted.\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 15:24:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Order getopt arguments"
}
] |
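The convention the thread settles on — alphabetize the single-letter list passed to getopt(), leaving long-option stanzas alone — looks like this in a toy parser. This is a hypothetical miniature, not PostgreSQL code; the option names are invented for illustration:

```c
#include <getopt.h>
#include <stdlib.h>

/* The optstring "ac:j:v" is kept in alphabetical order, so a new option
 * has exactly one obvious place to go and duplicates are easy to spot. */
static int
parse_jobs(int argc, char **argv)
{
    int         c;
    int         jobs = 0;
    static const struct option long_opts[] = {
        {"all", no_argument, NULL, 'a'},
        {"config", required_argument, NULL, 'c'},
        {"jobs", required_argument, NULL, 'j'},
        {"verbose", no_argument, NULL, 'v'},
        {NULL, 0, NULL, 0}
    };

    optind = 1;                 /* start scanning from argv[1] */
    while ((c = getopt_long(argc, argv, "ac:j:v", long_opts, NULL)) != -1)
    {
        if (c == 'j')
            jobs = atoi(optarg);
    }
    return jobs;
}
```

Note that only the optstring is sorted here; as Tom points out upthread, ordering the switch cases is a separate (and less clear-cut) question once long-only options enter the picture.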
[
{
"msg_contents": "In the pg_dump blob terminology thread it was briefly discussed [0] to mark\nparameters as deprecated in the --help output. The attached is a quick diff to\nshow that that would look like. Personally I think it makes sense, not\neveryone will read the docs.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] 95467596-834d-baa4-c67c-5db1096ed549@enterprisedb.com",
"msg_date": "Mon, 5 Dec 2022 10:42:38 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Marking options deprecated in help output"
},
{
"msg_contents": "On 05/12/2022 11:42, Daniel Gustafsson wrote:\n> In the pg_dump blob terminology thread it was briefly discussed [0] to mark\n> parameters as deprecated in the --help output. The attached is a quick diff to\n> show that that would look like. Personally I think it makes sense, not\n> everyone will read the docs.\n\nMakes sense. One minor suggestion; instead of this:\n\n> -h, -H, --host=HOSTNAME database server host or socket directory\n> (-H is deprecated)\n\nHow about putting the deprecated option on a separate line like this:\n\n> -h, --host=HOSTNAME database server host or socket directory\n> -H (same as -h, deprecated)\n\nAnd same for --blobs and -no-blobs\n\n- Heikki\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 22:31:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Marking options deprecated in help output"
},
{
"msg_contents": "> On 24 Feb 2023, at 21:31, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 05/12/2022 11:42, Daniel Gustafsson wrote:\n\n> How about putting the deprecated option on a separate line like this:\n> \n>> -h, --host=HOSTNAME database server host or socket directory\n>> -H (same as -h, deprecated)\n\nIf nothing else, it helps to keep it shorter and avoids breaking the line\nbetween command and description, so I agree with you. Done in the attached v2.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 1 Mar 2023 00:12:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Marking options deprecated in help output"
}
] |
[
{
"msg_contents": "I happened to notice that currently MemoizePath cannot be generated for\npartitionwise join. I traced that and have found the problem. One\ncondition we need to meet to generate MemoizePath is that inner path's\nppi_clauses should have the form \"outer op inner\" or \"inner op outer\",\nas checked by clause_sides_match_join in paraminfo_get_equal_hashops.\n\nNote that when are at this check, the inner path has not got the chance\nto be re-parameterized (that is done later in try_nestloop_path), so if\nit is parameterized, it is parameterized by the topmost parent of the\nouter rel, not the outer rel itself. Thus this check performed by\nclause_sides_match_join could not succeed if we are joining two child\nrels.\n\nThe fix is straightforward, just to use outerrel->top_parent if it is\nnot null and leave the reparameterization work to try_nestloop_path. In\naddition, I believe when reparameterizing MemoizePath we need to adjust\nits param_exprs too.\n\nAttach the patch for fix.\n\nThanks\nRichard",
"msg_date": "Mon, 5 Dec 2022 18:43:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "MemoizePath fails to work for partitionwise join"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> The fix is straightforward, just to use outerrel->top_parent if it is\n> not null and leave the reparameterization work to try_nestloop_path. In\n> addition, I believe when reparameterizing MemoizePath we need to adjust\n> its param_exprs too.\n\nRight you are. I'd noticed the apparent omission in\nreparameterize_path_by_child, but figured that we'd need a test case to\nfind any other holes before it'd be worth fixing. Thanks for finding\na usable test case.\n\nOne small problem is that top_parent doesn't exist in the back branches,\nso I had to substitute a much uglier lookup in order to make this work\nthere. I'm surprised that we got away without top_parent for this long\nTBH, but anyway this fix validates the wisdom of 2f17b5701.\n\nSo, pushed with some cosmetic adjustments and the modified back-branch\ncode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 12:42:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MemoizePath fails to work for partitionwise join"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 1:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> One small problem is that top_parent doesn't exist in the back branches,\n> so I had to substitute a much uglier lookup in order to make this work\n> there. I'm surprised that we got away without top_parent for this long\n> TBH, but anyway this fix validates the wisdom of 2f17b5701.\n>\n> So, pushed with some cosmetic adjustments and the modified back-branch\n> code.\n\n\nThanks for the modifying and pushing and the back-patching. I didn't\nrealize how the fix should look like if without top_parent. Thanks to\nthe work in 2f17b5701, it makes life much easier.\n\nThanks\nRichard",
"msg_date": "Tue, 6 Dec 2022 09:58:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MemoizePath fails to work for partitionwise join"
}
] |
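Richard's diagnosis in the thread above can be modeled with relid bitmasks: before try_nestloop_path() reparameterizes the inner path, its ppi_clauses still reference the *top parent* of the outer rel, so a sides-match test against the child outer rel fails. The toy version below uses plain unsigned masks instead of the planner's Bitmapset, and the relid numbers are invented for illustration:

```c
#include <assert.h>

typedef unsigned int Relids;    /* toy stand-in for the planner's Bitmapset */

static int
is_subset(Relids a, Relids b)
{
    return (a & ~b) == 0;
}

/* a clause "left op right" is usable if its sides match outer/inner
 * in either order, loosely mirroring clause_sides_match_join() */
static int
clause_sides_match(Relids left, Relids right, Relids outer, Relids inner)
{
    return (is_subset(left, outer) && is_subset(right, inner)) ||
           (is_subset(left, inner) && is_subset(right, outer));
}
```

Checking the parameterized clause against the child's own relids fails, while checking against the top parent's relids succeeds — which is exactly why the fix substitutes `outerrel->top_parent` and leaves reparameterization to try_nestloop_path().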
[
{
"msg_contents": "Over in [1], Masahiko and I found that using some bitmapset logic yields a\nuseful speedup in one part of the proposed radix tree patch. In addition to\nwhat's in bitmapset.h now, we'd need WORDNUM, BITNUM, RIGHTMOST_ONE and\nbmw_rightmost_one_pos() from bitmapset.c. The file tidbitmap.c has its own\ncopies of the first two, and we could adapt that strategy again. But\ninstead of three files defining these, it seems like it's time to consider\nmoving them somewhere more central.\n\nAttached is a simple form of this concept, including moving\nHAS_MULTIPLE_ONES just for consistency. One possible objection is the\npossibility of namespace clash. Thoughts?\n\nI also considered putting the macros and typedefs in pg_bitutils.h. One\norganizational advantage is that pg_bitutils.h already offers convenience\nfunction symbols where the parameter depends on SIZEOF_SIZE_T, so putting\nthe BITS_PER_BITMAPWORD versions there makes sense. But that way is not a\nclear win, so I didn't go that far.\n\n[1]\nhttps://www.postgresql.org/message-id/CAFBsxsHgP5LP9q%3DRq_3Q2S6Oyut67z%2BVpemux%2BKseSPXhJF7sg%40mail.gmail.com\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Dec 2022 18:51:50 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "move some bitmapset.c macros to bitmapset.h"
},
{
"msg_contents": "On 2022-Dec-05, John Naylor wrote:\n\n> diff --git a/src/backend/nodes/bitmapset.c b/src/backend/nodes/bitmapset.c\n> index b7b274aeff..3204b49738 100644\n> --- a/src/backend/nodes/bitmapset.c\n> +++ b/src/backend/nodes/bitmapset.c\n> @@ -26,33 +26,9 @@\n> #include \"port/pg_bitutils.h\"\n> \n> \n> -#define WORDNUM(x)\t((x) / BITS_PER_BITMAPWORD)\n> -#define BITNUM(x)\t((x) % BITS_PER_BITMAPWORD)\n\nIn this location, nobody can complain about the naming of these macros,\nsince they're just used to implement other bitmapset.c code. However,\nif you move them to the .h file, ISTM you should give them more\nmeaningful names.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n\n\n",
"msg_date": "Mon, 5 Dec 2022 13:05:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Dec-05, John Naylor wrote:\n>> -#define WORDNUM(x)\t((x) / BITS_PER_BITMAPWORD)\n>> -#define BITNUM(x)\t((x) % BITS_PER_BITMAPWORD)\n\n> In this location, nobody can complain about the naming of these macros,\n> since they're just used to implement other bitmapset.c code. However,\n> if you move them to the .h file, ISTM you should give them more\n> meaningful names.\n\nIMV these are absolutely private to bitmapset.c. I reject the idea\nthat they should be exposed publicly, under these names or any others.\n\nMaybe we need some more bitmapset primitive functions? What is it\nyou actually want to accomplish in the end?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 09:32:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 9:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Dec-05, John Naylor wrote:\n> >> -#define WORDNUM(x) ((x) / BITS_PER_BITMAPWORD)\n> >> -#define BITNUM(x) ((x) % BITS_PER_BITMAPWORD)\n>\n> > In this location, nobody can complain about the naming of these macros,\n> > since they're just used to implement other bitmapset.c code. However,\n> > if you move them to the .h file, ISTM you should give them more\n> > meaningful names.\n>\n> IMV these are absolutely private to bitmapset.c. I reject the idea\n> that they should be exposed publicly, under these names or any others.\n\nWell, they've already escaped to tidbitmap.c as a copy. How do you feel\nabout going that route?\n\n> Maybe we need some more bitmapset primitive functions? What is it\n> you actually want to accomplish in the end?\n\n An inserter into one type of node in a tree structure must quickly find a\nfree position in an array. We have a bitmap of 128 bits to indicate whether\nthe corresponding array position is free. 
The proposed coding is:\n\n /* get the first word with at least one bit not set */\nfor (idx = 0; idx < WORDNUM(128); idx++)\n{\n if (isset[idx] < ~((bitmapword) 0))\n break;\n}\n\n/* To get the first unset bit in X, get the first set bit in ~X */\ninverse = ~(isset[idx]);\nslotpos = idx * BITS_PER_BITMAPWORD;\nslotpos += bmw_rightmost_one_pos(inverse);\n\n/* mark the slot used */\nisset[idx] |= RIGHTMOST_ONE(inverse);\n\nreturn slotpos;\n\n...which, if it were reversed so that a set bit meant \"available\", would\nessentially be bms_first_member(), so a more primitive version of that\nmight work.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Dec 2022 11:17:04 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
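The proposed coding John quotes can be turned into a self-contained sketch. This is an illustrative stand-in, not the radix-tree patch itself: `rightmost_one_pos()` here is a portable loop where pg_bitutils.h would use a ctz intrinsic, and `RIGHTMOST_ONE` is open-coded as `x & (~x + 1)`:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t bitmapword;
#define BITS_PER_BITMAPWORD 64

/* portable stand-in for bmw_rightmost_one_pos(); assumes w != 0 */
static int
rightmost_one_pos(bitmapword w)
{
    int         pos = 0;

    while ((w & 1) == 0)
    {
        w >>= 1;
        pos++;
    }
    return pos;
}

/* Find and claim the first clear bit in a 128-bit "slot used" map,
 * following the coding quoted above. Assumes a free slot exists. */
static int
alloc_slot(bitmapword isset[2])
{
    int         idx;
    bitmapword  inverse;

    /* get the first word with at least one bit not set */
    for (idx = 0; idx < 128 / BITS_PER_BITMAPWORD; idx++)
    {
        if (isset[idx] != ~((bitmapword) 0))
            break;
    }

    /* to get the first unset bit in X, get the first set bit in ~X */
    inverse = ~isset[idx];

    /* mark the slot used: RIGHTMOST_ONE(inverse) open-coded */
    isset[idx] |= inverse & (~inverse + 1);

    return idx * BITS_PER_BITMAPWORD + rightmost_one_pos(inverse);
}
```

Slots are handed out in ascending order, and once the first word fills up the search moves on to the second — the behavior the radix-tree node insertion needs.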
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Mon, Dec 5, 2022 at 9:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> IMV these are absolutely private to bitmapset.c. I reject the idea\n>> that they should be exposed publicly, under these names or any others.\n\n> Well, they've already escaped to tidbitmap.c as a copy. How do you feel\n> about going that route?\n\nNot terribly pleased with that either, I must admit.\n\n>> Maybe we need some more bitmapset primitive functions? What is it\n>> you actually want to accomplish in the end?\n\n> \tfor (idx = 0; idx < WORDNUM(128); idx++)\n\nBITS_PER_BITMAPWORD is already public, so can't you spell that\n\n\tfor (idx = 0; idx < 128/BITS_PER_BITMAPWORD; idx++)\n\n> slotpos += bmw_rightmost_one_pos(inverse);\n\nI'm not terribly against exposing bmw_rightmost_one_pos, given\nthat it's just exposing the pg_rightmost_one_posXX variant that\nmatches BITS_PER_BITMAPWORD.\n\n> isset[idx] |= RIGHTMOST_ONE(inverse);\n\nAnd RIGHTMOST_ONE is something that could be made public, but\nI think it belongs in pg_bitutils.h, perhaps with a different\nname.\n\n> ...which, if it were reversed so that a set bit meant \"available\", would\n> essentially be bms_first_member(), so a more primitive version of that\n> might work.\n\nThat could be a reasonable direction to pursue as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 23:57:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 17:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> And RIGHTMOST_ONE is something that could be made public, but\n> I think it belongs in pg_bitutils.h, perhaps with a different\n> name.\n\nMaybe there's a path of lesser resistance... There's been a bit of\nwork in pg_bitutils.h to define some of the bit manipulation functions\nfor size_t types which wrap the 32 or 64-bit version of the function\naccordingly. Couldn't we just define one of those for\npg_rightmost_one_pos and then use a size_t as the bitmap word type?\n\nThen you'd end up with something like:\n\nfor (idx = 0; idx < 128 / (sizeof(size_t) * 8); idx++)\n if (isset[idx] != ~((size_t) 0))\n break;\nslotpos = idx * (sizeof(size_t) * 8) + pg_rightmost_one_pos_size_t(~isset[idx]);\n\nno need to export anything from bitmapset.c to do it like that.\n\nI've not looked at the code in question to know how often that form\nwould be needed. Maybe it would need a set of inlined functions\nsimilar to above in the same file this is being used in to save on\nrepeating too often.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Dec 2022 18:46:38 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
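A minimal sketch of the `pg_rightmost_one_pos_size_t()` wrapper David describes, assuming hypothetical 32- and 64-bit workers built on compiler builtins (the real pg_bitutils.h helpers also carry open-coded fallbacks); the caller must pass a nonzero word:

```c
/* Sketch of a size_t wrapper in the style of pg_bitutils.h's existing
 * size_t helpers.  The 32/64-bit workers below are hypothetical
 * stand-ins using GCC/Clang builtins; word must be nonzero. */
#include <stddef.h>
#include <stdint.h>

static inline int
pg_rightmost_one_pos32(uint32_t word)
{
    return __builtin_ctz(word);
}

static inline int
pg_rightmost_one_pos64(uint64_t word)
{
    return __builtin_ctzll(word);
}

/* dispatch on the platform's size_t width at compile time */
static inline int
pg_rightmost_one_pos_size_t(size_t word)
{
#if SIZE_MAX == UINT32_MAX
    return pg_rightmost_one_pos32(word);
#else
    return pg_rightmost_one_pos64(word);
#endif
}
```

With that in place, David's loop body reduces to `idx * (sizeof(size_t) * 8) + pg_rightmost_one_pos_size_t(~isset[idx])` with nothing exported from bitmapset.c.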
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Maybe there's a path of lesser resistance... There's been a bit of\n> work in pg_bitutils.h to define some of the bit manipulation functions\n> for size_t types which wrap the 32 or 64-bit version of the function\n> accordingly. Couldn't we just define one of those for\n> pg_rightmost_one_pos and then use a size_t as the bitmap word type?\n\nIt doesn't seem particularly wise to me to hard-wire the bitmap\nword size as sizeof(size_t). There is not a necessary connection\nbetween those things: there could be a performance reason to\nchoose a word width that's different from size_t.\n\nIf we do put RIGHTMOST_ONE functionality into pg_bitutils.h,\nI'd envision it as pg_bitutils.h exporting both 32-bit and\n64-bit versions of that, and then bitmapset.c choosing the\nappropriate one just like it chooses bmw_rightmost_one_pos.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Dec 2022 00:57:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
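Tom's arrangement could look roughly like this: the isolate-rightmost-one helper exported at both widths pg_bitutils.h-style, with the bitmapset side selecting the variant that matches BITS_PER_BITMAPWORD just as it already does for bmw_rightmost_one_pos(). All names here are hypothetical:

```c
/* Sketch of exposing RIGHTMOST_ONE at both widths pg_bitutils.h-style,
 * with the bitmapset side choosing the variant matching
 * BITS_PER_BITMAPWORD.  All names are hypothetical. */
#include <stdint.h>

static inline uint32_t
pg_rightmost_one32(uint32_t word)
{
    return word & (~word + 1);  /* clears all but the lowest set bit */
}

static inline uint64_t
pg_rightmost_one64(uint64_t word)
{
    return word & (~word + 1);
}

/* bitmapset.c side: select per word width, as for bmw_rightmost_one_pos */
#define BITS_PER_BITMAPWORD 64
typedef uint64_t bitmapword;

#if BITS_PER_BITMAPWORD == 32
#define bmw_rightmost_one(w) pg_rightmost_one32(w)
#else
#define bmw_rightmost_one(w) pg_rightmost_one64(w)
#endif
```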
{
"msg_contents": "On Tue, Dec 6, 2022 at 12:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > Well, they've already escaped to tidbitmap.c as a copy. How do you feel\n> > about going that route?\n>\n> Not terribly pleased with that either, I must admit.\n\nOkay, I won't pursue that further.\n\n> If we do put RIGHTMOST_ONE functionality into pg_bitutils.h,\n> I'd envision it as pg_bitutils.h exporting both 32-bit and\n> 64-bit versions of that, and then bitmapset.c choosing the\n> appropriate one just like it chooses bmw_rightmost_one_pos.\n\nHere's a quick go at that. I've not attempted to use it for what I need,\nbut it looks like it fits the bill.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Dec 2022 13:21:04 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Here's a quick go at that. I've not attempted to use it for what I need,\n> but it looks like it fits the bill.\n\nPasses a quick eyeball check, but of course we should have a\nconcrete external use for the new pg_bitutils functions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Dec 2022 01:29:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: move some bitmapset.c macros to bitmapset.h"
}
] |
[
{
"msg_contents": "The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \nreturns an implementation-dependent (i.e. non-deterministic) value from \nthe rows in its group.\n\nPFA an implementation of this aggregate.\n\nIdeally, the transition function would stop being called after the first \nnon-null was found, and then the entire aggregation would stop when all \nfunctions say they are finished[*], but this patch does not go anywhere \nnear that far.\n\nThis patch is based off of commit fb958b5da8.\n\n[*] I can imagine something like array_agg(c ORDER BY x LIMIT 5) to get \nthe top five of something without going through a LATERAL subquery.\n-- \nVik Fearing",
"msg_date": "Mon, 5 Dec 2022 15:57:13 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "ANY_VALUE aggregate"
},
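Vik's patch itself is attached to the mail and not reproduced here; as a standalone sketch, the transition semantics ANY_VALUE implies are simply "keep the first non-null input, ignore the rest":

```c
/* Standalone sketch of the transition semantics an any_value()
 * aggregate implies: capture the first non-null input and ignore the
 * rest.  Illustration only, not the code from the posted patch. */
#include <stdbool.h>

typedef struct
{
    bool    isnull;             /* true until a non-null input is seen */
    int     value;              /* captured value, valid when !isnull */
} any_value_state;

/* transition step, called once per input row */
void
any_value_trans(any_value_state *state, int newval, bool newisnull)
{
    if (state->isnull && !newisnull)
    {
        state->value = newval;
        state->isnull = false;
    }
    /* otherwise keep whatever was captured first */
}

/* fold a whole column; returns false (SQL NULL) if every input was null */
bool
any_value_agg(const int *vals, const bool *nulls, int n, int *result)
{
    any_value_state st = {true, 0};

    for (int i = 0; i < n; i++)
        any_value_trans(&st, vals[i], nulls[i]);
    if (st.isnull)
        return false;
    *result = st.value;
    return true;
}
```

The "ideally" part of the mail — stopping transfn calls once a value is captured — would need executor support beyond what a transition function alone can express.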
{
"msg_contents": "On Mon, Dec 5, 2022 at 7:57 AM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It\n> returns an implementation-dependent (i.e. non-deterministic) value from\n> the rows in its group.\n>\n> PFA an implementation of this aggregate.\n>\n>\nCan we please add \"first_value\" and \"last_value\" if we are going to add\n\"some_random_value\" to our library of aggregates?\n\nAlso, maybe we should have any_value do something like compute a 50/50\nchance that any new value seen replaces the existing chosen value, instead\nof simply returning the first value all the time. Maybe even prohibit the\nfirst value from being chosen so long as a second value appears.\n\nDavid J.",
"msg_date": "Mon, 5 Dec 2022 10:56:44 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Can we please add \"first_value\" and \"last_value\" if we are going to add\n> \"some_random_value\" to our library of aggregates?\n\nFirst and last according to what ordering? We have those in the\nwindow-aggregate case, and I don't think we want to encourage people\nto believe that \"first\" and \"last\" are meaningful otherwise.\n\nANY_VALUE at least makes it clear that you're getting an unspecified\none of the inputs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 13:04:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 1:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Can we please add \"first_value\" and \"last_value\" if we are going to add\n> > \"some_random_value\" to our library of aggregates?\n>\n> First and last according to what ordering? We have those in the\n> window-aggregate case, and I don't think we want to encourage people\n> to believe that \"first\" and \"last\" are meaningful otherwise.\n>\n> ANY_VALUE at least makes it clear that you're getting an unspecified\n> one of the inputs.\n\nI have personally implemented first_value() and last_value() in the\npast in cases where I had guaranteed the ordering myself, or didn't\ncare what ordering was used. I think they're perfectly sensible. But\nif we don't add them to core, at least they're easy to add in\nuser-space.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 13:06:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 12:57 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Mon, Dec 5, 2022 at 7:57 AM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n>\n>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It\n>> returns an implementation-dependent (i.e. non-deterministic) value from\n>> the rows in its group.\n>>\n>> PFA an implementation of this aggregate.\n>>\n>>\n> Can we please add \"first_value\" and \"last_value\" if we are going to add\n> \"some_random_value\" to our library of aggregates?\n>\n> Also, maybe we should have any_value do something like compute a 50/50\n> chance that any new value seen replaces the existing chosen value, instead\n> of simply returning the first value all the time. Maybe even prohibit the\n> first value from being chosen so long as a second value appears.\n>\n> David J.\n>\n\nAdding to the pile of wanted aggregates: in the past I've lobbied for\nonly_value() which is like first_value() but it raises an error on\nencountering a second value.",
"msg_date": "Mon, 5 Dec 2022 14:31:24 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 2:31 PM Corey Huinker <corey.huinker@gmail.com> wrote:\n> Adding to the pile of wanted aggregates: in the past I've lobbied for only_value() which is like first_value() but it raises an error on encountering a second value.\n\nYeah, that's another that I have hand-rolled in the past.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 14:41:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/5/22 15:57, Vik Fearing wrote:\n> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \n> returns an implementation-dependent (i.e. non-deterministic) value from \n> the rows in its group.\n> \n> PFA an implementation of this aggregate.\n\nHere is v2 of this patch. I had forgotten to update sql_features.txt.\n-- \nVik Fearing",
"msg_date": "Mon, 5 Dec 2022 21:18:57 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/5/22 18:56, David G. Johnston wrote:\n> Also, maybe we should have any_value do something like compute a 50/50\n> chance that any new value seen replaces the existing chosen value, instead\n> of simply returning the first value all the time. Maybe even prohibit the\n> first value from being chosen so long as a second value appears.\n\nThe spec says the result is implementation-dependent meaning we don't \neven need to document how it is obtained, but surely behavior like this \nwould preclude future optimizations like the ones I mentioned?\n\nI once wrote a random_agg() for a training course that used reservoir \nsampling to get an evenly distributed value from the inputs. Something \nlike that seems to be what you are looking for here. I don't see the \nuse case for adding it to core, though.\n\nThe use case for ANY_VALUE is compliance with the standard.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 04:46:37 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
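The reservoir-sampling random_agg() Vik mentions isn't shown; a generic size-1 reservoir sample looks roughly like this (each of the n inputs ends up retained with probability 1/n), as an illustration of the technique rather than the course code:

```c
/* Size-1 reservoir sampling: keep the i-th input with probability
 * 1/(i + 1), so every one of the n inputs is retained with overall
 * probability 1/n.  A sketch of the technique, not the course code. */
#include <stdlib.h>

int
reservoir_pick(const int *vals, int n, unsigned int seed)
{
    int     kept = vals[0];

    srand(seed);
    for (int i = 1; i < n; i++)
    {
        /* replace the kept value with probability 1/(i + 1) */
        if (rand() % (i + 1) == 0)
            kept = vals[i];
    }
    return kept;
}
```

Unlike any_value(), this requires visiting every input row, which is exactly the optimization Vik wants to keep open by not promising any particular result.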
{
"msg_contents": "On 12/5/22 20:31, Corey Huinker wrote:\n> \n> Adding to the pile of wanted aggregates: in the past I've lobbied for\n> only_value() which is like first_value() but it raises an error on\n> encountering a second value.\n\nI have had use for this in the past, but I can't remember why. What is \nyour use case for it? I will happily write a patch for it, and also \nsubmit it to the SQL Committee for inclusion in the standard. I need to \njustify why it's a good idea, though, and we would need to consider what \nto do with nulls now that there is <unique null treatment>.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 04:52:21 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
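As a sketch of the only_value() semantics Corey describes (a hypothetical aggregate, not one that exists in core): accept any number of copies of a single distinct value, and fail as soon as a second distinct value appears.

```c
/* Sketch of only_value() semantics: return the single distinct input,
 * or report a conflict when a second distinct value appears (an
 * aggregate would raise an error there).  Hypothetical; not in core. */
typedef enum
{
    ONLY_EMPTY,                 /* no input rows */
    ONLY_OK,                    /* exactly one distinct value */
    ONLY_CONFLICT               /* a second distinct value was seen */
} only_result;

only_result
only_value(const int *vals, int n, int *result)
{
    int     kept = 0;
    int     have = 0;

    for (int i = 0; i < n; i++)
    {
        if (!have)
        {
            kept = vals[i];
            have = 1;
        }
        else if (vals[i] != kept)
            return ONLY_CONFLICT;
    }
    if (!have)
        return ONLY_EMPTY;
    *result = kept;
    return ONLY_OK;
}
```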
{
"msg_contents": "On Mon, 5 Dec 2022 at 22:52, Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 12/5/22 20:31, Corey Huinker wrote:\n> >\n> > Adding to the pile of wanted aggregates: in the past I've lobbied for\n> > only_value() which is like first_value() but it raises an error on\n> > encountering a second value.\n>\n> I have had use for this in the past, but I can't remember why. What is\n> your use case for it? I will happily write a patch for it, and also\n> submit it to the SQL Committee for inclusion in the standard. I need to\n> justify why it's a good idea, though, and we would need to consider what\n> to do with nulls now that there is <unique null treatment>.\n>\n\nI have this in my local library of \"stuff that I really wish came with\nPostgres\", although I call it same_agg and it just goes to NULL if there\nare more than one distinct value.\n\nI sometimes use it when normalizing non-normalized data, but more commonly\nI use it when the query planner isn't capable of figuring out that a column\nI want to use in the output depends only on the grouping columns. For\nexample, something like:\n\nSELECT group_id, group_name, count(*) from group_group as gg natural join\ngroup_member as gm group by group_id\n\nI think that exact example actually does or is supposed to work now, since\nit realizes that I'm grouping on the primary key of group_group so the\ngroup_name field in the same table can't differ between rows of a group,\nbut most of the time when I expect that feature to allow me to use a field\nit actually doesn't.\n\nI have a vague notion that part of the issue may be the distinction between\ngg.group_id, gm.group_id, and group_id; maybe the above doesn't work but it\ndoes work if I group by gg.group_id instead of by group_id. But obviously\nthere should be no difference because in this query those 3 values cannot\ndiffer (outer joins are another story).\n\nFor reference, here is my definition:\n\nCREATE OR REPLACE FUNCTION same_sfunc (\n a anyelement,\n b anyelement\n) RETURNS anyelement\n LANGUAGE SQL IMMUTABLE STRICT\n SET search_path FROM CURRENT\nAS $$\n SELECT CASE WHEN $1 = $2 THEN $1 ELSE NULL END\n$$;\nCOMMENT ON FUNCTION same_sfunc (anyelement, anyelement) IS 'SFUNC for\nsame_agg aggregate; returns common value of parameters, or NULL if they\ndiffer';\n\nDROP AGGREGATE IF EXISTS same_agg (anyelement);\nCREATE AGGREGATE same_agg (anyelement) (\n SFUNC = same_sfunc,\n STYPE = anyelement\n);\nCOMMENT ON AGGREGATE same_agg (anyelement) IS 'Return the common non-NULL\nvalue of all non-NULL aggregated values, or NULL if some values differ';\n\nYou can tell I've had this for a while - there are several newer Postgres\nfeatures that could be used to clean this up noticeably.\n\nI also have a repeat_agg which returns the last value (not so interesting)\nbut which is sometimes useful as a window function (more interesting:\nreplace NULLs with the previous non-NULL value in the column).",
"msg_date": "Mon, 5 Dec 2022 23:06:46 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 8:46 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 12/5/22 18:56, David G. Johnston wrote:\n> > Also, maybe we should have any_value do something like compute a 50/50\n> > chance that any new value seen replaces the existing chosen value,\n> instead\n> > of simply returning the first value all the time. Maybe even prohibit\n> the\n> > first value from being chosen so long as a second value appears.\n>\n> The spec says the result is implementation-dependent meaning we don't\n> even need to document how it is obtained, but surely behavior like this\n> would preclude future optimizations like the ones I mentioned?\n>\n\nSo, given the fact that we don't actually want to name a function\nfirst_value (because some users are readily confused as to when the concept\nof first is actually valid or not) but some users do actually wish for this\nfunctionality - and you are proposing to implement it here anyway - how\nabout we actually do document that we promise to return the first non-null\nvalue encountered by the aggregate. We can then direct people to this\nfunction and just let them know to pretend the function is really named\nfirst_value in the case where they specify an order by. (last_value comes\nfor basically free with descending sorting).\n\n\n>\n> I once wrote a random_agg() for a training course that used reservoir\n> sampling to get an evenly distributed value from the inputs. Something\n> like that seems to be what you are looking for here. I don't see the\n> use case for adding it to core, though.\n>\n>\nThe use case was basically what Tom was saying - I don't want our users\nthat don't understand the necessity of order by, and don't read the\ndocumentation, to observe that we consistently return the first non-null\nvalue and assume that this is what the function promises when we are not\nmaking any such promise to them. As noted above, my preference at this\npoint would be to just make that promise.\n\nDavid J.",
"msg_date": "Mon, 5 Dec 2022 21:22:25 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/6/22 05:22, David G. Johnston wrote:\n> On Mon, Dec 5, 2022 at 8:46 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> On 12/5/22 18:56, David G. Johnston wrote:\n>>> Also, maybe we should have any_value do something like compute a 50/50\n>>> chance that any new value seen replaces the existing chosen value,\n>> instead\n>>> of simply returning the first value all the time. Maybe even prohibit\n>> the\n>>> first value from being chosen so long as a second value appears.\n>>\n>> The spec says the result is implementation-dependent meaning we don't\n>> even need to document how it is obtained, but surely behavior like this\n>> would preclude future optimizations like the ones I mentioned?\n>>\n> \n> So, given the fact that we don't actually want to name a function\n> first_value (because some users are readily confused as to when the concept\n> of first is actually valid or not) but some users do actually wish for this\n> functionality - and you are proposing to implement it here anyway - how\n> about we actually do document that we promise to return the first non-null\n> value encountered by the aggregate. We can then direct people to this\n> function and just let them know to pretend the function is really named\n> first_value in the case where they specify an order by. (last_value comes\n> for basically free with descending sorting).\n\nI can imagine an optimization that would remove an ORDER BY clause \nbecause it isn't needed for any other aggregate. There is no reason to \ncause an extra sort when the user has requested *any value*.\n\n>> I once wrote a random_agg() for a training course that used reservoir\n>> sampling to get an evenly distributed value from the inputs. Something\n>> like that seems to be what you are looking for here. 
I don't see the\n>> use case for adding it to core, though.\n>>\n>>\n> The use case was basically what Tom was saying - I don't want our users\n> that don't understand the necessity of order by, and don't read the\n> documentation, to observe that we consistently return the first non-null\n> value and assume that this is what the function promises when we are not\n> making any such promise to them. \n\nDocumenting something for the benefit of those who do not read the \ndocumentation is a ridiculous proposal.\n\n> As noted above, my preference at this point would be to just make that promise.\n\nI see no reason to paint ourselves into a corner here.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 05:48:15 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 12/6/22 05:22, David G. Johnston wrote:\n> > On Mon, Dec 5, 2022 at 8:46 PM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n> >\n> >> On 12/5/22 18:56, David G. Johnston wrote:\n> >>> Also, maybe we should have any_value do something like compute a 50/50\n> >>> chance that any new value seen replaces the existing chosen value,\n> >> instead\n> >>> of simply returning the first value all the time. Maybe even prohibit\n> >> the\n> >>> first value from being chosen so long as a second value appears.\n> >>\n> >> The spec says the result is implementation-dependent meaning we don't\n> >> even need to document how it is obtained, but surely behavior like this\n> >> would preclude future optimizations like the ones I mentioned?\n> >>\n> >\n> > So, given the fact that we don't actually want to name a function\n> > first_value (because some users are readily confused as to when the\n> concept\n> > of first is actually valid or not) but some users do actually wish for\n> this\n> > functionality - and you are proposing to implement it here anyway - how\n> > about we actually do document that we promise to return the first\n> non-null\n> > value encountered by the aggregate. We can then direct people to this\n> > function and just let them know to pretend the function is really named\n> > first_value in the case where they specify an order by. (last_value comes\n> > for basically free with descending sorting).\n>\n> I can imagine an optimization that would remove an ORDER BY clause\n> because it isn't needed for any other aggregate.\n\n\nI'm referring to the query:\n\nselect any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n// produces 1, per the documented implementation-defined behavior.\n\nSomeone writing:\n\nselect any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n\nIs not presently, nor am I saying, promised the value 1.\n\nI'm assuming you are thinking of the second query form, while the guarantee\nonly needs to apply to the first.\n\nDavid J.",
"msg_date": "Mon, 5 Dec 2022 21:57:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/6/22 05:57, David G. Johnston wrote:\n> On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> I can imagine an optimization that would remove an ORDER BY clause\n>> because it isn't needed for any other aggregate.\n> \n> \n> I'm referring to the query:\n> \n> select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n> // produces 1, per the documented implementation-defined behavior.\n\nImplementation-dependent. It is NOT implementation-defined, per spec.\n\nWe often loosen the spec rules when they don't make technical sense to \nus, but I don't know of any example of when we have tightened them.\n\n> Someone writing:\n> \n> select any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n> \n> Is not presently, nor am I saying, promised the value 1.\n> \n> I'm assuming you are thinking of the second query form, while the guarantee\n> only needs to apply to the first.\n\nI am saying that a theoretical pg_aggregate.aggorderdoesnotmatter could \nbestow upon ANY_VALUE the ability to make those two queries equivalent.\n\nIf you care about which value you get back, use something else.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 6 Dec 2022 06:40:30 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Mon, Dec 5, 2022 at 10:40 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 12/6/22 05:57, David G. Johnston wrote:\n> > On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n> >\n> >> I can imagine an optimization that would remove an ORDER BY clause\n> >> because it isn't needed for any other aggregate.\n> >\n> >\n> > I'm referring to the query:\n> >\n> > select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n> > // produces 1, per the documented implementation-defined behavior.\n>\n> Implementation-dependent. It is NOT implementation-defined, per spec.\n>\n> I really don't care all that much about the spec here given that ORDER BY\nin an aggregate call is non-spec.\n\nWe often loosen the spec rules when they don't make technical sense to\n> us, but I don't know of any example of when we have tightened them.\n>\n\nThe function has to choose some row from among its inputs, and the system\nhas to obey an order by specification added to the function call. You are\nde-facto creating a first_value aggregate (which is by definition\nnon-standard) whether you like it or not. I'm just saying to be upfront\nand honest about it - our users do want such a capability so maybe accept\nthat there is a first time for everything. Not that picking an\nadvantageous \"implementation-dependent\" implementation should be considered\ndeviating from the spec.\n\n\n> > Someone writing:\n> >\n> > select any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n> >\n> > Is not presently, nor am I saying, promised the value 1.\n> >\n> > I'm assuming you are thinking of the second query form, while the\n> guarantee\n> > only needs to apply to the first.\n>\n> I am saying that a theoretical pg_aggregate.aggorderdoesnotmatter could\n> bestow upon ANY_VALUE the ability to make those two queries equivalent.\n>\n\nThat theoretical idea should not be entertained. 
Removing a user's\nexplicitly added ORDER BY should be off-limits. Any approach at\noptimization here should simply look at whether an ORDER BY is specified\nand pass that information to the function. If the function itself really\nbelieves that ordering matters it can emit its own runtime exception\nstating that fact and the user can fix their query.\n\n\n> If you care about which value you get back, use something else.\n>\n>\nThere isn't a \"something else\" to use so that isn't presently an option.\n\nI suppose it comes down to what level of belief and care you have that\npeople will simply mis-use this function if it is added in its current form\nto get the desired first_value effect that it produces.\n\nDavid J.",
"msg_date": "Tue, 6 Dec 2022 20:22:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 4:57 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n...\n>\n>\n> I'm referring to the query:\n>\n> select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n> // produces 1, per the documented implementation-defined behavior.\n>\n> Someone writing:\n>\n> select any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n>\n> Is not presently, nor am I saying, promised the value 1.\n>\n\nShouldn't the 2nd query be producing an error, as it has an implied\nGROUP BY () - so column v cannot appear (unless aggregated) in SELECT\nand ORDER BY?\n\n\n",
"msg_date": "Wed, 7 Dec 2022 08:58:44 +0000",
"msg_from": "Pantelis Theodosiou <ypercube@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 1:58 AM Pantelis Theodosiou <ypercube@gmail.com>\nwrote:\n\n> On Tue, Dec 6, 2022 at 4:57 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> ...\n> >\n> >\n> > I'm referring to the query:\n> >\n> > select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n> > // produces 1, per the documented implementation-defined behavior.\n> >\n> > Someone writing:\n> >\n> > select any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n> >\n> > Is not presently, nor am I saying, promised the value 1.\n> >\n>\n> Shouldn't the 2nd query be producing an error, as it has an implied\n> GROUP BY () - so column v cannot appear (unless aggregated) in SELECT\n> and ORDER BY?\n>\n\nRight, that should be written as:\n\nselect any_value(v) from (values (2),(1),(3) order by 1) as vals (v);\n\n(you said SELECT; the discussion here is that any_value is going to be\nadded as a new aggregate function)\n\nDavid J.",
"msg_date": "Wed, 7 Dec 2022 06:36:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/7/22 04:22, David G. Johnston wrote:\n> On Mon, Dec 5, 2022 at 10:40 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> On 12/6/22 05:57, David G. Johnston wrote:\n>>> On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org>\n>> wrote:\n>>>\n>>>> I can imagine an optimization that would remove an ORDER BY clause\n>>>> because it isn't needed for any other aggregate.\n>>>\n>>>\n>>> I'm referring to the query:\n>>>\n>>> select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n>>> // produces 1, per the documented implementation-defined behavior.\n>>\n>> Implementation-dependent. It is NOT implementation-defined, per spec.\n>\n> I really don't care all that much about the spec here given that ORDER BY\n> in an aggregate call is non-spec.\n\n\nWell, this is demonstrably wrong.\n\n<array aggregate function> ::=\n ARRAY_AGG <left paren>\n <value expression>\n [ ORDER BY <sort specification list> ]\n <right paren>\n\n\n>> We often loosen the spec rules when they don't make technical sense to\n>> us, but I don't know of any example of when we have tightened them.\n> \n> The function has to choose some row from among its inputs,\n\n\nTrue.\n\n\n> and the system has to obey an order by specification added to the function call.\n\n\nFalse.\n\n\n> You are de-facto creating a first_value aggregate (which is by definition\n> non-standard) whether you like it or not.\n\n\nI am de jure creating an any_value aggregate (which is by definition \nstandard) whether you like it or not.\n\n\n> I'm just saying to be upfront\n> and honest about it - our users do want such a capability so maybe accept\n> that there is a first time for everything. 
Not that picking an\n> advantageous \"implementation-dependent\" implementation should be considered\n> deviating from the spec.\n> \n> \n>>> Someone writing:\n>>>\n>>> select any_value(v) from (values (2),(1),(3)) as vals (v) order by v;\n>>>\n>>> Is not presently, nor am I saying, promised the value 1.\n>>>\n>>> I'm assuming you are thinking of the second query form, while the\n>> guarantee\n>>> only needs to apply to the first.\n>>\n>> I am saying that a theoretical pg_aggregate.aggorderdoesnotmatter could\n>> bestow upon ANY_VALUE the ability to make those two queries equivalent.\n>>\n> \n> That theoretical idea should not be entertained. Removing a user's\n> explicitly added ORDER BY should be off-limits. Any approach at\n> optimization here should simply look at whether an ORDER BY is specified\n> and pass that information to the function. If the function itself really\n> believes that ordering matters it can emit its own runtime exception\n> stating that fact and the user can fix their query.\n\n\nIt absolutely should be entertained, and I plan on doing so in an \nupcoming thread. Whether it errors or ignores is something that should \nbe discussed on that thread.\n\n\n>> If you care about which value you get back, use something else.\n>\n> There isn't a \"something else\" to use so that isn't presently an option.\n\n\nThe query\n\n SELECT proposed_first_value(x ORDER BY y) FROM ...\n\nis equivalent to\n\n SELECT (ARRAY_AGG(x ORDER BY y))[1] FROM ...\n\nso I am not very sympathetic to your claim of \"no other option\".\n\n\n> I suppose it comes down to what level of belief and care you have that\n> people will simply mis-use this function if it is added in its current form\n> to get the desired first_value effect that it produces.\n\n\nPeople who rely on explicitly undefined behavior get what they deserve \nwhen the implementation changes.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 06:00:54 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 10:00 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 12/7/22 04:22, David G. Johnston wrote:\n> > On Mon, Dec 5, 2022 at 10:40 PM Vik Fearing <vik@postgresfriends.org>\n> wrote:\n> >\n> >> On 12/6/22 05:57, David G. Johnston wrote:\n> >>> On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org>\n> >> wrote:\n> >>>\n> >>>> I can imagine an optimization that would remove an ORDER BY clause\n> >>>> because it isn't needed for any other aggregate.\n> >>>\n> >>>\n> >>> I'm referring to the query:\n> >>>\n> >>> select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n> >>> // produces 1, per the documented implementation-defined behavior.\n> >>\n> >> Implementation-dependent. It is NOT implementation-defined, per spec.\n> >\n> > I really don't care all that much about the spec here given that ORDER BY\n> > in an aggregate call is non-spec.\n>\n>\n> Well, this is demonstrably wrong.\n>\n> <array aggregate function> ::=\n> ARRAY_AGG <left paren>\n> <value expression>\n> [ ORDER BY <sort specification list> ]\n> <right paren>\n>\n\nDemoable only by you and a few others...\n\nWe should update our documentation - the source of SQL Standard knowledge\nfor mere mortals.\n\nhttps://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n\n\"Note: The ability to specify both DISTINCT and ORDER BY in an aggregate\nfunction is a PostgreSQL extension.\"\n\nApparently only DISTINCT remains as our extension.\n\n\n>\n> > You are de-facto creating a first_value aggregate (which is by definition\n> > non-standard) whether you like it or not.\n>\n>\n> I am de jure creating an any_value aggregate (which is by definition\n> standard) whether you like it or not.\n>\n\nYes, both statements seem true. 
At least until we decide to start ignoring\na user's explicit order by clause.\n\n\n>\n> >> If you care about which value you get back, use something else.\n> >\n> > There isn't a \"something else\" to use so that isn't presently an option.\n>\n>\n> The query\n>\n> SELECT proposed_first_value(x ORDER BY y) FROM ...\n>\n> is equivalent to\n>\n> SELECT (ARRAY_AGG(x ORDER BY y))[1] FROM ...\n>\n> so I am not very sympathetic to your claim of \"no other option\".\n>\n\nSemantically, yes, in terms of performance, not so much, for any\nnon-trivial sized group.\n\nI'm done, and apologize for getting too emotionally invested in this. I\nhope to get others to voice enough +1s to get a first_value function into\ncore along-side this one (which makes the above discussion either moot or\ndeferred until there is a concrete use case for ignoring an explicit ORDER\nBY). If that doesn't happen, well, it isn't going to make or break us\neither way.\n\nDavid J.",
"msg_date": "Wed, 7 Dec 2022 22:48:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 12/8/22 06:48, David G. Johnston wrote:\n> On Wed, Dec 7, 2022 at 10:00 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> On 12/7/22 04:22, David G. Johnston wrote:\n>>> On Mon, Dec 5, 2022 at 10:40 PM Vik Fearing <vik@postgresfriends.org>\n>> wrote:\n>>>\n>>>> On 12/6/22 05:57, David G. Johnston wrote:\n>>>>> On Mon, Dec 5, 2022 at 9:48 PM Vik Fearing <vik@postgresfriends.org>\n>>>> wrote:\n>>>>>\n>>>>>> I can imagine an optimization that would remove an ORDER BY clause\n>>>>>> because it isn't needed for any other aggregate.\n>>>>>\n>>>>>\n>>>>> I'm referring to the query:\n>>>>>\n>>>>> select any_value(v order by v) from (values (2),(1),(3)) as vals (v);\n>>>>> // produces 1, per the documented implementation-defined behavior.\n>>>>\n>>>> Implementation-dependent. It is NOT implementation-defined, per spec.\n>>>\n>>> I really don't care all that much about the spec here given that ORDER BY\n>>> in an aggregate call is non-spec.\n>>\n>>\n>> Well, this is demonstrably wrong.\n>>\n>> <array aggregate function> ::=\n>> ARRAY_AGG <left paren>\n>> <value expression>\n>> [ ORDER BY <sort specification list> ]\n>> <right paren>\n>>\n> \n> Demoable only by you and a few others...\n\n\nThe standard is publicly available. It is strange that we, being so \nopen, hold ourselves to such a closed standard; but that is what we do.\n\n\n> We should update our documentation - the source of SQL Standard knowledge\n> for mere mortals.\n> \n> https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n> \n> \"Note: The ability to specify both DISTINCT and ORDER BY in an aggregate\n> function is a PostgreSQL extension.\"\n> \n> Apparently only DISTINCT remains as our extension.\n\n\nUsing DISTINCT in an aggregate is also standard. 
What that note is \nsaying is that the standard does not allow *both* to be used at the same \ntime.\n\nThe standard defines these things for specific aggregates whereas we are \nmuch more generic about it and therefore have to deal with the combinations.\n\nI have submitted a doc patch to clarify that.\n\n\n>>> You are de-facto creating a first_value aggregate (which is by definition\n>>> non-standard) whether you like it or not.\n>>\n>>\n>> I am de jure creating an any_value aggregate (which is by definition\n>> standard) whether you like it or not.\n>>\n> \n> Yes, both statements seem true. At least until we decide to start ignoring\n> a user's explicit order by clause.\n\n\nI ran some tests and including an ORDER BY in an aggregate that doesn't \ncare (like COUNT) is devastating for performance. I will be proposing a \nsolution to that soon and I invite you to participate in that \nconversation when I do.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 13:32:30 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 05.12.22 21:18, Vik Fearing wrote:\n> On 12/5/22 15:57, Vik Fearing wrote:\n>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \n>> returns an implementation-dependent (i.e. non-deterministic) value \n>> from the rows in its group.\n>>\n>> PFA an implementation of this aggregate.\n> \n> Here is v2 of this patch. I had forgotten to update sql_features.txt.\n\nIn your patch, the documentation says the definition is any_value(\"any\") \nbut the catalog definitions are any_value(anyelement). Please sort that \nout.\n\nSince the transition function is declared strict, null values don't need \nto be checked. I think the whole function could be reduced to\n\nDatum\nany_value_trans(PG_FUNCTION_ARGS)\n{\n PG_RETURN_DATUM(PG_GETARG_DATUM(0));\n}\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 16:06:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 05.12.22 21:18, Vik Fearing wrote:\n>> On 12/5/22 15:57, Vik Fearing wrote:\n>>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \n>>> returns an implementation-dependent (i.e. non-deterministic) value \n>>> from the rows in its group.\n\n> Since the transition function is declared strict, null values don't need \n> to be checked.\n\nHmm, but should it be strict? That means that what it's returning\nis *not* \"any value\" but \"any non-null value\". What does the draft\nspec have to say about that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:55:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 1/18/23 16:55, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 05.12.22 21:18, Vik Fearing wrote:\n>>> On 12/5/22 15:57, Vik Fearing wrote:\n>>>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It\n>>>> returns an implementation-dependent (i.e. non-deterministic) value\n>>>> from the rows in its group.\n> \n>> Since the transition function is declared strict, null values don't need\n>> to be checked.\n> \n> Hmm, but should it be strict? That means that what it's returning\n> is *not* \"any value\" but \"any non-null value\". What does the draft\n> spec have to say about that?\n\nIt falls into the same category as AVG() etc. That is, nulls are \nremoved before calculation.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 16:03:55 +0000",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 1/18/23 16:06, Peter Eisentraut wrote:\n> On 05.12.22 21:18, Vik Fearing wrote:\n>> On 12/5/22 15:57, Vik Fearing wrote:\n>>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \n>>> returns an implementation-dependent (i.e. non-deterministic) value \n>>> from the rows in its group.\n>>>\n>>> PFA an implementation of this aggregate.\n>>\n>> Here is v2 of this patch. I had forgotten to update sql_features.txt.\n> \n> In your patch, the documentation says the definition is any_value(\"any\") \n> but the catalog definitions are any_value(anyelement). Please sort that \n> out.\n> \n> Since the transition function is declared strict, null values don't need \n> to be checked.\n\nThank you for the review. Attached is a new version rebased to d540a02a72.\n-- \nVik Fearing",
"msg_date": "Wed, 18 Jan 2023 17:01:34 +0000",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 18.01.23 18:01, Vik Fearing wrote:\n> On 1/18/23 16:06, Peter Eisentraut wrote:\n>> On 05.12.22 21:18, Vik Fearing wrote:\n>>> On 12/5/22 15:57, Vik Fearing wrote:\n>>>> The SQL:2023 Standard defines a new aggregate named ANY_VALUE. It \n>>>> returns an implementation-dependent (i.e. non-deterministic) value \n>>>> from the rows in its group.\n>>>>\n>>>> PFA an implementation of this aggregate.\n>>>\n>>> Here is v2 of this patch. I had forgotten to update sql_features.txt.\n>>\n>> In your patch, the documentation says the definition is \n>> any_value(\"any\") but the catalog definitions are \n>> any_value(anyelement). Please sort that out.\n>>\n>> Since the transition function is declared strict, null values don't \n>> need to be checked.\n> \n> Thank you for the review. Attached is a new version rebased to d540a02a72.\n\nThis looks good to me now.\n\n\n",
"msg_date": "Thu, 19 Jan 2023 15:27:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 06:01, Vik Fearing <vik@postgresfriends.org> wrote:\n> Thank you for the review. Attached is a new version rebased to d540a02a72.\n\nI've only a bunch of nit-picks, personal preferences and random\nthoughts to offer as a review:\n\n1. I'd be inclined *not* to mention the possible future optimisation in:\n\n+ * Currently this just returns the first value, but in the future it might be\n+ * able to signal to the aggregate that it does not need to be called anymore.\n\nI think it's unlikely that the transfn would \"signal\" such a thing. It\nseems more likely if we did anything about it that nodeAgg.c would\nmaybe have some additional knowledge not to call that function if the\nagg state already has a value. Just so we're not preempting how we\nmight do such a thing in the future, it seems best just to remove the\nmention of it. I don't really think it serves as a good reminder that\nwe might want to do this one day anyway.\n\n2. +any_value_trans(PG_FUNCTION_ARGS)\n\nMany of transition function names end in \"transfn\", not \"trans\". I\nthink it's better to follow the existing (loosely followed) naming\npattern that a few aggregates seem to follow rather than invent a new\none.\n\n3. I tend to try to copy the capitalisation of keywords from the\nsurrounding regression tests. I see the following breaks that.\n\n+SELECT any_value(v) FILTER (WHERE v > 2) FROM (VALUES (1), (2), (3)) AS v (v);\n\n(obviously, ideally, we'd always just follow the same capitalisation\nof keywords everywhere in each .sql file, but we've long broken that\nand the best way can do is be consistent with surrounding tests)\n\n4. I think I'd use the word \"Returns\" instead of \"Chooses\" in:\n\n+ Chooses a non-deterministic value from the non-null input values.\n\n5. I've not managed to find a copy of the 2023 draft, so I'm assuming\nyou've got the ignoring of NULLs correct. I tried to see what other\ndatabases do using https://www.db-fiddle.com/ . 
I was surprised to see\nMySQL 8.0 returning NULL with:\n\ncreate table a (a int, b int);\ninsert into a values(1,null),(1,2),(1,null);\n\nselect any_value(b) from a group by a;\n\nI'd have expected \"2\" to be returned. (It gets even weirder without\nthe GROUP BY clause, so I'm not too hopeful any useful information can\nbe obtained from looking here)\n\nI know MySQL doesn't follow the spec quite as closely as we do, so I\nmight not be that surprised if they didn't pay attention to the\nwording when implementing this, however, I've not seen the spec, so I\ncan only speculate what value should be returned. Certainly not doing\nany aggregation for any_value() when there is no GROUP BY seems\nstrange. I see they don't do the same with sum(). Perhaps this is just\na side effect of their loose standards when it came to columns in the\nSELECT clause that are not in the GROUP BY clause.\n\n6. Is it worth adding a WindowFunc test somewhere in window.sql with\nan any_value(...) over (...)? Is what any_value() returns as a\nWindowFunc equally as non-deterministic as when it's used as an\nAggref? Can we assume there's no guarantee that it'll return the same\nvalue for each partition in each row? Does the spec mention anything\nabout that?\n\n7. I wondered if it's worth adding a\nSupportRequestOptimizeWindowClause support function for this\naggregate. I'm thinking that it might not be as likely people would\nuse something more specific like first_value/nth_value/last_value\ninstead of using any_value as a WindowFunc. Also, I'm currently\nthinking that a SupportRequestWFuncMonotonic for any_value() is not\nworth the dozen or so lines of code it would take to write it. I'm\nassuming it would always be a MONOTONICFUNC_BOTH function. It seems\nunlikely that someone would have a subquery with a WHERE clause in the\nupper-level query referencing the any_value() aggregate. 
Thought I'd\nmention both of these things anyway as someone else might think of\nsome good reason we should add them that I didn't think of.\n\nDavid\n\n\n",
"msg_date": "Mon, 23 Jan 2023 20:50:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 1/23/23 08:50, David Rowley wrote:\n> On Thu, 19 Jan 2023 at 06:01, Vik Fearing <vik@postgresfriends.org> wrote:\n>> Thank you for the review. Attached is a new version rebased to d540a02a72.\n> \n> I've only a bunch of nit-picks, personal preferences and random\n> thoughts to offer as a review:\n> \n> 1. I'd be inclined *not* to mention the possible future optimisation in:\n> \n> + * Currently this just returns the first value, but in the future it might be\n> + * able to signal to the aggregate that it does not need to be called anymore.\n> \n> I think it's unlikely that the transfn would \"signal\" such a thing. It\n> seems more likely if we did anything about it that nodeAgg.c would\n> maybe have some additional knowledge not to call that function if the\n> agg state already has a value. Just so we're not preempting how we\n> might do such a thing in the future, it seems best just to remove the\n> mention of it. I don't really think it serves as a good reminder that\n> we might want to do this one day anyway.\n\nModified. My logic in having the transition function signal that it is \nfinished is to one day allow something like:\n\n array_agg(x order by y limit z)\n\n> 2. +any_value_trans(PG_FUNCTION_ARGS)\n> \n> Many of transition function names end in \"transfn\", not \"trans\". I\n> think it's better to follow the existing (loosely followed) naming\n> pattern that a few aggregates seem to follow rather than invent a new\n> one.\n\nRenamed.\n\n> 3. I tend to try to copy the capitalisation of keywords from the\n> surrounding regression tests. I see the following breaks that.\n> \n> +SELECT any_value(v) FILTER (WHERE v > 2) FROM (VALUES (1), (2), (3)) AS v (v);\n> \n> (obviously, ideally, we'd always just follow the same capitalisation\n> of keywords everywhere in each .sql file, but we've long broken that\n> and the best way can do is be consistent with surrounding tests)\n\nDowncased.\n\n> 4. 
I think I'd use the word \"Returns\" instead of \"Chooses\" in:\n> \n> + Chooses a non-deterministic value from the non-null input values.\n\nDone.\n\n> 5. I've not managed to find a copy of the 2023 draft, so I'm assuming\n> you've got the ignoring of NULLs correct.\n\nYes, I do. This is part of <computational operation>, so SQL:2016 10.9 \nGR 7.a applies.\n\n> 6. Is it worth adding a WindowFunc test somewhere in window.sql with\n> an any_value(...) over (...)? Is what any_value() returns as a\n> WindowFunc equally as non-deterministic as when it's used as an\n> Aggref? Can we assume there's no guarantee that it'll return the same\n> value for each partition in each row? Does the spec mention anything\n> about that?\n\nThis is governed by SQL:2016 10.9 GR 1.d and 1.e which defines the \nsource rows for the aggregate: either a group or a window frame. There \nis no difference in behavior. I don't think a windowed test is useful \nhere unless I were to implement moving transitions. I think that might \nbe overkill for this function.\n\n> 7. I wondered if it's worth adding a\n> SupportRequestOptimizeWindowClause support function for this\n> aggregate. I'm thinking that it might not be as likely people would\n> use something more specific like first_value/nth_value/last_value\n> instead of using any_value as a WindowFunc. Also, I'm currently\n> thinking that a SupportRequestWFuncMonotonic for any_value() is not\n> worth the dozen or so lines of code it would take to write it. I'm\n> assuming it would always be a MONOTONICFUNC_BOTH function. It seems\n> unlikely that someone would have a subquery with a WHERE clause in the\n> upper-level query referencing the any_value() aggregate. Thought I'd\n> mention both of these things anyway as someone else might think of\n> some good reason we should add them that I didn't think of.\n\nI thought about this for a while and decided that it was not worthwhile.\n\nv4 attached. 
I am putting this back to Needs Review in the commitfest \napp, but these changes were editorial so it is probably RfC like Peter \nhad set it. I will let you be the judge of that.\n-- \nVik Fearing",
"msg_date": "Thu, 9 Feb 2023 10:42:16 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "I could have used such an aggregate in the past, so +1.\n\nThis is maybe getting into nit-picking, but perhaps it should be\ndocumented as returning an \"arbitrary\" value instead of a\n\"non-deterministic\" one? Technically the value is deterministic:\nthere's a concrete algorithm specifying how it's selected. However,\nthe algorithm is reserved as an implementation detail, since the\nfunction is designed for cases in which the caller should not care\nwhich value is returned.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Tue, 14 Feb 2023 22:30:11 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
},
{
"msg_contents": "On 09.02.23 10:42, Vik Fearing wrote:\n> v4 attached. I am putting this back to Needs Review in the commitfest \n> app, but these changes were editorial so it is probably RfC like Peter \n> had set it. I will let you be the judge of that.\n\nI have committed this.\n\nI made a few small last-minute tweaks:\n\n- Changed \"non-deterministic\" to \"arbitrary\", as suggested by Maciek \nSakrejda nearby. This seemed like a handier and less jargony term.\n\n- Removed trailing whitespace in misc.c.\n\n- Changed the function description in pg_proc.dat. Apparently, we are \nusing 'aggregate transition function' there for all aggregate functions \n(instead of 'any_value transition function' etc.).\n\n- Made the tests a bit more interested by feeding in more rows and a mix \nof null and nonnull values.\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:56:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ANY_VALUE aggregate"
}
] |
[
{
"msg_contents": "Hello,\n\nI think this is the correct mail list for feature/modification requests.\nIf not please let me know which mail list I should use.\n\nWould it be possible to modify the information_schema.view_table_usage\n(VTU) to include materialized views? (\nhttps://www.postgresql.org/docs/current/infoschema-view-table-usage.html)\n\nCurrently when querying VTU, if the view you're interested in queries a\nmaterialized view, then it doesn't show up in VTU. For example, I was\ntrying to determine which tables/views made up a particular view:\n\n--View is present in pg_views\ndrps=> select schemaname, viewname, viewowner\ndrps-> from pg_views\ndrps-> where viewname = 'platform_version_v';\n schemaname | viewname | viewowner\n------------+--------------------+-----------\n event | platform_version_v | drps\n\n\n-- Check view_table_usage for objects that are queried by the\nplatform_version_v view, but it doesn't find any:\n\ndrps=> select *\ndrps=> from information_schema.view_table_usage\ndrps=> where view_name = 'platform_version_v';\n\n view_catalog | view_schema | view_name | table_catalog | table_schema |\ntable_name\n--------------+-------------+-----------+---------------+--------------+------------\n(0 rows)\n\nI looked at the pg_views.definition column for platform_version_v, and it\nis querying a materialized view.\n\nThe source code for information_schema.view_table_usage view is at\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/catalog/information_schema.sql;h=18725a02d1fb6ffda3d218033b972a0ff23aac3b;hb=HEAD#l2605\n\nIf I change lines 2605 and 2616 to:\n\n2605: AND v.relkind in ('v','m')\n2616: AND t.relkind IN ('r', 'v', 'f', 'p','m')\n\nand compile the modified version of VTU in my test schema, then I see the\nMV that is used in the query of platform_version_v view:\n\ndrps=> select *\ndrps=> from test.view_table_usage\ndrps=> where view_name = 'platform_version_v';\n\n view_catalog | view_schema | view_name | 
table_catalog |\ntable_schema | table_name\n--------------+-------------+--------------------+---------------+--------------+---------------------\n drps | event | platform_version_v | drps | event\n | platform_version_mv\n\n\nMy method of changing those 2 lines of code may not be the best or correct\nsolution, it's just to illustrate what I'm looking for.\n\nThanks!\n\nJon",
"msg_date": "Mon, 5 Dec 2022 11:39:01 -0600",
"msg_from": "Jonathan Lemig <jtlemig@gmail.com>",
"msg_from_op": true,
"msg_subject": "Request to modify view_table_usage to include materialized views"
},
{
"msg_contents": "Jonathan Lemig <jtlemig@gmail.com> writes:\n> Would it be possible to modify the information_schema.view_table_usage\n> (VTU) to include materialized views?\n\nIs it physically possible? Sure, it'd just take adjustment of some\nrelkind checks.\n\nHowever, it's against project policy. We consider that because the\ninformation_schema views are defined by the SQL standard, they should\nonly show standardized properties of standardized objects. If the\nstandard ever gains materialized views, we'd adjust those views to\nshow them. In the meantime, they aren't there.\n\nIt would make little sense in any case to adjust only this one view.\nBut if we were to revisit that policy, there are a lot of corner\ncases that would have to be thought through --- things that almost\nfit into the views, or that might appear in a very misleading way,\netc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 05 Dec 2022 12:53:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request to modify view_table_usage to include materialized views"
},
{
"msg_contents": "Hey Tom,\n\nThanks for the info. I'll submit a document change request instead.\n\nThanks!\n\nJon\n\nOn Mon, Dec 5, 2022 at 11:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jonathan Lemig <jtlemig@gmail.com> writes:\n> > Would it be possible to modify the information_schema.view_table_usage\n> > (VTU) to include materialized views?\n>\n> Is it physically possible? Sure, it'd just take adjustment of some\n> relkind checks.\n>\n> However, it's against project policy. We consider that because the\n> information_schema views are defined by the SQL standard, they should\n> only show standardized properties of standardized objects. If the\n> standard ever gains materialized views, we'd adjust those views to\n> show them. In the meantime, they aren't there.\n>\n> It would make little sense in any case to adjust only this one view.\n> But if we were to revisit that policy, there are a lot of corner\n> cases that would have to be thought through --- things that almost\n> fit into the views, or that might appear in a very misleading way,\n> etc.\n>\n> regards, tom lane\n>",
"msg_date": "Mon, 5 Dec 2022 12:16:03 -0600",
"msg_from": "Jonathan Lemig <jtlemig@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request to modify view_table_usage to include materialized views"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn logical decoding, when logical_decoding_work_mem is exceeded, the changes are\nsent to output plugin in streaming mode. But there is a restriction that the\nminimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\nallow sending every change to output plugin without waiting until\nlogical_decoding_work_mem is exceeded.\n\nThis helps to test streaming mode. For example, to test \"Avoid streaming the\ntransaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\nmessages. With the new option, it can be tested with fewer changes and in less\ntime. Also, this new option helps to test more scenarios for \"Perform streaming\nlogical transactions by background workers\" [2].\n\n[1] https://www.postgresql.org/message-id/CAFiTN-tHK=7LzfrPs8fbT2ksrOJGQbzywcgXst2bM9-rJJAAUg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n\nRegards,\nShi yu",
"msg_date": "Tue, 6 Dec 2022 06:23:43 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> sent to output plugin in streaming mode. But there is a restriction that the\n> minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> allow sending every change to output plugin without waiting until\n> logical_decoding_work_mem is exceeded.\n>\n> This helps to test streaming mode. For example, to test \"Avoid streaming the\n> transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> messages. With the new option, it can be tested with fewer changes and in less\n> time. Also, this new option helps to test more scenarios for \"Perform streaming\n> logical transactions by background workers\" [2].\n>\n\nYeah, I think this can also help in reducing the time for various\ntests in test_decoding/stream and\nsrc/test/subscription/t/*_stream_*.pl file by reducing the number of\nchanges required to invoke streaming mode. Can we think of making this\nGUC extendible to even test more options on server-side (publisher)\nand client-side (subscriber) cases? For example, we can have something\nlike logical_replication_mode with the following valid values: (a)\nserver_serialize: this will serialize each change to file on\npublishers and then on commit restore and send all changes; (b)\nserver_stream: this will stream each change as currently proposed in\nyour patch. Then if we want to extend it for subscriber-side testing\nthen we can introduce new options like client_serialize for the case\nbeing discussed in the email [1].\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAD21AoAVUfDrm4-%3DykihNAmR7bTX-KpHXM9jc42RbHePJv5k1w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 15:58:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > sent to output plugin in streaming mode. But there is a restriction that the\n> > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > allow sending every change to output plugin without waiting until\n> > logical_decoding_work_mem is exceeded.\n> >\n> > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > messages. With the new option, it can be tested with fewer changes and in less\n> > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > logical transactions by background workers\" [2].\n> >\n>\n> Yeah, I think this can also help in reducing the time for various\n> tests in test_decoding/stream and\n> src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> changes required to invoke streaming mode.\n\n+1\n\n> Can we think of making this\n> GUC extendible to even test more options on server-side (publisher)\n> and client-side (subscriber) cases? For example, we can have something\n> like logical_replication_mode with the following valid values: (a)\n> server_serialize: this will serialize each change to file on\n> publishers and then on commit restore and send all changes; (b)\n> server_stream: this will stream each change as currently proposed in\n> your patch. Then if we want to extend it for subscriber-side testing\n> then we can introduce new options like client_serialize for the case\n> being discussed in the email [1].\n\nSetting logical_replication_mode = 'client_serialize' implies that the\npublisher behaves as server_stream? 
or do you mean we can set like\nlogical_replication_mode = 'server_stream, client_serialize'?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 6 Dec 2022 22:48:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > sent to output plugin in streaming mode. But there is a restriction that the\n> > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > allow sending every change to output plugin without waiting until\n> > logical_decoding_work_mem is exceeded.\n> >\n> > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > messages. With the new option, it can be tested with fewer changes and in less\n> > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > logical transactions by background workers\" [2].\n> >\n\n+1\n\n>\n> Yeah, I think this can also help in reducing the time for various\n> tests in test_decoding/stream and\n> src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> changes required to invoke streaming mode. Can we think of making this\n> GUC extendible to even test more options on server-side (publisher)\n> and client-side (subscriber) cases? For example, we can have something\n> like logical_replication_mode with the following valid values: (a)\n> server_serialize: this will serialize each change to file on\n> publishers and then on commit restore and send all changes; (b)\n> server_stream: this will stream each change as currently proposed in\n> your patch. 
Then if we want to extend it for subscriber-side testing\n> then we can introduce new options like client_serialize for the case\n> being discussed in the email [1].\n>\n> Thoughts?\n\nThere is potential for lots of developer GUCs for testing/debugging in\nthe area of logical replication but IMO it might be better to keep\nthem all separated. Putting everything into a single\n'logical_replication_mode' might cause difficulties later when/if you\nwant combinations of the different modes.\n\nFor example, instead of\n\nlogical_replication_mode = XXX/YYY/ZZZ\n\nmaybe something like below will give more flexibility.\n\nlogical_replication_dev_XXX = true/false\nlogical_replication_dev_YYY = true/false\nlogical_replication_dev_ZZZ = true/false\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 7 Dec 2022 10:45:52 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:23 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> sent to output plugin in streaming mode. But there is a restriction that the\n> minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> allow sending every change to output plugin without waiting until\n> logical_decoding_work_mem is exceeded.\n>\n\nSome review comments for patch v1-0001.\n\n1. Typos\n\nIn several places \"Wheather/wheather\" -> \"Whether/whether\"\n\n======\n\nsrc/backend/replication/logical/reorderbuffer.c\n\n2. ReorderBufferCheckMemoryLimit\n\n {\n ReorderBufferTXN *txn;\n\n- /* bail out if we haven't exceeded the memory limit */\n- if (rb->size < logical_decoding_work_mem * 1024L)\n+ /*\n+ * Stream the changes immediately if force_stream_mode is on and the output\n+ * plugin supports streaming. Otherwise wait until size exceeds\n+ * logical_decoding_work_mem.\n+ */\n+ bool force_stream = (force_stream_mode && ReorderBufferCanStream(rb));\n+\n+ /* bail out if force_stream is false and we haven't exceeded the\nmemory limit */\n+ if (!force_stream && rb->size < logical_decoding_work_mem * 1024L)\n return;\n\n /*\n- * Loop until we reach under the memory limit. One might think that just\n- * by evicting the largest (sub)transaction we will come under the memory\n- * limit based on assumption that the selected transaction is at least as\n- * large as the most recent change (which caused us to go over the memory\n- * limit). However, that is not true because a user can reduce the\n+ * If force_stream is true, loop until there's no change. Otherwise, loop\n+ * until we reach under the memory limit. 
One might think that just by\n+ * evicting the largest (sub)transaction we will come under the memory limit\n+ * based on assumption that the selected transaction is at least as large as\n+ * the most recent change (which caused us to go over the memory limit).\n+ * However, that is not true because a user can reduce the\n * logical_decoding_work_mem to a smaller value before the most recent\n * change.\n */\n- while (rb->size >= logical_decoding_work_mem * 1024L)\n+ while ((!force_stream && rb->size >= logical_decoding_work_mem * 1024L) ||\n+ (force_stream && rb->size > 0))\n {\n /*\n * Pick the largest transaction (or subtransaction) and evict it from\n\nIIUC this logic can be simplified quite a lot just by shifting that\n\"bail out\" condition into the loop.\n\nSomething like:\n\nwhile (true)\n{\nif (!(force_stream && rb->size > 0 || rb->size <\nlogical_decoding_work_mem * 1024L))\nbreak;\n...\n}\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 7 Dec 2022 12:14:01 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 8:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 9:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > Hi hackers,\n> > >\n> > > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > > sent to output plugin in streaming mode. But there is a restriction that the\n> > > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > > allow sending every change to output plugin without waiting until\n> > > logical_decoding_work_mem is exceeded.\n> > >\n> > > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > > messages. With the new option, it can be tested with fewer changes and in less\n> > > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > > logical transactions by background workers\" [2].\n> > >\n>\n> +1\n>\n> >\n> > Yeah, I think this can also help in reducing the time for various\n> > tests in test_decoding/stream and\n> > src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> > changes required to invoke streaming mode. Can we think of making this\n> > GUC extendible to even test more options on server-side (publisher)\n> > and client-side (subscriber) cases? For example, we can have something\n> > like logical_replication_mode with the following valid values: (a)\n> > server_serialize: this will serialize each change to file on\n> > publishers and then on commit restore and send all changes; (b)\n> > server_stream: this will stream each change as currently proposed in\n> > your patch. 
Then if we want to extend it for subscriber-side testing\n> > then we can introduce new options like client_serialize for the case\n> > being discussed in the email [1].\n> >\n> > Thoughts?\n>\n> There is potential for lots of developer GUCs for testing/debugging in\n> the area of logical replication but IMO it might be better to keep\n> them all separated. Putting everything into a single\n> 'logical_replication_mode' might cause difficulties later when/if you\n> want combinations of the different modes.\n\nI think we want the developer option that forces streaming changes\nduring logical decoding to be PGC_USERSET but probably the developer\noption for testing the parallel apply feature would be PGC_SIGHUP.\nAlso, since streaming changes is not specific to logical replication\nbut to logical decoding, I'm not sure logical_replication_XXX is a\ngood name. IMO having force_stream_mode and a different GUC for\ntesting the parallel apply feature makes sense to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 11:00:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 7:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > Hi hackers,\n> > >\n> > > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > > sent to output plugin in streaming mode. But there is a restriction that the\n> > > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > > allow sending every change to output plugin without waiting until\n> > > logical_decoding_work_mem is exceeded.\n> > >\n> > > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > > messages. With the new option, it can be tested with fewer changes and in less\n> > > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > > logical transactions by background workers\" [2].\n> > >\n> >\n> > Yeah, I think this can also help in reducing the time for various\n> > tests in test_decoding/stream and\n> > src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> > changes required to invoke streaming mode.\n>\n> +1\n>\n> > Can we think of making this\n> > GUC extendible to even test more options on server-side (publisher)\n> > and client-side (subscriber) cases? For example, we can have something\n> > like logical_replication_mode with the following valid values: (a)\n> > server_serialize: this will serialize each change to file on\n> > publishers and then on commit restore and send all changes; (b)\n> > server_stream: this will stream each change as currently proposed in\n> > your patch. 
Then if we want to extend it for subscriber-side testing\n> > then we can introduce new options like client_serialize for the case\n> > being discussed in the email [1].\n>\n> Setting logical_replication_mode = 'client_serialize' implies that the\n> publisher behaves as server_stream? or do you mean we can set like\n> logical_replication_mode = 'server_stream, client_serialize'?\n>\n\nThe latter one (logical_replication_mode = 'server_stream,\nclient_serialize'). The idea is to cover more options with one GUC and\neach option can be used individually as well as in combination with\nothers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 08:14:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 7:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 8:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > >\n> > > Yeah, I think this can also help in reducing the time for various\n> > > tests in test_decoding/stream and\n> > > src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> > > changes required to invoke streaming mode. Can we think of making this\n> > > GUC extendible to even test more options on server-side (publisher)\n> > > and client-side (subscriber) cases? For example, we can have something\n> > > like logical_replication_mode with the following valid values: (a)\n> > > server_serialize: this will serialize each change to file on\n> > > publishers and then on commit restore and send all changes; (b)\n> > > server_stream: this will stream each change as currently proposed in\n> > > your patch. Then if we want to extend it for subscriber-side testing\n> > > then we can introduce new options like client_serialize for the case\n> > > being discussed in the email [1].\n> > >\n> > > Thoughts?\n> >\n> > There is potential for lots of developer GUCs for testing/debugging in\n> > the area of logical replication but IMO it might be better to keep\n> > them all separated. Putting everything into a single\n> > 'logical_replication_mode' might cause difficulties later when/if you\n> > want combinations of the different modes.\n>\n> I think we want the developer option that forces streaming changes\n> during logical decoding to be PGC_USERSET but probably the developer\n> option for testing the parallel apply feature would be PGC_SIGHUP.\n>\n\nIdeally, that is true but if we want to combine the multiple modes in\none parameter, is there a harm in keeping it as PGC_SIGHUP?\n\n> Also, since streaming changes is not specific to logical replication\n> but to logical decoding, I'm not sure logical_replication_XXX is a\n> good name. 
IMO having force_stream_mode and a different GUC for\n> testing the parallel apply feature makes sense to me.\n>\n\nBut if we want to have a separate variable for testing/debugging\nstreaming like force_stream_mode, why not for serializing as well? And\nif we want for both then we can even think of combining them in one\nvariable as logical_decoding_mode with values as 'stream' and\n'serialize'. The first one specified would be given preference. Also,\nthe name force_stream_mode doesn't seem to convey that it is for\nlogical decoding. We can probably have a separate variable for the\nsubscriber side.\n\nOn one side having separate GUCs for publisher and subscriber seems to\ngive better flexibility but having more GUCs also sometimes makes them\nless usable. Here, my thought was to have a single or as few GUCs as\npossible which can be extendible by providing multiple values instead\nof having different GUCs. I was trying to map this with the existing\nstring parameters in developer options.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:25:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 12:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 7:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 7, 2022 at 8:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > >\n> > > > Yeah, I think this can also help in reducing the time for various\n> > > > tests in test_decoding/stream and\n> > > > src/test/subscription/t/*_stream_*.pl file by reducing the number of\n> > > > changes required to invoke streaming mode. Can we think of making this\n> > > > GUC extendible to even test more options on server-side (publisher)\n> > > > and client-side (subscriber) cases? For example, we can have something\n> > > > like logical_replication_mode with the following valid values: (a)\n> > > > server_serialize: this will serialize each change to file on\n> > > > publishers and then on commit restore and send all changes; (b)\n> > > > server_stream: this will stream each change as currently proposed in\n> > > > your patch. Then if we want to extend it for subscriber-side testing\n> > > > then we can introduce new options like client_serialize for the case\n> > > > being discussed in the email [1].\n> > > >\n> > > > Thoughts?\n> > >\n> > > There is potential for lots of developer GUCs for testing/debugging in\n> > > the area of logical replication but IMO it might be better to keep\n> > > them all separated. 
Putting everything into a single\n> > > 'logical_replication_mode' might cause difficulties later when/if you\n> > > want combinations of the different modes.\n> >\n> > I think we want the developer option that forces streaming changes\n> > during logical decoding to be PGC_USERSET but probably the developer\n> > option for testing the parallel apply feature would be PGC_SIGHUP.\n> >\n>\n> Ideally, that is true but if we want to combine the multiple modes in\n> one parameter, is there a harm in keeping it as PGC_SIGHUP?\n\nIt's not a big harm but we will end up doing ALTER SYSTEM and\npg_reload_conf() even in regression tests (e.g. in\ntest_decoding/stream.sql).\n\n>\n> > Also, since streaming changes is not specific to logical replication\n> > but to logical decoding, I'm not sure logical_replication_XXX is a\n> > good name. IMO having force_stream_mode and a different GUC for\n> > testing the parallel apply feature makes sense to me.\n> >\n>\n> But if we want to have a separate variable for testing/debugging\n> streaming like force_stream_mode, why not for serializing as well? And\n> if we want for both then we can even think of combining them in one\n> variable as logical_decoding_mode with values as 'stream' and\n> 'serialize'.\n\nMaking it enum makes sense to me.\n\n> The first one specified would be given preference. Also,\n> the name force_stream_mode doesn't seem to convey that it is for\n> logical decoding.\n\nAgreed.\n\n> On one side having separate GUCs for publisher and subscriber seems to\n> give better flexibility but having more GUCs also sometimes makes them\n> less usable. Here, my thought was to have a single or as few GUCs as\n> possible which can be extendible by providing multiple values instead\n> of having different GUCs. I was trying to map this with the existing\n> string parameters in developer options.\n\nI see your point. On the other hand, I'm not sure it's a good idea to\ncontrol different features by one GUC in general. 
The developer option\ncould be an exception?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 14:24:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 12:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > On one side having separate GUCs for publisher and subscriber seems to\n> > give better flexibility but having more GUCs also sometimes makes them\n> > less usable. Here, my thought was to have a single or as few GUCs as\n> > possible which can be extendible by providing multiple values instead\n> > of having different GUCs. I was trying to map this with the existing\n> > string parameters in developer options.\n>\n> I see your point. On the other hand, I'm not sure it's a good idea to\n> control different features by one GUC in general. The developer option\n> could be an exception?\n>\n\nI am not sure what is the best thing if this was proposed as a\nnon-developer option but it seems to me that having a single parameter\nfor publisher/subscriber, in this case, can serve our need for\ntesting/debugging. BTW, even though it is not a very good example but\nwe use max_replication_slots for different purposes on the publisher\n(the limit for slots) and subscriber (the limit for origins).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 13:21:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 5:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n\n+1 for the idea\n\n>\n> There is potential for lots of developer GUCs for testing/debugging in\n> the area of logical replication but IMO it might be better to keep\n> them all separated. Putting everything into a single\n> 'logical_replication_mode' might cause difficulties later when/if you\n> want combinations of the different modes.\n>\n> For example, instead of\n>\n> logical_replication_mode = XXX/YYY/ZZZ\n>\n> maybe something like below will give more flexibility.\n>\n> logical_replication_dev_XXX = true/false\n> logical_replication_dev_YYY = true/false\n> logical_replication_dev_ZZZ = true/false\n>\n\nEven I agree that usability wise keeping them independent is better.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Dec 2022 11:18:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> sent to output plugin in streaming mode. But there is a restriction that the\n> minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> allow sending every change to output plugin without waiting until\n> logical_decoding_work_mem is exceeded.\n>\n> This helps to test streaming mode. For example, to test \"Avoid streaming the\n> transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> messages. With the new option, it can be tested with fewer changes and in less\n> time. Also, this new option helps to test more scenarios for \"Perform streaming\n> logical transactions by background workers\" [2].\n\nSome comments on the patch\n\n1. Can you add one test case using this option\n\n2. + <varlistentry id=\"guc-force-stream-mode\" xreflabel=\"force_stream_mode\">\n+ <term><varname>force_stream_mode</varname> (<type>boolean</type>)\n+ <indexterm>\n+ <primary><varname>force_stream_mode</varname> configuration\nparameter</primary>\n+ </indexterm>\n+ </term>\n\nThis GUC name \"force_stream_mode\" somehow appears like we are forcing\nthis streaming mode irrespective of whether the\nsubscriber has requested for this mode or not. But actually it is not\nthat, it is just streaming each change if\nit is enabled. So we might need to think on the name (at least we\nshould avoid using *mode* in the name IMHO).\n\n3.\n- while (rb->size >= logical_decoding_work_mem * 1024L)\n+ while ((!force_stream && rb->size >= logical_decoding_work_mem * 1024L) ||\n+ (force_stream && rb->size > 0))\n {\n\nIt seems like if force_stream is on then indirectly it is enabling\nforce serialization as well. 
Because once we enter into the loop\nbased on \"force_stream\" then it will either stream or serialize but I\nguess we do not want to force serialize based on this parameter.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 10 Dec 2022 11:33:06 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 5:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > sent to output plugin in streaming mode. But there is a restriction that the\n> > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > allow sending every change to output plugin without waiting until\n> > logical_decoding_work_mem is exceeded.\n> >\n> > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > messages. With the new option, it can be tested with fewer changes and in less\n> > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > logical transactions by background workers\" [2].\n>\n> Some comments on the patch\n>\n...\n> This GUC name \"force_stream_mode\" somehow appears like we are forcing\n> this streaming mode irrespective of whether the\n> subscriber has requested for this mode or not. But actually it is not\n> that, it is just streaming each change if\n> it is enabled. So we might need to think on the name (at least we\n> should avoid using *mode* in the name IMHO).\n>\n\nI thought the same. Names like those shown below might be more appropriate:\nstream_checks_work_mem = true/false\nstream_mode_checks_size = true/false\nstream_for_large_tx_only = true/false\n... etc.\n\nThe GUC name length could get a bit unwieldy but isn't it better for\nit to have the correct meaning than to have a shorter but slightly\nambiguous name? Anyway, it is a developer option so I guess longer\nnames are less of a problem.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 12 Dec 2022 11:03:08 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 5:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> +1 for the idea\n>\n> >\n> > There is potential for lots of developer GUCs for testing/debugging in\n> > the area of logical replication but IMO it might be better to keep\n> > them all separated. Putting everything into a single\n> > 'logical_replication_mode' might cause difficulties later when/if you\n> > want combinations of the different modes.\n> >\n> > For example, instead of\n> >\n> > logical_replication_mode = XXX/YYY/ZZZ\n> >\n> > maybe something like below will give more flexibility.\n> >\n> > logical_replication_dev_XXX = true/false\n> > logical_replication_dev_YYY = true/false\n> > logical_replication_dev_ZZZ = true/false\n> >\n>\n> Even I agree that usability wise keeping them independent is better.\n>\n\nBut OTOH, doesn't introducing multiple GUCs (one to allow streaming\neach change, another to allow serialization, and a third one to\nprobably test subscriber-side work) for the purpose of testing, and\ndebugging logical replication code sound a bit more?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Dec 2022 17:44:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:23 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi hackers,\n>\n> In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> sent to output plugin in streaming mode. But there is a restriction that the\n> minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> allow sending every change to output plugin without waiting until\n> logical_decoding_work_mem is exceeded.\n>\n> This helps to test streaming mode. For example, to test \"Avoid streaming the\n> transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> messages. With the new option, it can be tested with fewer changes and in less\n> time. Also, this new option helps to test more scenarios for \"Perform streaming\n> logical transactions by background workers\" [2].\n>\n> [1] https://www.postgresql.org/message-id/CAFiTN-tHK=7LzfrPs8fbT2ksrOJGQbzywcgXst2bM9-rJJAAUg@mail.gmail.com\n> [2] https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n>\n\nHi, I've been doing some testing that makes use of this new developer\nGUC `force_stream_mode`.\n\nIIUC this GUC is used by the walsender during the logic of the\nReorderBufferCheckMemoryLimit(). Also, AFAIK the only way that the\nwalsender is going to know this GUC value is by inheritance from the\nparent publisher at the time the walsender process gets launched.\n\nI may be overthinking this. Isn't there potential for this to become\nquite confusing depending on the timing of when this GUC is modified?\n\nE.g.1 When the walsender is launched, it will use whatever is the\ncurrent value of this GUC.\nE.g.2 But if the GUC is changed after the walsender is already\nlaunched, then that will have no effect on the already running\nwalsender.\n\nIs that understanding correct?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 13 Dec 2022 14:33:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 2:33 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Dec 6, 2022 at 5:23 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > In logical decoding, when logical_decoding_work_mem is exceeded, the changes are\n> > sent to output plugin in streaming mode. But there is a restriction that the\n> > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC to\n> > allow sending every change to output plugin without waiting until\n> > logical_decoding_work_mem is exceeded.\n> >\n> > This helps to test streaming mode. For example, to test \"Avoid streaming the\n> > transaction which are skipped\" [1], it needs many XLOG_XACT_INVALIDATIONS\n> > messages. With the new option, it can be tested with fewer changes and in less\n> > time. Also, this new option helps to test more scenarios for \"Perform streaming\n> > logical transactions by background workers\" [2].\n> >\n> > [1] https://www.postgresql.org/message-id/CAFiTN-tHK=7LzfrPs8fbT2ksrOJGQbzywcgXst2bM9-rJJAAUg@mail.gmail.com\n> > [2] https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n> >\n>\n> Hi, I've been doing some testing that makes use of this new developer\n> GUC `force_stream_mode`.\n>\n> IIUC this GUC is used by the walsender during the logic of the\n> ReorderBufferCheckMemoryLimit(). Also, AFAIK the only way that the\n> walsender is going to know this GUC value is by inheritance from the\n> parent publisher at the time the walsender process gets launched.\n>\n> I may be overthinking this. 
Isn't there potential for this to become\n> quite confusing depending on the timing of when this GUC is modified?\n>\n> E.g.1 When the walsender is launched, it will use whatever is the\n> current value of this GUC.\n> E.g.2 But if the GUC is changed after the walsender is already\n> launched, then that will have no effect on the already running\n> walsender.\n>\n> Is that understanding correct?\n>\n\nI think I was mistaken above. It looks like even the already-launched\nwalsender gets the updated GUC value via a SIGHUP on the parent\npublisher.\n\n2022-12-13 16:31:33.453 AEDT [1902] LOG: received SIGHUP, reloading\nconfiguration files\n2022-12-13 16:31:33.455 AEDT [1902] LOG: parameter\n\"force_stream_mode\" changed to \"true\"\n\nSorry for the noise.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 13 Dec 2022 16:45:00 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Sat, Dec 10, 2022 2:03 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> On Tue, Dec 6, 2022 at 11:53 AM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Hi hackers,\r\n> >\r\n> > In logical decoding, when logical_decoding_work_mem is exceeded, the\r\n> changes are\r\n> > sent to output plugin in streaming mode. But there is a restriction that the\r\n> > minimum value of logical_decoding_work_mem is 64kB. I tried to add a GUC\r\n> to\r\n> > allow sending every change to output plugin without waiting until\r\n> > logical_decoding_work_mem is exceeded.\r\n> >\r\n> > This helps to test streaming mode. For example, to test \"Avoid streaming the\r\n> > transaction which are skipped\" [1], it needs many\r\n> XLOG_XACT_INVALIDATIONS\r\n> > messages. With the new option, it can be tested with fewer changes and in\r\n> less\r\n> > time. Also, this new option helps to test more scenarios for \"Perform\r\n> streaming\r\n> > logical transactions by background workers\" [2].\r\n> \r\n> Some comments on the patch\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> 1. Can you add one test case using this option\r\n> \r\n\r\nI added a simple test to confirm the new option works.\r\n\r\n> 2. + <varlistentry id=\"guc-force-stream-mode\"\r\n> xreflabel=\"force_stream_mode\">\r\n> + <term><varname>force_stream_mode</varname>\r\n> (<type>boolean</type>)\r\n> + <indexterm>\r\n> + <primary><varname>force_stream_mode</varname> configuration\r\n> parameter</primary>\r\n> + </indexterm>\r\n> + </term>\r\n> \r\n> This GUC name \"force_stream_mode\" somehow appears like we are forcing\r\n> this streaming mode irrespective of whether the\r\n> subscriber has requested for this mode or not. But actually it is not\r\n> that, it is just streaming each change if\r\n> it is enabled. 
So we might need to think on the name (at least we\r\n> should avoid using *mode* in the name IMHO).\r\n> \r\n\r\nI think a similar GUC is force_parallel_mode, and if the query is parallel\r\nunsafe or max_worker_processes is exceeded, force_parallel_mode will not work.\r\nThis is similar to what we do in this patch. So, maybe it's ok to use \"mode\". I\r\ndidn't change it in the new version of the patch. What do you think?\r\n\r\n> 3.\r\n> - while (rb->size >= logical_decoding_work_mem * 1024L)\r\n> + while ((!force_stream && rb->size >= logical_decoding_work_mem *\r\n> 1024L) ||\r\n> + (force_stream && rb->size > 0))\r\n> {\r\n> \r\n> It seems like if force_stream is on then indirectly it is enabling\r\n> force serialization as well. Because once we enter into the loop\r\n> based on \"force_stream\" then it will either stream or serialize but I\r\n> guess we do not want to force serialize based on this parameter.\r\n> \r\n\r\nAgreed, I refactored the code and modified this point.\r\n\r\nPlease see the attached patch. I also fix Peter's comments[1]. The GUC name and\r\ndesign are still under discussion, so I didn't modify them.\r\n\r\nBy the way, I noticed that the comment for ReorderBufferCheckMemoryLimit() on\r\nHEAD missed something. I fixed it in this patch, too.\r\n\r\n[1] https://www.postgresql.org/message-id/CAHut%2BPtOjZ_e-KLf26i1XLH2ffPEZGOmGSKy0wDjwyB_uvzxBQ%40mail.gmail.com\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Wed, 14 Dec 2022 08:44:56 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 2:15 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Please see the attached patch. I also fix Peter's comments[1]. The GUC name and\n> design are still under discussion, so I didn't modify them.\n>\n\nLet me summarize the discussion on name and design till now. As per my\nunderstanding, we have three requirements: (a) In publisher, stream\neach change of transaction instead of waiting till\nlogical_decoding_work_mem or commit; this will help us to reduce the\ntest timings of current and future tests for replication of\nin-progress transactions; (b) In publisher, serialize each change\ninstead of waiting till logical_decoding_work_mem; this can help\nreduce the test time of tests related to serialization of changes in\nlogical decoding; (c) In subscriber, during parallel apply for\nin-progress transactions (a new feature being discussed at [1]) allow\nthe system to switch to serialize mode (no more space in shared memory\nqueue between leader and parallel apply worker either due to a\nparallel worker being busy or waiting on some lock) while sending\nchanges.\n\nHaving a GUC that controls these actions/features will allow us to\nwrite tests with shorter duration and better predictability as\notherwise, they require a lot of changes. Apart from tests, these also\nhelp to easily debug the required code. So they fit the Developer\nOptions category of GUC [2].\n\nWe have discussed three different ways to provide GUC for these\nfeatures. (1) Have separate GUCs like force_server_stream_mode,\nforce_server_serialize_mode, force_client_serialize_mode (we can use\ndifferent names for these) for each of these; (2) Have two sets of\nGUCs for server and client. We can have logical_decoding_mode with\nvalues as 'stream' and 'serialize' for the server and then\nlogical_apply_serialize = true/false for the client. 
(3) Have one GUC\nlike logical_replication_mode with values as 'server_stream',\n'server_serialize', 'client_serialize'.\n\nThe names used here are tentative mainly to explain each of the\noptions, we can use different names once we decide among the above.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n[2] - https://www.postgresql.org/docs/devel/runtime-config-developer.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Dec 2022 17:29:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 2:15 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n\n> > - while (rb->size >= logical_decoding_work_mem * 1024L)\n> > + while ((!force_stream && rb->size >= logical_decoding_work_mem *\n> > 1024L) ||\n> > + (force_stream && rb->size > 0))\n> > {\n> >\n> > It seems like if force_stream is on then indirectly it is enabling\n> > force serialization as well. Because once we enter into the loop\n> > based on \"force_stream\" then it will either stream or serialize but I\n> > guess we do not want to force serialize based on this parameter.\n> >\n>\n> Agreed, I refactored the code and modified this point.\n\nAfter thinking more on this I feel the previous behavior made more\nsense. Because without this patch if we cross the work_mem we try to\nstream and if we can not stream for some reason e.g. partial change\nthen we serialize. And I feel your previous patch was mimicking the\nsame behavior for each change. Now in the new patch, we will try to\nstream and if we can not we will queue the change so I feel we are\ncreating a new patch that actually doesn't exist without the force\nmode.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:48:39 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 2:15 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Please see the attached patch. I also fix Peter's comments[1]. The GUC name and\n> > design are still under discussion, so I didn't modify them.\n> >\n>\n> Let me summarize the discussion on name and design till now. As per my\n> understanding, we have three requirements: (a) In publisher, stream\n> each change of transaction instead of waiting till\n> logical_decoding_work_mem or commit; this will help us to reduce the\n> test timings of current and future tests for replication of\n> in-progress transactions; (b) In publisher, serialize each change\n> instead of waiting till logical_decoding_work_mem; this can help\n> reduce the test time of tests related to serialization of changes in\n> logical decoding; (c) In subscriber, during parallel apply for\n> in-progress transactions (a new feature being discussed at [1]) allow\n> the system to switch to serialize mode (no more space in shared memory\n> queue between leader and parallel apply worker either due to a\n> parallel worker being busy or waiting on some lock) while sending\n> changes.\n>\n> Having a GUC that controls these actions/features will allow us to\n> write tests with shorter duration and better predictability as\n> otherwise, they require a lot of changes. Apart from tests, these also\n> help to easily debug the required code. So they fit the Developer\n> Options category of GUC [2].\n>\n> We have discussed three different ways to provide GUC for these\n> features. (1) Have separate GUCs like force_server_stream_mode,\n> force_server_serialize_mode, force_client_serialize_mode (we can use\n> different names for these) for each of these; (2) Have two sets of\n> GUCs for server and client. 
We can have logical_decoding_mode with\n> values as 'stream' and 'serialize' for the server and then\n> logical_apply_serialize = true/false for the client. (3) Have one GUC\n> like logical_replication_mode with values as 'server_stream',\n> 'server_serialize', 'client_serialize'.\n>\n> The names used here are tentative mainly to explain each of the\n> options, we can use different names once we decide among the above.\n>\n> Thoughts?\n\nI think option 2 makes sense because stream/serialize are two related\noptions and both are dependent on the same parameter\n(logical_decoding_work_mem) so having a single knob is logical. And\nanother GUC for serializing logical apply.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:52:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 10:52 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 5:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 14, 2022 at 2:15 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > Please see the attached patch. I also fix Peter's comments[1]. The GUC name and\n> > > design are still under discussion, so I didn't modify them.\n> > >\n> >\n> > Let me summarize the discussion on name and design till now. As per my\n> > understanding, we have three requirements: (a) In publisher, stream\n> > each change of transaction instead of waiting till\n> > logical_decoding_work_mem or commit; this will help us to reduce the\n> > test timings of current and future tests for replication of\n> > in-progress transactions; (b) In publisher, serialize each change\n> > instead of waiting till logical_decoding_work_mem; this can help\n> > reduce the test time of tests related to serialization of changes in\n> > logical decoding; (c) In subscriber, during parallel apply for\n> > in-progress transactions (a new feature being discussed at [1]) allow\n> > the system to switch to serialize mode (no more space in shared memory\n> > queue between leader and parallel apply worker either due to a\n> > parallel worker being busy or waiting on some lock) while sending\n> > changes.\n> >\n> > Having a GUC that controls these actions/features will allow us to\n> > write tests with shorter duration and better predictability as\n> > otherwise, they require a lot of changes. Apart from tests, these also\n> > help to easily debug the required code. So they fit the Developer\n> > Options category of GUC [2].\n> >\n> > We have discussed three different ways to provide GUC for these\n> > features. 
(1) Have separate GUCs like force_server_stream_mode,\n> > force_server_serialize_mode, force_client_serialize_mode (we can use\n> > different names for these) for each of these; (2) Have two sets of\n> > GUCs for server and client. We can have logical_decoding_mode with\n> > values as 'stream' and 'serialize' for the server and then\n> > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > like logical_replication_mode with values as 'server_stream',\n> > 'server_serialize', 'client_serialize'.\n> >\n> > The names used here are tentative mainly to explain each of the\n> > options, we can use different names once we decide among the above.\n> >\n> > Thoughts?\n>\n> I think option 2 makes sense because stream/serialize are two related\n> options and both are dependent on the same parameter\n> (logical_decoding_work_mem) so having a single knob is logical. And\n> another GUC for serializing logical apply.\n>\n\nThanks for your input. Sawada-San, others, any preferences/suggestions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 14:44:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> We have discussed three different ways to provide GUC for these\r\n> features. (1) Have separate GUCs like force_server_stream_mode,\r\n> force_server_serialize_mode, force_client_serialize_mode (we can use\r\n> different names for these) for each of these; (2) Have two sets of\r\n> GUCs for server and client. We can have logical_decoding_mode with\r\n> values as 'stream' and 'serialize' for the server and then\r\n> logical_apply_serialize = true/false for the client. (3) Have one GUC\r\n> like logical_replication_mode with values as 'server_stream',\r\n> 'server_serialize', 'client_serialize'.\r\n\r\nI also agreed for adding new GUC parameters (and I have already done partially\r\nin parallel apply[1]), and basically options 2 made sense for me. But is it OK\r\nthat we can choose \"serialize\" mode even if subscribers require streaming?\r\n\r\nCurrently the reorder buffer transactions are serialized on publisher only when\r\nthe there are no streamable transaction. So what happen if the\r\nlogical_decoding_mode = \"serialize\" but streaming option streaming is on? If we\r\nbreak the first one and serialize changes on publisher anyway, it may be not\r\nsuitable for testing the normal operation.\r\n\r\nTherefore, I came up with the variant of (2): logical_decoding_mode can be\r\n\"normal\" or \"immediate\".\r\n\r\n\"normal\" is a default value, which is same as current HEAD. Changes are streamed\r\nor serialized when the buffered size exceeds logical_decoding_work_mem.\r\n\r\nWhen users set to \"immediate\", the walsenders starts to stream or serialize all\r\nchanges. The choice is depends on the subscription option.\r\n\r\n\r\nIn short: +1 for (2) family.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866160DE81FA2D88B8F22DEF5159%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 20 Dec 2022 09:16:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n>\n> > We have discussed three different ways to provide GUC for these\n> > features. (1) Have separate GUCs like force_server_stream_mode,\n> > force_server_serialize_mode, force_client_serialize_mode (we can use\n> > different names for these) for each of these; (2) Have two sets of\n> > GUCs for server and client. We can have logical_decoding_mode with\n> > values as 'stream' and 'serialize' for the server and then\n> > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > like logical_replication_mode with values as 'server_stream',\n> > 'server_serialize', 'client_serialize'.\n>\n> I also agreed for adding new GUC parameters (and I have already done partially\n> in parallel apply[1]), and basically options 2 made sense for me. But is it OK\n> that we can choose \"serialize\" mode even if subscribers require streaming?\n>\n> Currently the reorder buffer transactions are serialized on publisher only when\n> the there are no streamable transaction. So what happen if the\n> logical_decoding_mode = \"serialize\" but streaming option streaming is on? If we\n> break the first one and serialize changes on publisher anyway, it may be not\n> suitable for testing the normal operation.\n>\n\nI think the change will be streamed as soon as the next change is\nprocessed even if we serialize based on this option. See\nReorderBufferProcessPartialChange. However, I see your point that when\nthe streaming option is given, the value 'serialize' for this GUC may\nnot make much sense.\n\n> Therefore, I came up with the variant of (2): logical_decoding_mode can be\n> \"normal\" or \"immediate\".\n>\n> \"normal\" is a default value, which is same as current HEAD. 
Changes are streamed\n> or serialized when the buffered size exceeds logical_decoding_work_mem.\n>\n> When users set to \"immediate\", the walsenders starts to stream or serialize all\n> changes. The choice is depends on the subscription option.\n>\n\nThe other possibility to achieve what you are saying is that we allow\na minimum value of logical_decoding_work_mem as 0 which would mean\nstream or serialize each change depending on whether the streaming\noption is enabled. I think we normally don't allow a minimum value\nbelow a certain threshold for other *_work_mem parameters (like\nmaintenance_work_mem, work_mem), so we have followed the same here.\nAnd, I think it makes sense from the user's perspective because below\na certain threshold it will just add overhead by either writing small\nchanges to the disk or by sending those over the network. However, it\ncan be quite useful for testing/debugging. So, not sure, if we should\nrestrict setting logical_decoding_work_mem below a certain threshold.\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 16:19:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
    "msg_contents": "Dear Amit,\r\n\r\n> The other possibility to achieve what you are saying is that we allow\r\n> a minimum value of logical_decoding_work_mem as 0 which would mean\r\n> stream or serialize each change depending on whether the streaming\r\n> option is enabled.\r\n\r\nI understood that logical_decoding_work_mem may double as a normal option and a\r\ndeveloper option. I think yours is smarter because we can reduce # of GUCs.\r\n\r\n> I think we normally don't allow a minimum value\r\n> below a certain threshold for other *_work_mem parameters (like\r\n> maintenance_work_mem, work_mem), so we have followed the same here.\r\n> And, I think it makes sense from the user's perspective because below\r\n> a certain threshold it will just add overhead by either writing small\r\n> changes to the disk or by sending those over the network. However, it\r\n> can be quite useful for testing/debugging. So, not sure, if we should\r\n> restrict setting logical_decoding_work_mem below a certain threshold.\r\n> What do you think?\r\n\r\nYou mean to say that there is a possibility that users may set a small value without deep\r\nconsiderations, right? If so, how about using the approach like autovacuum_work_mem?\r\n\r\nautovacuum_work_mem has a range [-1, MAX_KILOBYTES], and -1 means that it follows\r\nmaintenance_work_mem. If it is set to a small value like 5KB, its working memory is rounded\r\nup to 1024KB. See check_autovacuum_work_mem().\r\n\r\nBased on that, I suggest the following. Can they solve the problem you described?\r\n\r\n* If logical_decoding_work_mem is set to 0, all transactions are streamed or serialized\r\n on publisher.\r\n* If logical_decoding_work_mem is set within [1, 63KB], the value is rounded up or ERROR\r\n is raised.\r\n* If logical_decoding_work_mem is set greater than or equal to 64KB, the set value\r\n is used.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n> Amit Kapila.\r\n",
"msg_date": "Wed, 21 Dec 2022 05:55:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "Going with ' logical_decoding_work_mem' seems a reasonable solution, but\nsince we are mixing\nthe functionality of developer and production GUC, there is a slight risk\nthat customer/DBAs may end\nup setting it to 0 and forget about it and thus hampering system's\nperformance.\nHave seen many such cases in previous org.\n\nAdding a new developer parameter seems slightly safe, considering we\nalready have one\nsuch category supported in postgres. It can be on the same line as that of\n'force_parallel_mode'.\nIt will be purely developer GUC, plus if we want to extend something in\nfuture to add/automate\nheavier test-cases or any other streaming related dev option, we can extend\nthe same parameter w/o\ndisturbing production's one. (see force_parallel_mode=regress for ref).\n\nthanks\nShveta\n\n\nOn Wed, Dec 21, 2022 at 11:25 AM Hayato Kuroda (Fujitsu) <\nkuroda.hayato@fujitsu.com> wrote:\n\n> Dear Amit,\n>\n> > The other possibility to achieve what you are saying is that we allow\n> > a minimum value of logical_decoding_work_mem as 0 which would mean\n> > stream or serialize each change depending on whether the streaming\n> > option is enabled.\n>\n> I understood that logical_decoding_work_mem may double as normal option as\n> developer option. I think yours is smarter because we can reduce # of GUCs.\n>\n> > I think we normally don't allow a minimum value\n> > below a certain threshold for other *_work_mem parameters (like\n> > maintenance_work_mem, work_mem), so we have followed the same here.\n> > And, I think it makes sense from the user's perspective because below\n> > a certain threshold it will just add overhead by either writing small\n> > changes to the disk or by sending those over the network. However, it\n> > can be quite useful for testing/debugging. 
So, not sure, if we should\n> > restrict setting logical_decoding_work_mem below a certain threshold.\n> > What do you think?\n>\n> You mean to say that there is a possibility that users may set a small\n> value without deep\n> considerations, right? If so, how about using the approach like\n> autovacuum_work_mem?\n>\n> autovacuum_work_mem has a range [-1, MAX_KIROBYTES], and -1 mean that it\n> follows\n> maintenance_work_mem. If it is set small value like 5KB, its working\n> memory is rounded\n> up to 1024KB. See check_autovacuum_work_mem().\n>\n> Based on that, I suggest followings. Can they solve the problem what you\n> said?\n>\n> * If logical_decoding_work_mem is set to 0, all transactions are streamed\n> or serialized\n> on publisher.\n> * If logical_decoding_work_mem is set within [1, 63KB], the value is\n> rounded up or ERROR\n> is raised.\n> * If logical_decoding_work_mem is set greater than or equal to 64KB, the\n> set value\n> is used.\n>\n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n> > Amit Kapila.\n>",
"msg_date": "Wed, 21 Dec 2022 12:49:08 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear hackers,\n> >\n> > > We have discussed three different ways to provide GUC for these\n> > > features. (1) Have separate GUCs like force_server_stream_mode,\n> > > force_server_serialize_mode, force_client_serialize_mode (we can use\n> > > different names for these) for each of these; (2) Have two sets of\n> > > GUCs for server and client. We can have logical_decoding_mode with\n> > > values as 'stream' and 'serialize' for the server and then\n> > > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > > like logical_replication_mode with values as 'server_stream',\n> > > 'server_serialize', 'client_serialize'.\n> >\n> > I also agreed for adding new GUC parameters (and I have already done partially\n> > in parallel apply[1]), and basically options 2 made sense for me. But is it OK\n> > that we can choose \"serialize\" mode even if subscribers require streaming?\n> >\n> > Currently the reorder buffer transactions are serialized on publisher only when\n> > the there are no streamable transaction. So what happen if the\n> > logical_decoding_mode = \"serialize\" but streaming option streaming is on? If we\n> > break the first one and serialize changes on publisher anyway, it may be not\n> > suitable for testing the normal operation.\n> >\n>\n> I think the change will be streamed as soon as the next change is\n> processed even if we serialize based on this option. See\n> ReorderBufferProcessPartialChange. However, I see your point that when\n> the streaming option is given, the value 'serialize' for this GUC may\n> not make much sense.\n>\n> > Therefore, I came up with the variant of (2): logical_decoding_mode can be\n> > \"normal\" or \"immediate\".\n> >\n> > \"normal\" is a default value, which is same as current HEAD. 
Changes are streamed\n> > or serialized when the buffered size exceeds logical_decoding_work_mem.\n> >\n> > When users set to \"immediate\", the walsenders starts to stream or serialize all\n> > changes. The choice is depends on the subscription option.\n> >\n>\n> The other possibility to achieve what you are saying is that we allow\n> a minimum value of logical_decoding_work_mem as 0 which would mean\n> stream or serialize each change depending on whether the streaming\n> option is enabled. I think we normally don't allow a minimum value\n> below a certain threshold for other *_work_mem parameters (like\n> maintenance_work_mem, work_mem), so we have followed the same here.\n> And, I think it makes sense from the user's perspective because below\n> a certain threshold it will just add overhead by either writing small\n> changes to the disk or by sending those over the network. However, it\n> can be quite useful for testing/debugging. So, not sure, if we should\n> restrict setting logical_decoding_work_mem below a certain threshold.\n> What do you think?\n\nI agree with (2), having separate GUCs for publisher side and\nsubscriber side. Also, on the publisher side, Amit's idea, controlling\nthe logical decoding behavior by changing logical_decoding_work_mem,\nseems like a good idea.\n\nBut I'm not sure it's a good idea if we lower the minimum value of\nlogical_decoding_work_mem to 0. I agree it's helpful for testing and\ndebugging but setting logical_decoding_work_mem = 0 doesn't benefit\nusers at all, rather brings risks.\n\nI prefer the idea Kuroda-san previously proposed; setting\nlogical_decoding_mode = 'immediate' means setting\nlogical_decoding_work_mem = 0. We might not need to have it as an enum\nparameter since it has only two values, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Dec 2022 16:21:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Here are some review comments for patch v2.\n\nSince the GUC is still under design maybe these comments can be\nignored for now, but I guess similar comments will apply in future\nanyhow (just with some name changes).\n\n======\n\n1. Commit message\n\nAdd a new GUC force_stream_mode, when it is set on, send the change to\noutput plugin immediately in streaming mode. Otherwise, send until\nlogical_decoding_work_mem is exceeded.\n\n~\n\nIs that quite right? I thought it was more like shown below:\n\nSUGGESTION\nAdd a new GUC 'force_stream_mode' which modifies behavior when\nstreaming mode is enabled. If force_stream_mode is on the changes are\nsent to the output plugin immediately. Otherwise,(when\nforce_stream_mode is off) changes are written to memory until\nlogical_decoding_work_mem is exceeded.\n\n======\n\n2. doc/src/sgml/config.sgml\n\n+ <para>\n+ Specifies whether to force sending the changes to output plugin\n+ immediately in streaming mode. If set to <literal>off</literal> (the\n+ default), send until <varname>logical_decoding_work_mem</varname> is\n+ exceeded.\n+ </para>\n\nSuggest slight rewording like #1.\n\nSUGGESTION\nThis modifies the behavior when streaming mode is enabled. If set to\n<literal>on</literal> the changes are sent to the output plugin\nimmediately. If set <literal>off</literal> (the default), changes are\nwritten to memory until <varname>logical_decoding_work_mem</varname>\nis exceeded.\n\n======\n\n3. More docs.\n\nIt might be helpful if this developer option is referenced also on the\npage \"31.10.1 Logical Replication > Configuration Settings >\nPublishers\" [1]\n\n======\n\nsrc/backend/replication/logical/reorderbuffer.c\n\n4. GUCs\n\n+/*\n+ * Whether to send the change to output plugin immediately in streaming mode.\n+ * When it is off, wait until logical_decoding_work_mem is exceeded.\n+ */\n+bool force_stream_mode;\n\n4a.\n\"to output plugin\" -> \"to the output plugin\"\n\n~\n\n4b.\nBy convention (see. 
[2]) IIUC there should be some indication that\nthese (this and 'logical_decoding_work_mem') are GUC variables. e.g.\nthese should be refactored to be grouped togther without the other\nstatic var in between. And add a comment for them both like:\n\n/* GUC variable. */\n\n~\n\n4c.\nAlso, (see [2]) it makes the code more readable to give the GUC an\nexplicit initial value.\n\nSUGGESTION\nbool force_stream_mode = false;\n\n~~~\n\n5. ReorderBufferCheckMemoryLimit\n\n+ /* we know there has to be one, because the size is not zero */\n\nUppercase comment. Although not strictly added by this patch you might\nas well make this change while adjusting the indentation.\n\n======\n\nsrc/backend/utils/misc/guc_tables.c\n\n6.\n\n+ {\n+ {\"force_stream_mode\", PGC_USERSET, DEVELOPER_OPTIONS,\n+ gettext_noop(\"Force sending the changes to output plugin immediately\nif streaming is supported, without waiting till\nlogical_decoding_work_mem.\"),\n+ NULL,\n+ GUC_NOT_IN_SAMPLE\n+ },\n+ &force_stream_mode,\n+ false,\n+ NULL, NULL, NULL\n+ },\n\n\"without waiting till logical_decoding_work_mem.\".... seem like an\nincomplete sentence\n\nSUGGESTION\nForce sending any streaming changes to the output plugin immediately\nwithout waiting until logical_decoding_work_mem is exceeded.\"),\n\n------\n[1] https://www.postgresql.org/docs/devel/logical-replication-config.html\n[2] GUC declarations -\nhttps://github.com/postgres/postgres/commit/d9d873bac67047cfacc9f5ef96ee488f2cb0f1c3\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 21 Dec 2022 19:05:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 20, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Dear hackers,\n> > >\n> > > > We have discussed three different ways to provide GUC for these\n> > > > features. (1) Have separate GUCs like force_server_stream_mode,\n> > > > force_server_serialize_mode, force_client_serialize_mode (we can use\n> > > > different names for these) for each of these; (2) Have two sets of\n> > > > GUCs for server and client. We can have logical_decoding_mode with\n> > > > values as 'stream' and 'serialize' for the server and then\n> > > > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > > > like logical_replication_mode with values as 'server_stream',\n> > > > 'server_serialize', 'client_serialize'.\n> > >\n> > > I also agreed for adding new GUC parameters (and I have already done partially\n> > > in parallel apply[1]), and basically options 2 made sense for me. But is it OK\n> > > that we can choose \"serialize\" mode even if subscribers require streaming?\n> > >\n> > > Currently the reorder buffer transactions are serialized on publisher only when\n> > > the there are no streamable transaction. So what happen if the\n> > > logical_decoding_mode = \"serialize\" but streaming option streaming is on? If we\n> > > break the first one and serialize changes on publisher anyway, it may be not\n> > > suitable for testing the normal operation.\n> > >\n> >\n> > I think the change will be streamed as soon as the next change is\n> > processed even if we serialize based on this option. See\n> > ReorderBufferProcessPartialChange. 
However, I see your point that when\n> > the streaming option is given, the value 'serialize' for this GUC may\n> > not make much sense.\n> >\n> > > Therefore, I came up with the variant of (2): logical_decoding_mode can be\n> > > \"normal\" or \"immediate\".\n> > >\n> > > \"normal\" is a default value, which is same as current HEAD. Changes are streamed\n> > > or serialized when the buffered size exceeds logical_decoding_work_mem.\n> > >\n> > > When users set to \"immediate\", the walsenders starts to stream or serialize all\n> > > changes. The choice is depends on the subscription option.\n> > >\n> >\n> > The other possibility to achieve what you are saying is that we allow\n> > a minimum value of logical_decoding_work_mem as 0 which would mean\n> > stream or serialize each change depending on whether the streaming\n> > option is enabled. I think we normally don't allow a minimum value\n> > below a certain threshold for other *_work_mem parameters (like\n> > maintenance_work_mem, work_mem), so we have followed the same here.\n> > And, I think it makes sense from the user's perspective because below\n> > a certain threshold it will just add overhead by either writing small\n> > changes to the disk or by sending those over the network. However, it\n> > can be quite useful for testing/debugging. So, not sure, if we should\n> > restrict setting logical_decoding_work_mem below a certain threshold.\n> > What do you think?\n>\n> I agree with (2), having separate GUCs for publisher side and\n> subscriber side. Also, on the publisher side, Amit's idea, controlling\n> the logical decoding behavior by changing logical_decoding_work_mem,\n> seems like a good idea.\n>\n> But I'm not sure it's a good idea if we lower the minimum value of\n> logical_decoding_work_mem to 0. 
I agree it's helpful for testing and\n> debugging but setting logical_decoding_work_mem = 0 doesn't benefit\n> users at all, rather brings risks.\n>\n> I prefer the idea Kuroda-san previously proposed; setting\n> logical_decoding_mode = 'immediate' means setting\n> logical_decoding_work_mem = 0. We might not need to have it as an enum\n> parameter since it has only two values, though.\n\nDid you mean one GUC (logical_decoding_mode) will cause a side-effect\nimplicit value change on another GUC value\n(logical_decoding_work_mem)?\n\nIf so, then that seems a like potential source of confusion IMO.\n- e.g. actual value is invisibly set differently from what the user\nsees in the conf file\n- e.g. will it depend on the order they get assigned\n\nAre there any GUC precedents for something like that?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 21 Dec 2022 19:25:31 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 1:55 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Dec 21, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Dec 20, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\n> > > <kuroda.hayato@fujitsu.com> wrote:\n> > > >\n> > > > Dear hackers,\n> > > >\n> > > > > We have discussed three different ways to provide GUC for these\n> > > > > features. (1) Have separate GUCs like force_server_stream_mode,\n> > > > > force_server_serialize_mode, force_client_serialize_mode (we can use\n> > > > > different names for these) for each of these; (2) Have two sets of\n> > > > > GUCs for server and client. We can have logical_decoding_mode with\n> > > > > values as 'stream' and 'serialize' for the server and then\n> > > > > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > > > > like logical_replication_mode with values as 'server_stream',\n> > > > > 'server_serialize', 'client_serialize'.\n> > > >\n> > > > I also agreed for adding new GUC parameters (and I have already done partially\n> > > > in parallel apply[1]), and basically options 2 made sense for me. But is it OK\n> > > > that we can choose \"serialize\" mode even if subscribers require streaming?\n> > > >\n> > > > Currently the reorder buffer transactions are serialized on publisher only when\n> > > > the there are no streamable transaction. So what happen if the\n> > > > logical_decoding_mode = \"serialize\" but streaming option streaming is on? If we\n> > > > break the first one and serialize changes on publisher anyway, it may be not\n> > > > suitable for testing the normal operation.\n> > > >\n> > >\n> > > I think the change will be streamed as soon as the next change is\n> > > processed even if we serialize based on this option. See\n> > > ReorderBufferProcessPartialChange. 
However, I see your point that when\n> > > the streaming option is given, the value 'serialize' for this GUC may\n> > > not make much sense.\n> > >\n> > > > Therefore, I came up with the variant of (2): logical_decoding_mode can be\n> > > > \"normal\" or \"immediate\".\n> > > >\n> > > > \"normal\" is a default value, which is same as current HEAD. Changes are streamed\n> > > > or serialized when the buffered size exceeds logical_decoding_work_mem.\n> > > >\n> > > > When users set to \"immediate\", the walsenders starts to stream or serialize all\n> > > > changes. The choice is depends on the subscription option.\n> > > >\n> > >\n> > > The other possibility to achieve what you are saying is that we allow\n> > > a minimum value of logical_decoding_work_mem as 0 which would mean\n> > > stream or serialize each change depending on whether the streaming\n> > > option is enabled. I think we normally don't allow a minimum value\n> > > below a certain threshold for other *_work_mem parameters (like\n> > > maintenance_work_mem, work_mem), so we have followed the same here.\n> > > And, I think it makes sense from the user's perspective because below\n> > > a certain threshold it will just add overhead by either writing small\n> > > changes to the disk or by sending those over the network. However, it\n> > > can be quite useful for testing/debugging. So, not sure, if we should\n> > > restrict setting logical_decoding_work_mem below a certain threshold.\n> > > What do you think?\n> >\n> > I agree with (2), having separate GUCs for publisher side and\n> > subscriber side. Also, on the publisher side, Amit's idea, controlling\n> > the logical decoding behavior by changing logical_decoding_work_mem,\n> > seems like a good idea.\n> >\n> > But I'm not sure it's a good idea if we lower the minimum value of\n> > logical_decoding_work_mem to 0. 
I agree it's helpful for testing and\n> > debugging but setting logical_decoding_work_mem = 0 doesn't benefit\n> > users at all, rather brings risks.\n> >\n> > I prefer the idea Kuroda-san previously proposed; setting\n> > logical_decoding_mode = 'immediate' means setting\n> > logical_decoding_work_mem = 0. We might not need to have it as an enum\n> > parameter since it has only two values, though.\n>\n> Did you mean one GUC (logical_decoding_mode) will cause a side-effect\n> implicit value change on another GUC value\n> (logical_decoding_work_mem)?\n>\n\nI don't think that is required. The only value that can be allowed for\nlogical_decoding_mode will be \"immediate\", something like we do for\nrecovery_target. The default will be \"\". The \"immediate\" value will\nmean to stream each change if the \"streaming\" option is enabled\n(\"on\" or \"parallel\") or if \"streaming\" is not enabled then that would\nmean serializing each change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Dec 2022 14:24:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear Shveta, other hackers\r\n\r\n> Going with ' logical_decoding_work_mem' seems a reasonable solution, but since we are mixing\r\n> the functionality of developer and production GUC, there is a slight risk that customer/DBAs may end\r\n> up setting it to 0 and forget about it and thus hampering system's performance.\r\n> Have seen many such cases in previous org.\r\n\r\nThat never crossed my mind at all. Indeed, DBA may be confused, this proposal seems to be too optimized.\r\nI can withdraw this and we can go another way, maybe my previous approach.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 21 Dec 2022 09:50:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 21, 2022 4:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Dec 21, 2022 at 1:55 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Wed, Dec 21, 2022 at 6:22 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Tue, Dec 20, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\r\n> > > > <kuroda.hayato@fujitsu.com> wrote:\r\n> > > > >\r\n> > > > > Dear hackers,\r\n> > > > >\r\n> > > > > > We have discussed three different ways to provide GUC for these\r\n> > > > > > features. (1) Have separate GUCs like force_server_stream_mode,\r\n> > > > > > force_server_serialize_mode, force_client_serialize_mode (we can\r\n> use\r\n> > > > > > different names for these) for each of these; (2) Have two sets of\r\n> > > > > > GUCs for server and client. We can have logical_decoding_mode with\r\n> > > > > > values as 'stream' and 'serialize' for the server and then\r\n> > > > > > logical_apply_serialize = true/false for the client. (3) Have one GUC\r\n> > > > > > like logical_replication_mode with values as 'server_stream',\r\n> > > > > > 'server_serialize', 'client_serialize'.\r\n> > > > >\r\n> > > > > I also agreed for adding new GUC parameters (and I have already done\r\n> partially\r\n> > > > > in parallel apply[1]), and basically options 2 made sense for me. But is\r\n> it OK\r\n> > > > > that we can choose \"serialize\" mode even if subscribers require\r\n> streaming?\r\n> > > > >\r\n> > > > > Currently the reorder buffer transactions are serialized on publisher\r\n> only when\r\n> > > > > the there are no streamable transaction. So what happen if the\r\n> > > > > logical_decoding_mode = \"serialize\" but streaming option streaming is\r\n> on? 
If we\r\n> > > > > break the first one and serialize changes on publisher anyway, it may\r\n> be not\r\n> > > > > suitable for testing the normal operation.\r\n> > > > >\r\n> > > >\r\n> > > > I think the change will be streamed as soon as the next change is\r\n> > > > processed even if we serialize based on this option. See\r\n> > > > ReorderBufferProcessPartialChange. However, I see your point that\r\n> when\r\n> > > > the streaming option is given, the value 'serialize' for this GUC may\r\n> > > > not make much sense.\r\n> > > >\r\n> > > > > Therefore, I came up with the variant of (2): logical_decoding_mode\r\n> can be\r\n> > > > > \"normal\" or \"immediate\".\r\n> > > > >\r\n> > > > > \"normal\" is a default value, which is same as current HEAD. Changes\r\n> are streamed\r\n> > > > > or serialized when the buffered size exceeds\r\n> logical_decoding_work_mem.\r\n> > > > >\r\n> > > > > When users set to \"immediate\", the walsenders starts to stream or\r\n> serialize all\r\n> > > > > changes. The choice is depends on the subscription option.\r\n> > > > >\r\n> > > >\r\n> > > > The other possibility to achieve what you are saying is that we allow\r\n> > > > a minimum value of logical_decoding_work_mem as 0 which would\r\n> mean\r\n> > > > stream or serialize each change depending on whether the streaming\r\n> > > > option is enabled. I think we normally don't allow a minimum value\r\n> > > > below a certain threshold for other *_work_mem parameters (like\r\n> > > > maintenance_work_mem, work_mem), so we have followed the same\r\n> here.\r\n> > > > And, I think it makes sense from the user's perspective because below\r\n> > > > a certain threshold it will just add overhead by either writing small\r\n> > > > changes to the disk or by sending those over the network. However, it\r\n> > > > can be quite useful for testing/debugging. 
So, not sure, if we should\r\n> > > > restrict setting logical_decoding_work_mem below a certain threshold.\r\n> > > > What do you think?\r\n> > >\r\n> > > I agree with (2), having separate GUCs for publisher side and\r\n> > > subscriber side. Also, on the publisher side, Amit's idea, controlling\r\n> > > the logical decoding behavior by changing logical_decoding_work_mem,\r\n> > > seems like a good idea.\r\n> > >\r\n> > > But I'm not sure it's a good idea if we lower the minimum value of\r\n> > > logical_decoding_work_mem to 0. I agree it's helpful for testing and\r\n> > > debugging but setting logical_decoding_work_mem = 0 doesn't benefit\r\n> > > users at all, rather brings risks.\r\n> > >\r\n> > > I prefer the idea Kuroda-san previously proposed; setting\r\n> > > logical_decoding_mode = 'immediate' means setting\r\n> > > logical_decoding_work_mem = 0. We might not need to have it as an\r\n> enum\r\n> > > parameter since it has only two values, though.\r\n> >\r\n> > Did you mean one GUC (logical_decoding_mode) will cause a side-effect\r\n> > implicit value change on another GUC value\r\n> > (logical_decoding_work_mem)?\r\n> >\r\n> \r\n> I don't think that is required. The only value that can be allowed for\r\n> logical_decoding_mode will be \"immediate\", something like we do for\r\n> recovery_target. The default will be \"\". The \"immediate\" value will\r\n> mean that stream each change if the \"streaming\" option is enabled\r\n> (\"on\" of \"parallel\") or if \"streaming\" is not enabled then that would\r\n> mean serializing each change.\r\n> \r\n\r\nI agreed and updated the patch as Amit suggested.\r\nPlease see the attached patch.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Wed, 21 Dec 2022 13:14:09 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 21, 2022 4:05 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are some review comments for patch v2.\r\n> \r\n> Since the GUC is still under design maybe these comments can be\r\n> ignored for now, but I guess similar comments will apply in future\r\n> anyhow (just with some name changes).\r\n> \r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> \r\n> 3. More docs.\r\n> \r\n> It might be helpful if this developer option is referenced also on the\r\n> page \"31.10.1 Logical Replication > Configuration Settings >\r\n> Publishers\" [1]\r\n> \r\n\r\nThe new added GUC is developer option, and it seems that most developer options\r\nare not referenced on other pages. So, I am not sure if we need to add this to\r\nother docs.\r\n\r\nOther comments are fixed [1]. (Some of them are ignored because of the new\r\ndesign.)\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB6310AAE12BC281158880380DFDEB9%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Wed, 21 Dec 2022 13:17:18 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 10:14 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Dec 21, 2022 4:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 21, 2022 at 1:55 PM Peter Smith <smithpb2250@gmail.com>\n> > wrote:\n> > >\n> > > On Wed, Dec 21, 2022 at 6:22 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Dec 20, 2022 at 7:49 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > > >\n> > > > > On Tue, Dec 20, 2022 at 2:46 PM Hayato Kuroda (Fujitsu)\n> > > > > <kuroda.hayato@fujitsu.com> wrote:\n> > > > > >\n> > > > > > Dear hackers,\n> > > > > >\n> > > > > > > We have discussed three different ways to provide GUC for these\n> > > > > > > features. (1) Have separate GUCs like force_server_stream_mode,\n> > > > > > > force_server_serialize_mode, force_client_serialize_mode (we can\n> > use\n> > > > > > > different names for these) for each of these; (2) Have two sets of\n> > > > > > > GUCs for server and client. We can have logical_decoding_mode with\n> > > > > > > values as 'stream' and 'serialize' for the server and then\n> > > > > > > logical_apply_serialize = true/false for the client. (3) Have one GUC\n> > > > > > > like logical_replication_mode with values as 'server_stream',\n> > > > > > > 'server_serialize', 'client_serialize'.\n> > > > > >\n> > > > > > I also agreed for adding new GUC parameters (and I have already done\n> > partially\n> > > > > > in parallel apply[1]), and basically options 2 made sense for me. But is\n> > it OK\n> > > > > > that we can choose \"serialize\" mode even if subscribers require\n> > streaming?\n> > > > > >\n> > > > > > Currently the reorder buffer transactions are serialized on publisher\n> > only when\n> > > > > > the there are no streamable transaction. So what happen if the\n> > > > > > logical_decoding_mode = \"serialize\" but streaming option streaming is\n> > on? 
If we\n> > > > > > break the first one and serialize changes on publisher anyway, it may\n> > be not\n> > > > > > suitable for testing the normal operation.\n> > > > > >\n> > > > >\n> > > > > I think the change will be streamed as soon as the next change is\n> > > > > processed even if we serialize based on this option. See\n> > > > > ReorderBufferProcessPartialChange. However, I see your point that\n> > when\n> > > > > the streaming option is given, the value 'serialize' for this GUC may\n> > > > > not make much sense.\n> > > > >\n> > > > > > Therefore, I came up with the variant of (2): logical_decoding_mode\n> > can be\n> > > > > > \"normal\" or \"immediate\".\n> > > > > >\n> > > > > > \"normal\" is a default value, which is same as current HEAD. Changes\n> > are streamed\n> > > > > > or serialized when the buffered size exceeds\n> > logical_decoding_work_mem.\n> > > > > >\n> > > > > > When users set to \"immediate\", the walsenders starts to stream or\n> > serialize all\n> > > > > > changes. The choice is depends on the subscription option.\n> > > > > >\n> > > > >\n> > > > > The other possibility to achieve what you are saying is that we allow\n> > > > > a minimum value of logical_decoding_work_mem as 0 which would\n> > mean\n> > > > > stream or serialize each change depending on whether the streaming\n> > > > > option is enabled. I think we normally don't allow a minimum value\n> > > > > below a certain threshold for other *_work_mem parameters (like\n> > > > > maintenance_work_mem, work_mem), so we have followed the same\n> > here.\n> > > > > And, I think it makes sense from the user's perspective because below\n> > > > > a certain threshold it will just add overhead by either writing small\n> > > > > changes to the disk or by sending those over the network. However, it\n> > > > > can be quite useful for testing/debugging. 
So, not sure, if we should\n> > > > > restrict setting logical_decoding_work_mem below a certain threshold.\n> > > > > What do you think?\n> > > >\n> > > > I agree with (2), having separate GUCs for publisher side and\n> > > > subscriber side. Also, on the publisher side, Amit's idea, controlling\n> > > > the logical decoding behavior by changing logical_decoding_work_mem,\n> > > > seems like a good idea.\n> > > >\n> > > > But I'm not sure it's a good idea if we lower the minimum value of\n> > > > logical_decoding_work_mem to 0. I agree it's helpful for testing and\n> > > > debugging but setting logical_decoding_work_mem = 0 doesn't benefit\n> > > > users at all, rather brings risks.\n> > > >\n> > > > I prefer the idea Kuroda-san previously proposed; setting\n> > > > logical_decoding_mode = 'immediate' means setting\n> > > > logical_decoding_work_mem = 0. We might not need to have it as an\n> > enum\n> > > > parameter since it has only two values, though.\n> > >\n> > > Did you mean one GUC (logical_decoding_mode) will cause a side-effect\n> > > implicit value change on another GUC value\n> > > (logical_decoding_work_mem)?\n> > >\n> >\n> > I don't think that is required. The only value that can be allowed for\n> > logical_decoding_mode will be \"immediate\", something like we do for\n> > recovery_target. The default will be \"\". The \"immediate\" value will\n> > mean that stream each change if the \"streaming\" option is enabled\n> > (\"on\" of \"parallel\") or if \"streaming\" is not enabled then that would\n> > mean serializing each change.\n> >\n>\n> I agreed and updated the patch as Amit suggested.\n> Please see the attached patch.\n>\n\nThe patch looks good to me. Some minor comments are:\n\n- * limit, but we might also adapt a more elaborate eviction strategy\n- for example\n- * evicting enough transactions to free certain fraction (e.g. 
50%)\nof the memory\n- * limit.\n+ * limit, but we might also adapt a more elaborate eviction strategy - for\n+ * example evicting enough transactions to free certain fraction (e.g. 50%) of\n+ * the memory limit.\n\nThis change is not relevant with this feature.\n\n---\n+ if (logical_decoding_mode == LOGICAL_DECODING_MODE_DEFAULT\n+ && rb->size < logical_decoding_work_mem * 1024L)\n\nSince we put '&&' before the new line in all other places in\nreorderbuffer.c, I think it's better to make it consistent. The same\nis true for the change for while loop in the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Dec 2022 22:54:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
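The early-return condition quoted in the review can be rendered in Python to make the units explicit — logical_decoding_work_mem is in kilobytes, hence the 1024 multiplier. A rough sketch (the constant names echo the patch, but the code itself is only an illustration, not the actual reorderbuffer.c logic):

```python
LOGICAL_DECODING_MODE_DEFAULT = 0    # later renamed "buffered" in the thread
LOGICAL_DECODING_MODE_IMMEDIATE = 1

def should_evict(mode, rb_size_bytes, work_mem_kb):
    """Mirror of the bail-out test under discussion: skip eviction only
    when the mode is the default AND the reorder buffer is still under
    the configured memory limit (work_mem_kb is in kilobytes)."""
    if (mode == LOGICAL_DECODING_MODE_DEFAULT
            and rb_size_bytes < work_mem_kb * 1024):
        return False
    return True

# Under the 64kB limit in default mode: nothing to do yet.
print(should_evict(LOGICAL_DECODING_MODE_DEFAULT, 1024, 64))    # False
# In immediate mode the limit is irrelevant; processing always proceeds.
print(should_evict(LOGICAL_DECODING_MODE_IMMEDIATE, 1024, 64))  # True
```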
{
"msg_contents": "On Wed, Dec 21, 2022 at 7:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 21, 2022 at 10:14 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n>\n> The patch looks good to me. Some minor comments are:\n>\n> - * limit, but we might also adapt a more elaborate eviction strategy\n> - for example\n> - * evicting enough transactions to free certain fraction (e.g. 50%)\n> of the memory\n> - * limit.\n> + * limit, but we might also adapt a more elaborate eviction strategy - for\n> + * example evicting enough transactions to free certain fraction (e.g. 50%) of\n> + * the memory limit.\n>\n> This change is not relevant with this feature.\n>\n> ---\n> + if (logical_decoding_mode == LOGICAL_DECODING_MODE_DEFAULT\n> + && rb->size < logical_decoding_work_mem * 1024L)\n>\n> Since we put '&&' before the new line in all other places in\n> reorderbuffer.c, I think it's better to make it consistent. The same\n> is true for the change for while loop in the patch.\n>\n\nI have addressed these comments in the attached. Additionally, I have\nmodified the docs and commit messages to make those clear. I think\ninstead of adding new tests with this patch, it may be better to\nchange some of the existing tests related to streaming to use this\nparameter as that will clearly show one of the purposes of this patch.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 22 Dec 2022 12:35:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 4:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 21, 2022 at 7:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 21, 2022 at 10:14 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> >\n> > The patch looks good to me. Some minor comments are:\n> >\n> > - * limit, but we might also adapt a more elaborate eviction strategy\n> > - for example\n> > - * evicting enough transactions to free certain fraction (e.g. 50%)\n> > of the memory\n> > - * limit.\n> > + * limit, but we might also adapt a more elaborate eviction strategy - for\n> > + * example evicting enough transactions to free certain fraction (e.g. 50%) of\n> > + * the memory limit.\n> >\n> > This change is not relevant with this feature.\n> >\n> > ---\n> > + if (logical_decoding_mode == LOGICAL_DECODING_MODE_DEFAULT\n> > + && rb->size < logical_decoding_work_mem * 1024L)\n> >\n> > Since we put '&&' before the new line in all other places in\n> > reorderbuffer.c, I think it's better to make it consistent. The same\n> > is true for the change for while loop in the patch.\n> >\n>\n> I have addressed these comments in the attached. Additionally, I have\n> modified the docs and commit messages to make those clear.\n\nThanks!\n\n> I think\n> instead of adding new tests with this patch, it may be better to\n> change some of the existing tests related to streaming to use this\n> parameter as that will clearly show one of the purposes of this patch.\n\n+1. I think test_decoding/sql/stream.sql and spill.sql are good\ncandidates and we change logical replication TAP tests in a separate\npatch.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Dec 2022 16:16:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for updating the patch. I have also checked the patch\r\nand basically it has worked well. Almost all the things I found were modified\r\nby v4.\r\n\r\nOne comment: while setting logical_decoding_mode to a wrong value,\r\nI got an unfriendly ERROR message.\r\n\r\n```\r\npostgres=# SET logical_decoding_mode = 1;\r\nERROR: invalid value for parameter \"logical_decoding_mode\": \"1\"\r\nHINT: Available values: , immediate\r\n```\r\n\r\nHere all the acceptable enum values should be output in the HINT, but we cannot see the empty string.\r\nShould we modify config_enum_get_options() to treat the empty string specially, maybe\r\nrendering it like (empty)? Or we can ignore it for now.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 22 Dec 2022 07:18:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "At Thu, 22 Dec 2022 12:35:46 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> I have addressed these comments in the attached. Additionally, I have\n> modified the docs and commit messages to make those clear. I think\n> instead of adding new tests with this patch, it may be better to\n> change some of the existing tests related to streaming to use this\n> parameter as that will clearly show one of the purposes of this patch.\n\nBeing late, but I'm worried that we might someday regret the lack\nof an explicit default. Don't we really need it?\n\n+ Allows streaming or serializing changes immediately in logical decoding.\n+ The allowed values of <varname>logical_decoding_mode</varname> are the\n+ empty string and <literal>immediate</literal>. When set to\n+ <literal>immediate</literal>, stream each change if\n+ <literal>streaming</literal> option is enabled, otherwise, serialize\n+ each change. When set to an empty string, which is the default,\n+ decoding will stream or serialize changes when\n+ <varname>logical_decoding_work_mem</varname> is reached.\n\nWith (really) fresh eyes, it took me quite a while to understand what\nthe \"streaming\" option is. Couldn't we augment the description by a\nbit?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Dec 2022 16:45:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 4:18 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> Thank you for updating the patch. I have also checked the patch\n> and basically it has worked well. Almost all things I found were modified\n> by v4.\n>\n> One comment: while setting logical_decoding_mode to wrong value,\n> I got unfriendly ERROR message.\n>\n> ```\n> postgres=# SET logical_decoding_mode = 1;\n> ERROR: invalid value for parameter \"logical_decoding_mode\": \"1\"\n> HINT: Available values: , immediate\n> ```\n>\n> Here all acceptable enum should be output as HINT, but we could not see the empty string.\n> Should we modify config_enum_get_options() for treating empty string, maybe\n> like (empty)?\n\nGood point. I think the hint message can say \"The only allowed value\nis \\\"immediate\\\" as recovery_target does. Or considering the name of\nlogical_decoding_mode, we might want to have a non-empty string, say\n'normal' as Kuroda-san proposed, as the default value.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Dec 2022 16:59:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
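The unfriendly hint arises because enum GUC hints are assembled by comma-joining the allowed values, so an empty-string member becomes invisible. A tiny stand-in for that behavior (not the actual config_enum_get_options() code) shows why an explicit named default such as "buffered" reads better:

```python
def enum_hint(allowed_values):
    # Simplified model of an "Available values: ..." hint built by
    # comma-joining the option names.
    return "Available values: " + ", ".join(allowed_values)

# An empty-string member disappears into the punctuation ...
print(enum_hint(["", "immediate"]))           # Available values: , immediate
# ... while an explicit named default is self-explanatory.
print(enum_hint(["buffered", "immediate"]))   # Available values: buffered, immediate
```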
{
"msg_contents": "At Thu, 22 Dec 2022 16:59:30 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Thu, Dec 22, 2022 at 4:18 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear Amit,\n> >\n> > Thank you for updating the patch. I have also checked the patch\n> > and basically it has worked well. Almost all things I found were modified\n> > by v4.\n> >\n> > One comment: while setting logical_decoding_mode to wrong value,\n> > I got unfriendly ERROR message.\n> >\n> > ```\n> > postgres=# SET logical_decoding_mode = 1;\n> > ERROR: invalid value for parameter \"logical_decoding_mode\": \"1\"\n> > HINT: Available values: , immediate\n> > ```\n> >\n> > Here all acceptable enum should be output as HINT, but we could not see the empty string.\n> > Should we modify config_enum_get_options() for treating empty string, maybe\n> > like (empty)?\n> \n> Good point. I think the hint message can say \"The only allowed value\n> is \\\"immediate\\\" as recovery_target does. Or considering the name of\n> logical_decoding_mode, we might want to have a non-empty string, say\n> 'normal' as Kuroda-san proposed, as the default value.\n\nOh. I missed this, and agree to have the explicit default option.\nI slightly prefer \"buffered\" but \"normal\" also works fine for me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Dec 2022 17:25:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 1:55 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Dec 2022 16:59:30 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Thu, Dec 22, 2022 at 4:18 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Dear Amit,\n> > >\n> > > Thank you for updating the patch. I have also checked the patch\n> > > and basically it has worked well. Almost all things I found were modified\n> > > by v4.\n> > >\n> > > One comment: while setting logical_decoding_mode to wrong value,\n> > > I got unfriendly ERROR message.\n> > >\n> > > ```\n> > > postgres=# SET logical_decoding_mode = 1;\n> > > ERROR: invalid value for parameter \"logical_decoding_mode\": \"1\"\n> > > HINT: Available values: , immediate\n> > > ```\n> > >\n> > > Here all acceptable enum should be output as HINT, but we could not see the empty string.\n> > > Should we modify config_enum_get_options() for treating empty string, maybe\n> > > like (empty)?\n> >\n> > Good point. I think the hint message can say \"The only allowed value\n> > is \\\"immediate\\\" as recovery_target does. Or considering the name of\n> > logical_decoding_mode, we might want to have a non-empty string, say\n> > 'normal' as Kuroda-san proposed, as the default value.\n>\n> Oh. I missed this, and agree to have the explicit default option.\n> I slightly prefer \"buffered\" but \"normal\" also works fine for me.\n>\n\n+1 for \"buffered\" as that seems to convey the meaning better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Dec 2022 14:46:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 4:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I think\n> > instead of adding new tests with this patch, it may be better to\n> > change some of the existing tests related to streaming to use this\n> > parameter as that will clearly show one of the purposes of this patch.\n>\n> +1. I think test_decoding/sql/stream.sql and spill.sql are good\n> candidates and we change logical replication TAP tests in a separate\n> patch.\n>\n\nI prefer the other way, let's first do TAP tests because that will\nalso help immediately with the parallel apply feature. We need to\nexecute most of those tests in parallel mode.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Dec 2022 14:49:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 1:15 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Dec 2022 12:35:46 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > I have addressed these comments in the attached. Additionally, I have\n> > modified the docs and commit messages to make those clear. I think\n> > instead of adding new tests with this patch, it may be better to\n> > change some of the existing tests related to streaming to use this\n> > parameter as that will clearly show one of the purposes of this patch.\n>\n> Being late but I'm warried that we might sometime regret that the lack\n> of the explicit default. Don't we really need it?\n>\n\nFor this, I like your proposal for \"buffered\" as an explicit default value.\n\n> + Allows streaming or serializing changes immediately in logical decoding.\n> + The allowed values of <varname>logical_decoding_mode</varname> are the\n> + empty string and <literal>immediate</literal>. When set to\n> + <literal>immediate</literal>, stream each change if\n> + <literal>streaming</literal> option is enabled, otherwise, serialize\n> + each change. When set to an empty string, which is the default,\n> + decoding will stream or serialize changes when\n> + <varname>logical_decoding_work_mem</varname> is reached.\n>\n> With (really) fresh eyes, I took a bit long time to understand what\n> the \"streaming\" option is. Couldn't we augment the description by a\n> bit?\n>\n\nOkay, how about modifying it to: \"When set to\n<literal>immediate</literal>, stream each change if\n<literal>streaming</literal> option (see optional parameters set by\nCREATE SUBSCRIPTION) is enabled, otherwise, serialize each change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Dec 2022 14:53:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Dec 22, 2022 at 1:15 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Thu, 22 Dec 2022 12:35:46 +0530, Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote in\r\n> > > I have addressed these comments in the attached. Additionally, I have\r\n> > > modified the docs and commit messages to make those clear. I think\r\n> > > instead of adding new tests with this patch, it may be better to\r\n> > > change some of the existing tests related to streaming to use this\r\n> > > parameter as that will clearly show one of the purposes of this patch.\r\n> >\r\n> > Being late but I'm warried that we might sometime regret that the lack\r\n> > of the explicit default. Don't we really need it?\r\n> >\r\n> \r\n> For this, I like your proposal for \"buffered\" as an explicit default value.\r\n> \r\n> > + Allows streaming or serializing changes immediately in logical\r\n> decoding.\r\n> > + The allowed values of <varname>logical_decoding_mode</varname>\r\n> are the\r\n> > + empty string and <literal>immediate</literal>. When set to\r\n> > + <literal>immediate</literal>, stream each change if\r\n> > + <literal>streaming</literal> option is enabled, otherwise, serialize\r\n> > + each change. When set to an empty string, which is the default,\r\n> > + decoding will stream or serialize changes when\r\n> > + <varname>logical_decoding_work_mem</varname> is reached.\r\n> >\r\n> > With (really) fresh eyes, I took a bit long time to understand what\r\n> > the \"streaming\" option is. 
Couldn't we augment the description by a\r\n> > bit?\r\n> >\r\n> \r\n> Okay, how about modifying it to: \"When set to\r\n> <literal>immediate</literal>, stream each change if\r\n> <literal>streaming</literal> option (see optional parameters set by\r\n> CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change.\r\n> \r\n\r\nI updated the patch to use \"buffered\" as the explicit default value, and included\r\nAmit's changes to the document.\r\n\r\nBesides, I tried to reduce the data size in the streaming subscription tap tests with this\r\nnew GUC (see 0002 patch). But I didn't convert all the streaming tap tests because I\r\nthink we also need to cover the case that there are lots of changes. So, 015* is\r\nnot modified. And 017* is not modified because streaming transactions and\r\nnon-streaming transactions are tested alternately in this test.\r\n\r\nI collected the time to run these tests before and after applying the patch set\r\non my machine. In a debug build, it saves about 5.3 s; and in a release build,\r\nit saves about 1.8 s. The time of each test is attached.\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Thu, 22 Dec 2022 12:48:37 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "At Thu, 22 Dec 2022 14:53:34 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> For this, I like your proposal for \"buffered\" as an explicit default value.\n\nGood to hear.\n\n> Okay, how about modifying it to: \"When set to\n> <literal>immediate</literal>, stream each change if\n> <literal>streaming</literal> option (see optional parameters set by\n> CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change.\n\nLooks fine. Thanks!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:25:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 22, 2022 at 4:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I think\n> > > instead of adding new tests with this patch, it may be better to\n> > > change some of the existing tests related to streaming to use this\n> > > parameter as that will clearly show one of the purposes of this patch.\n> >\n> > +1. I think test_decoding/sql/stream.sql and spill.sql are good\n> > candidates and we change logical replication TAP tests in a separate\n> > patch.\n> >\n>\n> I prefer the other way, let's first do TAP tests because that will\n> also help immediately with the parallel apply feature. We need to\n> execute most of those tests in parallel mode.\n\nGood point. Or we can do both if changes for test_decoding tests are not huge?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:56:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 9:48 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Dec 22, 2022 5:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 22, 2022 at 1:15 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 22 Dec 2022 12:35:46 +0530, Amit Kapila\n> > <amit.kapila16@gmail.com> wrote in\n> > > > I have addressed these comments in the attached. Additionally, I have\n> > > > modified the docs and commit messages to make those clear. I think\n> > > > instead of adding new tests with this patch, it may be better to\n> > > > change some of the existing tests related to streaming to use this\n> > > > parameter as that will clearly show one of the purposes of this patch.\n> > >\n> > > Being late but I'm warried that we might sometime regret that the lack\n> > > of the explicit default. Don't we really need it?\n> > >\n> >\n> > For this, I like your proposal for \"buffered\" as an explicit default value.\n> >\n> > > + Allows streaming or serializing changes immediately in logical\n> > decoding.\n> > > + The allowed values of <varname>logical_decoding_mode</varname>\n> > are the\n> > > + empty string and <literal>immediate</literal>. When set to\n> > > + <literal>immediate</literal>, stream each change if\n> > > + <literal>streaming</literal> option is enabled, otherwise, serialize\n> > > + each change. When set to an empty string, which is the default,\n> > > + decoding will stream or serialize changes when\n> > > + <varname>logical_decoding_work_mem</varname> is reached.\n> > >\n> > > With (really) fresh eyes, I took a bit long time to understand what\n> > > the \"streaming\" option is. 
Couldn't we augment the description by a\n> > > bit?\n> > >\n> >\n> > Okay, how about modifying it to: \"When set to\n> > <literal>immediate</literal>, stream each change if\n> > <literal>streaming</literal> option (see optional parameters set by\n> > CREATE SUBSCRIPTION) is enabled, otherwise, serialize each change.\n> >\n>\n> I updated the patch to use \"buffered\" as the explicit default value, and include\n> Amit's changes about document.\n>\n> Besides, I tried to reduce data size in streaming subscription tap tests by this\n> new GUC (see 0002 patch). But I didn't covert all streaming tap tests because I\n> think we also need to cover the case that there are lots of changes. So, 015* is\n> not modified.\n\nIf we want to eventually convert 015 some time, isn't it better to\ninclude it even if it requires many changes? Is there any reason we\nwant to change 017 in a separate patch?\n\n> And 017* is not modified because streaming transactions and\n> non-streaming transactions are tested alternately in this test.\n\nHow about 029_on_error.pl? It also sets logical_decoding_work_mem to\n64kb to test the STREAM COMMIT case.\n\n>\n> I collected the time to run these tests before and after applying the patch set\n> on my machine. In debug version, it saves about 5.3 s; and in release version,\n> it saves about 1.8 s. The time of each test is attached.\n\nNice improvements.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Dec 2022 14:02:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear Shi,\r\n\r\nThanks for updating the patch! The following are my comments.\r\n\r\nReorderBufferCheckMemoryLimit()\r\n\r\n```\r\n+\t/*\r\n+\t * Bail out if logical_decoding_mode is disabled and we haven't exceeded\r\n+\t * the memory limit.\r\n+\t */\r\n```\r\n\r\nI think 'disabled' should be '\"buffered\"'.\r\n\r\n```\r\n+\t * If logical_decoding_mode is immediate, loop until there's no change.\r\n+\t * Otherwise, loop until we reach under the memory limit. One might think\r\n+\t * that just by evicting the largest (sub)transaction we will come under\r\n+\t * the memory limit based on assumption that the selected transaction is\r\n+\t * at least as large as the most recent change (which caused us to go over\r\n+\t * the memory limit). However, that is not true because a user can reduce\r\n+\t * the logical_decoding_work_mem to a smaller value before the most recent\r\n \t * change.\r\n \t */\r\n```\r\n\r\nDo we need to pick the largest (sub)transaction even if we are in the immediate mode?\r\nIt seems that the linear search is done in ReorderBufferLargestStreamableTopTXN()\r\nto find the largest transaction, but in this case we can choose an arbitrary one.\r\n\r\nreorderbuffer.h\r\n\r\n+/* possible values for logical_decoding_mode */\r\n+typedef enum\r\n+{\r\n+\tLOGICAL_DECODING_MODE_BUFFERED,\r\n+\tLOGICAL_DECODING_MODE_IMMEDIATE\r\n+}\t\t\tLogicalDecodingMode;\r\n\r\nI'm not sure, but do we have to modify typedefs.list?\r\n\r\n\r\n\r\nMoreover, I have also checked the improvement in elapsed time.\r\nAll builds were made with the meson system, and the unit of each cell is seconds.\r\nIt seemed that the results show the same trend as Shi's.\r\n\r\nDebug build:\r\n\r\n\tHEAD\tPATCH\tDelta\r\n16\t6.44\t5.96\t0.48\r\n18\t6.92\t6.47\t0.45\r\n19\t5.93\t5.91\t0.02\r\n22\t14.98\t13.7\t1.28\r\n23\t12.01\t8.22\t3.79\r\n\r\nSUM of delta: 6.02s\r\n\r\nRelease build:\r\n\r\n\tHEAD\tPATCH\tDelta\r\n16\t3.51\t3.22\t0.29\r\n17\t4.04\t3.48\t0.56\r\n19\t3.43\t3.3\t0.13\r\n22\t10.06\t8.46\t1.6\r\n23\t6.74\t5.39\t1.35\r\n\r\nSUM of delta: 3.93s\r\n\r\n\r\nI will check and report the test coverage if I can.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 23 Dec 2022 05:18:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
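The loop behavior being reviewed — drain every change in immediate mode, but stop once under the limit in the default mode, evicting the largest (sub)transaction each round — can be modeled compactly. This is a hedged sketch with plain integers standing in for ReorderBufferTXN entries, not the real reorderbuffer code:

```python
def run_memory_limit_loop(mode, txn_sizes_bytes, work_mem_kb):
    """Model of the eviction loop under review.

    In "immediate" mode the loop keeps evicting until no buffered
    change remains; otherwise it stops as soon as the total size drops
    under logical_decoding_work_mem (given in kB). Returns the number
    of evictions performed.
    """
    txns = list(txn_sizes_bytes)
    limit = work_mem_kb * 1024
    evictions = 0
    while txns:
        if mode != "immediate" and sum(txns) < limit:
            break
        txns.remove(max(txns))  # evict the largest (sub)transaction
        evictions += 1
    return evictions

# Default mode: evicting the largest transaction gets under the 64kB limit.
print(run_memory_limit_loop("buffered", [60_000, 8_000], 64))   # 1
# Immediate mode: every transaction is evicted regardless of size.
print(run_memory_limit_loop("immediate", [10, 20, 30], 64))     # 3
```

The sketch keeps the largest-first selection in both modes for simplicity, matching the point made in reply that special-casing immediate mode to skip the search would complicate the code for little gain.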
{
"msg_contents": "On Fri, Dec 23, 2022 at 10:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > Besides, I tried to reduce data size in streaming subscription tap tests by this\n> > new GUC (see 0002 patch). But I didn't covert all streaming tap tests because I\n> > think we also need to cover the case that there are lots of changes. So, 015* is\n> > not modified.\n>\n> If we want to eventually convert 015 some time, isn't it better to\n> include it even if it requires many changes?\n>\n\nI think there is some confusion here because the point is that we\ndon't want to convert all the tests. It would be good to have some\ntests which follow the regular path of logical_decoding_work_mem.\n\n> Is there any reason we\n> want to change 017 in a separate patch?\n>\n\nHere also, the idea is to leave it as it is. It has a mix of stream\nand non-stream cases and it would be tricky to convert it because we\nneed to change GUC before streamed one and reload the config and\nensure reloaded config is in effect.\n\n> > And 017* is not modified because streaming transactions and\n> > non-streaming transactions are tested alternately in this test.\n>\n> How about 029_on_error.pl? It also sets logical_decoding_work_mem to\n> 64kb to test the STREAM COMMIT case.\n>\n\nHow would you like to change this? Do we want to enable the GUC or\nstreaming option just before that case and wait for it? If so, that\nmight take more time than we save.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 10:49:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 10:48 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> ```\n> + * If logical_decoding_mode is immediate, loop until there's no change.\n> + * Otherwise, loop until we reach under the memory limit. One might think\n> + * that just by evicting the largest (sub)transaction we will come under\n> + * the memory limit based on assumption that the selected transaction is\n> + * at least as large as the most recent change (which caused us to go over\n> + * the memory limit). However, that is not true because a user can reduce\n> + * the logical_decoding_work_mem to a smaller value before the most recent\n> * change.\n> */\n> ```\n>\n> Do we need to pick the largest (sub)transaciton even if we are in the immediate mode?\n> It seems that the liner search is done in ReorderBufferLargestStreamableTopTXN()\n> to find the largest transaction, but in this case we can choose the arbitrary one.\n>\n\nIn immediate mode, we will stream/spill each change, so ideally, we\ndon't need to perform any search. Otherwise, also, I think changing\nthose functions will complicate the code without serving any purpose.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:02:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 6:18 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n>\n> Besides, I tried to reduce data size in streaming subscription tap tests by this\n> new GUC (see 0002 patch). But I didn't covert all streaming tap tests because I\n> think we also need to cover the case that there are lots of changes. So, 015* is\n> not modified. And 017* is not modified because streaming transactions and\n> non-streaming transactions are tested alternately in this test.\n>\n\nI think we can remove the newly added test from the patch and instead\ncombine the 0001 and 0002 patches. I think we should leave the\n022_twophase_cascade as it is because it can impact code coverage,\nespecially the below part of the test:\n# 2PC PREPARE with a nested ROLLBACK TO SAVEPOINT\n$node_A->safe_psql(\n 'postgres', \"\n BEGIN;\n INSERT INTO test_tab VALUES (9999, 'foobar');\n SAVEPOINT sp_inner;\n INSERT INTO test_tab SELECT i, md5(i::text) FROM\ngenerate_series(3, 5000) s(i);\n\nHere, we will stream first time after the subtransaction, so can\nimpact the below part of the code in ReorderBufferStreamTXN:\nif (txn->snapshot_now == NULL)\n{\n...\ndlist_foreach(subxact_i, &txn->subtxns)\n{\nReorderBufferTXN *subtxn;\n\nsubtxn = dlist_container(ReorderBufferTXN, node, subxact_i.cur);\nReorderBufferTransferSnapToParent(txn, subtxn);\n}\n...\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:20:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> I will check and report the test coverage if I can.\r\n\r\nI ran make coverage. PSA the screen shot that shows results.\r\nAccording to the result the coverage seemed to be not changed\r\neven if the elapsed time was reduced.\r\n\r\nOnly following lines at process_syncing_tables_for_apply() seemed to be not hit after patching,\r\nbut I thought it was the timing issue because we do not modify around there.\r\n\r\n```\r\n\t\t\t\t\t/*\r\n\t\t\t\t\t * Enter busy loop and wait for synchronization worker to\r\n\t\t\t\t\t * reach expected state (or die trying).\r\n\t\t\t\t\t */\r\n\t\t\t\t\tif (!started_tx)\r\n\t\t\t\t\t{\r\n\t\t\t\t\t\tStartTransactionCommand();\r\n\t\t\t\t\t\tstarted_tx = true;\r\n\t\t\t\t\t}\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 23 Dec 2022 07:42:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 1:12 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n>\n> > I will check and report the test coverage if I can.\n>\n> I ran make coverage. PSA the screen shot that shows results.\n> According to the result the coverage seemed to be not changed\n> even if the elapsed time was reduced.\n>\n> Only following lines at process_syncing_tables_for_apply() seemed to be not hit after patching,\n> but I thought it was the timing issue because we do not modify around there.\n>\n> ```\n> /*\n> * Enter busy loop and wait for synchronization worker to\n> * reach expected state (or die trying).\n> */\n> if (!started_tx)\n> {\n> StartTransactionCommand();\n> started_tx = true;\n> }\n> ```\n>\n\nThis part of the code is related to synchronization between apply and\nsync workers which depends upon timing. So, we can ignore this\ndifference.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 13:59:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 1:50 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Thu, Dec 22, 2022 at 6:18 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> >\r\n> > Besides, I tried to reduce data size in streaming subscription tap tests by this\r\n> > new GUC (see 0002 patch). But I didn't covert all streaming tap tests\r\n> because I\r\n> > think we also need to cover the case that there are lots of changes. So, 015*\r\n> is\r\n> > not modified. And 017* is not modified because streaming transactions and\r\n> > non-streaming transactions are tested alternately in this test.\r\n> >\r\n> \r\n> I think we can remove the newly added test from the patch and instead\r\n> combine the 0001 and 0002 patches. I think we should leave the\r\n> 022_twophase_cascade as it is because it can impact code coverage,\r\n> especially the below part of the test:\r\n> # 2PC PREPARE with a nested ROLLBACK TO SAVEPOINT\r\n> $node_A->safe_psql(\r\n> 'postgres', \"\r\n> BEGIN;\r\n> INSERT INTO test_tab VALUES (9999, 'foobar');\r\n> SAVEPOINT sp_inner;\r\n> INSERT INTO test_tab SELECT i, md5(i::text) FROM\r\n> generate_series(3, 5000) s(i);\r\n> \r\n> Here, we will stream first time after the subtransaction, so can\r\n> impact the below part of the code in ReorderBufferStreamTXN:\r\n> if (txn->snapshot_now == NULL)\r\n> {\r\n> ...\r\n> dlist_foreach(subxact_i, &txn->subtxns)\r\n> {\r\n> ReorderBufferTXN *subtxn;\r\n> \r\n> subtxn = dlist_container(ReorderBufferTXN, node, subxact_i.cur);\r\n> ReorderBufferTransferSnapToParent(txn, subtxn);\r\n> }\r\n> ...\r\n> \r\n\r\nOK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\r\n\r\nPlease see the attached patch.\r\nI also fixed Kuroda-san's comments[1].\r\n\r\n[1] https://www.postgresql.org/message-id/TYAPR01MB5866CD99CF86EAC84119BC91F5E99%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Fri, 23 Dec 2022 08:32:45 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "Dear Shi,\r\n\r\n> I also fixed Kuroda-san's comments[1].\r\n\r\nThanks for updating the patch! I confirmed that my comments were fixed and\r\nyour patch could pass subscription and test_decoding tests. I think we can\r\nchange the status to \"Ready for Committer\".\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 23 Dec 2022 09:12:18 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 2:03 PM shiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>\nwrote:\n>\n> On Fri, Dec 23, 2022 1:50 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Thu, Dec 22, 2022 at 6:18 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > >\n> > > Besides, I tried to reduce data size in streaming subscription tap\ntests by this\n> > > new GUC (see 0002 patch). But I didn't covert all streaming tap tests\n> > because I\n> > > think we also need to cover the case that there are lots of changes.\nSo, 015*\n> > is\n> > > not modified. And 017* is not modified because streaming transactions\nand\n> > > non-streaming transactions are tested alternately in this test.\n> > >\n> >\n> > I think we can remove the newly added test from the patch and instead\n> > combine the 0001 and 0002 patches. I think we should leave the\n> > 022_twophase_cascade as it is because it can impact code coverage,\n> > especially the below part of the test:\n> > # 2PC PREPARE with a nested ROLLBACK TO SAVEPOINT\n> > $node_A->safe_psql(\n> > 'postgres', \"\n> > BEGIN;\n> > INSERT INTO test_tab VALUES (9999, 'foobar');\n> > SAVEPOINT sp_inner;\n> > INSERT INTO test_tab SELECT i, md5(i::text) FROM\n> > generate_series(3, 5000) s(i);\n> >\n> > Here, we will stream first time after the subtransaction, so can\n> > impact the below part of the code in ReorderBufferStreamTXN:\n> > if (txn->snapshot_now == NULL)\n> > {\n> > ...\n> > dlist_foreach(subxact_i, &txn->subtxns)\n> > {\n> > ReorderBufferTXN *subtxn;\n> >\n> > subtxn = dlist_container(ReorderBufferTXN, node, subxact_i.cur);\n> > ReorderBufferTransferSnapToParent(txn, subtxn);\n> > }\n> > ...\n> >\n>\n> OK, I removed the modification in 022_twophase_cascade.pl and combine the\ntwo patches.\n>\n> Please see the attached patch.\n> I also fixed Kuroda-san's comments[1].\n>\n> 
[1]\nhttps://www.postgresql.org/message-id/TYAPR01MB5866CD99CF86EAC84119BC91F5E99%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n>\n> Regards,\n> Shi yu\n\nHello,\nI ran tests (4 runs) on both versions (v5 and v6) in release mode. And the\ndata looks promising, time is reduced now:\n\n HEAD V5\n Delta (sec)\n2:20.535307 2:15.865241\n4.670066\n2:19.220917 2:14.445312\n4.775605\n2:22.492128 2:17.35755\n 5.134578\n2:20.737309 2:15.564306\n5.173003\n\nHEAD V6\n Delta (sec)\n2:20.535307 2:15.363567\n5.17174\n2:19.220917 2:15.079082\n4.14.1835\n2:22.492128 2:16.244139\n6.247989\n2:20.737309 2:16.108033\n4.629276\n\nthanks\nShveta",
"msg_date": "Fri, 23 Dec 2022 15:15:54 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 5:32 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Fri, Dec 23, 2022 1:50 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Thu, Dec 22, 2022 at 6:18 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > >\n> > > Besides, I tried to reduce data size in streaming subscription tap tests by this\n> > > new GUC (see 0002 patch). But I didn't covert all streaming tap tests\n> > because I\n> > > think we also need to cover the case that there are lots of changes. So, 015*\n> > is\n> > > not modified. And 017* is not modified because streaming transactions and\n> > > non-streaming transactions are tested alternately in this test.\n> > >\n> >\n> > I think we can remove the newly added test from the patch and instead\n> > combine the 0001 and 0002 patches. I think we should leave the\n> > 022_twophase_cascade as it is because it can impact code coverage,\n> > especially the below part of the test:\n> > # 2PC PREPARE with a nested ROLLBACK TO SAVEPOINT\n> > $node_A->safe_psql(\n> > 'postgres', \"\n> > BEGIN;\n> > INSERT INTO test_tab VALUES (9999, 'foobar');\n> > SAVEPOINT sp_inner;\n> > INSERT INTO test_tab SELECT i, md5(i::text) FROM\n> > generate_series(3, 5000) s(i);\n> >\n> > Here, we will stream first time after the subtransaction, so can\n> > impact the below part of the code in ReorderBufferStreamTXN:\n> > if (txn->snapshot_now == NULL)\n> > {\n> > ...\n> > dlist_foreach(subxact_i, &txn->subtxns)\n> > {\n> > ReorderBufferTXN *subtxn;\n> >\n> > subtxn = dlist_container(ReorderBufferTXN, node, subxact_i.cur);\n> > ReorderBufferTransferSnapToParent(txn, subtxn);\n> > }\n> > ...\n> >\n>\n> OK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\n\nThank you for updating the patch. The v6 patch looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 24 Dec 2022 02:01:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 10:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Dec 23, 2022 at 5:32 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Fri, Dec 23, 2022 1:50 PM Amit Kapila <amit.kapila16@gmail.com>\n> > >\n> > > On Thu, Dec 22, 2022 at 6:18 PM shiy.fnst@fujitsu.com\n> > > <shiy.fnst@fujitsu.com> wrote:\n> > > >\n> > > >\n> > > > Besides, I tried to reduce data size in streaming subscription tap tests by this\n> > > > new GUC (see 0002 patch). But I didn't covert all streaming tap tests\n> > > because I\n> > > > think we also need to cover the case that there are lots of changes. So, 015*\n> > > is\n> > > > not modified. And 017* is not modified because streaming transactions and\n> > > > non-streaming transactions are tested alternately in this test.\n> > > >\n> > >\n> > > I think we can remove the newly added test from the patch and instead\n> > > combine the 0001 and 0002 patches. I think we should leave the\n> > > 022_twophase_cascade as it is because it can impact code coverage,\n> > > especially the below part of the test:\n> > > # 2PC PREPARE with a nested ROLLBACK TO SAVEPOINT\n> > > $node_A->safe_psql(\n> > > 'postgres', \"\n> > > BEGIN;\n> > > INSERT INTO test_tab VALUES (9999, 'foobar');\n> > > SAVEPOINT sp_inner;\n> > > INSERT INTO test_tab SELECT i, md5(i::text) FROM\n> > > generate_series(3, 5000) s(i);\n> > >\n> > > Here, we will stream first time after the subtransaction, so can\n> > > impact the below part of the code in ReorderBufferStreamTXN:\n> > > if (txn->snapshot_now == NULL)\n> > > {\n> > > ...\n> > > dlist_foreach(subxact_i, &txn->subtxns)\n> > > {\n> > > ReorderBufferTXN *subtxn;\n> > >\n> > > subtxn = dlist_container(ReorderBufferTXN, node, subxact_i.cur);\n> > > ReorderBufferTransferSnapToParent(txn, subtxn);\n> > > }\n> > > ...\n> > >\n> >\n> > OK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\n>\n> Thank you for updating the patch. 
The v6 patch looks good to me.\n>\n\nLGTM as well. I'll push this on Monday.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 24 Dec 2022 08:56:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Sat, Dec 24, 2022 at 8:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > OK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\n> >\n> > Thank you for updating the patch. The v6 patch looks good to me.\n> >\n>\n> LGTM as well. I'll push this on Monday.\n>\n\nThe patch looks good to me.\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 24 Dec 2022 15:28:02 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Sat, Dec 24, 2022 at 3:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Dec 24, 2022 at 8:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > OK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\n> > >\n> > > Thank you for updating the patch. The v6 patch looks good to me.\n> > >\n> >\n> > LGTM as well. I'll push this on Monday.\n> >\n>\n> The patch looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Dec 2022 14:04:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-26 14:04:28 +0530, Amit Kapila wrote:\n> Pushed.\n\nI did not follow this thread but saw the commit. Could you explain why a GUC\nis the right mechanism here? The commit message didn't explain why a GUC was\nchosen.\n\nTo me an option like this should be passed in when decoding rather than a GUC.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Dec 2022 12:12:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 1:42 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-12-26 14:04:28 +0530, Amit Kapila wrote:\n> > Pushed.\n>\n> I did not follow this thread but saw the commit. Could you explain why a GUC\n> is the right mechanism here? The commit message didn't explain why a GUC was\n> chosen.\n>\n> To me an option like this should be passed in when decoding rather than a GUC.\n>\n\nFor that, we need to have a subscription option for this as we need to\nreduce test timing for streaming TAP tests (by streaming, I mean\nreplication of large in-progress transactions) as well. Currently,\nthere is no subscription option that is merely used for\ntesting/debugging purpose and there doesn't seem to be a use for this\nfor users. So, we didn't want to expose it as a user option. There is\nalso a risk that if users use it they will face a slowdown.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Dec 2022 07:49:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Mon, 26 Dec 2022 at 14:04, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Dec 24, 2022 at 3:28 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, Dec 24, 2022 at 8:56 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > OK, I removed the modification in 022_twophase_cascade.pl and combine the two patches.\n> > > >\n> > > > Thank you for updating the patch. The v6 patch looks good to me.\n> > > >\n> > >\n> > > LGTM as well. I'll push this on Monday.\n> > >\n> >\n> > The patch looks good to me.\n> >\n>\n> Pushed.\n\nIf there is nothing pending for this patch, can we close the commitfest entry?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:33:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Force streaming every change in logical decoding"
},
{
"msg_contents": "On Thu, Dec 22, 2022 3:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > I think\r\n> > instead of adding new tests with this patch, it may be better to\r\n> > change some of the existing tests related to streaming to use this\r\n> > parameter as that will clearly show one of the purposes of this patch.\r\n> \r\n> +1. I think test_decoding/sql/stream.sql and spill.sql are good\r\n> candidates and we change logical replication TAP tests in a separate\r\n> patch.\r\n> \r\n\r\nHi,\r\n\r\nI tried to reduce the data size in test_decoding tests by using the new added\r\nGUC \"logical_decoding_mode\", and found that the new GUC is not suitable for\r\nsome cases.\r\n\r\nFor example, in the following cases in stream.sql (and there are some similar\r\ncases in twophase_stream.sql), the changes in sub transaction exceed\r\nlogical_decoding_work_mem, but they won't be streamed because the it is rolled\r\nback. (But the transaction is marked as streamed.) After the sub transaction,\r\nthere is a small amount of inserts, as logical_decoding_work_mem is not\r\nexceeded, it will be streamed when decoding COMMIT. If we use\r\nlogical_decoding_mode=immediate, the small amount of inserts in toplevel\r\ntransaction will be streamed immediately. 
This is different from before, so I'm\r\nnot sure we can use logical_decoding_mode in this case.\r\n\r\n```\r\n-- streaming test with sub-transaction\r\nBEGIN;\r\nsavepoint s1;\r\nSELECT 'msg5' FROM pg_logical_emit_message(true, 'test', repeat('a', 50));\r\nINSERT INTO stream_test SELECT repeat('a', 2000) || g.i FROM generate_series(1, 35) g(i);\r\nTRUNCATE table stream_test;\r\nrollback to s1;\r\nINSERT INTO stream_test SELECT repeat('a', 10) || g.i FROM generate_series(1, 20) g(i);\r\nCOMMIT;\r\n```\r\n\r\nBesides, some cases in spill.sql can't use the new GUC because they want to\r\nserialize when processing the specific toplevel transaction or sub transaction.\r\nFor example, in the case below, the changes in the subxact exceed\r\nlogical_decoding_work_mem and are serialized, and the insert after subxact is\r\nnot serialized. If logical_decoding_mode is used, both of them will be\r\nserialized. This is not what we want to test.\r\n\r\n```\r\n-- spilling subxact, non-spilling main xact\r\nBEGIN;\r\nSAVEPOINT s;\r\nINSERT INTO spill_test SELECT 'serialize-subbig-topsmall--1:'||g.i FROM generate_series(1, 5000) g(i);\r\nRELEASE SAVEPOINT s;\r\nINSERT INTO spill_test SELECT 'serialize-subbig-topsmall--2:'||g.i FROM generate_series(5001, 5001) g(i);\r\nCOMMIT;\r\n```\r\n\r\nAlthough the rest cases in spill.sql can use new GUC, but it needs set and reset\r\nlogical_decoding_mode many times. Besides, the tests in this file were added\r\nbefore logical_decoding_work_mem was introduced, I am not sure if we can modify\r\nthese cases by using the new GUC.\r\n\r\nAlso, it looks the tests for toast in stream.sql can't be changed, too.\r\n\r\nDue to the above reasons, it seems that currently few tests can be modified to\r\nuse logical_decoding_mode. If later we find some tests can changed to use\r\nit, we can do it in a separate thread. I will close the CF entry.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Fri, 6 Jan 2023 06:17:29 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Force streaming every change in logical decoding"
}
] |
[
{
"msg_contents": "While trying the codes of MemoizePath I noticed that MemoizePath cannot\nbe generated if inner path is a union-all AppendPath, even if it is\nparameterized. And I found the problem is that for a union-all\nAppendPath the ParamPathInfo is not created in the usual way. Instead\nit is created with get_appendrel_parampathinfo, which just creates a\nstruct with no ppi_clauses. As a result, MemoizePath cannot find a\nproper cache key to use.\n\nThis problem does not exist for AppendPath generated for a partitioned\ntable though, because in this case we always create a normal param_info\nwith ppi_clauses, for use in run-time pruning.\n\nAs an illustration, we can use the table 'prt' in sql/memoize.sql with\nthe settings below\n\nset enable_hashjoin to off;\nset enable_mergejoin to off;\nset enable_seqscan to off;\nset enable_material to off;\n\nexplain (costs off) select * from prt_p1 t1 join prt t2 on t1.a = t2.a;\n QUERY PLAN\n------------------------------------------------------------------\n Nested Loop\n -> Index Only Scan using iprt_p1_a on prt_p1 t1\n -> Memoize\n Cache Key: t1.a\n Cache Mode: logical\n -> Append\n -> Index Only Scan using iprt_p1_a on prt_p1 t2_1\n Index Cond: (a = t1.a)\n -> Index Only Scan using iprt_p2_a on prt_p2 t2_2\n Index Cond: (a = t1.a)\n(10 rows)\n\nexplain (costs off) select * from prt_p1 t1 join (select * from prt_p1\nunion all select * from prt_p2) t2 on t1.a = t2.a;\n QUERY PLAN\n-------------------------------------------------------\n Nested Loop\n -> Index Only Scan using iprt_p1_a on prt_p1 t1\n -> Append\n -> Index Only Scan using iprt_p1_a on prt_p1\n Index Cond: (a = t1.a)\n -> Index Only Scan using iprt_p2_a on prt_p2\n Index Cond: (a = t1.a)\n(7 rows)\n\nAs we can see, MemoizePath can be generated for partitioned AppendPath\nbut not for union-all AppendPath.\n\nFor the fix I think we can relax the check in create_append_path and\nalways use get_baserel_parampathinfo if the parent is a baserel.\n\n--- 
a/src/backend/optimizer/util/pathnode.c\n+++ b/src/backend/optimizer/util/pathnode.c\n@@ -1266,7 +1266,7 @@ create_append_path(PlannerInfo *root,\n * partition, and it's not necessary anyway in that case. Must skip it\nif\n * we don't have \"root\", too.)\n */\n- if (root && rel->reloptkind == RELOPT_BASEREL &&\nIS_PARTITIONED_REL(rel))\n+ if (root && rel->reloptkind == RELOPT_BASEREL)\n pathnode->path.param_info = get_baserel_parampathinfo(root,\n rel,\n\nrequired_outer);\n\nThanks\nRichard",
"msg_date": "Tue, 6 Dec 2022 17:00:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "A problem about ParamPathInfo for an AppendPath"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:00 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> As we can see, MemoizePath can be generated for partitioned AppendPath\n> but not for union-all AppendPath.\n>\n> For the fix I think we can relax the check in create_append_path and\n> always use get_baserel_parampathinfo if the parent is a baserel.\n>\n\nBTW, IIUC currently we don't generate any parameterized MergeAppend\npaths, as explained in generate_orderedappend_paths. So the codes that\ngather information from a MergeAppend path's param_info for run-time\npartition pruning in create_merge_append_plan seem unnecessary.\n\nAttached is a patch for this change and the changes described upthread.\n\nThanks\nRichard",
"msg_date": "Mon, 12 Dec 2022 17:55:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A problem about ParamPathInfo for an AppendPath"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Attached is a patch for this change and the changes described upthread.\n\nPushed. I thought the comment needed to be completely rewritten not just\ntweaked, and I felt it was probably reasonable to continue to exclude\ndummy paths from getting the more expensive treatment.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 18:15:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: A problem about ParamPathInfo for an AppendPath"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 6:15 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pushed. I thought the comment needed to be completely rewritten not just\n> tweaked, and I felt it was probably reasonable to continue to exclude\n> dummy paths from getting the more expensive treatment.\n\n\nYes agreed. Thanks for the changes and pushing.\n\nThanks\nRichard",
"msg_date": "Fri, 17 Mar 2023 11:19:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A problem about ParamPathInfo for an AppendPath"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAs discussed elsewhere [0], \\dp doesn't show privileges on system objects,\nand this behavior is not mentioned in the docs. I've attached a small\npatch that adds support for the S modifier (i.e., \\dpS) and the adjusts the\ndocs.\n\nThoughts?\n\n[0] https://postgr.es/m/a2382acd-e465-85b2-9d8e-f9ed1a5a66e9%40postgrespro.ru\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Dec 2022 11:36:06 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "add \\dpS to psql"
},
{
"msg_contents": "On 06.12.2022 22:36, Nathan Bossart wrote:\n\n> As discussed elsewhere [0], \\dp doesn't show privileges on system objects,\n> and this behavior is not mentioned in the docs. I've attached a small\n> patch that adds support for the S modifier (i.e., \\dpS) and the adjusts the\n> docs.\n>\n> Thoughts?\n>\n> [0] https://postgr.es/m/a2382acd-e465-85b2-9d8e-f9ed1a5a66e9%40postgrespro.ru\n\nA few words in support of this patch, since I was the initiator of the \ndiscussion.\n\nBefore VACUUM, ANALYZE privileges, there was no such question.\nWhy check privileges on system catalog objects? But now it doesn't.\n\nIt is now possible to grant privileges on system tables,\nso it should be possible to see privileges with psql commands.\nHowever, the \\dp command does not support the S modifier, which is \ninconsistent.\n\nFurthermore. The VACUUM privilege allows you to also execute VACUUM FULL.\nVACUUM and VACUUM FULL are commands with similar names, but work \ncompletely differently.\nIt may be worth clarifying on this page: \nhttps://www.postgresql.org/docs/devel/ddl-priv.html\n\nSomething like: Allows VACUUM on a relation, including VACUUM FULL.\n\nBut that's not all.\n\nThere is a very similar command to VACUUM FULL with a different name - \nCLUSTER.\nThe VACUUM privilege does not apply to the CLUSTER command. This is \nprobably correct.\nHowever, the documentation for the CLUSTER command does not say\nwho can perform this command. I think it would be correct to add a sentence\nto the Notes section \n(https://www.postgresql.org/docs/devel/sql-cluster.html)\nsimilar to the one in the VACUUM documentation:\n\n\"To cluster a table, one must ordinarily be the table's owner or a \nsuperuser.\"\n\nReady to participate, if it seems reasonable.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Wed, 7 Dec 2022 10:48:49 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 10:48:49AM +0300, Pavel Luzanov wrote:\n> There is a very similar command to VACUUM FULL with a different name -\n> CLUSTER.\n> The VACUUM privilege does not apply to the CLUSTER command. This is probably\n> correct.\n> However, the documentation for the CLUSTER command does not say\n> who can perform this command. I think it would be correct to add a sentence\n> to the Notes section\n> (https://www.postgresql.org/docs/devel/sql-cluster.html)\n> similar to the one in the VACUUM documentation:\n> \n> \"To cluster a table, one must ordinarily be the table's owner or a\n> superuser.\"\n\nI created a new thread for this:\n\n\thttps://postgr.es/m/20221207223924.GA4182184%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 14:41:05 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 10:48:49AM +0300, Pavel Luzanov wrote:\n> Furthermore. The VACUUM privilege allows you to also execute VACUUM FULL.\n> VACUUM and VACUUM FULL are commands with similar names, but work completely\n> differently.\n> It may be worth clarifying on this page:\n> https://www.postgresql.org/docs/devel/ddl-priv.html\n> \n> Something like: Allows VACUUM on a relation, including VACUUM FULL.\n\nSince (as you said) they work completely differently, I think it'd be\nmore useful if vacuum_full were a separate privilege, rather than being\nincluded in vacuum. And cluster could be allowed whenever vacuum_full\nis allowed.\n\n> There is a very similar command to VACUUM FULL with a different name -\n> CLUSTER. The VACUUM privilege does not apply to the CLUSTER command.\n> This is probably correct.\n\nI think if vacuum privilege allows vacuum full, then it ought to also\nallow cluster. But I suggest that it'd be even better if it doesn't\nallow either, and there was a separate privilege for those.\n\nDisclaimer: I have not been following these threads.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:39:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 08:39:30PM -0600, Justin Pryzby wrote:\n> I think if vacuum privilege allows vacuum full, then it ought to also\n> allow cluster. But I suggest that it'd be even better if it doesn't\n> allow either, and there was a separate privilege for those.\n> \n> Disclaimer: I have not been following these threads.\n\nI haven't formed an opinion on whether VACUUM FULL should get its own bit,\nbut FWIW І just finished writing the first draft of a patch set to add bits\nfor CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX. I plan to post that\ntomorrow.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:19:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I haven't formed an opinion on whether VACUUM FULL should get its own bit,\n> but FWIW І just finished writing the first draft of a patch set to add bits\n> for CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX. I plan to post that\n> tomorrow.\n\nThe fact that we just doubled the number of available bits doesn't\nmean we should immediately strive to use them up. Perhaps it'd\nbe better to subsume these retail privileges under some generic\n\"maintenance action\" privilege?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Dec 2022 23:25:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 11:25:32PM -0500, Tom Lane wrote:\n> The fact that we just doubled the number of available bits doesn't\n> mean we should immediately strive to use them up. Perhaps it'd\n> be better to subsume these retail privileges under some generic\n> \"maintenance action\" privilege?\n\nThat's fine with me, but I wouldn't be surprised if there's disagreement on\nhow to group the commands. I certainly don't want to use up the rest of\nthe bits right away, but there might not be too many more existing\nprivileges after these three that deserve them. Perhaps I should take\ninventory and offer a plan for all the remaining privileges.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:43:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, 7 Dec 2022 at 23:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n> > I haven't formed an opinion on whether VACUUM FULL should get its own\n> bit,\n> > but FWIW І just finished writing the first draft of a patch set to add\n> bits\n> > for CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX. I plan to post that\n> > tomorrow.\n>\n> The fact that we just doubled the number of available bits doesn't\n> mean we should immediately strive to use them up. Perhaps it'd\n> be better to subsume these retail privileges under some generic\n> \"maintenance action\" privilege?\n>\n\nThat was my original suggestion:\n\nhttps://www.postgresql.org/message-id/CAMsGm5c4DycKBYZCypfV02s-SC8GwF%2BKeTt%3D%3DvbWrFn%2Bdz%3DKeg%40mail.gmail.com\n\nIn that message I review the history of permission bit growth. A bit later\nin the discussion, I did some investigation into the history of demand for\nnew permission bits and I proposed calling the new privilege MAINTAIN:\n\nhttps://www.postgresql.org/message-id/CAMsGm5d%3D2gi4kyKONUJyYFwen%3DbsWm4hz_KxLXkEhMmg5WSWTA%40mail.gmail.com\n\nFor what it's worth, I wouldn't bother changing the format of the\npermission bits to expand the pool of available bits. My previous analysis\nshows that there is no vast hidden demand for new privilege bits. 
If we\nimplement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH, CLUSTER,\nand REINDEX, we will cover everything that I can find that has seriously\ndiscussed on this list, and still leave 3 unused bits for future expansion.\nThere is even justification for stopping after this expansion: if it is\ndone, then schema changes (DDL) will only be able to be done by owner; data\nchanges (insert, update, delete, as well as triggering of automatic data\nmaintenance actions) will be able to be done by anybody who is granted\npermission.\n\nMy guess is that if we ever do expand the privilege bit system, it should\nbe in a way that removes the limit entirely, replacing a bit map model with\nsomething more like a table with one row for each individual grant, with a\nfield indicating which grant is involved. But that is a hypothetical future.",
"msg_date": "Wed, 7 Dec 2022 23:48:20 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 11:48:20PM -0500, Isaac Morland wrote:\n> For what it's worth, I wouldn't bother changing the format of the\n> permission bits to expand the pool of available bits.\n\n7b37823 expanded AclMode to 64 bits, so we now have room for 16 additional\nprivileges (after the addition of VACUUM and ANALYZE in b5d6382).\n\n> My previous analysis\n> shows that there is no vast hidden demand for new privilege bits. If we\n> implement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH, CLUSTER,\n> and REINDEX, we will cover everything that I can find that has seriously\n> discussed on this list, and still leave 3 unused bits for future expansion.\n\nIf we added CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX as individual\nprivilege bits, we'd still have 13 remaining for future use.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 21:07:54 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Thu, 8 Dec 2022 at 00:07, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Wed, Dec 07, 2022 at 11:48:20PM -0500, Isaac Morland wrote:\n> > For what it's worth, I wouldn't bother changing the format of the\n> > permission bits to expand the pool of available bits.\n>\n> 7b37823 expanded AclMode to 64 bits, so we now have room for 16 additional\n> privileges (after the addition of VACUUM and ANALYZE in b5d6382).\n>\n> > My previous analysis\n> > shows that there is no vast hidden demand for new privilege bits. If we\n> > implement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH,\n> CLUSTER,\n> > and REINDEX, we will cover everything that I can find that has seriously\n> > discussed on this list, and still leave 3 unused bits for future\n> expansion.\n>\n> If we added CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX as individual\n> privilege bits, we'd still have 13 remaining for future use.\n>\n\nI was a bit imprecise. I was comparing to the state before the recent\nchanges - so 12 bits used out of 16, with MAINTAIN being the 13th bit. I\nthink in my mind it's still approximately 2019 on some level.\n\nOn Thu, 8 Dec 2022 at 00:07, Nathan Bossart <nathandbossart@gmail.com> wrote:On Wed, Dec 07, 2022 at 11:48:20PM -0500, Isaac Morland wrote:\n> For what it's worth, I wouldn't bother changing the format of the\n> permission bits to expand the pool of available bits.\n\n7b37823 expanded AclMode to 64 bits, so we now have room for 16 additional\nprivileges (after the addition of VACUUM and ANALYZE in b5d6382).\n\n> My previous analysis\n> shows that there is no vast hidden demand for new privilege bits. 
If we\n> implement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH, CLUSTER,\n> and REINDEX, we will cover everything that I can find that has seriously\n> discussed on this list, and still leave 3 unused bits for future expansion.\n\nIf we added CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX as individual\nprivilege bits, we'd still have 13 remaining for future use.I was a bit imprecise. I was comparing to the state before the recent changes - so 12 bits used out of 16, with MAINTAIN being the 13th bit. I think in my mind it's still approximately 2019 on some level.",
"msg_date": "Thu, 8 Dec 2022 00:12:05 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Dec 07, 2022 at 11:48:20PM -0500, Isaac Morland wrote:\n>> My previous analysis\n>> shows that there is no vast hidden demand for new privilege bits. If we\n>> implement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH, CLUSTER,\n>> and REINDEX, we will cover everything that I can find that has seriously\n>> discussed on this list, and still leave 3 unused bits for future expansion.\n\n> If we added CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX as individual\n> privilege bits, we'd still have 13 remaining for future use.\n\nI think the appropriate question is not \"have we still got bits left?\".\nIt should be more like \"under what plausible scenario would it be useful\nto grant somebody CLUSTER but not VACUUM privileges on a table?\".\n\nI'm really thinking that MAINTAIN is the right level of granularity\nhere. Or maybe it's worth segregating exclusive-lock from\nnot-exclusive-lock maintenance. But I really fail to see how it's\nuseful to distinguish CLUSTER from REINDEX, say.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Dec 2022 00:15:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On 08.12.2022 07:48, Isaac Morland wrote:\n> If we implement MAINTAIN to control access to VACUUM, ANALYZE, \n> REFRESH, CLUSTER, and REINDEX, we will cover everything that I can \n> find that has seriously discussed on this list\n\nI like this approach with MAINTAIN privilege. I'm trying to find any \ndisadvantages ... and I can't.\n\nFor the complete picture, I tried to see what other actions with the \ntable could *potentially* be considered as maintenance.\nHere is the list:\n\n- create|alter|drop on extended statistics objects\n- alter table|index alter column set statistics\n- alter table|index [re]set (storage_parameters)\n- alter table|index set tablespace\n- alter table alter column set storage|compression\n- any actions with the TOAST table that can be performed separately from \nthe main table\n\nI have to admit that the discussion has moved away from the $subject.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 14:03:43 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On 2022-Dec-08, Pavel Luzanov wrote:\n\n> For the complete picture, I tried to see what other actions with the table\n> could *potentially* be considered as maintenance.\n> Here is the list:\n> \n> - create|alter|drop on extended statistics objects\n> - alter table|index alter column set statistics\n> - alter table|index [re]set (storage_parameters)\n> - alter table|index set tablespace\n> - alter table alter column set storage|compression\n> - any actions with the TOAST table that can be performed separately from the\n> main table\n\nWell, I can't see that any of these is valuable to grant separately from\nthe table's owner. The maintenance ones are the ones that are\ninteresting to run from a database-owner perspective, but these ones do\nnot seem to need that treatment.\n\nIf you're extremely generous you could think that ALTER .. SET STORAGE\nwould be reasonable to be run by the db-owner. However, that's not\nsomething you do on an ongoing basis -- you just do it once -- so it\nseems pointless.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Dec 2022 12:35:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 12:15:23AM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Wed, Dec 07, 2022 at 11:48:20PM -0500, Isaac Morland wrote:\n>>> My previous analysis\n>>> shows that there is no vast hidden demand for new privilege bits. If we\n>>> implement MAINTAIN to control access to VACUUM, ANALYZE, REFRESH, CLUSTER,\n>>> and REINDEX, we will cover everything that I can find that has seriously\n>>> discussed on this list, and still leave 3 unused bits for future expansion.\n> \n>> If we added CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX as individual\n>> privilege bits, we'd still have 13 remaining for future use.\n> \n> I think the appropriate question is not \"have we still got bits left?\".\n> It should be more like \"under what plausible scenario would it be useful\n> to grant somebody CLUSTER but not VACUUM privileges on a table?\".\n> \n> I'm really thinking that MAINTAIN is the right level of granularity\n> here. Or maybe it's worth segregating exclusive-lock from\n> not-exclusive-lock maintenance. But I really fail to see how it's\n> useful to distinguish CLUSTER from REINDEX, say.\n\nThe main idea behind this work is breaking out privileges into more\ngranular pieces. If I want to create a role that only runs VACUUM on some\ntables on the weekend, why ѕhould I have to also give it the ability to\nANALYZE, REFRESH, CLUSTER, and REINDEX? IMHO we should really let the user\ndecide what set of privileges makes sense for their use-case. I'm unsure\nthe grouping all these privileges together serves much purpose besides\npreserving ACL bits.\n\nThe other reason I'm hesitant to group the privileges together is because I\nsuspect it will be difficult to reach agreement on how to do so, as\nevidenced by past discussion [0]. That being said, I'm open to it if we\nfind a way that folks are happy with. 
For example, separating\nexclusive-lock and non-exclusive-lock maintenance actions seems like a\nreasonable idea (which perhaps is an argument for moving VACUUM FULL out of\nthe VACUUM privilege).\n\n[0] https://postgr.es/m/67a1d667e8ec228b5e07f232184c80348c5d93f4.camel%40j-davis.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 09:15:03 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "I've created a new thread for making CLUSTER, REFRESH MATERIALIZED VIEW,\nand REINDEX grantable:\n\n\thttps://postgr.es/m/20221208183707.GA55474%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:39:46 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 09:15:03AM -0800, Nathan Bossart wrote:\n> The main idea behind this work is breaking out privileges into more\n> granular pieces. If I want to create a role that only runs VACUUM on some\n> tables on the weekend, why ѕhould I have to also give it the ability to\n> ANALYZE, REFRESH, CLUSTER, and REINDEX? IMHO we should really let the user\n> decide what set of privileges makes sense for their use-case. I'm unsure\n> the grouping all these privileges together serves much purpose besides\n> preserving ACL bits.\n\nHmm. I'd like to think that we should keep a frugal mind here. More\nbits are now available, but it does not strike me as a good idea to\nforce their usage more than necessary, so grouping all these no-quite\nDDL commands into the same bag does not sound that bad to me.\n--\nMichael",
"msg_date": "Fri, 9 Dec 2022 13:36:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Fri, Dec 09, 2022 at 01:36:03PM +0900, Michael Paquier wrote:\n> On Thu, Dec 08, 2022 at 09:15:03AM -0800, Nathan Bossart wrote:\n>> The main idea behind this work is breaking out privileges into more\n>> granular pieces. If I want to create a role that only runs VACUUM on some\n>> tables on the weekend, why ѕhould I have to also give it the ability to\n>> ANALYZE, REFRESH, CLUSTER, and REINDEX? IMHO we should really let the user\n>> decide what set of privileges makes sense for their use-case. I'm unsure\n>> the grouping all these privileges together serves much purpose besides\n>> preserving ACL bits.\n> \n> Hmm. I'd like to think that we should keep a frugal mind here. More\n> bits are now available, but it does not strike me as a good idea to\n> force their usage more than necessary, so grouping all these no-quite\n> DDL commands into the same bag does not sound that bad to me.\n\nOkay, it seems I am outnumbered. I will work on updating the patch to add\nan ACL_MAINTAIN bit and a pg_maintain_all_tables predefined role.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:40:55 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Fri, Dec 09, 2022 at 10:40:55AM -0800, Nathan Bossart wrote:\n> On Fri, Dec 09, 2022 at 01:36:03PM +0900, Michael Paquier wrote:\n>> On Thu, Dec 08, 2022 at 09:15:03AM -0800, Nathan Bossart wrote:\n>>> The main idea behind this work is breaking out privileges into more\n>>> granular pieces. If I want to create a role that only runs VACUUM on some\n>>> tables on the weekend, why ѕhould I have to also give it the ability to\n>>> ANALYZE, REFRESH, CLUSTER, and REINDEX? IMHO we should really let the user\n>>> decide what set of privileges makes sense for their use-case. I'm unsure\n>>> the grouping all these privileges together serves much purpose besides\n>>> preserving ACL bits.\n>> \n>> Hmm. I'd like to think that we should keep a frugal mind here. More\n>> bits are now available, but it does not strike me as a good idea to\n>> force their usage more than necessary, so grouping all these no-quite\n>> DDL commands into the same bag does not sound that bad to me.\n> \n> Okay, it seems I am outnumbered. I will work on updating the patch to add\n> an ACL_MAINTAIN bit and a pg_maintain_all_tables predefined role.\n\nAny thoughts on $SUBJECT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:44:11 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "\nOn 2022-12-09 Fr 13:44, Nathan Bossart wrote:\n> On Fri, Dec 09, 2022 at 10:40:55AM -0800, Nathan Bossart wrote:\n>> On Fri, Dec 09, 2022 at 01:36:03PM +0900, Michael Paquier wrote:\n>>> On Thu, Dec 08, 2022 at 09:15:03AM -0800, Nathan Bossart wrote:\n>>>> The main idea behind this work is breaking out privileges into more\n>>>> granular pieces. If I want to create a role that only runs VACUUM on some\n>>>> tables on the weekend, why ѕhould I have to also give it the ability to\n>>>> ANALYZE, REFRESH, CLUSTER, and REINDEX? IMHO we should really let the user\n>>>> decide what set of privileges makes sense for their use-case. I'm unsure\n>>>> the grouping all these privileges together serves much purpose besides\n>>>> preserving ACL bits.\n>>> Hmm. I'd like to think that we should keep a frugal mind here. More\n>>> bits are now available, but it does not strike me as a good idea to\n>>> force their usage more than necessary, so grouping all these no-quite\n>>> DDL commands into the same bag does not sound that bad to me.\n>> Okay, it seems I am outnumbered. I will work on updating the patch to add\n>> an ACL_MAINTAIN bit and a pg_maintain_all_tables predefined role.\n> Any thoughts on $SUBJECT?\n\n\nYeah, the discussion got way off into the weeds here. I think the\noriginal proposal seems reasonable. Please add it to the next CF if you\nhaven't already.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 07:01:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 07:01:01AM -0500, Andrew Dunstan wrote:\n> On 2022-12-09 Fr 13:44, Nathan Bossart wrote:\n>> Any thoughts on $SUBJECT?\n> \n> Yeah, the discussion got way off into the weeds here. I think the\n> original proposal seems reasonable. Please add it to the next CF if you\n> haven't already.\n\nHere it is: https://commitfest.postgresql.org/41/4043/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 09:18:22 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "> Here it is: https://commitfest.postgresql.org/41/4043/\n>\n\nHi!\n\nThe patch applies with no problem, implements what it declared, CF bot is\nhappy.\nWithout patch \\dpS shows 0 rows, after applying system objects are shown.\nConsider this patch useful, hope it will be committed soon.\n\n-- \nBest regards,\nMaxim Orlov.\n\n\nHere it is: https://commitfest.postgresql.org/41/4043/Hi!The patch applies with no problem, implements what it declared, CF bot is happy.Without patch \\dpS shows 0 rows, after applying system objects are shown.Consider this patch useful, hope it will be committed soon.-- Best regards,Maxim Orlov.",
"msg_date": "Wed, 28 Dec 2022 14:46:23 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 02:46:23PM +0300, Maxim Orlov wrote:\n> The patch applies with no problem, implements what it declared, CF bot is\n> happy.\n> Without patch \\dpS shows 0 rows, after applying system objects are shown.\n> Consider this patch useful, hope it will be committed soon.\n\nThanks for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Dec 2022 13:26:12 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Wed, 28 Dec 2022 at 21:26, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Dec 28, 2022 at 02:46:23PM +0300, Maxim Orlov wrote:\n> > The patch applies with no problem, implements what it declared, CF bot is\n> > happy.\n> > Without patch \\dpS shows 0 rows, after applying system objects are shown.\n> > Consider this patch useful, hope it will be committed soon.\n>\n> Thanks for reviewing.\n>\n\nLooking this over this, I have a couple of comments:\n\nFirstly, I think it should allow \\zS in the same fashion as \\dpS,\nsince \\z is an alias for \\dp, so the 2 should be kept in sync.\n\nSecondly, I don't think the following is the right SQL clause to use\nin the absence of \"S\":\n\n if (!showSystem && !pattern)\n appendPQExpBufferStr(&buf, \"AND n.nspname !~ '^pg_'\\n\");\n\nI know that's the condition it used before, but the problem with using\nthat now is that it will cause temporary relations to be excluded\nunless the \"S\" modifier is used, which goes against the expectation\nthat \"S\" just causes system relations to be included. Also, it fails\nto exclude information_schema relations, if that happens to be on the\nuser's search_path.\n\nSo I think we should use the same SQL clauses as every other psql\ncommand that supports \"S\", namely:\n\n if (!showSystem && !pattern)\n appendPQExpBufferStr(&buf, \" AND n.nspname <> 'pg_catalog'\\n\"\n \" AND n.nspname <> 'information_schema'\\n\");\n\nUpdated patch attached.\n\nRegards,\nDean",
"msg_date": "Fri, 6 Jan 2023 18:52:33 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Fri, Jan 06, 2023 at 06:52:33PM +0000, Dean Rasheed wrote:\n> Looking this over this, I have a couple of comments:\n\nThanks for reviewing.\n\n> Firstly, I think it should allow \\zS in the same fashion as \\dpS,\n> since \\z is an alias for \\dp, so the 2 should be kept in sync.\n\nThat seems reasonable to me.\n\n> Secondly, I don't think the following is the right SQL clause to use\n> in the absence of \"S\":\n> \n> if (!showSystem && !pattern)\n> appendPQExpBufferStr(&buf, \"AND n.nspname !~ '^pg_'\\n\");\n> \n> I know that's the condition it used before, but the problem with using\n> that now is that it will cause temporary relations to be excluded\n> unless the \"S\" modifier is used, which goes against the expectation\n> that \"S\" just causes system relations to be included. Also, it fails\n> to exclude information_schema relations, if that happens to be on the\n> user's search_path.\n> \n> So I think we should use the same SQL clauses as every other psql\n> command that supports \"S\", namely:\n> \n> if (!showSystem && !pattern)\n> appendPQExpBufferStr(&buf, \" AND n.nspname <> 'pg_catalog'\\n\"\n> \" AND n.nspname <> 'information_schema'\\n\");\n\nGood catch. I should have noticed this. The deleted comment mentions that\nthe system/temp tables normally aren't very interesting from a permissions\nperspective, so perhaps there is an argument for always excluding temp\ntables without a pattern. After all, \\dp always excludes indexes and TOAST\ntables. However, it looks like \\dt includes temp tables, and I didn't see\nany other meta-commands that excluded them.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 Jan 2023 16:36:51 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Sat, 7 Jan 2023 at 00:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Jan 06, 2023 at 06:52:33PM +0000, Dean Rasheed wrote:\n> >\n> > So I think we should use the same SQL clauses as every other psql\n> > command that supports \"S\", namely:\n> >\n> > if (!showSystem && !pattern)\n> > appendPQExpBufferStr(&buf, \" AND n.nspname <> 'pg_catalog'\\n\"\n> > \" AND n.nspname <> 'information_schema'\\n\");\n>\n> Good catch. I should have noticed this. The deleted comment mentions that\n> the system/temp tables normally aren't very interesting from a permissions\n> perspective, so perhaps there is an argument for always excluding temp\n> tables without a pattern. After all, \\dp always excludes indexes and TOAST\n> tables. However, it looks like \\dt includes temp tables, and I didn't see\n> any other meta-commands that excluded them.\n>\n\nIt might be true that temp tables aren't usually interesting from a\npermissions point of view, but it's not hard to imagine situations\nwhere interesting things do happen. It's also probably the case that\nmost users won't have many temp tables, so I don't think including\nthem by default will be particularly intrusive.\n\nAlso, from a user perspective, I think it would be something of a POLA\nviolation for \\dp[S] and \\dt[S] to include different sets of tables,\nthough I appreciate that we do that now. There's nothing in the docs\nto indicate that that's the case.\n\nAnyway, I've pushed the v2 patch as-is. If anyone feels strongly\nenough that we should change its behaviour for temp tables, then we\ncan still discuss that.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 7 Jan 2023 11:18:59 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "On Sat, Jan 07, 2023 at 11:18:59AM +0000, Dean Rasheed wrote:\n> It might be true that temp tables aren't usually interesting from a\n> permissions point of view, but it's not hard to imagine situations\n> where interesting things do happen. It's also probably the case that\n> most users won't have many temp tables, so I don't think including\n> them by default will be particularly intrusive.\n> \n> Also, from a user perspective, I think it would be something of a POLA\n> violation for \\dp[S] and \\dt[S] to include different sets of tables,\n> though I appreciate that we do that now. There's nothing in the docs\n> to indicate that that's the case.\n\nAgreed.\n\n> Anyway, I've pushed the v2 patch as-is. If anyone feels strongly\n> enough that we should change its behaviour for temp tables, then we\n> can still discuss that.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 9 Jan 2023 09:45:39 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psql"
},
{
"msg_contents": "Hi,\nThank you for developing a good feature.\nI found while testing PostgreSQL 16 Beta 1 that the output of the \\? metacommand did not include \\dS, \\dpS. \nThe attached patch changes the output of the \\? meta command to:\n\nCurrent output\npsql=# \\? \n \\z [PATTERN] same as \\dp\n \\dp [PATTERN] list table, view, and sequence access privileges\n\nPatched output\npsql=# \\?\n \\dp[S] [PATTERN] list table, view, and sequence access privileges\n \\z[S] [PATTERN] same as \\dp\n\nRegards,\nNoriyoshi Shinoda\n\n-----Original Message-----\nFrom: Nathan Bossart <nathandbossart@gmail.com> \nSent: Tuesday, January 10, 2023 2:46 AM\nTo: Dean Rasheed <dean.a.rasheed@gmail.com>\nCc: Maxim Orlov <orlovmg@gmail.com>; Andrew Dunstan <andrew@dunslane.net>; Michael Paquier <michael@paquier.xyz>; Tom Lane <tgl@sss.pgh.pa.us>; Isaac Morland <isaac.morland@gmail.com>; Justin Pryzby <pryzby@telsasoft.com>; Pavel Luzanov <p.luzanov@postgrespro.ru>; pgsql-hackers@postgresql.org\nSubject: Re: add \\dpS to psql\n\nOn Sat, Jan 07, 2023 at 11:18:59AM +0000, Dean Rasheed wrote:\n> It might be true that temp tables aren't usually interesting from a \n> permissions point of view, but it's not hard to imagine situations \n> where interesting things do happen. It's also probably the case that \n> most users won't have many temp tables, so I don't think including \n> them by default will be particularly intrusive.\n> \n> Also, from a user perspective, I think it would be something of a POLA \n> violation for \\dp[S] and \\dt[S] to include different sets of tables, \n> though I appreciate that we do that now. There's nothing in the docs \n> to indicate that that's the case.\n\nAgreed.\n\n> Anyway, I've pushed the v2 patch as-is. If anyone feels strongly \n> enough that we should change its behaviour for temp tables, then we \n> can still discuss that.\n\nThanks!\n\n--\nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Jun 2023 02:11:43 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: add \\dpS to psq [16beta1]"
},
{
"msg_contents": "On Thu, Jun 29, 2023 at 02:11:43AM +0000, Shinoda, Noriyoshi (PN Japan FSIP) wrote:\n> I found while testing PostgreSQL 16 Beta 1 that the output of the \\? metacommand did not include \\dS, \\dpS. \n> The attached patch changes the output of the \\? meta command to:\n\nThanks for the report! I've committed your patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Jun 2023 21:38:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add \\dpS to psq [16beta1]"
}
] |
[
{
"msg_contents": "Hi,\n\nOccasionally I see core dumps for sh, cp etc when running the tests. I think\nthis is mainly due to immediate shutdowns / crashes signalling the entire\nprocess group with SIGQUIT. If a sh/cp/... is running as part of an\narchive/restore command when the signal arrives, we'll trigger a coredump,\nbecause those tools won't have a SIGQUIT handler.\n\nISTM that postmaster's signal_child() shouldn't send SIGQUIT to the process\ngroup in the #ifdef HAVE_SETSID section. We've already signalled the backend\nwith SIGQUIT, so we could change the signal we send to the whole process group\nto one that doesn't trigger core dumps by default. SIGTERM seems like it would\nbe the right choice.\n\nThe one non-trivial aspect of this is that that signal will also be delivered\nto the group leader. It's possible that that could lead to some minor test\nbehaviour issues, because the output could change if e.g. SIGTERM is received\n/ processed first.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 13:58:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "core dumps generated in archive / restore commands etc"
}
] |
[
{
"msg_contents": "Hi hackers,\n\npg_dump adds an extra space in ALTER DEFAULT PRIVILEGES commands. AFAICT\nit's been this way since the command was first introduced (249724c). I've\nattached a patch to fix it.\n\nThis is admittedly just a pet peeve, but maybe it is bothering others, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 6 Dec 2022 15:27:44 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "remove extra space from dumped ALTER DEFAULT PRIVILEGES commands"
}
] |
[
{
"msg_contents": "\nWhen I looked at the bug:\n\nhttps://postgr.es/m/CALDQics_oBEYfOnu_zH6yw9WR1waPCmcrqxQ8+39hK3Op=z2UQ@mail.gmail.com\n\nI noticed that the DDL around collations is inconsistent. For instance,\nCREATE COLLATION[1] uses LOCALE, LC_COLLATE, and LC_CTYPE parameters to\nspecify either libc locales or an icu locale; whereas CREATE\nDATABASE[2] uses LOCALE, LC_COLLATE, and LC_CTYPE always for libc, and\nICU_LOCALE if the default collation is ICU.\n\nThe catalog representation is strange in a different way:\ndatcollate/collcollate are always for libc, and daticulocale is for\nicu. That means anything that deals with those fields needs to pick the\nright one based on the provider.\n\nIf this were a clean slate, it would make more sense if it were\nsomething like:\n\n datcollate/collcollate: to instantiate pg_locale_t\n datctype/collctype: to instantiate pg_locale_t\n datlibccollate: used by libc elsewhere\n datlibcctype: used by libc elsewhere\n daticulocale/colliculocale: remove these fields\n\nThat way, if you are instantiating a pg_locale_t, you always just pass\ndatcollate/datctype/collcollate/collctype, regardless of the provider\n(pg_newlocale_from_collation() would figure it out). And if you are\ngoing to do something straight with libc, you always use\ndatlibccollate/datlibcctype.\n\nAside: why don't we support different collate/ctype with ICU? It\nappears that u_strToTitle/u_strToUpper/u_strToLower just accept a\nstring \"locale\", and it would be easy enough to pass it whatever is in\ndatctype/collctype, right? We should validate that it's a valid locale;\nbut other than that, I don't see the problem.\n\nThoughts? 
Implementation-wise, I suppose this could create some\nannoyances in pg_dump.\n\n[1] https://www.postgresql.org/docs/devel/sql-createcollation.html\n[2] https://www.postgresql.org/docs/devel/sql-createdatabase.html\n[3] https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/ustring_8h.html\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 06 Dec 2022 16:33:38 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Collation DDL inconsistencies"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am trying to create HeapTuple data structure.\nFirst, I create a tuple descriptor:\n TupleDesc *td=CreateTemplateTupleDesc(colCount);\nThen, for each variable, I do:\n TupleDescInitEntry(*td,v->varattno,NULL,v->vartype,v->vartypmod,0);\nThen, I assign values:\n if int32: values[v->varattno-1]=Int8GetDatum(myValue);\nSimilarly for float.\nFinally, I create the HeapTuple:\n HeapTuple tuple=heap_form_tuple(td,values,isnull);\n\nEverything works fine with int and float. But I don't know how to handle\nchars.\nLet's say we have a character(10) column. One problem is v->vartypmod will\nbe set to 14. Shouldn't it be 10?\nSecond, how should I assign values? Is\nvalues[v->varattno-1]=CStringGetDatum(myValue); correct? Should I set the\nlast parameter to TupleDescInitEntry? Why am I getting \"invalid memory\nalloc request size\" or segfault with different configurations?\n\nHi All,I am trying to create HeapTuple data structure.First, I create a tuple descriptor: TupleDesc *td=CreateTemplateTupleDesc(colCount);Then, for each variable, I do: TupleDescInitEntry(*td,v->varattno,NULL,v->vartype,v->vartypmod,0);Then, I assign values: if int32: values[v->varattno-1]=Int8GetDatum(myValue);Similarly for float.Finally, I create the HeapTuple: HeapTuple tuple=heap_form_tuple(td,values,isnull); Everything works fine with int and float. But I don't know how to handle chars.Let's say we have a character(10) column. One problem is v->vartypmod will be set to 14. Shouldn't it be 10?Second, how should I assign values? Is values[v->varattno-1]=CStringGetDatum(myValue); correct? Should I set the last parameter to TupleDescInitEntry? Why am I getting \"invalid memory alloc request size\" or segfault with different configurations?",
"msg_date": "Tue, 6 Dec 2022 18:06:31 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Creating HeapTuple from char and date values"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 12:56 AM Amin <amin.fallahi@gmail.com> wrote:\n>\n> Hi All,\n>\n> I am trying to create HeapTuple data structure.\n> First, I create a tuple descriptor:\n> TupleDesc *td=CreateTemplateTupleDesc(colCount);\n> Then, for each variable, I do:\n> TupleDescInitEntry(*td,v->varattno,NULL,v->vartype,v->vartypmod,0);\n> Then, I assign values:\n> if int32: values[v->varattno-1]=Int8GetDatum(myValue);\n> Similarly for float.\n> Finally, I create the HeapTuple:\n> HeapTuple tuple=heap_form_tuple(td,values,isnull);\n>\n> Everything works fine with int and float. But I don't know how to handle chars.\n> Let's say we have a character(10) column. One problem is v->vartypmod will be set to 14. Shouldn't it be 10?\n\nI think the 4 extra bytes is varlena header - not sure. but typmod in\nthis case indicates the length of binary representation. 14 looks\ncorrect.\n\n> Second, how should I assign values? Is values[v->varattno-1]=CStringGetDatum(myValue); correct? Should I set the last parameter to TupleDescInitEntry? Why am I getting \"invalid memory alloc request size\" or segfault with different configurations?\n\nis myValue a char *?\nI think you need to use CStringGetTextDatum instead of CStringGetDatum.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 8 Dec 2022 18:45:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Creating HeapTuple from char and date values"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis thread is a follow-up of the recent discussion about query\njumbling with DDL statements, where the conclusion was that we'd want\nto generate all this code automatically for all the nodes:\nhttps://www.postgresql.org/message-id/36e5bffe-e989-194f-85c8-06e7bc88e6f7@amazon.com\n\nWhat this patch allows to do it to compute the same query ID for\nutility statements using their parsed Node state instead of their\nstring, meaning that things like \"BEGIN\", \"bEGIN\" or \"begin\" would be\ntreated the same, for example. But the main idea is not only that.\n\nI have implemented that as of the attached, where the following things\nare done:\n- queryjumble.c is moved to src/backend/nodes/, to stick with the\nother things for node equal/read/write/copy, renamed to\njumblefuncs.c.\n- gen_node_support.c is extended to generate the functions and the\nswitch for the jumbling. There are a few exceptions, as of the Lists\nand RangeTblEntry to do the jumbling consistently.\n- Two pg_node_attr() are added in consistency with the existing ones:\nno_jumble to discard completely a node from the the query jumbling\nand jumble_ignore to discard one field from the jumble.\n\nThe patch is in a rather good shape, passes check-world and the CI,\nbut there are a few things that need to be discussed IMO. Things\ncould be perhaps divided in more patches, now the areas touched are\nquite different so it did not look like a big deal to me as the\nchanges touch different areas and are straight-forward.\n\nThe location of the Nodes is quite invasive because we only care about\nthat for T_Const now in the query jumbling, and this could be\ncompensated with a third pg_node_attr() that tracks for the \"int \nlocation\" of a Node whether it should participate in the jumbling or\nnot. 
There is also an argument where we would want to not include by\ndefault new fields added to a Node, but that would require added more\npg_node_attr() than what's done here.\n\nNote that the plan is to extend the normalization to some other parts\nof the Nodes, like CALL and SET as mentioned on the other thread. I\nhave done nothing about that yet but doing so can be done in a few\nlines with the facility presented here (aka just add a location\nfield). Hence, the normalization is consistent with the existing\nqueryjumble.c for the fields and the nodes processed.\n\nIn this patch, things are done so as the query ID is not computed\nanymore from the query string but from the Query. I still need to\nstudy the performance impact of that with short queries. If things\nprove to be noticeable in some cases, this stuff could be switched to\nuse a new GUC where we could have a code path for the computation of\nutilityStmt using its string as a fallback. I am not sure that I want\nto enter in this level of complications, though, to keep things\nsimple, but that's yet to be done.\n\nA bit more could be cut but pg_ident got in the way.. There are also\na few things for pg_stat_statements where a query ID of 0 can be\nimplied for utility statements in some cases.\n\nGenerating this code leads to an overall removal of code as what\n queryjumble.c is generated automatically:\n 13 files changed, 901 insertions(+), 1113 deletions(-)\n\nI am adding that to the next commit fest.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 7 Dec 2022 16:56:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Wed, 7 Dec 2022 at 13:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> This thread is a follow-up of the recent discussion about query\n> jumbling with DDL statements, where the conclusion was that we'd want\n> to generate all this code automatically for all the nodes:\n> https://www.postgresql.org/message-id/36e5bffe-e989-194f-85c8-06e7bc88e6f7@amazon.com\n>\n> What this patch allows to do it to compute the same query ID for\n> utility statements using their parsed Node state instead of their\n> string, meaning that things like \"BEGIN\", \"bEGIN\" or \"begin\" would be\n> treated the same, for example. But the main idea is not only that.\n>\n> I have implemented that as of the attached, where the following things\n> are done:\n> - queryjumble.c is moved to src/backend/nodes/, to stick with the\n> other things for node equal/read/write/copy, renamed to\n> jumblefuncs.c.\n> - gen_node_support.c is extended to generate the functions and the\n> switch for the jumbling. There are a few exceptions, as of the Lists\n> and RangeTblEntry to do the jumbling consistently.\n> - Two pg_node_attr() are added in consistency with the existing ones:\n> no_jumble to discard completely a node from the the query jumbling\n> and jumble_ignore to discard one field from the jumble.\n>\n> The patch is in a rather good shape, passes check-world and the CI,\n> but there are a few things that need to be discussed IMO. Things\n> could be perhaps divided in more patches, now the areas touched are\n> quite different so it did not look like a big deal to me as the\n> changes touch different areas and are straight-forward.\n>\n> The location of the Nodes is quite invasive because we only care about\n> that for T_Const now in the query jumbling, and this could be\n> compensated with a third pg_node_attr() that tracks for the \"int\n> location\" of a Node whether it should participate in the jumbling or\n> not. 
There is also an argument where we would want to not include by\n> default new fields added to a Node, but that would require added more\n> pg_node_attr() than what's done here.\n>\n> Note that the plan is to extend the normalization to some other parts\n> of the Nodes, like CALL and SET as mentioned on the other thread. I\n> have done nothing about that yet but doing so can be done in a few\n> lines with the facility presented here (aka just add a location\n> field). Hence, the normalization is consistent with the existing\n> queryjumble.c for the fields and the nodes processed.\n>\n> In this patch, things are done so as the query ID is not computed\n> anymore from the query string but from the Query. I still need to\n> study the performance impact of that with short queries. If things\n> prove to be noticeable in some cases, this stuff could be switched to\n> use a new GUC where we could have a code path for the computation of\n> utilityStmt using its string as a fallback. I am not sure that I want\n> to enter in this level of complications, though, to keep things\n> simple, but that's yet to be done.\n>\n> A bit more could be cut but pg_ident got in the way.. 
There are also\n> a few things for pg_stat_statements where a query ID of 0 can be\n> implied for utility statements in some cases.\n>\n> Generating this code leads to an overall removal of code as what\n> queryjumble.c is generated automatically:\n> 13 files changed, 901 insertions(+), 1113 deletions(-)\n>\n> I am adding that to the next commit fest.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nb82557ecc2ebbf649142740a1c5ce8d19089f620 ===\n=== applying patch\n./0001-Support-for-automated-query-jumble-with-all-Nodes.patch\n...\npatching file src/backend/utils/misc/queryjumble.c\nHunk #1 FAILED at 1.\nNot deleting file src/backend/utils/misc/queryjumble.c as content\ndiffers from patch\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/utils/misc/queryjumble.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4047.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:37:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 07.12.22 08:56, Michael Paquier wrote:\n> The location of the Nodes is quite invasive because we only care about\n> that for T_Const now in the query jumbling, and this could be\n> compensated with a third pg_node_attr() that tracks for the \"int\n> location\" of a Node whether it should participate in the jumbling or\n> not.\n\nThe generation script already has a way to identify location fields, by \n($t eq 'int' && $f =~ 'location$'), so you could use that as well.\n\n> There is also an argument where we would want to not include by\n> default new fields added to a Node, but that would require added more\n> pg_node_attr() than what's done here.\n\nI'm concerned about the large number of additional field annotations \nthis adds. We have been careful so far to document the use of each \nattribute, e.g., *why* does a field not need to be copied etc. This \npatch adds dozens and dozens of annotations without any explanation at \nall. Now, the code this replaces also has no documentation, but maybe \nthis is the time to add some.\n\n\n",
"msg_date": "Sat, 7 Jan 2023 07:37:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Sat, Jan 07, 2023 at 07:37:49AM +0100, Peter Eisentraut wrote:\n> The generation script already has a way to identify location fields, by ($t\n> eq 'int' && $f =~ 'location$'), so you could use that as well.\n\nI recall that some of the nodes may need renames to map with this\nchoice. That could be just one patch on top of the actual feature.\n\n> I'm concerned about the large number of additional field annotations this\n> adds. We have been careful so far to document the use of each attribute,\n> e.g., *why* does a field not need to be copied etc. This patch adds dozens\n> and dozens of annotations without any explanation at all. Now, the code\n> this replaces also has no documentation, but maybe this is the time to add\n> some.\n\nIn most cases, the addition of the node marker would be enough to\nself-explain why they are included, but there is a trend for a lot of\nthe nodes when it comes to collations and typmods where we don't want\nto see these in the jumbling calculations. I'll look at providing\nmore info for all that. (Note: I'm out for now.)\n--\nMichael",
"msg_date": "Sat, 7 Jan 2023 19:47:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Sat, Jan 07, 2023 at 07:37:49AM +0100, Peter Eisentraut wrote:\n> On 07.12.22 08:56, Michael Paquier wrote:\n>> The location of the Nodes is quite invasive because we only care about\n>> that for T_Const now in the query jumbling, and this could be\n>> compensated with a third pg_node_attr() that tracks for the \"int\n>> location\" of a Node whether it should participate in the jumbling or\n>> not.\n> \n> The generation script already has a way to identify location fields, by ($t\n> eq 'int' && $f =~ 'location$'), so you could use that as well.\n\nI did not recall exactly everything here, but there are two parts to\nthe logic:\n- gen_node_support.pl uses exactly this condition when scanning the\nnodes to put the correct macro to mark a location to track, calling\ndown RecordConstLocation().\n- Marking a bunch of nodes as jumble_ignore is actually necessary, or\nwe may finish by silencing parts of queries that should be\nsemantically unrelevant to the queries jumbled (ColumnRef is one).\nUsing a \"jumble_ignore\" flag is equally invasive to using an\n\"jumble_include\" flag for each field, I am afraid, as the number of\nfields in the nodes included in jumbles is pretty equivalent to the\nnumber of fields ignored. I tend to prefer the approach of ignoring\nthings though, which is more consistent with the practive for node\nread, write and copy.\n\nAnyway, when it comes to the location, another thing that can be\nconsidered here would be to require a node-level flag for the nodes on\nwhich we want to track the location. 
This overlaps a bit with the\nfields satisfying \"($t eq 'int' && $f =~ 'location$')\", but it removes\nmost of the code changes like this one as at the end we only care\nabout the location for Const nodes:\n- int location; /* token location, or -1 if unknown */\n+ int location pg_node_attr(jumble_ignore); /* token location, or -1\n+ * if unknown */\n\nI have taken this approach in v2 of the patch, shaving ~100 lines of\nmore code as there is no need to mark all these location fields with\n\"jumble_ignore\" anymore, except for Const, of course.\n\n>> There is also an argument where we would want to not include by\n>> default new fields added to a Node, but that would require added more\n>> pg_node_attr() than what's done here.\n> \n> I'm concerned about the large number of additional field annotations this\n> adds. We have been careful so far to document the use of each attribute,\n> e.g., *why* does a field not need to be copied etc. This patch adds dozens\n> and dozens of annotations without any explanation at all. Now, the code\n> this replaces also has no documentation, but maybe this is the time to add\n> some.\n\nThat's fair, though it is not doing to buy us much to update all the\nnodes with similar small comments, as well. As far as I know, there\nare basiscally three things here: typmods, collation information, and\ninternal data of the nodes stored during parse-analyze. I have added\nmore documentation to track what looks like the most relevant areas.\n\nI have begun running some performance tests with this stuff and HEAD\nto see if this leads to any difference in the query ID compilation\n(compute_query_id = on, on scissors roughly) with a simple set of\nshort commands (like BEGIN/COMMIT) or longer ones, and I am seeing a\nspeedup trend actually (?). 
I still need to think more about a set of\ntests here, but I think that micro-benchmarking of JumbleQuery() is\nthe most adapted approach to minimize the noise, with a few nodes of\nvarious sizes (Const, Query, ColumnRef, anything..).\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 13 Jan 2023 16:54:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 13.01.23 08:54, Michael Paquier wrote:\n> Using a \"jumble_ignore\" flag is equally invasive to using an\n> \"jumble_include\" flag for each field, I am afraid, as the number of\n> fields in the nodes included in jumbles is pretty equivalent to the\n> number of fields ignored. I tend to prefer the approach of ignoring\n> things though, which is more consistent with the practive for node\n> read, write and copy.\n\nI concur that jumble_ignore is better. I suppose you placed the \njumble_ignore markers to maintain parity with the existing code, but I \nthink that some the markers are actually wrong and are just errors of \nomission in the existing code (such as Query.override, to pick a random \nexample). The way you have structured this would allow us to find and \nanalyze such errors better.\n\n> Anyway, when it comes to the location, another thing that can be\n> considered here would be to require a node-level flag for the nodes on\n> which we want to track the location. This overlaps a bit with the\n> fields satisfying \"($t eq 'int' && $f =~ 'location$')\", but it removes\n> most of the code changes like this one as at the end we only care\n> about the location for Const nodes:\n> - int location; /* token location, or -1 if unknown */\n> + int location pg_node_attr(jumble_ignore); /* token location, or -1\n> + * if unknown */\n> \n> I have taken this approach in v2 of the patch, shaving ~100 lines of\n> more code as there is no need to mark all these location fields with\n> \"jumble_ignore\" anymore, except for Const, of course.\n\nI don't understand why you chose that when we already have an \nestablished way. This would just make the jumble annotations \ninconsistent with the other annotations.\n\nI also have two suggestions to make this patch more palatable:\n\n1. Make a separate patch to reformat long comments, like \n835d476fd21bcfb60b055941dee8c3d9559af14c.\n\n2. 
Make a separate patch to move the jumble support to \nsrc/backend/nodes/jumblefuncs.c. (It was probably a mistake that it \nwasn't there to begin with, and some of the errors of omission alluded \nto above are probably caused by it having been hidden away in the wrong \nplace.)\n\n\n\n",
"msg_date": "Mon, 16 Jan 2023 15:13:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 03:13:35PM +0100, Peter Eisentraut wrote:\n> On 13.01.23 08:54, Michael Paquier wrote:\n>> Using a \"jumble_ignore\" flag is equally invasive to using an\n>> \"jumble_include\" flag for each field, I am afraid, as the number of\n>> fields in the nodes included in jumbles is pretty equivalent to the\n>> number of fields ignored. I tend to prefer the approach of ignoring\n>> things though, which is more consistent with the practice for node\n>> read, write and copy.\n> \n> I concur that jumble_ignore is better. I suppose you placed the\n> jumble_ignore markers to maintain parity with the existing code, but I think\n> that some of the markers are actually wrong and are just errors of omission in\n> the existing code (such as Query.override, to pick a random example). The\n> way you have structured this would allow us to find and analyze such errors\n> better.\n\nThanks. Yes, I have made an effort to maintain exact\ncompatibility with the existing code for now. My take is that\nremoving or adding more things into the jumble deserves its own\ndiscussion. I think that's better done once this code is automated\nwith the nodes; then it would not be difficult to adjust HEAD\ndepending on that. Only the analysis effort is different.\n\n>> Anyway, when it comes to the location, another thing that can be\n>> considered here would be to require a node-level flag for the nodes on\n>> which we want to track the location. This overlaps a bit with the\n>> fields satisfying \"($t eq 'int' && $f =~ 'location$')\", but it removes\n>> most of the code changes like this one as at the end we only care\n>> about the location for Const nodes:\n>> - int location; /* token location, or -1 if unknown */\n>> + int location pg_node_attr(jumble_ignore); /* token location, or -1\n>> + * if unknown */\n>> \n>> I have taken this approach in v2 of the patch, shaving ~100 lines of\n>> more code as there is no need to mark all these location fields with\n>> \"jumble_ignore\" anymore, except for Const, of course.\n> \n> I don't understand why you chose that when we already have an established\n> way. This would just make the jumble annotations inconsistent with the\n> other annotations.\n\nBecause we don't want to track the location of all the nodes! If we\ndo that, pg_stat_statements would begin to parameterize a lot more\nthings, from column aliases to full contents of IN or WITH clauses.\nThe root point is that we only want to track the jumble location for\nConst nodes now. In order to do that, there are two approaches:\n- Mark all the locations with jumble_ignore: more invasive, but\nit requires only one per-field attribute with \"jumble_ignore\". This\nis what v1 did.\n- Mark only the fields where we want to track the location with a\nsecond attribute, like \"jumble_location\". We could restrict that\ndepending on the field type or its name on top of checking the field\nattribute, whatever. This is what v2 did.\n\nWhich approach do you prefer? Marking all the locations we don't want\nwith jumble_ignore, or introducing a second attribute with\njumble_location. I'd tend to prefer the approach of minimizing the\nnumber of node and field attributes, FWIW. Now you have worked on\nthis area previously, so your view may be better informed than my\nthinking process.\n\nThe long-term perspective is that I'd like to extend the tracking of\nthe locations to more DDL nodes, like parameters of SET statements or\nparts of CALL statements. Not to mention that this makes the work of\nforks easier. This is a separate discussion.\n\n> I also have two suggestions to make this patch more palatable:\n> \n> 1. Make a separate patch to reformat long comments, like\n> 835d476fd21bcfb60b055941dee8c3d9559af14c.\n> \n> 2. Make a separate patch to move the jumble support to\n> src/backend/nodes/jumblefuncs.c. (It was probably a mistake that it wasn't\n> there to begin with, and some of the errors of omission alluded to above are\n> probably caused by it having been hidden away in the wrong place.)\n\nBoth suggestions make sense. I'll shape the next series of the patch\nto do something along these lines.\n\nMy question about the location tracking greatly influences the first\npatch (comment reformatting) and third patch (automated generation of\nthe code) of the series, so do you have a preference about that?\n--\nMichael",
"msg_date": "Tue, 17 Jan 2023 12:48:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 17.01.23 04:48, Michael Paquier wrote:\n>>> Anyway, when it comes to the location, another thing that can be\n>>> considered here would be to require a node-level flag for the nodes on\n>>> which we want to track the location. This overlaps a bit with the\n>>> fields satisfying \"($t eq 'int' && $f =~ 'location$')\", but it removes\n>>> most of the code changes like this one as at the end we only care\n>>> about the location for Const nodes:\n>>> - int location; /* token location, or -1 if unknown */\n>>> + int location pg_node_attr(jumble_ignore); /* token location, or -1\n>>> + * if unknown */\n>>>\n>>> I have taken this approach in v2 of the patch, shaving ~100 lines of\n>>> more code as there is no need to mark all these location fields with\n>>> \"jumble_ignore\" anymore, except for Const, of course.\n>>\n>> I don't understand why you chose that when we already have an established\n>> way. This would just make the jumble annotations inconsistent with the\n>> other annotations.\n> \n> Because we don't want to track the location of all the nodes! If we\n> do that, pg_stat_statements would begin to parameterize a lot more\n> things, from column aliases to full contents of IN or WITH clauses.\n> The root point is that we only want to track the jumble location for\n> Const nodes now. In order to do that, there are two approaches:\n> - Mark all the locations with jumble_ignore: more invasive, but\n> it requires only one per-field attribute with \"jumble_ignore\". This\n> is what v1 did.\n\nOk, I understand now, and I agree with this approach over the opposite. \nI was confused because the snippet you showed above used \n\"jumble_ignore\", but your patch is correct as it uses \"jumble_location\".\n\nThat said, the term \"jumble\" is really weird, because in the sense that \nwe are using it here it means, approximately, \"to mix together\", \"to \nunify\". 
So what we are doing with the Const nodes is really to *not* \njumble the location, but for all other node types we are jumbling the \nlocation. At least that is my understanding.\n\n\n\n",
"msg_date": "Tue, 17 Jan 2023 08:43:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
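The two annotation styles being compared can be sketched side by side. This is only an illustration of where the pg_node_attr() markers would land (other fields elided, attribute names spelled as in the thread at this point), not a copy of the committed headers:

```c
/* v1 style: opt-out, every location field carries an explicit marker */
typedef struct FuncCall
{
	/* ... other fields ... */
	int			location pg_node_attr(jumble_ignore);	/* token location */
} FuncCall;

/* v2 style: locations are skipped by default, and a second attribute
 * marks the one location that is tracked, so that Const values can be
 * normalized by pg_stat_statements */
typedef struct Const
{
	/* ... other fields ... */
	int			location pg_node_attr(jumble_location);	/* token location */
} Const;
```

gen_node_support.pl reads these attributes when generating the per-node jumble functions, either skipping the field entirely or recording its location for later normalization.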
{
"msg_contents": "On Tue, Jan 17, 2023 at 08:43:44AM +0100, Peter Eisentraut wrote:\n> Ok, I understand now, and I agree with this approach over the opposite. I\n> was confused because the snippet you showed above used \"jumble_ignore\", but\n> your patch is correct as it uses \"jumble_location\".\n\nOkay. I'll refresh the patch set so that we have only \"jumble_ignore\",\nthen, like v1, with preparatory patches for what you mentioned and\nanything that comes to mind.\n\n> That said, the term \"jumble\" is really weird, because in the sense that we\n> are using it here it means, approximately, \"to mix together\", \"to unify\".\n> So what we are doing with the Const nodes is really to *not* jumble the\n> location, but for all other node types we are jumbling the location. At\n> least that is my understanding.\n\nI am quite familiar with this term, FWIW. That's what we've inherited\nfrom the days when this was introduced in pg_stat_statements.\n--\nMichael",
"msg_date": "Tue, 17 Jan 2023 16:52:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 04:52:28PM +0900, Michael Paquier wrote:\n> On Tue, Jan 17, 2023 at 08:43:44AM +0100, Peter Eisentraut wrote:\n>> Ok, I understand now, and I agree with this approach over the opposite. I\n>> was confused because the snippet you showed above used \"jumble_ignore\", but\n>> your patch is correct as it uses \"jumble_location\".\n> \n> Okay. I'll refresh the patch set so as we have only \"jumble_ignore\",\n> then, like v1, with preparatory patches for what you mentioned and\n> anything that comes into mind.\n\nThis is done as of the patch series v3 attached:\n- 0001 reformats all the comments of the nodes.\n- 0002 moves the current files for query jumble as of queryjumble.c ->\nqueryjumblefuncs.c and utils/queryjumble.h -> nodes/queryjumble.h.\n- 0003 is the core feature, where I have done a second pass over the\nnodes to make sure that things map with HEAD, incorporating the extra\ndocs coming from v2, adding a bit more.\n\n>> That said, the term \"jumble\" is really weird, because in the sense that we\n>> are using it here it means, approximately, \"to mix together\", \"to unify\".\n>> So what we are doing with the Const nodes is really to *not* jumble the\n>> location, but for all other node types we are jumbling the location. At\n>> least that is my understanding.\n> \n> I am quite familiar with this term, FWIW. That's what we've inherited\n> from the days where this has been introduced in pg_stat_statements.\n\nI have renamed the node attributes to query_jumble_ignore and\nno_query_jumble at the end, after considering Peter's point that only\n\"jumble\" could be fuzzy here. 
The file names are changed in\nconsequence.\n\nWhile doing all that, I have done some micro-benchmarking of\nJumbleQuery(), making it loop 50M times on my laptop each time a query\nID is computed (hideous hack with a loop in queryjumble.c):\n- For non-utility queries, aka queries that go through\nJumbleQueryInternal(), I am measuring a repeatable ~10% improvement\nwith the generated code over HEAD, which is kind of nice. I have\ntested a few DMLs and simple SELECTs, still it looks like a trend.\n- For utility queries, the new code is competing against\nhash_any_extended() with the query string, which is going to be hard\nto beat. I am measuring what looks like a 5-fold slowdown, at\nleast, and more depending on the depth of the query tree. That's not\nsurprising. On a 50M loop, this comes down to comparing a computation\nof 100ns to 5ns, a 20-fold slowdown, for example. Still, this could\njustify the addition of a GUC to control whether utility queries have\ntheir query ID computed depending on their nodes or their string\nhash, as this could become noticeable in OLTP workloads with loads\nof short statements and BEGIN/COMMIT queries.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Wed, 18 Jan 2023 16:04:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 18.01.23 08:04, Michael Paquier wrote:\n> On Tue, Jan 17, 2023 at 04:52:28PM +0900, Michael Paquier wrote:\n>> On Tue, Jan 17, 2023 at 08:43:44AM +0100, Peter Eisentraut wrote:\n>>> Ok, I understand now, and I agree with this approach over the opposite. I\n>>> was confused because the snippet you showed above used \"jumble_ignore\", but\n>>> your patch is correct as it uses \"jumble_location\".\n>>\n>> Okay. I'll refresh the patch set so as we have only \"jumble_ignore\",\n>> then, like v1, with preparatory patches for what you mentioned and\n>> anything that comes into mind.\n> \n> This is done as of the patch series v3 attached:\n> - 0001 reformats all the comments of the nodes.\n> - 0002 moves the current files for query jumble as of queryjumble.c ->\n> queryjumblefuncs.c and utils/queryjumble.h -> nodes/queryjumble.h.\n> - 0003 is the core feature, where I have done a second pass over the\n> nodes to make sure that things map with HEAD, incorporating the extra\n> docs coming from v2, adding a bit more.\n\nThis patch structure looks good.\n\n>>> That said, the term \"jumble\" is really weird, because in the sense that we\n>>> are using it here it means, approximately, \"to mix together\", \"to unify\".\n>>> So what we are doing with the Const nodes is really to *not* jumble the\n>>> location, but for all other node types we are jumbling the location. At\n>>> least that is my understanding.\n>>\n>> I am quite familiar with this term, FWIW. That's what we've inherited\n>> from the days where this has been introduced in pg_stat_statements.\n> \n> I have renamed the node attributes to query_jumble_ignore and\n> no_query_jumble at the end, after considering Peter's point that only\n> \"jumble\" could be fuzzy here. The file names are changed in\n> consequence.\n\nI see that in the 0003 patch, most location fields now have an explicit \nmarkup with query_jumble_ignore. 
I thought we had previously resolved \nto consider location fields to be automatically ignored unless \nexplicitly included (like for the Const node). This appears to invert \nthat? Am I missing something?\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 09:42:03 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 09:42:03AM +0100, Peter Eisentraut wrote:\n> I see that in the 0003 patch, most location fields now have an explicit\n> markup with query_jumble_ignore. I thought we had previously resolved to\n> consider location fields to be automatically ignored unless explicitly\n> included (like for the Const node). This appears to invert that? Am I\n> missing something?\n\nMy misunderstanding then, I thought that you were OK with what was\npart of v1, where all these fields were marked as \"ignore\". But you\nactually prefer v2, with the second attribute \"location\" on top of\n\"ignore\". I can update 0003 to refresh that.\n\nWould you be OK if I apply 0001 (with the comments of the locations\nstill reshaped to ease future property additions) and 0002?\n--\nMichael",
"msg_date": "Thu, 19 Jan 2023 17:46:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 09:42:03AM +0100, Peter Eisentraut wrote:\n> I see that in the 0003 patch, most location fields now have an explicit\n> markup with query_jumble_ignore. I thought we had previously resolved to\n> consider location fields to be automatically ignored unless explicitly\n> included (like for the Const node). This appears to invert that? Am I\n> missing something?\n\nAs a result, I have rebased the patch set to use the two-attribute\napproach: query_jumble_ignore and query_jumble_location.\n\nOn top of the three previous patches, I am adding 0004 to implement a\nGUC able to switch the computation of the utility statements between\nwhat I am calling \"string\", the previous default, which computes the\nquery IDs based on a hash of the query string, and \"jumble\", which\nuses the parsed tree, with a few more tests to see the difference.\nPerhaps it is not worth bothering, but it could be possible that some\nusers don't want to pay the penalty of doing the query jumbling with\nthe parsed tree for utilities, as well.\n--\nMichael",
"msg_date": "Fri, 20 Jan 2023 13:35:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 20.01.23 05:35, Michael Paquier wrote:\n> On Thu, Jan 19, 2023 at 09:42:03AM +0100, Peter Eisentraut wrote:\n>> I see that in the 0003 patch, most location fields now have an explicit\n>> markup with query_jumble_ignore. I thought we had previously resolved to\n>> consider location fields to be automatically ignored unless explicitly\n>> included (like for the Const node). This appears to invert that? Am I\n>> missing something?\n> As a result, I have rebased the patch set to use the two-attribute\n> approach: query_jumble_ignore and query_jumble_location.\n\nStructurally, this looks okay to me now.\n\nIn your 0001 patch, most of the comment reformattings for location \nfields are no longer needed, so you should undo those.\n\nThe 0002 patch looks good.\n\nThose two could be committed with those adjustments, I think.\n\nI'll read the 0003 again more carefully. I haven't studied the new 0004 \nyet.\n\n\n\n",
"msg_date": "Fri, 20 Jan 2023 11:56:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 11:56:00AM +0100, Peter Eisentraut wrote:\n> In your 0001 patch, most of the comment reformattings for location fields\n> are no longer needed, so you should undo those.\n> \n> The 0002 patch looks good.\n\nOkay, I have gone through these two again and applied what I had.\n0001 has been cleaned up of the extra comment moves for the\nlocations. Now, I have kept a few changes for some of the nodes to\nhave some consistency with the other fields, in the case where most of\nthe fields at the end of the structures have to be marked with new\nnode attributes. This made the style of the header a bit more\nelegant, IMV.\n\n> I'll read the 0003 again more carefully. I haven't studied the new 0004\n> yet.\n\nThanks, again. Rebased version attached.\n--\nMichael",
"msg_date": "Sat, 21 Jan 2023 12:35:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 21.01.23 04:35, Michael Paquier wrote:\n>> I'll read the 0003 again more carefully. I haven't studied the new 0004\n>> yet.\n> \n> Thanks, again. Rebased version attached.\n\nA couple of small fixes are attached.\n\nThere is something weird in _jumbleNode(). There are two switch \n(nodeTag(expr)) statements. Maybe that's intentional, but then it \nshould be commented better, because now it looks more like an editing \nmistake.\n\nThe handling of T_RangeTblEntry could be improved. In other contexts we \nhave for example a custom_copy attribute, which generates the switch \nentry but not the function. Something like this could be useful here too.\n\nOtherwise, this looks ok. I haven't checked whether it maintains the \nexact behavior from before. What is the test coverage situation for this?\n\nFor the 0004 patch, it should be documented why one would want one \nbehavior or the other. That's totally unclear right now.\n\nI think if we are going to accept 0004, then it might be better to \ncombine it with 0003. Otherwise, 0004 is just undoing a lot of the code \nstructure changes in JumbleQuery() that 0003 did.",
"msg_date": "Mon, 23 Jan 2023 14:27:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 02:27:13PM +0100, Peter Eisentraut wrote:\n> A couple of small fixes are attached.\n\nThanks.\n\n> There is something weird in _jumbleNode(). There are two switch\n> (nodeTag(expr)) statements. Maybe that's intentional, but then it should be\n> commented better, because now it looks more like an editing mistake.\n\nThis one is intentional, so that it is possible to correctly track the\nhighest param ID found while browsing the nodes. IMO it would be\nconfusing to add that into gen_node_support.pl. Another thing that\ncould be done is to switch Param to have a custom implementation, like\nRangeTblEntry, though this removes the automation around the creation\nof _jumbleParam(). I have clarified the comments around that.\n\n> The handling of T_RangeTblEntry could be improved. In other contexts we\n> have for example a custom_copy attribute, which generates the switch entry\n> but not the function. Something like this could be useful here too.\n\nHmm. Okay. Fine by me.\n\n> Otherwise, this looks ok. I haven't checked whether it maintains the exact\n> behavior from before. What is the test coverage situation for this?\n\n0003 taken in isolation has some minimal coverage through\npg_stat_statements, though it sits at around 15% with compute_query_id =\nauto, which enforces the jumbling path only when pg_stat_statements\nuses it. Still, my plan here is to enforce the loading of\npg_stat_statements with compute_query_id = regress and\nutility_query_id = jumble (if needed) in a new buildfarm machine,\nbecause that's the cheapest path. 
An extra possibility is to have\npg_regress kicked off in a new TAP test with these settings, but that's\ncostly and we already have two of these :/ Another possibility is to\nplug that into 027_stream_regress or the pg_upgrade test suite with\nnew settings :/\n\nAnyway, the regression tests of pg_stat_statements should be extended\na bit to cover more node types by default (say, COPY with DMLs for the\nInsertStmt & co) to look at how these show up once normalized\nusing their parsed query; we don't do much around that now.\nNormalizing more DDLs should use this path, as well.\n\n> For the 0004 patch, it should be documented why one would want one behavior\n> or the other. That's totally unclear right now.\n\nI am not 100% sure whether we should have this part at the end, but as\nan exit path in case somebody complains about the extra load with the\nautomated jumbling compared to the hash of the query strings, that can\nbe used as a backup. Anyway, attached is what would be a\nclarification.\n\n> I think if we are going to accept 0004, then it might be better to combine\n> it with 0003. Otherwise, 0004 is just undoing a lot of the code structure\n> changes in JumbleQuery() that 0003 did.\n\nMakes sense. That would be my intention if 0004 is the most\nacceptable option, and splitting things makes them a bit easier to review.\n--\nMichael",
"msg_date": "Tue, 24 Jan 2023 15:57:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 03:57:56PM +0900, Michael Paquier wrote:\n> Makes sense. That would be my intention if 0004 is the most\n> acceptable and splitting things makes things a bit easier to review.\n\nThere was a silly mistake in 0004 where the jumbling code relied on\ncompute_query_id rather than utility_query_id, so fixed and rebased as\nof v7 attached.\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 09:08:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 24.01.23 07:57, Michael Paquier wrote:\n>> For the 0004 patch, it should be documented why one would want one behavior\n>> or the other. That's totally unclear right now.\n> I am not 100% sure whether we should have this part at the end, but as\n> an exit path in case somebody complains about the extra load with the\n> automated jumbling compared to the hash of the query strings, that can\n> be used as a backup. Anyway, attached is what would be a\n> clarification.\n\nOk, the documentation makes sense now. I wonder what the performance \nimpact is. Probably, nobody cares about microoptimizing CREATE TABLE \nstatements. But BEGIN/COMMIT could matter. However, whatever you do in \nbetween the BEGIN and COMMIT will already be jumbled, so you're already \npaying the overhead. Hopefully, jumbling such simple commands will have \nno noticeable overhead.\n\nIn other words, we should test this and hopefully get rid of the \n'string' method.\n\n\n",
"msg_date": "Thu, 26 Jan 2023 09:37:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 25.01.23 01:08, Michael Paquier wrote:\n> On Tue, Jan 24, 2023 at 03:57:56PM +0900, Michael Paquier wrote:\n>> Makes sense. That would be my intention if 0004 is the most\n>> acceptable and splitting things makes things a bit easier to review.\n> \n> There was a silly mistake in 0004 where the jumbling code relied on\n> compute_query_id rather than utility_query_id, so fixed and rebased as\n> of v7 attached.\n\nOverall, this looks good to me.\n\nThere are a couple of repetitive comments, like \"typmod and collation \ninformation are irrelevant for the query jumbling\". This applies to all \nnodes, so we don't need to repeat it for a number of nodes (and then not \nmention it for other nodes). Maybe there should be a central place \nsomewhere that describes \"these kinds of fields should normally be ignored\".\n\n\n\n",
"msg_date": "Thu, 26 Jan 2023 09:39:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 09:37:13AM +0100, Peter Eisentraut wrote:\n> Ok, the documentation make sense now. I wonder what the performance impact\n> is. Probably, nobody cares about microoptimizing CREATE TABLE statements.\n> But BEGIN/COMMIT could matter. However, whatever you do in between the\n> BEGIN and COMMIT will already be jumbled, so you're already paying the\n> overhead. Hopefully, jumbling such simple commands will have no noticeable\n> overhead.\n> \n> In other words, we should test this and hopefully get rid of the 'string'\n> method.\n\nYep. I have mentioned a few numbers upthread, and this deserves\ndiscussion.\n\nFYI, I have done more micro-benchmarking to compare both methods for\nutility queries by hijacking JumbleQuery() to run the computation in a\ntight loop run N times (could not come up with a better idea to avoid\nthe repeated palloc/pfree overhead), as the path to stress is\n_jumbleNode(). See the attached, that should be able to apply on top\nof the latest patch set (named as .txt to not feed it to the CF bot,\nand need to recompile to switch the iteration).\n\nUsing that, I can compile the following results for various cases (-O2\nand compute_query_id=on):\n query | mode | iterations | avg_runtime_ns | avg_jumble_ns \n-------------------------+--------+------------+----------------+---------------\n begin | string | 50000000 | 4.53116 | 4.54\n begin | jumble | 50000000 | 30.94578 | 30.94\n commit | string | 50000000 | 4.76004 | 4.74\n commit | jumble | 50000000 | 31.4791 | 31.48\n create table 1 column | string | 50000000 | 7.22836 | 7.08\n create table 1 column | jumble | 50000000 | 152.10852 | 151.96\n create table 5 columns | string | 50000000 | 12.43412 | 12.28\n create table 5 columns | jumble | 50000000 | 352.88976 | 349.1\n create table 20 columns | string | 5000000 | 49.591 | 48.2\n create table 20 columns | jumble | 5000000 | 2272.4066 | 2271\n drop table 1 column | string | 50000000 | 6.70538 | 6.56\n drop table 1 
column | jumble | 50000000 | 50.38 | 50.24\n drop table 5 columns | string | 50000000 | 6.88256 | 6.74\n drop table 5 columns | jumble | 50000000 | 50.02898 | 49.9\n SET work_mem | string | 50000000 | 7.28752 | 7.28\n SET work_mem | jumble | 50000000 | 91.66588 | 91.64\n(16 rows)\n\navg_runtime_ns is (query runtime / iterations) and avg_jumble_ns is\nthe same with the difference between the start/end logs in the txt\npatch attached. The overhead to run the query does not matter much if\nyou compare both. The time it takes to run a jumble is correlated to\nthe number of nodes to go through for each query, and there is a\nlarger gap for more nodes to go through. Well, a simple \"begin\" or\n\"commit\" query has its computation time increase from 4ns to 30ns in\naverage which would be unnoticeable. The gap is larger for larger\nnodes, like SET, still we jump from 7ns to 90ns in this case. DDLs\ntake the most hit with this method, where a 20-column CREATE TABLE\njumps from 50ns to 2us (note that the iteration is 10 times lower\nhere).\n\nAt the end, that would be unnoticeable for the average user, I guess,\nbut here are the numbers I get on my laptop :)\n--\nMichael",
"msg_date": "Fri, 27 Jan 2023 11:59:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 09:39:05AM +0100, Peter Eisentraut wrote:\n> There are a couple of repetitive comments, like \"typmod and collation\n> information are irrelevant for the query jumbling\". This applies to all\n> nodes, so we don't need to repeat it for a number of nodes (and then not\n> mention it for other nodes). Maybe there should be a central place\n> somewhere that describes \"these kinds of fields should normally be ignored\".\n\nThe place that would make the most sense to me to centralize this\nknowledge is nodes.h itself, because that's where the node attributes\nare defined?\n--\nMichael",
"msg_date": "Fri, 27 Jan 2023 12:07:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 03:57:56PM +0900, Michael Paquier wrote:\n> Still, my plan here is to enforce the loading of \n> pg_stat_statements with compute_query_id = regress and\n> utility_query_id = jumble (if needed) in a new buildfarm machine,\n\nActually, about this specific point, I have been able to set up a\nbuildfarm machine that uses shared_preload_libraries =\npg_stat_statements and compute_query_id = regress in the base\nconfiguration, which was not covered until now. This works as long as\none sets up EXTRA_INSTALL => \"contrib/pg_stat_statements\" in build_env.\n--\nMichael",
"msg_date": "Sat, 28 Jan 2023 12:32:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 11:59:47AM +0900, Michael Paquier wrote:\n> Using that, I can compile the following results for various cases (-O2\n> and compute_query_id=on):\n> query | mode | iterations | avg_runtime_ns | avg_jumble_ns \n> -------------------------+--------+------------+----------------+---------------\n> begin | string | 50000000 | 4.53116 | 4.54\n> begin | jumble | 50000000 | 30.94578 | 30.94\n> commit | string | 50000000 | 4.76004 | 4.74\n> commit | jumble | 50000000 | 31.4791 | 31.48\n> create table 1 column | string | 50000000 | 7.22836 | 7.08\n> create table 1 column | jumble | 50000000 | 152.10852 | 151.96\n> create table 5 columns | string | 50000000 | 12.43412 | 12.28\n> create table 5 columns | jumble | 50000000 | 352.88976 | 349.1\n> create table 20 columns | string | 5000000 | 49.591 | 48.2\n> create table 20 columns | jumble | 5000000 | 2272.4066 | 2271\n> drop table 1 column | string | 50000000 | 6.70538 | 6.56\n> drop table 1 column | jumble | 50000000 | 50.38 | 50.24\n> drop table 5 columns | string | 50000000 | 6.88256 | 6.74\n> drop table 5 columns | jumble | 50000000 | 50.02898 | 49.9\n> SET work_mem | string | 50000000 | 7.28752 | 7.28\n> SET work_mem | jumble | 50000000 | 91.66588 | 91.64\n> (16 rows)\n\nJust to close the loop here, I have done more measurements to compare\nthe jumble done for some DMLs and some SELECTs between HEAD and the\npatch (forgot to post some last Friday). 
Both methods show comparable\nresults:\n query | mode | iterations | avg_runtime_ns | avg_jumble_ns \n----------------------+--------+------------+----------------+---------------\n insert table 10 cols | master | 50000000 | 377.17878 | 377.04\n insert table 10 cols | jumble | 50000000 | 409.47924 | 409.34\n insert table 20 cols | master | 50000000 | 692.94924 | 692.8\n insert table 20 cols | jumble | 50000000 | 710.0901 | 709.96\n insert table 5 cols | master | 50000000 | 232.44308 | 232.3\n insert table 5 cols | jumble | 50000000 | 253.49854 | 253.36\n select 10 cols | master | 50000000 | 449.13608 | 383.36\n select 10 cols | jumble | 50000000 | 491.61912 | 323.86\n select 5 cols | master | 50000000 | 277.477 | 277.46\n select 5 cols | jumble | 50000000 | 323.88152 | 323.86\n(10 rows)\n\nThe averages are in ns, so the difference does not bother me much.\nThere may be some noise mixed in that ;)\n\n(Attached is the tweak I have applied on HEAD to get some numbers.)\n--\nMichael",
"msg_date": "Mon, 30 Jan 2023 19:39:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 27.01.23 04:07, Michael Paquier wrote:\n> On Thu, Jan 26, 2023 at 09:39:05AM +0100, Peter Eisentraut wrote:\n>> There are a couple of repetitive comments, like \"typmod and collation\n>> information are irrelevant for the query jumbling\". This applies to all\n>> nodes, so we don't need to repeat it for a number of nodes (and then not\n>> mention it for other nodes). Maybe there should be a central place\n>> somewhere that describes \"these kinds of fields should normally be ignored\".\n> \n> The place that would make the most sense to me to centralize this\n> knowlegde is nodes.h itself, because that's where the node attributes\n> are defined?\n\nEither that or src/backend/nodes/README.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:46:06 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 27.01.23 03:59, Michael Paquier wrote:\n> At the end, that would be unnoticeable for the average user, I guess,\n> but here are the numbers I get on my laptop :)\n\nPersonally, I think we do not want the two jumble methods in parallel.\n\nMaybe there are other opinions.\n\nI'm going to set this thread as \"Ready for Committer\". Either wait a \nbit for more feedback on this topic, or just go ahead with either \nsolution. We can leave it as a semi-open item for reconsideration later.\n\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:48:45 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 11:46:06AM +0100, Peter Eisentraut wrote:\n> Either that or src/backend/nodes/README.\n\nThe README holds nothing about the node attributes currently, so\nnodes.h feels the most appropriate place in the end.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 31 Jan 2023 13:21:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 11:48:45AM +0100, Peter Eisentraut wrote:\n> I'm going to set this thread as \"Ready for Committer\". Either wait a bit\n> for more feedback on this topic, or just go ahead with either solution. We\n> can leave it as a semi-open item for reconsideration later.\n\nAll the measurements I have done for the last couple of days point me\nin the direction that there is no need for an extra node based on the\naveraged computation times (did a few more today with some long CREATE\nFUNCTION, VIEW or event trigger queries, for example). Agreed to add\nthe extra option as something to consider at some point during beta,\nas long as it is fresh. I am not convinced that it will be necessary\nbut let's see how it goes.\n\nWhile reviewing all the nodes, I have noticed two mistakes for a few\nthings marked as query_jumble_ignore but they should not:\n- orderClause in WindowClause\n- aliascolnames in CommonTableExpr\nThe rest was fine.\n\nvarnullingrels has been added very recently, and there was in\nqueryjumblefuncs.c a comment explaining why it should be ignored.\nThis has been moved to nodes.h, like the others.\n\nWith all that in mind, I have spent my day polishing that and doing a\nclose lookup, and the patch has been applied. Thanks a lot!\n--\nMichael",
"msg_date": "Tue, 31 Jan 2023 15:40:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 03:40:56PM +0900, Michael Paquier wrote:\n> With all that in mind, I have spent my day polishing that and doing a\n> close lookup, and the patch has been applied. Thanks a lot!\n\nWhile working on a different patch, I have noticed a small issue in\nthe way the jumbling happens for A_Const, where ValUnion was not\ngetting jumbled correctly. This caused a few statements that rely on\nthis node to compile the same query IDs when using different values.\t\nThe full contents of pg_stat_statements for a regression database\npoint to: \n- SET.\n- COPY with queries.\n- CREATE TABLE with partition bounds and default expressions.\n\nThis was causing some confusion in pg_stat_statements where some\nutility queries would be incorrectly reported, and at this point the\nintention is to keep this area compatible with the string-based method\nwhen it comes to the values. Like read, write and copy nodes, we need\nto compile the query ID based on the type of the value, which cannot\nbe automated. Attached is a patch to fix this issue with some\nregression tests, that I'd like to get fixed before moving on with\nmore business in pg_stat_statements (aka properly show Const nodes for\nutilities with normalized queries).\n\nComments or objections are welcome, of course.\n\n(FWIW, I'd like to think that there is an argument to normalize the\nA_Const nodes for a portion of the DDL queries, by ignoring their\nvalues in the query jumbling and mark a location, which would be\nreally useful for some workloads, but that's a separate discussion I\nam keeping for later.)\n--\nMichael",
"msg_date": "Sun, 5 Feb 2023 19:40:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Sun, Feb 05, 2023 at 07:40:57PM +0900, Michael Paquier wrote:\n> Comments or objections are welcome, of course.\n\nDone this part.\n\n> (FWIW, I'd like to think that there is an argument to normalize the\n> A_Const nodes for a portion of the DDL queries, by ignoring their\n> values in the query jumbling and mark a location, which would be\n> really useful for some workloads, but that's a separate discussion I\n> am keeping for later.)\n\nAnd I have a patch pretty much OK for this part of the discussion.\nWill post soon..\n--\nMichael",
"msg_date": "Tue, 7 Feb 2023 09:16:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> With all that in mind, I have spent my day polishing that and doing a\n> close lookup, and the patch has been applied. Thanks a lot!\n\nI have just noticed that this patch is generating useless jumbling\ncode for node types such as Path nodes and other planner infrastructure\nnodes. That no doubt contributes to the miserable code coverage rating\nfor queryjumblefuncs.*.c, which have enough dead lines to drag down the\noverall rating for all of backend/nodes/. Shouldn't a little more\nattention have been paid to excluding entire node classes if they can\nnever appear in Query?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Feb 2023 17:32:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Feb 07, 2023 at 05:32:07PM -0500, Tom Lane wrote:\n> I have just noticed that this patch is generating useless jumbling\n> code for node types such as Path nodes and other planner infrastructure\n> nodes. That no doubt contributes to the miserable code coverage rating\n> for queryjumblefuncs.*.c, which have enough dead lines to drag down the\n> overall rating for all of backend/nodes/. Shouldn't a little more\n> attention have been paid to excluding entire node classes if they can\n> never appear in Query?\n\nThis one was intentional to let extensions play with jumbling of such\nnodes, but perhaps you are right that it makes little sense at this\nstage. If there is an ask for it later, though.. Using\nshared_preload_libraries = pg_stat_statements and compute_query_id =\nregress shows that numbers go up to 60% for funcs.c and 30% for\nswitch.c. Removing nodes like as of the attached brings these numbers\nrespectively up to 94.5% and 93.5% for a check. With a check-world, I\nmeasure respectively 96.7% and 96.1% because there is more coverage\nfor extensions, ALTER SYSTEM and database commands, roughly.\n\nThis could also be a file-level policy by enforcing no_query_jumble in\ngen_node_support.pl by looking at the header name, still I favor\nno_query_jumble to keep all the pg_node_attr() in a single area with\nthe headers. Note that the attached includes in 0002 the tweak to\nenforce the computation with compute_query_id if you want to test it\nyourself and check my numbers. This is useful IMO as we could detect\nmissing nodes for all queries (utilities or not), still doing this\nchange may deserve a separate discussion. Note that I am not seeing\nany \"unrecognized node type\" in any of the logs for any queries.\n\nAs a side note, should we be more aggressive with the tests related to\nthe jumbling code since it is now in core? 
For example, XmlExpr or\nMinMaxExpr, which are part of Query nodes that can be jumbled even in\nolder branches, have zero coverage by default as\ncoverage.postgresql.org reports, because everything goes through\npg_stat_statements and it includes no queries with such nodes. My\nbuildfarm member batta makes sure to stress that no nodes are missing\nby overloading pg_stat_statements and compute_query_id = regress in \nthe configuration, so no nodes are missing from the computation, still\nthe coverage could be better across the board. Expanding the tests of\npg_stat_statements is needed in this area for some time, still could\nthere be a point in switching compute_query_id = regress so as it is a\nsynonym of \"on\" without the EXPLAIN output rather than \"auto\"? If the\nsetting is enforced by pg_regress, the coverage of queryjumble.c would\nbe so much better, at least..\n--\nMichael",
"msg_date": "Wed, 8 Feb 2023 15:47:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 15:47:51 +0900, Michael Paquier wrote:\n> This one was intentional to let extensions play with jumbling of such\n> nodes, but perhaps you are right that it makes little sense at this\n> stage. If there is an ask for it later, though.. Using\n> shared_preload_libraries = pg_stat_statements and compute_query_id =\n> regress shows that numbers go up to 60% for funcs.c and 30% for\n> switch.c. Removing nodes like as of the attached brings these numbers\n> respectively up to 94.5% and 93.5% for a check. With a check-world, I\n> measure respectively 96.7% and 96.1% because there is more coverage\n> for extensions, ALTER SYSTEM and database commands, roughly.\n\nGiven that we already pay the price of multiple regress runs, and that\njumbling is now really a core feature, perhaps we should enable\npg_stat_statements in pg_upgrade or 027_stream_regress.pl? I'd hope it\nwouldn't add a meaningful amount of time? A tiny bit of verification at the\nend should also be ok.\n\nBoth pg_upgrade and 027_stream_regress.pl have some advantages. The former\nwould test pg_upgrade interactions with shared_preload_libraries, the latter\ncould do some basic checks of pg_stat_statements on a standby.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 23:01:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Feb 07, 2023 at 11:01:03PM -0800, Andres Freund wrote:\n> Given that we already pay the price of multiple regress runs, and that\n> jumbling is now really a core feature, perhaps we should enable\n> pg_stat_statements in pg_upgrade or 027_stream_regress.pl? I'd hope it\n> wouldn't add a meaningful amount of time? A tiny bit of verification at the\n> end should also be ok.\n\nYeah, I have briefly mentioned this part upthread:\nhttps://www.postgresql.org/message-id/Y8+BdCOjxykre5es@paquier.xyz\n\nIt would not, I guess, as long as pg_stat_statements.max is set large \nenough in the TAP test. There are currently 21k~22k entries in the\nregression database, much larger than the default of 5000 so this may\nbecome an issue on small-ish machines if left untouched even in a TAP\ntest.\n\n> Both pg_upgrade and 027_stream_regress.pl have some advantages. The former\n> would test pg_upgrade interactions with shared_preload_libraries, the latter\n> could do some basic checks of pg_stat_statements on a standby.\n\nYes, there could be more checks, potentially useful for both cases, so\nI may choose both at the end of the day. Checking the consistency of\nthe contents of pg_stat_statements across a pg_upgrade run for the\nsame version may be one thing? I am not sure if it is that\ninteresting, TBH, still that's one idea :)\n--\nMichael",
"msg_date": "Wed, 8 Feb 2023 16:16:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 03:47:51PM +0900, Michael Paquier wrote:\n> This one was intentional to let extensions play with jumbling of such\n> nodes, but perhaps you are right that it makes little sense at this\n> stage. If there is an ask for it later, though.. Using\n> shared_preload_libraries = pg_stat_statements and compute_query_id =\n> regress shows that numbers go up to 60% for funcs.c and 30% for\n> switch.c. Removing nodes like as of the attached brings these numbers\n> respectively up to 94.5% and 93.5% for a check. With a check-world, I\n> measure respectively 96.7% and 96.1% because there is more coverage\n> for extensions, ALTER SYSTEM and database commands, roughly.\n\nTom, did you get a chance to look at what is proposed here and expand\nthe use of query_jumble_ignore in the definitions of the nodes rather\nthan have an enforced per-file policy in gen_node_support.pl? The\nattached improves the situation by as much as we can, still the\nnumbers reported by coverage.postgresql.org won't go that up until we\nenforce query jumbling testing on the regression database (something\nwe should have done since this code moved to core, I guess).\n--\nMichael",
"msg_date": "Fri, 10 Feb 2023 07:49:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Tom, did you get a chance to look at what is proposed here and expand\n> the use of query_jumble_ignore in the definitions of the nodes rather\n> than have an enforced per-file policy in gen_node_support.pl?\n\nSorry, didn't look at it before.\n\nI'm okay with the pathnodes.h changes --- although surely you don't need\nchanges like this:\n\n-\tpg_node_attr(abstract)\n+\tpg_node_attr(abstract, no_query_jumble)\n\n\"abstract\" should already imply \"no_query_jumble\".\n\nI wonder too if you could shorten the changes by making no_query_jumble\nan inheritable attribute, and then just applying it to Path and Plan.\n\nThe changes in parsenodes.h seem wrong, except for RawStmt. Those node\ntypes are used in parsed queries, aren't they?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 18:12:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 06:12:50PM -0500, Tom Lane wrote:\n> I'm okay with the pathnodes.h changes --- although surely you don't need\n> changes like this:\n> \n> -\tpg_node_attr(abstract)\n> +\tpg_node_attr(abstract, no_query_jumble)\n> \n> \"abstract\" should already imply \"no_query_jumble\".\n\nOkay, understood. Following this string of thoughts, I am a bit\nsurprised for two cases, though:\n- PartitionPruneStep.\n- Plan.\nBoth are abstract and both are marked with no_equal. I guess that\napplying no_query_jumble to both of them is fine, and that's what you\nmean?\n\n> I wonder too if you could shorten the changes by making no_query_jumble\n> an inheritable attribute, and then just applying it to Path and Plan.\n\nAh. I did not catch what you meant here at first, but I think that I\ndo now. Are you referring to the part of gen_node_support.pl where we\npropagate properties when a node is a supertype? This part would be\ntaken into account when a node is parsed but we find that its first\nmember is already tracked as a node:\n\t# Propagate some node attributes from supertypes\n\tif ($supertype)\n\t{\n\t\tpush @no_copy, $in_struct\n\t\t if elem $supertype, @no_copy;\n\t\tpush @no_equal, $in_struct\n\t\t if elem $supertype, @no_equal;\n\t\tpush @no_read, $in_struct\n\t\t if elem $supertype, @no_read;\n+\t\tpush @no_query_jumble, $in_struct\n+\t\t if elem $supertype, @no_query_jumble;\n\t}\n\nA benefit of doing that would also discard all the Scan and Sort\nnodes. So like the other no_* attributes, it makes sense to force the\ninheritance here.\n\n> The changes in parsenodes.h seem wrong, except for RawStmt. Those node\n> types are used in parsed queries, aren't they?\n\nRTEPermissionInfo is a recent addition, as of a61b1f7. 
This commit\ndocuments it as a plan node, still it is part of a Query while being\nignored in the query jumbling since its introduction, so I am a bit\nconfused by this one.\n\nAnyway, none of these need to be included in the query jumbling\ncurrently because they are ignored, but I'd be fine to generate their\ncode by default as they could become relevant if other nodes begin to\nrely on them more heavily, as being part of queries. Peter E. has\nmentioned upthread that a few nodes should include more jumbling while\nsome other parts should be ignored. This should be analyzed\nseparately because ~15 does not seem to be strictly right, either.\n\nAttached is a patch refreshed with all that. Feel free to ignore 0002\nas that's just useful to enforce the tests to go through the jumbling\ncode. The attached reaches 95.0% of line coverage after a check-world\nin funcs.c.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 10 Feb 2023 11:28:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Okay, understood. Following this string of thoughts, I am a bit\n> surprised for two cases, though:\n> - PartitionPruneStep.\n> - Plan.\n> Both are abstract and both are marked with no_equal. I guess that\n> applying no_query_jumble to both of them is fine, and that's what you\n> mean?\n\nOn second thought, the point of that is to allow the no_equal property\nto automatically inherit to child node types, so doing likewise\nfor no_query_jumble is sensible.\n\n>> The changes in parsenodes.h seem wrong, except for RawStmt. Those node\n>> types are used in parsed queries, aren't they?\n\n> RTEPermissionInfo is a recent addition, as of a61b1f7. This commit\n> documents it as a plan node, still it is part of a Query while being\n> ignored in the query jumbling since its introduction, so I am a bit\n> confused by this one.\n\nHmm ... it is part of Query, so that documentation is wrong, and the\nfact that it's not reached by query jumbling kind of seems like a bug.\nHowever, it might be that everything in it is derived from something\nelse that *is* covered by jumbling, in which case that's okay, if\nunderdocumented.\n\n> ... Peter E. has\n> mentioned upthread that a few nodes should include more jumbling while\n> some other parts should be ignored. This should be analyzed\n> separately because ~15 does not seem to be strictly right, either.\n\nYeah. It'd surprise me not at all if people have overlooked that.\n\nv2 looks good to me as far as it goes. I agree these other questions\ndeserve a separate look.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Feb 2023 16:40:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 04:40:08PM -0500, Tom Lane wrote:\n> v2 looks good to me as far as it goes.\n\nThanks. I have applied that after an extra lookup.\n\n> I agree these other questions deserve a separate look.\n\nOkay, I may be able to come back to that. Another point is that we\nneed to do a better job in forcing the execution of the query jumbling\nin one of the TAP tests running pg_regress, outside\npg_stat_statements, to maximize coverage. Will see to that on a\nseparate thread.\n--\nMichael",
"msg_date": "Mon, 13 Feb 2023 09:11:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 11:48:45AM +0100, Peter Eisentraut wrote:\n> On 27.01.23 03:59, Michael Paquier wrote:\n>> At the end, that would be unnoticeable for the average user, I guess,\n>> but here are the numbers I get on my laptop :)\n> \n> Personally, I think we do not want the two jumble methods in parallel.\n> \n> Maybe there are other opinions.\n\n(Thanks Jonathan for the poke.)\n\nNow that we are in mid-beta for 16, it would be a good time to\nconclude on this open item:\n\"Reconsider a utility_query_id GUC to control if query jumbling of\nutilities can go through the past string-only mode and the new mode?\"\n\nIn Postgres ~15, utility commands used a hash of the query string to\ncompute their query ID. The current query jumbling code uses a Query\ninstead, like any other queries. I have registered this open item as\na self-reminder, mostly in case there would be an argument to have a\nGUC where users could switch from one mode to another. See here as\nwell for some computation times for each method (table is in ns, wiht\nmillions of iterations):\nhttps://www.postgresql.org/message-id/Y9eeYinDb1AcpWrG@paquier.xyz\n\nI still don't think that we need both methods based on these numbers,\nbut there may be more opinions about that? Are people OK if this open\nitem is discarded?\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 07:35:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 11/7/2023 05:35, Michael Paquier wrote:\n> On Mon, Jan 30, 2023 at 11:48:45AM +0100, Peter Eisentraut wrote:\n>> On 27.01.23 03:59, Michael Paquier wrote:\n>>> At the end, that would be unnoticeable for the average user, I guess,\n>>> but here are the numbers I get on my laptop :)\n>>\n>> Personally, I think we do not want the two jumble methods in parallel.\n>>\n>> Maybe there are other opinions.\n> \n> (Thanks Jonathan for the poke.)\n> \n> Now that we are in mid-beta for 16, it would be a good time to\n> conclude on this open item:\n> \"Reconsider a utility_query_id GUC to control if query jumbling of\n> utilities can go through the past string-only mode and the new mode?\"\n> \n> In Postgres ~15, utility commands used a hash of the query string to\n> compute their query ID. The current query jumbling code uses a Query\n> instead, like any other queries. I have registered this open item as\n> a self-reminder, mostly in case there would be an argument to have a\n> GUC where users could switch from one mode to another. See here as\n> well for some computation times for each method (table is in ns, wiht\n> millions of iterations):\n> https://www.postgresql.org/message-id/Y9eeYinDb1AcpWrG@paquier.xyz\n> \n> I still don't think that we need both methods based on these numbers,\n> but there may be more opinions about that? Are people OK if this open\n> item is discarded?\nI vote for only one method based on a query tree structure.\nBTW, did you think about different algorithms of queryId generation? \nAuto-generated queryId code can open a way for extensions to have \neasy-supporting custom queryIds.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 11 Jul 2023 12:29:29 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 12:29:29PM +0700, Andrey Lepikhov wrote:\n> I vote for only one method based on a query tree structure.\n\nNoted\n\n> BTW, did you think about different algorithms of queryId generation?\n\nNot really, except if you are referring to the possibility of being\nable to handle differently different portions of the nodes depending\non a context given by the callers willing to do a query jumbling\ncomputation. (For example, choose to *not* silence the Const nodes,\netc.)\n\n> Auto-generated queryId code can open a way for extensions to have\n> easy-supporting custom queryIds.\n\nExtensions can control that at some extent, already.\n--\nMichael",
"msg_date": "Tue, 11 Jul 2023 14:35:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On 11/7/2023 12:35, Michael Paquier wrote:\n> On Tue, Jul 11, 2023 at 12:29:29PM +0700, Andrey Lepikhov wrote:\n>> I vote for only one method based on a query tree structure.\n> \n> Noted\n> \n>> BTW, did you think about different algorithms of queryId generation?\n> \n> Not really, except if you are referring to the possibility of being\n> able to handle differently different portions of the nodes depending\n> on a context given by the callers willing to do a query jumbling\n> computation. (For example, choose to *not* silence the Const nodes,\n> etc.)\nYes, I have two requests on different queryId algorithms:\n1. With suppressed Const nodes.\n2. With replacement of Oids with full names - to give a chance to see \nthe same queryId at different instances for the same query.\n\nIt is quite trivial to implement, but not easy in support.\n> \n>> Auto-generated queryId code can open a way for extensions to have\n>> easy-supporting custom queryIds.\n> \n> Extensions can control that at some extent, already.\n> --\n> Michael\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Tue, 11 Jul 2023 13:44:39 +0700",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 07:35:43AM +0900, Michael Paquier wrote:\n> I still don't think that we need both methods based on these numbers,\n> but there may be more opinions about that? Are people OK if this open\n> item is discarded?\n\nHearing nothing about this point, removed from the open item list,\nthen.\n--\nMichael",
"msg_date": "Wed, 19 Jul 2023 12:30:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Generating code for query jumbling through gen_node_support.pl"
}
] |
[
{
"msg_contents": "Per Alvaro's advice, forking this from [1].\n\nIn light of my proposed changes to decouple permission checking from\nthe range table on that thread (now committed as a61b1f7482), I had\nalso been posting a patch there to rethink commands/view.c's\neditorializing of a view rule action query' range table to add the\nplaceholder RTEs for checking the permissions of the view relation\namong other things.\n\nThat patch came to life after Tom's comment in the same thread, where\nhe wondered if we could do away with those placeholder entries [2] if\npermission checking details were to go elsewhere.\n\nAll but very recent versions of the patch were simply removing the\nfunction UpdateRangeTableOfViewParse() that added those entries, such\nthat a view rule's action query would be stored with only the RTEs of\nthe relations mentioned in the view's query, with no trace whatsoever\nof the view relation. ApplyRetrieveRule() working with a given user\nquery on the view would add a placeholder entry for the view for the\npurpose served by those no-longer-present placeholder RTEs in the rule\naction query's range table. It would accomplish that by adding a copy\nof the query's view RTE with appropriate permission details filled in\nbefore converting the latter into a RTE_SUBQUERY entry. However, this\napproach of not storing the placeholder in the stored rule would lead\nto a whole lot of regression test output changes, because the stored\nview queries of many regression tests involving views would now end up\nwith only 1 entry in the range table instead of 3, causing ruleutils.c\nto no longer qualify the column names in the deparsed representation\nof those queries appearing in those regression test expected outputs.\n\nTo avoid that churn (not sure if really a goal to strive for in this\ncase!), I thought it might be better to keep the OLD entry in the\nstored action query while getting rid of the NEW entry. 
Other than\navoiding the regression test output churn, this also makes the changes\nof ApplyRetrieveRule unnecessary. Actually, as I was addressing\nAlvaro's comments on the now-committed patch, I was starting to get\nconcerned about the implications of the change in position of the view\nrelation RTE in the query's range table if ApplyRetrieveRule() adds\none from scratch instead of simply recycling the OLD entry from stored\nrule action query, even though I could see that there are no\n*user-visible* changes, especially after decoupling permission\nchecking from the range table.\n\nAnyway, the attached patch implements this 2nd approach.\n\nI'll add this to the January CF.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/697679.1625154303%40sss.pgh.pa.us",
"msg_date": "Wed, 7 Dec 2022 18:42:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 6:42 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Per Alvaro's advice, forking this from [1].\n\nForgot to add Alvaro.\n\n> In light of my proposed changes to decouple permission checking from\n> the range table on that thread (now committed as a61b1f7482), I had\n> also been posting a patch there to rethink commands/view.c's\n> editorializing of a view rule action query' range table to add the\n> placeholder RTEs for checking the permissions of the view relation\n> among other things.\n>\n> That patch came to life after Tom's comment in the same thread, where\n> he wondered if we could do away with those placeholder entries [2] if\n> permission checking details were to go elsewhere.\n>\n> All but very recent versions of the patch were simply removing the\n> function UpdateRangeTableOfViewParse() that added those entries, such\n> that a view rule's action query would be stored with only the RTEs of\n> the relations mentioned in the view's query, with no trace whatsoever\n> of the view relation. ApplyRetrieveRule() working with a given user\n> query on the view would add a placeholder entry for the view for the\n> purpose served by those no-longer-present placeholder RTEs in the rule\n> action query's range table. It would accomplish that by adding a copy\n> of the query's view RTE with appropriate permission details filled in\n> before converting the latter into a RTE_SUBQUERY entry. 
However, this\n> approach of not storing the placeholder in the stored rule would lead\n> to a whole lot of regression test output changes, because the stored\n> view queries of many regression tests involving views would now end up\n> with only 1 entry in the range table instead of 3, causing ruleutils.c\n> to no longer qualify the column names in the deparsed representation\n> of those queries appearing in those regression test expected outputs.\n>\n> To avoid that churn (not sure if really a goal to strive for in this\n> case!), I thought it might be better to keep the OLD entry in the\n> stored action query while getting rid of the NEW entry. Other than\n> avoiding the regression test output churn, this also makes the changes\n> of ApplyRetrieveRule unnecessary. Actually, as I was addressing\n> Alvaro's comments on the now-committed patch, I was starting to get\n> concerned about the implications of the change in position of the view\n> relation RTE in the query's range table if ApplyRetrieveRule() adds\n> one from scratch instead of simply recycling the OLD entry from stored\n> rule action query, even though I could see that there are no\n> *user-visible* changes, especially after decoupling permission\n> checking from the range table.\n>\n> Anyway, the attached patch implements this 2nd approach.\n>\n> I'll add this to the January CF.\n\nDone.\n\nhttps://commitfest.postgresql.org/41/4048/\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 18:47:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On 2022-Dec-07, Amit Langote wrote:\n\n> However, this\n> approach of not storing the placeholder in the stored rule would lead\n> to a whole lot of regression test output changes, because the stored\n> view queries of many regression tests involving views would now end up\n> with only 1 entry in the range table instead of 3, causing ruleutils.c\n> to no longer qualify the column names in the deparsed representation\n> of those queries appearing in those regression test expected outputs.\n> \n> To avoid that churn (not sure if really a goal to strive for in this\n> case!), I thought it might be better to keep the OLD entry in the\n> stored action query while getting rid of the NEW entry.\n\nIf the *only* argument for keeping the RTE for OLD is to avoid\nregression test churn, then definitely it is not worth doing and it\nshould be ripped out.\n\n> Other than avoiding the regression test output churn, this also makes\n> the changes of ApplyRetrieveRule unnecessary.\n\nBut do these changes mean the code is worse afterwards? Changing stuff,\nper se, is not bad. Also, since you haven't posted the \"complete\" patch\nsince Nov 7th, it's not easy to tell what those changes are.\n\nMaybe you should post both versions of the patch -- one that removes\njust NEW, and one that removes both OLD and NEW, so that we can judge.\n\n> Actually, as I was addressing Alvaro's comments on the now-committed\n> patch, I was starting to get concerned about the implications of the\n> change in position of the view relation RTE in the query's range table\n> if ApplyRetrieveRule() adds one from scratch instead of simply\n> recycling the OLD entry from stored rule action query, even though I\n> could see that there are no *user-visible* changes, especially after\n> decoupling permission checking from the range table.\n\nHmm, I think I see the point, though I don't necessarily agree that\nthere is a problem.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:12:45 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 6:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Dec-07, Amit Langote wrote:\n> > However, this\n> > approach of not storing the placeholder in the stored rule would lead\n> > to a whole lot of regression test output changes, because the stored\n> > view queries of many regression tests involving views would now end up\n> > with only 1 entry in the range table instead of 3, causing ruleutils.c\n> > to no longer qualify the column names in the deparsed representation\n> > of those queries appearing in those regression test expected outputs.\n> >\n> > To avoid that churn (not sure if really a goal to strive for in this\n> > case!), I thought it might be better to keep the OLD entry in the\n> > stored action query while getting rid of the NEW entry.\n>\n> If the *only* argument for keeping the RTE for OLD is to avoid\n> regression test churn, then definitely it is not worth doing and it\n> should be ripped out.\n>\n> > Other than avoiding the regression test output churn, this also makes\n> > the changes of ApplyRetrieveRule unnecessary.\n>\n> But do these changes mean the code is worse afterwards? Changing stuff,\n> per se, is not bad. Also, since you haven't posted the \"complete\" patch\n> since Nov 7th, it's not easy to tell what those changes are.\n>\n> Maybe you should post both versions of the patch -- one that removes\n> just NEW, and one that removes both OLD and NEW, so that we can judge.\n\nOK, I gave the previous approach another try to see if I can change\nApplyRetrieveRule() in a bit more convincing way this time around, now\nthat the RTEPermissionInfo patch is in.\n\nI would say I'm more satisfied with how it turned out this time. Let\nme know what you think.\n\n> > Actually, as I was addressing Alvaro's comments on the now-committed\n> > patch, I was starting to get concerned about the implications of the\n> > change in position of the view relation RTE in the query's range table\n> > if ApplyRetrieveRule() adds one from scratch instead of simply\n> > recycling the OLD entry from stored rule action query, even though I\n> > could see that there are no *user-visible* changes, especially after\n> > decoupling permission checking from the range table.\n>\n> Hmm, I think I see the point, though I don't necessarily agree that\n> there is a problem.\n\nYeah, I'm not worried as much with the new version. That is helped by\nthe fact that I've made ApplyRetrieveRule() now do basically what\nUpdateRangeTableOfViewParse() would do with the stored rule query.\nAlso, our making add_rtes_to_flat_rtable() add perminfos in the RTE\norder helped find the bug with the last version.\n\nAttaching both patches.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Dec 2022 15:07:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 3:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Dec 8, 2022 at 6:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > On 2022-Dec-07, Amit Langote wrote:\n> > > However, this\n> > > approach of not storing the placeholder in the stored rule would lead\n> > > to a whole lot of regression test output changes, because the stored\n> > > view queries of many regression tests involving views would now end up\n> > > with only 1 entry in the range table instead of 3, causing ruleutils.c\n> > > to no longer qualify the column names in the deparsed representation\n> > > of those queries appearing in those regression test expected outputs.\n> > >\n> > > To avoid that churn (not sure if really a goal to strive for in this\n> > > case!), I thought it might be better to keep the OLD entry in the\n> > > stored action query while getting rid of the NEW entry.\n> >\n> > If the *only* argument for keeping the RTE for OLD is to avoid\n> > regression test churn, then definitely it is not worth doing and it\n> > should be ripped out.\n> >\n> > > Other than avoiding the regression test output churn, this also makes\n> > > the changes of ApplyRetrieveRule unnecessary.\n> >\n> > But do these changes mean the code is worse afterwards? Changing stuff,\n> > per se, is not bad. Also, since you haven't posted the \"complete\" patch\n> > since Nov 7th, it's not easy to tell what those changes are.\n> >\n> > Maybe you should post both versions of the patch -- one that removes\n> > just NEW, and one that removes both OLD and NEW, so that we can judge.\n>\n> OK, I gave the previous approach another try to see if I can change\n> ApplyRetrieveRule() in a bit more convincing way this time around, now\n> that the RTEPermissionInfo patch is in.\n>\n> I would say I'm more satisfied with how it turned out this time. Let\n> me know what you think.\n>\n> > > Actually, as I was addressing Alvaro's comments on the now-committed\n> > > patch, I was starting to get concerned about the implications of the\n> > > change in position of the view relation RTE in the query's range table\n> > > if ApplyRetrieveRule() adds one from scratch instead of simply\n> > > recycling the OLD entry from stored rule action query, even though I\n> > > could see that there are no *user-visible* changes, especially after\n> > > decoupling permission checking from the range table.\n> >\n> > Hmm, I think I see the point, though I don't necessarily agree that\n> > there is a problem.\n>\n> Yeah, I'm not worried as much with the new version. That is helped by\n> the fact that I've made ApplyRetrieveRule() now do basically what\n> UpdateRangeTableOfViewParse() would do with the stored rule query.\n> Also, our making add_rtes_to_flat_rtable() add perminfos in the RTE\n> order helped find the bug with the last version.\n>\n> Attaching both patches.\n\nLooks like I forgot to update some expected output files.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Dec 2022 15:50:05 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 12:20, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Dec 9, 2022 at 3:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Dec 8, 2022 at 6:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > On 2022-Dec-07, Amit Langote wrote:\n> > > > However, this\n> > > > approach of not storing the placeholder in the stored rule would lead\n> > > > to a whole lot of regression test output changes, because the stored\n> > > > view queries of many regression tests involving views would now end up\n> > > > with only 1 entry in the range table instead of 3, causing ruleutils.c\n> > > > to no longer qualify the column names in the deparsed representation\n> > > > of those queries appearing in those regression test expected outputs.\n> > > >\n> > > > To avoid that churn (not sure if really a goal to strive for in this\n> > > > case!), I thought it might be better to keep the OLD entry in the\n> > > > stored action query while getting rid of the NEW entry.\n> > >\n> > > If the *only* argument for keeping the RTE for OLD is to avoid\n> > > regression test churn, then definitely it is not worth doing and it\n> > > should be ripped out.\n> > >\n> > > > Other than avoiding the regression test output churn, this also makes\n> > > > the changes of ApplyRetrieveRule unnecessary.\n> > >\n> > > But do these changes mean the code is worse afterwards? Changing stuff,\n> > > per se, is not bad. Also, since you haven't posted the \"complete\" patch\n> > > since Nov 7th, it's not easy to tell what those changes are.\n> > >\n> > > Maybe you should post both versions of the patch -- one that removes\n> > > just NEW, and one that removes both OLD and NEW, so that we can judge.\n> >\n> > OK, I gave the previous approach another try to see if I can change\n> > ApplyRetrieveRule() in a bit more convincing way this time around, now\n> > that the RTEPermissionInfo patch is in.\n> >\n> > I would say I'm more satisfied with how it turned out this time. Let\n> > me know what you think.\n> >\n> > > > Actually, as I was addressing Alvaro's comments on the now-committed\n> > > > patch, I was starting to get concerned about the implications of the\n> > > > change in position of the view relation RTE in the query's range table\n> > > > if ApplyRetrieveRule() adds one from scratch instead of simply\n> > > > recycling the OLD entry from stored rule action query, even though I\n> > > > could see that there are no *user-visible* changes, especially after\n> > > > decoupling permission checking from the range table.\n> > >\n> > > Hmm, I think I see the point, though I don't necessarily agree that\n> > > there is a problem.\n> >\n> > Yeah, I'm not worried as much with the new version. That is helped by\n> > the fact that I've made ApplyRetrieveRule() now do basically what\n> > UpdateRangeTableOfViewParse() would do with the stored rule query.\n> > Also, our making add_rtes_to_flat_rtable() add perminfos in the RTE\n> > order helped find the bug with the last version.\n> >\n> > Attaching both patches.\n>\n> Looks like I forgot to update some expected output files.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n54afdcd6182af709cb0ab775c11b90decff166eb ===\n=== applying patch\n./v1-0001-Do-not-add-the-NEW-entry-to-view-rule-action-s-ra.patch\nHunk #1 succeeded at 1908 (offset 1 line).\n=== applying patch ./v2-0001-Remove-UpdateRangeTableOfViewParse.patch\npatching file contrib/postgres_fdw/expected/postgres_fdw.out\nHunk #1 FAILED at 2606.\nHunk #2 FAILED at 2669.\n2 out of 4 hunks FAILED -- saving rejects to file\ncontrib/postgres_fdw/expected/postgres_fdw.out.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4048.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:47:32 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 7:17 PM vignesh C <vignesh21@gmail.com> wrote:\n> On Fri, 9 Dec 2022 at 12:20, Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Fri, Dec 9, 2022 at 3:07 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Thu, Dec 8, 2022 at 6:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > > On 2022-Dec-07, Amit Langote wrote:\n> > > > > However, this\n> > > > > approach of not storing the placeholder in the stored rule would lead\n> > > > > to a whole lot of regression test output changes, because the stored\n> > > > > view queries of many regression tests involving views would now end up\n> > > > > with only 1 entry in the range table instead of 3, causing ruleutils.c\n> > > > > to no longer qualify the column names in the deparsed representation\n> > > > > of those queries appearing in those regression test expected outputs.\n> > > > >\n> > > > > To avoid that churn (not sure if really a goal to strive for in this\n> > > > > case!), I thought it might be better to keep the OLD entry in the\n> > > > > stored action query while getting rid of the NEW entry.\n> > > >\n> > > > If the *only* argument for keeping the RTE for OLD is to avoid\n> > > > regression test churn, then definitely it is not worth doing and it\n> > > > should be ripped out.\n> > > >\n> > > > > Other than avoiding the regression test output churn, this also makes\n> > > > > the changes of ApplyRetrieveRule unnecessary.\n> > > >\n> > > > But do these changes mean the code is worse afterwards? Changing stuff,\n> > > > per se, is not bad. Also, since you haven't posted the \"complete\" patch\n> > > > since Nov 7th, it's not easy to tell what those changes are.\n> > > >\n> > > > Maybe you should post both versions of the patch -- one that removes\n> > > > just NEW, and one that removes both OLD and NEW, so that we can judge.\n> > >\n> > > OK, I gave the previous approach another try to see if I can change\n> > > ApplyRetrieveRule() in a bit more convincing way this time around, now\n> > > that the RTEPermissionInfo patch is in.\n> > >\n> > > I would say I'm more satisfied with how it turned out this time. Let\n> > > me know what you think.\n> > >\n> > > > > Actually, as I was addressing Alvaro's comments on the now-committed\n> > > > > patch, I was starting to get concerned about the implications of the\n> > > > > change in position of the view relation RTE in the query's range table\n> > > > > if ApplyRetrieveRule() adds one from scratch instead of simply\n> > > > > recycling the OLD entry from stored rule action query, even though I\n> > > > > could see that there are no *user-visible* changes, especially after\n> > > > > decoupling permission checking from the range table.\n> > > >\n> > > > Hmm, I think I see the point, though I don't necessarily agree that\n> > > > there is a problem.\n> > >\n> > > Yeah, I'm not worried as much with the new version. That is helped by\n> > > the fact that I've made ApplyRetrieveRule() now do basically what\n> > > UpdateRangeTableOfViewParse() would do with the stored rule query.\n> > > Also, our making add_rtes_to_flat_rtable() add perminfos in the RTE\n> > > order helped find the bug with the last version.\n> > >\n> > > Attaching both patches.\n> >\n> > Looks like I forgot to update some expected output files.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> 54afdcd6182af709cb0ab775c11b90decff166eb ===\n> === applying patch\n> ./v1-0001-Do-not-add-the-NEW-entry-to-view-rule-action-s-ra.patch\n> Hunk #1 succeeded at 1908 (offset 1 line).\n> === applying patch ./v2-0001-Remove-UpdateRangeTableOfViewParse.patch\n> patching file contrib/postgres_fdw/expected/postgres_fdw.out\n> Hunk #1 FAILED at 2606.\n> Hunk #2 FAILED at 2669.\n> 2 out of 4 hunks FAILED -- saving rejects to file\n> contrib/postgres_fdw/expected/postgres_fdw.out.rej\n\nThanks for the heads up. cfbot fails because it's applying both the\npatches which, being alternative approaches to address $subject, are\nmutually conflicting.\n\nI've attached just the patch that we should move forward with, as\nAlvaro might agree.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 5 Jan 2023 15:47:43 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I've attached just the patch that we should move forward with, as\n> Alvaro might agree.\n\nI've looked at this briefly but don't like it very much, specifically\nthe business about retroactively adding an RTE and RTEPermissionInfo\ninto the view's replacement subquery. That seems expensive and bug-prone:\nif you're going to do that you might as well just leave the OLD entry\nin place, as the earlier patch did, because you're just reconstructing\nthat same state of the parsetree a bit later on.\n\nFurthermore, if that's where we end up then I'm not really sure this is\nworth doing at all. The idea driving this was that if we could get rid\nof *both* OLD and NEW RTE entries then we'd not have O(N^2) behavior in\ndeep subquery pull-ups due to the rangetable getting longer with each one.\nBut getting it down from two extra entries to one extra entry isn't going\nto fix that big-O problem. (The patch as presented still has O(N^2)\nplanning time for the nested-views example discussed earlier.)\n\nConceivably we could make it work by allowing RTE_SUBQUERY RTEs to\ncarry a relation OID and associated RTEPermissionInfo, so that when a\nview's RTE_RELATION RTE is transmuted to an RTE_SUBQUERY RTE it still\ncarries the info needed to let us lock and permission-check the view.\nThat might be a bridge too far from the ugliness perspective ...\nalthough certainly the business with OLD and/or NEW RTEs isn't very\npretty either.\n\nAnyway, if you don't feel like tackling that then I'd go back to the\npatch version that kept the OLD RTE. (Maybe we could rename that to\nsomething else, though, such as \"*VIEW*\"?)\n\nBTW, I don't entirely understand why this patch is passing regression\ntests, because it's failed to deal with numerous places that have\nhard-wired knowledge about these extra RTEs. Look for references to\nPRS2_OLD_VARNO and PRS2_NEW_VARNO. I think the original rationale\nfor UpdateRangeTableOfViewParse was that we needed to keep the rtables\nof ON SELECT rules looking similar to those of other types of rules.\nMaybe we've cleaned up all the places that used to depend on that,\nbut I'm not convinced.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 08 Jan 2023 15:58:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > I've attached just the patch that we should move forward with, as\n> > Alvaro might agree.\n>\n> I've looked at this briefly but don't like it very much, specifically\n> the business about retroactively adding an RTE and RTEPermissionInfo\n> into the view's replacement subquery. That seems expensive and bug-prone:\n> if you're going to do that you might as well just leave the OLD entry\n> in place, as the earlier patch did, because you're just reconstructing\n> that same state of the parsetree a bit later on.\n>\n> Furthermore, if that's where we end up then I'm not really sure this is\n> worth doing at all. The idea driving this was that if we could get rid\n> of *both* OLD and NEW RTE entries then we'd not have O(N^2) behavior in\n> deep subquery pull-ups due to the rangetable getting longer with each one.\n> But getting it down from two extra entries to one extra entry isn't going\n> to fix that big-O problem. (The patch as presented still has O(N^2)\n> planning time for the nested-views example discussed earlier.)\n\nHmm, that's true.\n\n> Conceivably we could make it work by allowing RTE_SUBQUERY RTEs to\n> carry a relation OID and associated RTEPermissionInfo, so that when a\n> view's RTE_RELATION RTE is transmuted to an RTE_SUBQUERY RTE it still\n> carries the info needed to let us lock and permission-check the view.\n> That might be a bridge too far from the ugliness perspective ...\n> although certainly the business with OLD and/or NEW RTEs isn't very\n> pretty either.\n\nI had thought about that idea but was a bit scared of trying it,\nbecause it does sound like something that might become a maintenance\nburden in the future. Though I gave that a try today given that it\nsounds like I may have your permission. ;-)\n\nSo, in the attached updated version, I removed the bits of\nApplyRetrieveRule() that would add the placeholder entry (OLD) and\nalso the existing lines that would reset relid, rellockmode, and\nperminfoindex of the view RTE that's converted into a RTE_SUBQUERY\none. Then I fixed places to deal with subquery RTEs sometimes having\nthe relid, etc. set, just enough to pass make check-world. I was\nsurprised that nothing failed with a -DWRITE_READ_PARSE_PLAN_TREES\nbuild, because of the way RTEs are written and read -- relid,\nrellockmode are not written/read for RTE_SUBQUERY RTEs.\n\n> BTW, I don't entirely understand why this patch is passing regression\n> tests, because it's failed to deal with numerous places that have\n> hard-wired knowledge about these extra RTEs. Look for references to\n> PRS2_OLD_VARNO and PRS2_NEW_VARNO. I think the original rationale\n> for UpdateRangeTableOfViewParse was that we needed to keep the rtables\n> of ON SELECT rules looking similar to those of other types of rules.\n> Maybe we've cleaned up all the places that used to depend on that,\n> but I'm not convinced.\n\nAFAICS, the places that still have hard-wired knowledge of these\nplaceholder RTEs only manipulate non-SELECT rules, so don't care about\nviews.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 23:47:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Jan 9, 2023 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Conceivably we could make it work by allowing RTE_SUBQUERY RTEs to\n>> carry a relation OID and associated RTEPermissionInfo, so that when a\n>> view's RTE_RELATION RTE is transmuted to an RTE_SUBQUERY RTE it still\n>> carries the info needed to let us lock and permission-check the view.\n>> That might be a bridge too far from the ugliness perspective ...\n>> although certainly the business with OLD and/or NEW RTEs isn't very\n>> pretty either.\n\n> I had thought about that idea but was a bit scared of trying it,\n> because it does sound like something that might become a maintenance\n> burden in the future. Though I gave that a try today given that it\n> sounds like I may have your permission. ;-)\n\nGiven the small number of places that need to be touched, I don't\nthink it's a maintenance problem. I agree with your fear that you\nmight have missed some, but I also searched and found no more.\n\n> I was\n> surprised that nothing failed with a -DWRITE_READ_PARSE_PLAN_TREES\n> build, because of the way RTEs are written and read -- relid,\n> rellockmode are not written/read for RTE_SUBQUERY RTEs.\n\nI think that's mostly accidental, stemming from the facts that:\n(1) Stored rules wouldn't have these fields populated yet anyway.\n(2) The regression tests haven't got any good way to check that a\nneeded lock was actually acquired. It looks to me like with the\npatch as you have it, when a plan tree is copied into the plan\ncache the view relid is lost (if pg_plan_query stripped it thanks\nto -DWRITE_READ_PARSE_PLAN_TREES) and thus we won't re-acquire the\nview lock in AcquireExecutorLocks during later plan uses. But\nthat would have no useful effect unless it forced a re-plan due to\na concurrent view replacement, which is a scenario I'm pretty sure\nwe don't actually exercise in the tests.\n(3) The executor doesn't look at these fields after startup, so\nfailure to transmit them to parallel workers doesn't hurt.\n\nIn any case, it would clearly be very foolish not to fix\noutfuncs/readfuncs to preserve all the fields we're using.\n\n>> BTW, I don't entirely understand why this patch is passing regression\n>> tests, because it's failed to deal with numerous places that have\n>> hard-wired knowledge about these extra RTEs. Look for references to\n>> PRS2_OLD_VARNO and PRS2_NEW_VARNO.\n\n> AFAICS, the places that still have hard-wired knowledge of these\n> placeholder RTEs only manipulate non-SELECT rules, so don't care about\n> views.\n\nYeah, I looked through them too and didn't find any problems.\n\nI've pushed this with some cleanup --- aside from fixing\noutfuncs/readfuncs, I did some more work on the comments, which\nI think you were too sloppy about.\n\nSadly, the original nested-views test case still has O(N^2)\nplanning time :-(. I dug into that harder and now see where\nthe problem really lies. The rewriter recursively replaces\nthe view RTE_RELATION RTEs with RTE_SUBQUERY all the way down,\nwhich takes it only linear time. However, then we have a deep\nnest of RTE_SUBQUERYs, and the initial copyObject in\npull_up_simple_subquery repeatedly copies everything below the\ncurrent pullup recursion level, so that it's still O(N^2)\neven though the final rtable will have only N entries.\n\nI'm afraid to remove the copyObject step, because that would\ncause problems in the cases where we try to perform pullup\nand have to abandon it later. (Maybe we could get rid of\nall such cases, but I'm not sanguine about that succeeding.)\nI'm tempted to try to fix it by taking view replacement out\nof the rewriter altogether and making prepjointree.c handle\nit during the same recursive scan that does subquery pullup,\nso that we aren't applying copyObject to already-expanded\nRTE_SUBQUERY nests. However, that's more work than I care to\nput into the problem right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Jan 2023 20:06:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": " On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Jan 9, 2023 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Conceivably we could make it work by allowing RTE_SUBQUERY RTEs to\n> >> carry a relation OID and associated RTEPermissionInfo, so that when a\n> >> view's RTE_RELATION RTE is transmuted to an RTE_SUBQUERY RTE it still\n> >> carries the info needed to let us lock and permission-check the view.\n> >> That might be a bridge too far from the ugliness perspective ...\n> >> although certainly the business with OLD and/or NEW RTEs isn't very\n> >> pretty either.\n>\n> > I had thought about that idea but was a bit scared of trying it,\n> > because it does sound like something that might become a maintenance\n> > burden in the future. Though I gave that a try today given that it\n> > sounds like I may have your permission. ;-)\n>\n> Given the small number of places that need to be touched, I don't\n> think it's a maintenance problem. I agree with your fear that you\n> might have missed some, but I also searched and found no more.\n\nOK, thanks.\n\n> > I was\n> > surprised that nothing failed with a -DWRITE_READ_PARSE_PLAN_TREES\n> > build, because of the way RTEs are written and read -- relid,\n> > rellockmode are not written/read for RTE_SUBQUERY RTEs.\n>\n> I think that's mostly accidental, stemming from the facts that:\n> (1) Stored rules wouldn't have these fields populated yet anyway.\n> (2) The regression tests haven't got any good way to check that a\n> needed lock was actually acquired. It looks to me like with the\n> patch as you have it, when a plan tree is copied into the plan\n> cache the view relid is lost (if pg_plan_query stripped it thanks\n> to -DWRITE_READ_PARSE_PLAN_TREES) and thus we won't re-acquire the\n> view lock in AcquireExecutorLocks during later plan uses. But\n> that would have no useful effect unless it forced a re-plan due to\n> a concurrent view replacement, which is a scenario I'm pretty sure\n> we don't actually exercise in the tests.\n\nAh, does it make sense to have isolation tests cover this?\n\n> (3) The executor doesn't look at these fields after startup, so\n> failure to transmit them to parallel workers doesn't hurt.\n>\n> In any case, it would clearly be very foolish not to fix\n> outfuncs/readfuncs to preserve all the fields we're using.\n>\n> I've pushed this with some cleanup --- aside from fixing\n> outfuncs/readfuncs, I did some more work on the comments, which\n> I think you were too sloppy about.\n\nThanks a lot for the fixes.\n\n> Sadly, the original nested-views test case still has O(N^2)\n> planning time :-(. I dug into that harder and now see where\n> the problem really lies. The rewriter recursively replaces\n> the view RTE_RELATION RTEs with RTE_SUBQUERY all the way down,\n> which takes it only linear time. However, then we have a deep\n> nest of RTE_SUBQUERYs, and the initial copyObject in\n> pull_up_simple_subquery repeatedly copies everything below the\n> current pullup recursion level, so that it's still O(N^2)\n> even though the final rtable will have only N entries.\n\nThat makes sense.\n\n> I'm afraid to remove the copyObject step, because that would\n> cause problems in the cases where we try to perform pullup\n> and have to abandon it later. (Maybe we could get rid of\n> all such cases, but I'm not sanguine about that succeeding.)\n> I'm tempted to try to fix it by taking view replacement out\n> of the rewriter altogether and making prepjointree.c handle\n> it during the same recursive scan that does subquery pullup,\n> so that we aren't applying copyObject to already-expanded\n> RTE_SUBQUERY nests. However, that's more work than I care to\n> put into the problem right now.\n\nOK, I will try to give your idea a shot sometime later.\n\nBTW, I noticed that we could perhaps remove the following in the\nfireRIRrules()'s loop that calls ApplyRetrieveRule(), because we no\nlonger put any unreferenced OLD/NEW RTEs in the view queries.\n\n    /*\n     * If the table is not referenced in the query, then we ignore it.\n     * This prevents infinite expansion loop due to new rtable entries\n     * inserted by expansion of a rule. A table is referenced if it is\n     * part of the join set (a source table), or is referenced by any Var\n     * nodes, or is the result table.\n     */\n    if (rt_index != parsetree->resultRelation &&\n        !rangeTableEntry_used((Node *) parsetree, rt_index, 0))\n        continue;\n\nCommenting this out doesn't break make check.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:42:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I've pushed this with some cleanup --- aside from fixing\n>> outfuncs/readfuncs, I did some more work on the comments, which\n>> I think you were too sloppy about.\n\n> Thanks a lot for the fixes.\n\nIt looks like we're not out of the woods on this: the buildfarm\nmembers that run cross-version-upgrade tests are all unhappy.\nMost of them are not reporting any useful details, but I suspect\nthat they are barfing because dumps from the old server include\ntable-qualified variable names in some CREATE VIEW commands while\ndumps from HEAD omit the qualifications. I don't see any\nmechanism in TestUpgradeXversion.pm that could deal with that\nconveniently, and in any case we'd have to roll out a client\nscript update to the affected animals. I fear we may have to\nrevert this pending development of better TestUpgradeXversion.pm\nsupport.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:45:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I've pushed this with some cleanup --- aside from fixing\n> >> outfuncs/readfuncs, I did some more work on the comments, which\n> >> I think you were too sloppy about.\n>\n> > Thanks a lot for the fixes.\n>\n> It looks like we're not out of the woods on this: the buildfarm\n> members that run cross-version-upgrade tests are all unhappy.\n> Most of them are not reporting any useful details, but I suspect\n> that they are barfing because dumps from the old server include\n> table-qualified variable names in some CREATE VIEW commands while\n> dumps from HEAD omit the qualifications. I don't see any\n> mechanism in TestUpgradeXversion.pm that could deal with that\n> conveniently, and in any case we'd have to roll out a client\n> script update to the affected animals. I fear we may have to\n> revert this pending development of better TestUpgradeXversion.pm\n> support.\n\nAh, OK, no problem.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 13:47:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 10:45:33PM -0500, Tom Lane wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I've pushed this with some cleanup --- aside from fixing\n> >> outfuncs/readfuncs, I did some more work on the comments, which\n> >> I think you were too sloppy about.\n> \n> > Thanks a lot for the fixes.\n> \n> It looks like we're not out of the woods on this: the buildfarm\n> members that run cross-version-upgrade tests are all unhappy.\n> Most of them are not reporting any useful details, but I suspect\n> that they are barfing because dumps from the old server include\n> table-qualified variable names in some CREATE VIEW commands while\n> dumps from HEAD omit the qualifications. I don't see any\n> mechanism in TestUpgradeXversion.pm that could deal with that\n> conveniently, and in any case we'd have to roll out a client\n> script update to the affected animals. I fear we may have to\n> revert this pending development of better TestUpgradeXversion.pm\n> support.\n\nThere are diffs available for several of them:\n\n- SELECT citext_table.id,\n- citext_table.name\n+ SELECT id,\n+ name\n\nIt looks like TestUpgradeXversion.pm is using the diff command I sent to\nget tighter bounds on allowable changes.\n\n20210415153722.GL6091@telsasoft.com\n\nIt's ugly and a terrible hack, and I don't know whether anyone would say\nit's good enough, but one could probably avoid the diff like:\n\nsed -r '/CREATE/,/^$/{ s/\\w+\\.//g }'\n\nYou'd still have to wait for it to be deployed, though.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 11 Jan 2023 23:12:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "\nOn 2023-01-12 Th 00:12, Justin Pryzby wrote:\n> On Wed, Jan 11, 2023 at 10:45:33PM -0500, Tom Lane wrote:\n>> Amit Langote <amitlangote09@gmail.com> writes:\n>>> On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> I've pushed this with some cleanup --- aside from fixing\n>>>> outfuncs/readfuncs, I did some more work on the comments, which\n>>>> I think you were too sloppy about.\n>>> Thanks a lot for the fixes.\n>> It looks like we're not out of the woods on this: the buildfarm\n>> members that run cross-version-upgrade tests are all unhappy.\n>> Most of them are not reporting any useful details, but I suspect\n>> that they are barfing because dumps from the old server include\n>> table-qualified variable names in some CREATE VIEW commands while\n>> dumps from HEAD omit the qualifications. I don't see any\n>> mechanism in TestUpgradeXversion.pm that could deal with that\n>> conveniently, and in any case we'd have to roll out a client\n>> script update to the affected animals. I fear we may have to\n>> revert this pending development of better TestUpgradeXversion.pm\n>> support.\n> There's a diffs available for several of them:\n>\n> - SELECT citext_table.id,\n> - citext_table.name\n> + SELECT id,\n> + name\n>\n> It looks like TestUpgradeXversion.pm is using the diff command I sent to\n> get tigher bounds on allowable changes.\n>\n> 20210415153722.GL6091@telsasoft.com\n>\n> It's ugly and a terrible hack, and I don't know whether anyone would say\n> it's good enough, but one could can probably avoid the diff like:\n>\n> sed -r '/CREATE/,/^$/{ s/\\w+\\.//g }'\n>\n> You'd still have to wait for it to be deployed, though.\n\n\nThat looks quite awful. I don't think you could persuade me to deploy it\n(We don't use sed anyway). It might be marginally better if the pattern\nwere /CREATE.*VIEW/ and we ignored that first line, but it still seems\nawful to me.\n\nAnother approach might be simply to increase the latitude allowed for\nold versions <= 15 with new versions >= 16. Currently we allow 90 for\ncases where the versions differ, but we could increase it to, say, 200\nin such cases (we'd need to experiment a bit to find the right limit).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 09:09:43 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-12 Th 00:12, Justin Pryzby wrote:\n>> It's ugly and a terrible hack, and I don't know whether anyone would say\n>> it's good enough, but one could can probably avoid the diff like:\n>> sed -r '/CREATE/,/^$/{ s/\\w+\\.//g }'\n\n> That looks quite awful. I don't think you could persuade me to deploy it\n> (We don't use sed anyway). It might be marginally better if the pattern\n> were /CREATE.*VIEW/ and we ignored that first line, but it still seems\n> awful to me.\n\nYeah, does not sound workable: it would risk ignoring actual problems.\n\nI was wondering whether we could store a per-version patch or Perl\nscript that edits the old dump file to remove known discrepancies\nfrom HEAD. If well-maintained, that could eliminate the need for the\narbitrary \"fuzz factors\" that are in TestUpgradeXversion.pm right now.\nI'd really want these files to be kept in the community source tree,\nthough, so that we do not need a new BF client release to change them.\n\nThis isn't the first time this has come up, but now we have a case\nwhere it's actually blocking development, so maybe it's time to\nmake something happen. If you want I can work on a patch for the\nBF client.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Jan 2023 09:54:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 09:54:09AM -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2023-01-12 Th 00:12, Justin Pryzby wrote:\n> >> It's ugly and a terrible hack, and I don't know whether anyone would say\n> >> it's good enough, but one could can probably avoid the diff like:\n> >> sed -r '/CREATE/,/^$/{ s/\\w+\\.//g }'\n> \n> > That looks quite awful. I don't think you could persuade me to deploy it\n> > (We don't use sed anyway). It might be marginally better if the pattern\n> > were /CREATE.*VIEW/ and we ignored that first line, but it still seems\n> > awful to me.\n> \n> Yeah, does not sound workable: it would risk ignoring actual problems.\n> \n> I was wondering whether we could store a per-version patch or Perl\n> script that edits the old dump file to remove known discrepancies\n> from HEAD. If well-maintained, that could eliminate the need for the\n> arbitrary \"fuzz factors\" that are in TestUpgradeXversion.pm right now.\n> I'd really want these files to be kept in the community source tree,\n> though, so that we do not need a new BF client release to change them.\n> \n> This isn't the first time this has come up, but now we have a case\n> where it's actually blocking development, so maybe it's time to\n> make something happen. If you want I can work on a patch for the\n> BF client.\n\nWhat about also including a dump from an old version, too ?\nThen the upgrade test can test actual upgrades.\n\nA new dump file would need to be updated at every release; the old ones\ncould stick around, maybe forever.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 12 Jan 2023 09:57:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> What about also including a dump from an old version, too ?\n> Then the upgrade test can test actual upgrades.\n\nThe BF clients already do that (if enabled), but they work from\nup-to-date installations of the respective branch tips. I'd not\nwant to have some branches including hypothetical output of\nother branches, because it'd be too easy for those files to get\nout of sync and deliver misleading answers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:01:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "\nOn 2023-01-12 Th 09:54, Tom Lane wrote:\n>\n> I was wondering whether we could store a per-version patch or Perl\n> script that edits the old dump file to remove known discrepancies\n> from HEAD. If well-maintained, that could eliminate the need for the\n> arbitrary \"fuzz factors\" that are in TestUpgradeXversion.pm right now.\n> I'd really want these files to be kept in the community source tree,\n> though, so that we do not need a new BF client release to change them.\n>\n> This isn't the first time this has come up, but now we have a case\n> where it's actually blocking development, so maybe it's time to\n> make something happen. If you want I can work on a patch for the\n> BF client.\n>\n> \t\t\t\n\n\nI wouldn't worry too much about the client for now. What we'd need is a)\na place in the source code where we know to find the module b) a module\nname c) one or more functions to call to make the adjustment(s).\n\nso, say in src/test/perl we have PostgreSQL/AdjustUpgrade.pm with a\nsubroutine adjust_dumpfile($oldversion, $dumpfile).\n\nThat would be fairly easy to look for and call, and a good place to\nstart. More ambitiously we might also provide a function to do most of\nthe pre_upgrade adjustments made in TestUpgradeXversion.pm at lines\n405-604. But let's walk before we try to run. This is probably a good\ntime to be doing this as I want to push out a new release pretty soon to\ndeal with the up-to-date check issues.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 12:18:31 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-12 Th 09:54, Tom Lane wrote:\n>> I was wondering whether we could store a per-version patch or Perl\n>> script that edits the old dump file to remove known discrepancies\n>> from HEAD.\n\n> so, say in src/test/perl we have PostgreSQL/AdjustUpgrade.pm with a\n> subroutine adjust_dumpfile($oldversion, $dumpfile).\n\nSeems reasonable. I was imagining per-old-version .pm files, but\nthere's likely to be a fair amount of commonality between what to\ndo for different old versions, so probably that approach would be\ntoo duplicative.\n\n> That would be fairly easy to look for and call, and a good place to\n> start. More ambitiously we might also provide a function do do most of\n> the pre_upgrade adjustments made in TestUpgradeXversion.pm at lines\n> 405-604. But let's walk before we try to run.\n\nI think that part is also very very important to abstract out of the\nBF client. We've been burnt on that before too. So, perhaps one\nsubroutine that can apply updates to the source DB just before\nwe dump it, and then a second that can edit the dump file after?\n\nWe could imagine a third custom subroutine that abstracts the\nactual dump file comparison, but I'd prefer to get to a place\nwhere we just expect exact match after the edit step.\n\nI'll work on a straw-man patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Jan 2023 12:31:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Mon, Jan 9, 2023 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Conceivably we could make it work by allowing RTE_SUBQUERY RTEs to\n> >> carry a relation OID and associated RTEPermissionInfo, so that when a\n> >> view's RTE_RELATION RTE is transmuted to an RTE_SUBQUERY RTE it still\n> >> carries the info needed to let us lock and permission-check the view.\n> >> That might be a bridge too far from the ugliness perspective ...\n> >> although certainly the business with OLD and/or NEW RTEs isn't very\n> >> pretty either.\n>\n> > I had thought about that idea but was a bit scared of trying it,\n> > because it does sound like something that might become a maintenance\n> > burden in the future. Though I gave that a try today given that it\n> > sounds like I may have your permission. ;-)\n>\n> Given the small number of places that need to be touched, I don't\n> think it's a maintenance problem. I agree with your fear that you\n> might have missed some, but I also searched and found no more.\n\nWhile thinking about query view locking in context of [1], I realized\nthat we have missed also fixing AcquirePlannerLocks() /\nScanQueryForLocks() to consider that an RTE_SUBQUERY rte may belong to\na view, which must be locked the same as RTE_RELATION entries. Note\nwe did fix AcquireExecutorLocks() in 47bb9db75 as follows:\n\n@@ -1769,7 +1769,8 @@ AcquireExecutorLocks(List *stmt_list, bool acquire)\n {\n RangeTblEntry *rte = (RangeTblEntry *) lfirst(lc2);\n\n- if (rte->rtekind != RTE_RELATION)\n+ if (!(rte->rtekind == RTE_RELATION ||\n+ (rte->rtekind == RTE_SUBQUERY && OidIsValid(rte->relid))))\n\nAttached a patch to fix.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://commitfest.postgresql.org/42/3478/",
"msg_date": "Wed, 5 Apr 2023 11:32:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> While thinking about query view locking in context of [1], I realized\n> that we have missed also fixing AcquirePlannerLocks() /\n> ScanQueryForLocks() to consider that an RTE_SUBQUERY rte may belong to\n> a view, which must be locked the same as RTE_RELATION entries.\n\nI think you're right about that, because AcquirePlannerLocks is supposed\nto reacquire whatever locks parsing+rewriting would have gotten.\nHowever, what's with this hunk?\n\n@@ -527,7 +527,7 @@ standard_planner(Query *parse, const char *query_string, int cursorOptions,\n \tresult->partPruneInfos = glob->partPruneInfos;\n \tresult->rtable = glob->finalrtable;\n \tresult->permInfos = glob->finalrteperminfos;\n-\tresult->viewRelations = glob->viewRelations;\n+\tresult->viewRelations = NIL;\n \tresult->resultRelations = glob->resultRelations;\n \tresult->appendRelations = glob->appendRelations;\n \tresult->subplans = glob->subplans;\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 14:33:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "I wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> While thinking about query view locking in context of [1], I realized\n>> that we have missed also fixing AcquirePlannerLocks() /\n>> ScanQueryForLocks() to consider that an RTE_SUBQUERY rte may belong to\n>> a view, which must be locked the same as RTE_RELATION entries.\n\n> I think you're right about that, because AcquirePlannerLocks is supposed\n> to reacquire whatever locks parsing+rewriting would have gotten.\n\nAfter poking at this a bit more, I'm not sure there is any observable bug,\nbecause we still notice the view change in AcquireExecutorLocks and\nloop back to re-plan after that. It still seems like a good idea to\nnotice such changes sooner not later to reduce wasted work, so I went\nahead and pushed the patch.\n\nThe only way it'd be a live bug is if the planner actually fails because\nit's working with a stale view definition. I tried to make it fail by\nadjusting the view to no longer use an underlying table and then\ndropping that table ... but AcquirePlannerLocks still detected that,\nbecause of course it recurses and locks the table reference it finds\nin the view subquery. Maybe you could make a failure case involving\ndropping a user-defined function instead, but I thought that was getting\npretty far afield, so I didn't pursue it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 16:05:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 3:33 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > While thinking about query view locking in context of [1], I realized\n> > that we have missed also fixing AcquirePlannerLocks() /\n> > ScanQueryForLocks() to consider that an RTE_SUBQUERY rte may belong to\n> > a view, which must be locked the same as RTE_RELATION entries.\n>\n> I think you're right about that, because AcquirePlannerLocks is supposed\n> to reacquire whatever locks parsing+rewriting would have gotten.\n> However, what's with this hunk?\n>\n> @@ -527,7 +527,7 @@ standard_planner(Query *parse, const char\n> *query_string, int cursorOptions,\n> result->partPruneInfos = glob->partPruneInfos;\n> result->rtable = glob->finalrtable;\n> result->permInfos = glob->finalrteperminfos;\n> - result->viewRelations = glob->viewRelations;\n> + result->viewRelations = NIL;\n> result->resultRelations = glob->resultRelations;\n> result->appendRelations = glob->appendRelations;\n> result->subplans = glob->subplans;\n\n\nOops, I was working in the wrong local branch.\n\nThanks for pushing. I agree that there’s no live bug as such right now,\nbut still good to be consistent.\n\n> --\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 6 Apr 2023 08:27:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: on placeholder entries in view rule action query's range table"
}
]
[
{
"msg_contents": "Per Alvaro's advice, forking this from [1].\n\nIn that thread, Tom had asked if it wouldn't be better to find a new\nplace to put extraUpdatedCols [2] instead of RangeTblEntry, along with\nthe permission-checking fields are now no longer stored in\nRangeTblEntry.\n\nIn [3] of the same thread, I proposed to move it into a List of\nBitmapsets in ModifyTable, as implemented in the attached patch that I\nhad been posting to that thread.\n\nThe latest version of that patch is attached herewith. I'll add this\none to the January CF too.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com\n[2] https://www.postgresql.org/message-id/3098829.1658956718%40sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/CA%2BHiwqEHoLgN%3DvSsaNMaHP-%2BqYPT40-ooySyrieXZHNzbSBj0w%40mail.gmail.com",
"msg_date": "Wed, 7 Dec 2022 20:54:36 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 8:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Per Alvaro's advice, forking this from [1].\n>\n> In that thread, Tom had asked if it wouldn't be better to find a new\n> place to put extraUpdatedCols [2] instead of RangeTblEntry, along with\n> the permission-checking fields are now no longer stored in\n> RangeTblEntry.\n>\n> In [3] of the same thread, I proposed to move it into a List of\n> Bitmapsets in ModifyTable, as implemented in the attached patch that I\n> had been posting to that thread.\n>\n> The latest version of that patch is attached herewith. I'll add this\n> one to the January CF too.\n\nDone.\n\nhttps://commitfest.postgresql.org/41/4049/\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:57:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 8:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Per Alvaro's advice, forking this from [1].\n>\n> In that thread, Tom had asked if it wouldn't be better to find a new\n> place to put extraUpdatedCols [2] instead of RangeTblEntry, along with\n> the permission-checking fields are now no longer stored in\n> RangeTblEntry.\n>\n> In [3] of the same thread, I proposed to move it into a List of\n> Bitmapsets in ModifyTable, as implemented in the attached patch that I\n> had been posting to that thread.\n>\n> The latest version of that patch is attached herewith.\n\nUpdated to replace a list_nth() with list_nth_node() and rewrote the\ncommit message.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 8 Dec 2022 11:47:21 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Thu, 8 Dec 2022 at 08:17, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Dec 7, 2022 at 8:54 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > Per Alvaro's advice, forking this from [1].\n> >\n> > In that thread, Tom had asked if it wouldn't be better to find a new\n> > place to put extraUpdatedCols [2] instead of RangeTblEntry, along with\n> > the permission-checking fields are now no longer stored in\n> > RangeTblEntry.\n> >\n> > In [3] of the same thread, I proposed to move it into a List of\n> > Bitmapsets in ModifyTable, as implemented in the attached patch that I\n> > had been posting to that thread.\n> >\n> > The latest version of that patch is attached herewith.\n>\n> Updated to replace a list_nth() with list_nth_node() and rewrote the\n> commit message.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch\n./v2-0001-Add-per-result-relation-extraUpdatedCols-to-Modif.patch\n...\npatching file src/backend/optimizer/util/inherit.c\nHunk #2 FAILED at 50.\nHunk #9 succeeded at 926 with fuzz 2 (offset -1 lines).\n1 out of 9 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/util/inherit.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4049.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 18:36:55 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Updated to replace a list_nth() with list_nth_node() and rewrote the\n> commit message.\n\nSo I was working through this with intent to commit, when I realized\nthat the existing code it's revising is flat out broken. You can't\nsimply translate a parent rel's set of dependent generated columns\nto obtain the correct set for a child. Maybe that's sufficient for\npartitioned tables, but it fails miserably for general inheritance:\n\nregression=# create table pp(f1 int);\nCREATE TABLE\nregression=# create table cc(f2 int generated always as (f1+1) stored) inherits(pp);\nCREATE TABLE\nregression=# insert into cc values(42);\nINSERT 0 1\nregression=# table cc;\n f1 | f2 \n----+----\n 42 | 43\n(1 row)\n\nregression=# update pp set f1 = f1*10;\nUPDATE 1\nregression=# table cc;\n f1 | f2 \n-----+----\n 420 | 43\n(1 row)\n\nSo we have a long-standing existing bug to fix here.\n\nI think what we have to do basically is repeat what fill_extraUpdatedCols\ndoes independently for each target table. That's not really horrible:\ngiven the premise that we're moving this calculation into the planner,\nwe can have expand_single_inheritance_child run the code while we have\neach target table open. It'll require some rethinking though, and we\nwill need to have the set of update target columns already available\nat that point. This suggests that we want to put the updated_cols and\nextraUpdatedCols fields into RelOptInfo not PlannerInfo.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 14:13:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "I wrote:\n> I think what we have to do basically is repeat what fill_extraUpdatedCols\n> does independently for each target table. That's not really horrible:\n> given the premise that we're moving this calculation into the planner,\n> we can have expand_single_inheritance_child run the code while we have\n> each target table open. It'll require some rethinking though, and we\n> will need to have the set of update target columns already available\n> at that point. This suggests that we want to put the updated_cols and\n> extraUpdatedCols fields into RelOptInfo not PlannerInfo.\n\nAfter further thought: maybe we should get radical and postpone this\nwork all the way to executor startup. The downside of that is having\nto do it over again on each execution of a prepared plan. But the\nupside is that when the UPDATE targets a many-partitioned table,\nwe would have a chance at not doing the work at all for partitions\nthat get pruned at runtime. I'm not sure if that win would emerge\nimmediately or if we still have executor work to do to manage pruning\nof the target table. I'm also not sure that this'd be a net win\noverall. But it seems worth considering.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 19:15:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "I wrote:\n> After further thought: maybe we should get radical and postpone this\n> work all the way to executor startup. The downside of that is having\n> to do it over again on each execution of a prepared plan. But the\n> upside is that when the UPDATE targets a many-partitioned table,\n> we would have a chance at not doing the work at all for partitions\n> that get pruned at runtime. I'm not sure if that win would emerge\n> immediately or if we still have executor work to do to manage pruning\n> of the target table. I'm also not sure that this'd be a net win\n> overall. But it seems worth considering.\n\nHere's a draft patch that does it like that. This seems like a win\nfor more reasons than just pruning, because I was able to integrate\nthe calculation into runtime setup of the expressions, so that we\naren't doing an extra stringToNode() on them.\n\nThere's still a code path that does such a calculation at plan time\n(get_rel_all_updated_cols), but it's only used by postgres_fdw which\nhas some other rather-inefficient behaviors in the same area.\n\nI've not looked into what it'd take to back-patch this. We can't\nadd a field to ResultRelInfo in released branches (cf 4b3e37993),\nbut we might be able to repurpose RangeTblEntry.extraUpdatedCols.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 04 Jan 2023 14:59:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 4:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > After further thought: maybe we should get radical and postpone this\n> > work all the way to executor startup. The downside of that is having\n> > to do it over again on each execution of a prepared plan. But the\n> > upside is that when the UPDATE targets a many-partitioned table,\n> > we would have a chance at not doing the work at all for partitions\n> > that get pruned at runtime. I'm not sure if that win would emerge\n> > immediately or if we still have executor work to do to manage pruning\n> > of the target table. I'm also not sure that this'd be a net win\n> > overall. But it seems worth considering.\n>\n> Here's a draft patch that does it like that. This seems like a win\n> for more reasons than just pruning, because I was able to integrate\n> the calculation into runtime setup of the expressions, so that we\n> aren't doing an extra stringToNode() on them.\n\nThanks for the patch. This looks pretty neat and I agree that this\nseems like a net win overall.\n\nAs an aside, I wonder why AttrDefault (and other things in\nRelationData that need stringToNode() done on them to put into a Query\nor a plan tree) doesn't store the expression Node tree to begin with?\n\n> There's still a code path that does such a calculation at plan time\n> (get_rel_all_updated_cols), but it's only used by postgres_fdw which\n> has some other rather-inefficient behaviors in the same area.\n>\n> I've not looked into what it'd take to back-patch this. We can't\n> add a field to ResultRelInfo in released branches (cf 4b3e37993),\n> but we might be able to repurpose RangeTblEntry.extraUpdatedCols.\n\nI think we can make that work. Would you like me to give that a try?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 22:53:36 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Thu, Jan 5, 2023 at 4:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Here's a draft patch that does it like that. This seems like a win\n>> for more reasons than just pruning, because I was able to integrate\n>> the calculation into runtime setup of the expressions, so that we\n>> aren't doing an extra stringToNode() on them.\n\n> Thanks for the patch. This looks pretty neat and I agree that this\n> seems like a net win overall.\n\nThanks for looking.\n\n>> I've not looked into what it'd take to back-patch this. We can't\n>> add a field to ResultRelInfo in released branches (cf 4b3e37993),\n>> but we might be able to repurpose RangeTblEntry.extraUpdatedCols.\n\n> I think we can make that work. Would you like me to give that a try?\n\nI'm on it already. AFAICT, the above won't actually work because\nwe don't have RTEs for all ResultRelInfos (per the\n\"relinfo->ri_RangeTableIndex != 0\" test in ExecGetExtraUpdatedCols).\nProbably we need something more like what 4b3e37993 did.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 05 Jan 2023 10:28:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 12:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Thu, Jan 5, 2023 at 4:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I've not looked into what it'd take to back-patch this. We can't\n> >> add a field to ResultRelInfo in released branches (cf 4b3e37993),\n> >> but we might be able to repurpose RangeTblEntry.extraUpdatedCols.\n>\n> > I think we can make that work. Would you like me to give that a try?\n>\n> I'm on it already.\n\nThanks. What you committed seems fine to me.\n\n> AFAICT, the above won't actually work because\n> we don't have RTEs for all ResultRelInfos (per the\n> \"relinfo->ri_RangeTableIndex != 0\" test in ExecGetExtraUpdatedCols).\n\nYeah, I noticed that too. I was thinking that that wouldn't be a\nproblem, because it is only partitions that are execution-time routing\ntargets that don't have their own RTEs and using a translated copy of\nthe root parent's RTE's extraUpdatedCols for them as before isn't\nwrong. 
Note that partitions can't have generated columns that are not\npresent in its parent.\n\nBTW, you wrote in the commit message:\n\n    However, there's nothing that says a traditional-inheritance child\n    can't have generated columns that aren't there in its parent, or that\n    have different dependencies than are in the parent's expression.\n    (At present it seems that we don't enforce that for partitioning\n    either, which is likely wrong to some degree or other; but the case\n    clearly needs to be handled with traditional inheritance.)\n\nMaybe I'm missing something, but AFAICS, neither\ntraditional-inheritance child tables nor partitions allow a generated\ncolumn with an expression that is not the same as the parent's\nexpression for the same generated column:\n\n-- traditional inheritance\ncreate table inhp (a int, b int generated always as (a+1) stored);\ncreate table inhc (b int generated always as (a+2) stored) inherits (inhp);\nNOTICE:  moving and merging column \"b\" with inherited definition\nDETAIL:  User-specified column moved to the position of the inherited column.\nERROR:  child column \"b\" specifies generation expression\nHINT:  Omit the generation expression in the definition of the child\ntable column to inherit the generation expression from the parent\ntable.\ncreate table inhc (a int, b int);\nalter table inhc inherit inhp;\nERROR:  column \"b\" in child table must be a generated column\nalter table inhc drop b, add b int generated always as (a+2) stored;\nalter table inhc inherit inhp;\nERROR:  column \"b\" in child table has a conflicting generation expression\n\n-- partitioning\ncreate table partp (a int, b int generated always as (a+1) stored)\npartition by list (a);\ncreate table partc partition of partp (b generated always as (a+2)\nstored) for values in (1);\nERROR:  generated columns are not supported on partitions\ncreate table partc (a int, b int);\nalter table partp attach partition partc for values in (1);\nERROR:  column \"b\" in child table must 
be a generated column\nalter table partc drop b, add b int generated always as (a+2) stored;\nalter table partp attach partition partc for values in (1);\nERROR: column \"b\" in child table has a conflicting generation expression\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Jan 2023 15:29:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> BTW, you wrote in the commit message:\n>     (At present it seems that we don't enforce that for partitioning\n>     either, which is likely wrong to some degree or other; but the case\n>     clearly needs to be handled with traditional inheritance.)\n\n> Maybe I'm missing something, but AFAICS, neither\n> traditional-inheritance child tables nor partitions allow a generated\n> column with an expression that is not the same as the parent's\n> expression for the same generated column:\n\nWell, there's some large holes in that, as per my post at [1].\nI'm on board with locking this down for partitioning, but we haven't\ndone so yet.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2793383.1672944799%40sss.pgh.pa.us\n\n\n",
"msg_date": "Fri, 06 Jan 2023 01:33:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 3:33 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > BTW, you wrote in the commit message:\n> >     (At present it seems that we don't enforce that for partitioning\n> >     either, which is likely wrong to some degree or other; but the case\n> >     clearly needs to be handled with traditional inheritance.)\n>\n> > Maybe I'm missing something, but AFAICS, neither\n> > traditional-inheritance child tables nor partitions allow a generated\n> > column with an expression that is not the same as the parent's\n> > expression for the same generated column:\n>\n> Well, there's some large holes in that, as per my post at [1].\n> I'm on board with locking this down for partitioning, but we haven't\n> done so yet.\n>\n> [1] https://www.postgresql.org/message-id/2793383.1672944799%40sss.pgh.pa.us\n\nAh, I had missed that.  Will check and reply there.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 Jan 2023 15:45:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: moving extraUpdatedCols out of RangeTblEntry (into ModifyTable)"
}
] |
[
{
"msg_contents": "Hi all!\n\nPG by default always creates the primary key in the default tablespace \nunless you specify it to use an index that is defined in a user-defined \ntablespace.\n\nWe can create indexes in user-defined tablespaces, why can't we create \na primary key in a user-defined tablespace without having to associate \nit with an index?  Something like:\nALTER TABLE myschema.mytable ADD PRIMARY KEY (akey) tablespace mytablespace;\n\nRegards,\n\nMichael Vitale",
"msg_date": "Wed, 7 Dec 2022 06:56:55 -0500",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": true,
"msg_subject": "Directly associate primary key with user-defined tablespace"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nA colleague of mine reported a slight inconvenience with pg_dump.\n\nHe is dumping the data from a remote server. There are several\nthousands of tables in the database. Making a dump locally and/or\nusing pg_basebackup and/or logical replication is not an option. So\nwhat pg_dump currently does is sending LOCK TABLE queries one after\nanother. Every query needs an extra round trip. So if we have let's\nsay 2000 tables and every round trip takes 100 ms then ~3.5 minutes\nare spent in the not most useful way.\n\nWhat he proposes is taking the locks in batches. I.e. instead of:\n\nLOCK TABLE foo IN ACCESS SHARE MODE;\nLOCK TABLE bar IN ACCESS SHARE MODE;\n\ndo:\n\nLOCK TABLE foo, bar, ... IN ACCESS SHARE MODE;\n\nThe proposed patch makes this change. It's pretty straightforward and\nas a side effect saves a bit of network traffic too.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 7 Dec 2022 18:08:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 12:09 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> A colleague of mine reported a slight inconvenience with pg_dump.\n>\n> He is dumping the data from a remote server. There are several\n> thousands of tables in the database. Making a dump locally and/or\n> using pg_basebackup and/or logical replication is not an option. So\n> what pg_dump currently does is sending LOCK TABLE queries one after\n> another. Every query needs an extra round trip. So if we have let's\n> say 2000 tables and every round trip takes 100 ms then ~3.5 minutes\n> are spent in the not most useful way.\n>\n> What he proposes is taking the locks in batches. I.e. instead of:\n>\n> LOCK TABLE foo IN ACCESS SHARE MODE;\n> LOCK TABLE bar IN ACCESS SHARE MODE;\n>\n> do:\n>\n> LOCK TABLE foo, bar, ... IN ACCESS SHARE MODE;\n>\n> The proposed patch makes this change. It's pretty straightforward and\n> as a side effect saves a bit of network traffic too.\n>\n\n+1 for that change. It will improve the dump for databases with thousands\nof relations.\n\nThe code LGTM and it passes in all tests and compiles without any warning.\n\nRegards,\n\n-- \nFabrízio de Royes Mello",
"msg_date": "Wed, 7 Dec 2022 12:30:48 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> What he proposes is taking the locks in batches.\n\nI have a strong sense of deja vu here. I'm pretty sure I experimented\nwith this idea last year and gave up on it. I don't recall exactly\nwhy, but either it didn't show any meaningful performance improvement\nfor me or there was some actual downside (that I'm not remembering\nright now).\n\nThis would've been in the leadup to 989596152 and adjacent commits.\nI took a quick look through the threads cited in those commit messages\nand didn't find anything about it, but I think the discussion had\ngotten scattered across more threads. Some digging in the archives\ncould be useful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Dec 2022 10:44:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 10:44:33 -0500, Tom Lane wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n> > What he proposes is taking the locks in batches.\n> \n> I have a strong sense of deja vu here. I'm pretty sure I experimented\n> with this idea last year and gave up on it. I don't recall exactly\n> why, but either it didn't show any meaningful performance improvement\n> for me or there was some actual downside (that I'm not remembering\n> right now).\n\n> This would've been in the leadup to 989596152 and adjacent commits.\n> I took a quick look through the threads cited in those commit messages\n> and didn't find anything about it, but I think the discussion had\n> gotten scattered across more threads. Some digging in the archives\n> could be useful.\n\nIIRC the case we were looking at around 989596152 were CPU bound workloads,\nrather than latency bound workloads. It'd not be surprising to have cases\nwhere batching LOCKs helps latency, but not CPU bound.\n\n\nI wonder if \"manual\" batching is the best answer. Alexander, have you\nconsidered using libpq level pipelining?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:08:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-07 10:44:33 -0500, Tom Lane wrote:\n>> I have a strong sense of deja vu here. I'm pretty sure I experimented\n>> with this idea last year and gave up on it. I don't recall exactly\n>> why, but either it didn't show any meaningful performance improvement\n>> for me or there was some actual downside (that I'm not remembering\n>> right now).\n\n> IIRC the case we were looking at around 989596152 were CPU bound workloads,\n> rather than latency bound workloads. It'd not be surprising to have cases\n> where batching LOCKs helps latency, but not CPU bound.\n\nYeah, perhaps. Anyway my main point is that I don't want to just assume\nthis is a win; I want to see some actual performance tests.\n\n> I wonder if \"manual\" batching is the best answer. Alexander, have you\n> considered using libpq level pipelining?\n\nI'd be a bit nervous about how well that works with older servers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Dec 2022 12:28:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 12:28:03 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-12-07 10:44:33 -0500, Tom Lane wrote:\n> >> I have a strong sense of deja vu here. I'm pretty sure I experimented\n> >> with this idea last year and gave up on it. I don't recall exactly\n> >> why, but either it didn't show any meaningful performance improvement\n> >> for me or there was some actual downside (that I'm not remembering\n> >> right now).\n> \n> > IIRC the case we were looking at around 989596152 were CPU bound workloads,\n> > rather than latency bound workloads. It'd not be surprising to have cases\n> > where batching LOCKs helps latency, but not CPU bound.\n> \n> Yeah, perhaps. Anyway my main point is that I don't want to just assume\n> this is a win; I want to see some actual performance tests.\n\nFWIW, one can simulate network latency with 'netem' on linux. Works even for\n'lo'.\n\nping -c 3 -n localhost\n\n64 bytes from ::1: icmp_seq=1 ttl=64 time=0.035 ms\n64 bytes from ::1: icmp_seq=2 ttl=64 time=0.049 ms\n64 bytes from ::1: icmp_seq=3 ttl=64 time=0.043 ms\n\ntc qdisc add dev lo root netem delay 10ms\n\n64 bytes from ::1: icmp_seq=1 ttl=64 time=20.1 ms\n64 bytes from ::1: icmp_seq=2 ttl=64 time=20.2 ms\n64 bytes from ::1: icmp_seq=3 ttl=64 time=20.2 ms\n\ntc qdisc delete dev lo root netem\n\n64 bytes from ::1: icmp_seq=1 ttl=64 time=0.036 ms\n64 bytes from ::1: icmp_seq=2 ttl=64 time=0.047 ms\n64 bytes from ::1: icmp_seq=3 ttl=64 time=0.050 ms\n\n\n> > I wonder if \"manual\" batching is the best answer. Alexander, have you\n> > considered using libpq level pipelining?\n> \n> I'd be a bit nervous about how well that works with older servers.\n\nI don't think there should be any problem - E.g. pgjdbc has been using\npipelining for ages.\n\nNot sure if it's the right answer, just to be clear. 
I suspect that eventually\nwe're going to need to have a special \"acquire pg_dump locks\" function that is\ncheaper than retail lock acquisition and perhaps deals more gracefully with\nexceeding max_locks_per_transaction. Which would presumably not be pipelined.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 09:44:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 2:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-12-07 10:44:33 -0500, Tom Lane wrote:\n> >> I have a strong sense of deja vu here.  I'm pretty sure I experimented\n> >> with this idea last year and gave up on it.  I don't recall exactly\n> >> why, but either it didn't show any meaningful performance improvement\n> >> for me or there was some actual downside (that I'm not remembering\n> >> right now).\n>\n> > IIRC the case we were looking at around 989596152 were CPU bound\nworkloads,\n> > rather than latency bound workloads. It'd not be surprising to have\ncases\n> > where batching LOCKs helps latency, but not CPU bound.\n>\n> Yeah, perhaps.  Anyway my main point is that I don't want to just assume\n> this is a win; I want to see some actual performance tests.\n>\n\nHere we have some numbers about the Aleksander's patch:\n\n1) Setup script\n\nCREATE DATABASE t1000;\nCREATE DATABASE t10000;\nCREATE DATABASE t100000;\n\n\\c t1000\nSELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3\nTIMESTAMPTZ);', i) FROM generate_series(1, 1000) AS i \\gexec\n\n\\c t10000\nSELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3\nTIMESTAMPTZ);', i) FROM generate_series(1, 10000) AS i \\gexec\n\n\\c t100000\nSELECT format('CREATE TABLE t%s(c1 INTEGER PRIMARY KEY, c2 TEXT, c3\nTIMESTAMPTZ);', i) FROM generate_series(1, 100000) AS i \\gexec\n\n2) Execution script\n\ntime pg_dump -s t1000 > /dev/null\ntime pg_dump -s t10000 > /dev/null\ntime pg_dump -s t100000 > /dev/null\n\n3) HEAD execution\n\n$ time pg_dump -s t1000 > /dev/null\n0.02user 0.01system 0:00.36elapsed 8%CPU (0avgtext+0avgdata\n11680maxresident)k\n0inputs+0outputs (0major+1883minor)pagefaults 0swaps\n\n$ time pg_dump -s t10000 > /dev/null\n0.30user 0.10system 0:05.04elapsed 8%CPU (0avgtext+0avgdata\n57772maxresident)k\n0inputs+0outputs (0major+14042minor)pagefaults 0swaps\n\n$ time pg_dump -s t100000 > /dev/null\n3.42user 2.13system 7:50.09elapsed 1%CPU (0avgtext+0avgdata\n517900maxresident)k\n0inputs+0outputs (0major+134636minor)pagefaults 0swaps\n\n4) PATCH execution\n\n$ time pg_dump -s t1000 > /dev/null\n0.02user 0.00system 0:00.28elapsed 9%CPU (0avgtext+0avgdata\n11700maxresident)k\n0inputs+0outputs (0major+1886minor)pagefaults 0swaps\n\n$ time pg_dump -s t10000 > /dev/null\n0.18user 0.03system 0:02.17elapsed 10%CPU (0avgtext+0avgdata\n57592maxresident)k\n0inputs+0outputs (0major+14072minor)pagefaults 0swaps\n\n$ time pg_dump -s t100000 > /dev/null\n1.97user 0.32system 0:21.39elapsed 10%CPU (0avgtext+0avgdata\n517932maxresident)k\n0inputs+0outputs (0major+134892minor)pagefaults 0swaps\n\n5) Summary\n\n                HEAD     patch\n  1k tables  0:00.36   0:00.28\n 10k tables  0:05.04   0:02.17\n100k tables  7:50.09   0:21.39\n\nSeems we get very good performance gain using Aleksander's patch. I used\nthe \"-s\" to not waste time issuing COPY for each relation (even all is\nempty) and evidence the difference due to roundtrip for LOCK TABLE. This\npatch will also improve the pg_upgrade execution over database with\nthousands of relations.\n\nRegards,\n\n--\nFabrízio de Royes Mello",
"msg_date": "Wed, 7 Dec 2022 18:14:01 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 18:14:01 -0300, Fabrízio de Royes Mello wrote:\n> Here we have some numbers about the Aleksander's patch:\n\nHm. Were they taken in an assertion enabled build or such? Just testing the\nt10000 case on HEAD, I get 0:01.23 elapsed for an unpatched pg_dump in an\noptimized build. And that's on a machine with not all that fast cores.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 13:32:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 13:32:42 -0800, Andres Freund wrote:\n> On 2022-12-07 18:14:01 -0300, Fabrízio de Royes Mello wrote:\n> > Here we have some numbers about the Aleksander's patch:\n> \n> Hm. Were they taken in an assertion enabled build or such? Just testing the\n> t10000 case on HEAD, I get 0:01.23 elapsed for an unpatched pg_dump in an\n> optimized build. And that's on a machine with not all that fast cores.\n\nComparing the overhead on the server side.\n\nCREATE OR REPLACE FUNCTION exec(v_sql text) RETURNS text LANGUAGE plpgsql AS $$BEGIN EXECUTE v_sql;RETURN v_sql;END;$$;\n\nAcquire locks in separate statements, three times:\n\nSELECT exec(string_agg(format('LOCK t%s;', i), '')) FROM generate_series(1, 10000) AS i;\n1267.909 ms\n1116.008 ms\n1108.383 ms\n\nAcquire all locks in a single statement, three times:\nSELECT exec('LOCK '||string_agg(format('t%s', i), ', ')) FROM generate_series(1, 10000) AS i;\n1210.732 ms\n1101.390 ms\n1105.331 ms\n\nSo there's some performance difference that's independent of the avoided\nroundtrips - but it's pretty small.\n\n\nWith an artificial delay of 100ms, the perf difference between the batching\npatch and not using the batching patch is huge. Huge enough that I don't have\nthe patience to wait for the non-batched case to complete.\n\nWith batching pg_dump -s -h localhost t10000 took 0:16.23 elapsed, without I\ncancelled after 603 tables had been locked, which took 2:06.43.\n\n\nThis made me try out using pipeline mode. Seems to work nicely from what I can\ntell. The timings are a tad slower than the \"manual\" batch mode - I think\nthat's largely due to the extended protocol just being overcomplicated.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 14:43:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> With an artificial delay of 100ms, the perf difference between the batching\n> patch and not using the batching patch is huge. Huge enough that I don't have\n> the patience to wait for the non-batched case to complete.\n\nClearly, if you insert a sufficiently large artificial round-trip delay,\neven squeezing a single command out of a pg_dump run will appear\nworthwhile. What I'm unsure about is whether it's worthwhile at\nrealistic round-trip delays (where \"realistic\" means that the dump\nperformance would otherwise be acceptable). I think the reason I didn't\npursue this last year is that experimentation convinced me the answer\nwas \"no\".\n\n> With batching pg_dump -s -h localhost t10000 took 0:16.23 elapsed, without I\n> cancelled after 603 tables had been locked, which took 2:06.43.\n\nIs \"-s\" mode actually a relevant criterion here? With per-table COPY\ncommands added into the mix you could not possibly get better than 2x\nimprovement, and likely a good deal less.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Dec 2022 17:53:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "On Wed, Dec 7, 2022 at 2:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Is \"-s\" mode actually a relevant criterion here? With per-table COPY\n> commands added into the mix you could not possibly get better than 2x\n> improvement, and likely a good deal less.\n\nDon't we hit this code path in pg_upgrade? You won't see huge\nround-trip times, of course, but you also won't see COPY.\n\nAbsolute performance aside, isn't there another concern that, the\nlonger it takes for us to lock the tables, the bigger the window there\nis for someone to interfere with them between our catalog query and\nthe lock itself?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 7 Dec 2022 15:13:43 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-07 17:53:05 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > With an artificial delay of 100ms, the perf difference between the batching\n> > patch and not using the batching patch is huge. Huge enough that I don't have\n> > the patience to wait for the non-batched case to complete.\n> \n> Clearly, if you insert a sufficiently large artificial round-trip delay,\n> even squeezing a single command out of a pg_dump run will appear\n> worthwhile. What I'm unsure about is whether it's worthwhile at\n> realistic round-trip delays (where \"realistic\" means that the dump\n> performance would otherwise be acceptable). I think the reason I didn't\n> pursue this last year is that experimentation convinced me the answer\n> was \"no\".\n\nIt seems to be a win even without any artificial delay. Not a *huge* win, but\na noticable win. And even just a few ms make it quite painful.\n\n\n> > With batching pg_dump -s -h localhost t10000 took 0:16.23 elapsed, without I\n> > cancelled after 603 tables had been locked, which took 2:06.43.\n> \n> Is \"-s\" mode actually a relevant criterion here? With per-table COPY\n> commands added into the mix you could not possibly get better than 2x\n> improvement, and likely a good deal less.\n\nWell, -s isn't something used all that rarely, so it'd not be insane to\noptimize it in isolation. But more importantly, I think the potential win\nwithout -s is far bigger than 2x, because the COPYs can be done in parallel,\nwhereas the locking happens in the non-parallel stage.\n\nWith just a 5ms delay, very well within normal network latency range, I get:\n\npg_dump.master -h localhost -j10 -f /tmp/pg_dump_backup -Fd t10000\n2m7.830s\n\npg_dump.pipeline -h localhost -j10 -f /tmp/pg_dump_backup -Fd t10000\n0m24.183s\n\npg_dump.batch -h localhost -j10 -f /tmp/pg_dump_backup -Fd t10000\n0m24.321s\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 15:45:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-07 17:53:05 -0500, Tom Lane wrote:\n>> Is \"-s\" mode actually a relevant criterion here? With per-table COPY\n>> commands added into the mix you could not possibly get better than 2x\n>> improvement, and likely a good deal less.\n\n> Well, -s isn't something used all that rarely, so it'd not be insane to\n> optimize it in isolation. But more importantly, I think the potential win\n> without -s is far bigger than 2x, because the COPYs can be done in parallel,\n> whereas the locking happens in the non-parallel stage.\n\nTrue, and there's the reduce-the-lock-window issue that Jacob mentioned.\n\n> With just a 5ms delay, very well within normal network latency range, I get:\n> [ a nice win ]\n\nOK. I'm struggling to figure out why I rejected this idea last year.\nI know that I thought about it and I'm fairly certain I actually\ntested it. Maybe I only tried it with near-zero-latency local\nloopback; but I doubt that, because the potential for network\nlatency was certainly a factor in that whole discussion.\n\nOne idea is that I might've tried it before getting rid of all the\nother per-object queries, at which point it wouldn't have stood out\nquite so much. But I'm just guessing. I have a nagging feeling\nthere was something else.\n\nOh well, I guess we can always revert it if we discover a problem later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 07 Dec 2022 19:03:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "On 08/12/2022 at 01:03, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-12-07 17:53:05 -0500, Tom Lane wrote:\n>>> Is \"-s\" mode actually a relevant criterion here?  With per-table COPY\n>>> commands added into the mix you could not possibly get better than 2x\n>>> improvement, and likely a good deal less.\n>> Well, -s isn't something used all that rarely, so it'd not be insane to\n>> optimize it in isolation. But more importantly, I think the potential win\n>> without -s is far bigger than 2x, because the COPYs can be done in parallel,\n>> whereas the locking happens in the non-parallel stage.\n> True, and there's the reduce-the-lock-window issue that Jacob mentioned.\n>\n>> With just a 5ms delay, very well within normal network latency range, I get:\n>> [ a nice win ]\n> OK.  I'm struggling to figure out why I rejected this idea last year.\n> I know that I thought about it and I'm fairly certain I actually\n> tested it.  Maybe I only tried it with near-zero-latency local\n> loopback; but I doubt that, because the potential for network\n> latency was certainly a factor in that whole discussion.\n>\n> One idea is that I might've tried it before getting rid of all the\n> other per-object queries, at which point it wouldn't have stood out\n> quite so much.  But I'm just guessing.  
I have a nagging feeling\n> there was something else.\n>\n> Oh well, I guess we can always revert it if we discover a problem later.\n>\n> \t\t\tregards, tom lane\n>\nHi,\n\n\nI have done a review of this patch, it applies well on current master \nand compiles without problem.\n\nmake check/installcheck and world run without failure, pg_dump tests \nwith pgtap enabled work fine too.\n\nI have also given a try to the bench mentioned in the previous posts and \nI have the same performances gain with the -s option.\n\n\nAs it seems to have a consensus to apply this patch I will change the \ncommitfest status to ready for committers.\n\n\nRegards,\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 20:46:47 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
},
{
"msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> As it seems to have a consensus to apply this patch I will change the \n> commitfest status to ready for committers.\n\nYeah. I still have a nagging worry about why I didn't do this already,\nbut without evidence I can't fairly block the patch. Hence, pushed.\n\nI did cut the LOCK TABLE query length limit from 1MB to 100K, because\nI doubt there is any measurable performance difference, and I'm a\nlittle worried about overstressing smaller servers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 18:00:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] pg_dump: lock tables in batches"
}
] |
[
{
"msg_contents": "Hi all!\n\nThis adds tab-complete for optional parameters (as can be specified \nusing WITH) for views, similarly to how it works for storage parameters \nof tables.\n\nThanks,\nChristoph Heiss",
"msg_date": "Wed, 7 Dec 2022 20:44:27 +0100",
"msg_from": "Christoph Heiss <christoph@c8h4.io>",
"msg_from_op": true,
"msg_subject": "[PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "Hi Christoph,\n\nI just took a quick look at your patch.\nSome suggestions:\n\n+ else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))\n> + COMPLETE_WITH_LIST(view_optional_parameters);\n> + /* ALTER VIEW xxx RESET ( yyy , ... ) */\n> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))\n> + COMPLETE_WITH_LIST(view_optional_parameters);\n\n\nWhat about combining these two cases into one like Matches(\"ALTER\", \"VIEW\",\nMatchAny, \"SET|RESET\", \"(\") ?\n\n /* ALTER VIEW <name> */\n> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> \"SET SCHEMA\");\n\n\nAlso seems like SET and RESET don't get auto-completed for \"ALTER VIEW\n<name>\".\nI think it would be nice to include those missing words.\n\nThanks,\n--\nMelih Mutlu\nMicrosoft\n\nHi Christoph,I just took a quick look at your patch. Some suggestions:+ else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))+ COMPLETE_WITH_LIST(view_optional_parameters);+ /* ALTER VIEW xxx RESET ( yyy , ... ) */+ else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))+ COMPLETE_WITH_LIST(view_optional_parameters);What about combining these two cases into one like Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET|RESET\", \"(\") ? /* ALTER VIEW <name> */ else if (Matches(\"ALTER\", \"VIEW\", MatchAny)) COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\", \"SET SCHEMA\");Also seems like SET and RESET don't get auto-completed for \"ALTER VIEW <name>\". I think it would be nice to include those missing words.Thanks,--Melih MutluMicrosoft",
"msg_date": "Thu, 8 Dec 2022 14:19:25 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "Thanks for the review!\n\nOn 12/8/22 12:19, Melih Mutlu wrote:\n> Hi Christoph,\n> \n> I just took a quick look at your patch.\n> Some suggestions:\n> \n> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))\n> + COMPLETE_WITH_LIST(view_optional_parameters);\n> + /* ALTER VIEW xxx RESET ( yyy , ... ) */\n> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))\n> + COMPLETE_WITH_LIST(view_optional_parameters);\n> \n> \n> What about combining these two cases into one like Matches(\"ALTER\", \n> \"VIEW\", MatchAny, \"SET|RESET\", \"(\") ?\nGood thinking, incorporated that into v2.\nWhile at it, I also added completion for the values of the SET \nparameters, since that is useful as well.\n\n> \n> /* ALTER VIEW <name> */\n> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> \"SET SCHEMA\");\n> \n> \n> Also seems like SET and RESET don't get auto-completed for \"ALTER VIEW \n> <name>\".\n> I think it would be nice to include those missing words.\nThat is already contained in the patch:\n\n@@ -2139,7 +2146,7 @@ psql_completion(const char *text, int start, int end)\n /* ALTER VIEW <name> */\n else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n- \"SET SCHEMA\");\n+ \"SET SCHEMA\", \"SET (\", \"RESET (\");\n\nThanks,\nChristoph",
"msg_date": "Fri, 9 Dec 2022 11:31:21 +0100",
"msg_from": "Christoph Heiss <christoph@c8h4.io>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 16:01, Christoph Heiss <christoph@c8h4.io> wrote:\n>\n> Thanks for the review!\n>\n> On 12/8/22 12:19, Melih Mutlu wrote:\n> > Hi Christoph,\n> >\n> > I just took a quick look at your patch.\n> > Some suggestions:\n> >\n> > + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))\n> > + COMPLETE_WITH_LIST(view_optional_parameters);\n> > + /* ALTER VIEW xxx RESET ( yyy , ... ) */\n> > + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))\n> > + COMPLETE_WITH_LIST(view_optional_parameters);\n> >\n> >\n> > What about combining these two cases into one like Matches(\"ALTER\",\n> > \"VIEW\", MatchAny, \"SET|RESET\", \"(\") ?\n> Good thinking, incorporated that into v2.\n> While at it, I also added completion for the values of the SET\n> parameters, since that is useful as well.\n>\n> >\n> > /* ALTER VIEW <name> */\n> > else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> > COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> > \"SET SCHEMA\");\n> >\n> >\n> > Also seems like SET and RESET don't get auto-completed for \"ALTER VIEW\n> > <name>\".\n> > I think it would be nice to include those missing words.\n> That is already contained in the patch:\n>\n> @@ -2139,7 +2146,7 @@ psql_completion(const char *text, int start, int end)\n> /* ALTER VIEW <name> */\n> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> - \"SET SCHEMA\");\n> + \"SET SCHEMA\", \"SET (\", \"RESET (\");\n\nOne suggestion:\nTab completion for \"alter view v1 set (check_option =\" is handled to\ncomplete with CASCADED and LOCAL but the same is not handled for\ncreate view: \"create view viewname with ( check_option =\"\n+ else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\",\n\"check_option\", \"=\"))\n+ COMPLETE_WITH(\"local\", \"cascaded\");\n\nI felt we should keep the handling consistent for both create view and\nalter view.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 17:21:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Fri, 6 Jan 2023 at 11:52, vignesh C <vignesh21@gmail.com> wrote:\n>\n> One suggestion:\n> Tab completion for \"alter view v1 set (check_option =\" is handled to\n> complete with CASCADED and LOCAL but the same is not handled for\n> create view: \"create view viewname with ( check_option =\"\n> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\",\n> \"check_option\", \"=\"))\n> + COMPLETE_WITH(\"local\", \"cascaded\");\n>\n> I felt we should keep the handling consistent for both create view and\n> alter view.\n>\n\nHmm, I don't think we should be offering \"check_option\" as a tab\ncompletion for CREATE VIEW at all, since that would encourage users to\nuse non-SQL-standard syntax, rather than CREATE VIEW ... WITH\n[CASCADED|LOCAL] CHECK OPTION.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 6 Jan 2023 12:18:44 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "Hi Christoph,\n\nThanks for the patch! I just tested it and I could reproduce the \nexpected behaviour in these cases:\n\npostgres=# CREATE VIEW w\nAS WITH (\n\npostgres=# CREATE OR REPLACE VIEW w\nAS WITH (\n\npostgres=# CREATE VIEW w WITH (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# CREATE OR REPLACE VIEW w WITH (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# ALTER VIEW w\nALTER COLUMN OWNER TO RENAME RESET ( SET ( \nSET SCHEMA\n\npostgres=# ALTER VIEW w RESET (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# ALTER VIEW w SET (check_option =\nCASCADED LOCAL\n\nHowever, an \"ALTER TABLE <name> S<tab>\" does not complete the open \nparenthesis \"(\" from \"SET (\", as suggested in \"ALTER VIEW <name> <tab>\".\n\npostgres=# ALTER VIEW w SET\nDisplay all 187 possibilities? (y or n)\n\nIs it intended to behave like this? If so, an \"ALTER VIEW <name> \nRES<tab>\" does complete the open parenthesis -> \"RESET (\".\n\nBest,\nJim\n\nOn 09.12.22 11:31, Christoph Heiss wrote:\n> Thanks for the review!\n>\n> On 12/8/22 12:19, Melih Mutlu wrote:\n>> Hi Christoph,\n>>\n>> I just took a quick look at your patch.\n>> Some suggestions:\n>>\n>> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))\n>> + COMPLETE_WITH_LIST(view_optional_parameters);\n>> + /* ALTER VIEW xxx RESET ( yyy , ... 
) */\n>> + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))\n>> + COMPLETE_WITH_LIST(view_optional_parameters);\n>>\n>>\n>> What about combining these two cases into one like Matches(\"ALTER\", \n>> \"VIEW\", MatchAny, \"SET|RESET\", \"(\") ?\n> Good thinking, incorporated that into v2.\n> While at it, I also added completion for the values of the SET \n> parameters, since that is useful as well.\n>\n>>\n>> /* ALTER VIEW <name> */\n>> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n>> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n>> \"SET SCHEMA\");\n>>\n>>\n>> Also seems like SET and RESET don't get auto-completed for \"ALTER \n>> VIEW <name>\".\n>> I think it would be nice to include those missing words.\n> That is already contained in the patch:\n>\n> @@ -2139,7 +2146,7 @@ psql_completion(const char *text, int start, int \n> end)\n> /* ALTER VIEW <name> */\n> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> - \"SET SCHEMA\");\n> + \"SET SCHEMA\", \"SET (\", \"RESET (\");\n>\n> Thanks,\n> Christoph",
"msg_date": "Mon, 9 Jan 2023 16:32:09 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 16:01, Christoph Heiss <christoph@c8h4.io> wrote:\n>\n> Thanks for the review!\n>\n> On 12/8/22 12:19, Melih Mutlu wrote:\n> > Hi Christoph,\n> >\n> > I just took a quick look at your patch.\n> > Some suggestions:\n> >\n> > + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\", \"(\"))\n> > + COMPLETE_WITH_LIST(view_optional_parameters);\n> > + /* ALTER VIEW xxx RESET ( yyy , ... ) */\n> > + else if (Matches(\"ALTER\", \"VIEW\", MatchAny, \"RESET\", \"(\"))\n> > + COMPLETE_WITH_LIST(view_optional_parameters);\n> >\n> >\n> > What about combining these two cases into one like Matches(\"ALTER\",\n> > \"VIEW\", MatchAny, \"SET|RESET\", \"(\") ?\n> Good thinking, incorporated that into v2.\n> While at it, I also added completion for the values of the SET\n> parameters, since that is useful as well.\n>\n> >\n> > /* ALTER VIEW <name> */\n> > else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> > COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> > \"SET SCHEMA\");\n> >\n> >\n> > Also seems like SET and RESET don't get auto-completed for \"ALTER VIEW\n> > <name>\".\n> > I think it would be nice to include those missing words.\n> That is already contained in the patch:\n>\n> @@ -2139,7 +2146,7 @@ psql_completion(const char *text, int start, int end)\n> /* ALTER VIEW <name> */\n> else if (Matches(\"ALTER\", \"VIEW\", MatchAny))\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> - \"SET SCHEMA\");\n> + \"SET SCHEMA\", \"SET (\", \"RESET (\");\n\nFor some reason CFBot is not able to apply the patch, please have a\nlook and post an updated version if required, also check and handle\nDean Rasheed and Jim Jones comment if applicable:\n=== Applying patches on top of PostgreSQL commit ID\n5f6401f81cb24bd3930e0dc589fc4aa8b5424cdc ===\n=== applying patch\n./v2-0001-psql-Add-tab-complete-for-optional-view-parameter.patch\ngpatch: **** Only garbage was found in the patch input.\n\n[1] - 
http://cfbot.cputube.org/patch_41_4053.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:20:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 5:50 AM vignesh C <vignesh21@gmail.com> wrote:\n> For some reason CFBot is not able to apply the patch, please have a\n> look and post an updated version if required, also check and handle\n> Dean Rasheed and Jim Jones comment if applicable:\n> === Applying patches on top of PostgreSQL commit ID\n> 5f6401f81cb24bd3930e0dc589fc4aa8b5424cdc ===\n> === applying patch\n> ./v2-0001-psql-Add-tab-complete-for-optional-view-parameter.patch\n> gpatch: **** Only garbage was found in the patch input.\n\nMelanie pointed me at this issue. This particular entry is now fixed,\nand I think I know what happened: cfbot wasn't checking the HTTP\nstatus when downloading patches from the web archives, because I had\nincorrectly assumed Python's requests.get() would raise an exception\nif the web server sent an error status, but it turns out you have to\nask for that. I've now fixed that. So I think it was probably trying\nto apply one of those \"guru meditation\" error message the web archives\noccasionally spit out.\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:29:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, failed\nDocumentation: tested, passed\n\nHi Christoph,\r\n\r\nThe patch have a potential, although I have to agree with Jim Jones, it obviously have a problem with \"alter view <name> set<tab>\" handling.\r\n\r\nFirst of all user can notice, that SET and RESET alternatives are proposed in an absolutely equivalent way:\r\npostgres=# alter view VVV <tab>\r\nALTER COLUMN OWNER TO RENAME RESET ( SET ( SET SCHEMA\r\n\r\nBut completion of a parentheses differs fore these alternatives:\r\n\r\npostgres=# alter view VVV reset<tab> -> completes with \"<space>(\" -> <tab>\r\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\r\n\r\npostgres=# alter view VVV set<tab> -> completes with a single spase -> <tab>\r\nDisplay all 188 possibilities? (y or n)\r\n(and these 188 possibilities do not contain \"(\")\r\n\r\nThe probmen is obviously in the newly added second line of the following clause:\r\n\t\tCOMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\r\n\t\t\t\t\t \"SET SCHEMA\", \"SET (\", \"RESET (\");\r\n\r\n\"set schema\" and \"set (\" alternatives are competing, while completion of the common part \"set<space>\" leads to a string composition which does not have the check branch (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\")).\r\n\r\nI think it may worth looking at \"alter materialized view\" completion tree and making \"alter view\" the same way.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Sun, 29 Jan 2023 10:19:12 +0000",
"msg_from": "Mikhail Gribkov <youzhick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Sun, 29 Jan 2023 at 05:20, Mikhail Gribkov <youzhick@gmail.com> wrote:\n>\n> The problem is obviously in the newly added second line of the following clause:\n> COMPLETE_WITH(\"ALTER COLUMN\", \"OWNER TO\", \"RENAME\",\n> \"SET SCHEMA\", \"SET (\", \"RESET (\");\n>\n> \"set schema\" and \"set (\" alternatives are competing, while completion of the common part \"set<space>\" leads to a string composition which does not have the check branch (Matches(\"ALTER\", \"VIEW\", MatchAny, \"SET\")).\n>\n> I think it may worth looking at \"alter materialized view\" completion tree and making \"alter view\" the same way.\n>\n> The new status of this patch is: Waiting on Author\n\nI think this patch received real feedback and it looks like it's clear\nwhere there's more work needed. I'll move this to the next commitfest.\nIf you plan to work on it this month we can always move it back.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 14:40:43 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "> On 29 Jan 2023, at 11:19, Mikhail Gribkov <youzhick@gmail.com> wrote:\n\n> The new status of this patch is: Waiting on Author\n\nThis patch has been Waiting on Author since January with the thread being\nstale, so I am marking this as Returned with Feedback for now. Please feel\nfree to resubmit to a future CF when there is renewed interest in working on\nthis.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 15:28:33 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "Hi all,\nsorry for the long delay.\n\nOn Mon, Jan 09, 2023 at 04:32:09PM +0100, Jim Jones wrote:\n> However, an \"ALTER TABLE <name> S<tab>\" does not complete the open\n> parenthesis \"(\" from \"SET (\", as suggested in \"ALTER VIEW <name> <tab>\".\n>\n> postgres=# ALTER VIEW w SET\n> Display all 187 possibilities? (y or n)\n>\n> Is it intended to behave like this? If so, an \"ALTER VIEW <name>\n> RES<tab>\" does complete the open parenthesis -> \"RESET (\".\n\nOn Sun, Jan 29, 2023 at 10:19:12AM +0000, Mikhail Gribkov wrote:\n> The patch have a potential, although I have to agree with Jim Jones,\n> it obviously have a problem with \"alter view <name> set<tab>\"\n> handling.\n> [..]\n> I think it may worth looking at \"alter materialized view\" completion\n> tree and making \"alter view\" the same way.\n\nThank you both for reviewing/testing and the suggestions. Yeah,\ndefinitively, sounds very sensible.\n\nI've attached a new revision, rebased and addressing the above by\naligning it with how \"ALTER MATERIALIZED VIEW\" works, such that \"SET (\"\nand \"SET SCHEMA\" won't compete anymore. So that should now work more\nlike expected.\n\npostgres=# ALTER MATERIALIZED VIEW m\nALTER COLUMN CLUSTER ON DEPENDS ON EXTENSION\nNO DEPENDS ON EXTENSION OWNER TO RENAME\nRESET ( SET\n\npostgres=# ALTER MATERIALIZED VIEW m SET\n( ACCESS METHOD SCHEMA TABLESPACE\nWITHOUT CLUSTER\n\npostgres=# ALTER VIEW v\nALTER COLUMN OWNER TO RENAME RESET ( SET\n\npostgres=# ALTER VIEW v SET\n( SCHEMA\n\npostgres=# ALTER VIEW v SET (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\nOn Fri, Jan 06, 2023 at 12:18:44PM +0000, Dean Rasheed wrote:\n> Hmm, I don't think we should be offering \"check_option\" as a tab\n> completion for CREATE VIEW at all, since that would encourage users to\n> use non-SQL-standard syntax, rather than CREATE VIEW ... WITH\n> [CASCADED|LOCAL] CHECK OPTION.\n\nLeft that part in for now. 
I would argue that it is a well-documented\ncombination and as such users would expect it to turn up in the\ntab-complete as well. OTOH not against removing it either, if there are\nothers voicing the same opinion ..\n\nThanks,\nChristoph",
"msg_date": "Mon, 7 Aug 2023 20:49:04 +0200",
"msg_from": "Christoph Heiss <christoph@c8h4.io>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Mon, 7 Aug 2023 at 19:49, Christoph Heiss <christoph@c8h4.io> wrote:\n>\n> On Fri, Jan 06, 2023 at 12:18:44PM +0000, Dean Rasheed wrote:\n> > Hmm, I don't think we should be offering \"check_option\" as a tab\n> > completion for CREATE VIEW at all, since that would encourage users to\n> > use non-SQL-standard syntax, rather than CREATE VIEW ... WITH\n> > [CASCADED|LOCAL] CHECK OPTION.\n>\n> Left that part in for now. I would argue that it is a well-documented\n> combination and as such users would expect it to turn up in the\n> tab-complete as well. OTOH not against removing it either, if there are\n> others voicing the same opinion ..\n>\n\nOn reflection, I think that's probably OK. I mean, I still don't like\nthe fact that it's offering to complete with non-SQL-standard syntax,\nbut that seems less bad than using an incomplete list of options, and\nI don't really have any other better ideas.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 8 Aug 2023 09:17:51 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "\nOn Tue, Aug 08, 2023 at 09:17:51AM +0100, Dean Rasheed wrote:\n>\n> On Mon, 7 Aug 2023 at 19:49, Christoph Heiss <christoph@c8h4.io> wrote:\n> >\n> > On Fri, Jan 06, 2023 at 12:18:44PM +0000, Dean Rasheed wrote:\n> > > Hmm, I don't think we should be offering \"check_option\" as a tab\n> > > completion for CREATE VIEW at all, since that would encourage users to\n> > > use non-SQL-standard syntax, rather than CREATE VIEW ... WITH\n> > > [CASCADED|LOCAL] CHECK OPTION.\n> >\n> > Left that part in for now. I would argue that it is a well-documented\n> > combination and as such users would expect it to turn up in the\n> > tab-complete as well. OTOH not against removing it either, if there are\n> > others voicing the same opinion ..\n> >\n>\n> On reflection, I think that's probably OK. I mean, I still don't like\n> the fact that it's offering to complete with non-SQL-standard syntax,\n> but that seems less bad than using an incomplete list of options, and\n> I don't really have any other better ideas.\n\nMy thought pretty much as well. While obviously far from ideal as you\nsay, it's the less bad case of these both. I would also guess that it is\nnot the only instance of non-SQL-standard syntax completion in psql ..\n\nThanks for weighing in once again.\n\nCheers,\nChristoph\n\n\n",
"msg_date": "Tue, 8 Aug 2023 12:21:08 +0200",
"msg_from": "Christoph Heiss <christoph@c8h4.io>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "Applied v3 patch to master and verified it with below commands,\n\n#Alter view\n\npostgres=# alter view v <tab>\nALTER COLUMN OWNER TO RENAME RESET ( SET\n\npostgres=# alter view v set <tab>\n( SCHEMA\n\npostgres=# alter view v set ( <tab>\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# alter view v reset ( <tab>\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# alter view v set ( check_option = <tab>\nCASCADED LOCAL\n\npostgres=# alter view v set ( security_barrier = <tab>\nFALSE TRUE\n\npostgres=# alter view v set ( security_invoker = <tab>\nFALSE TRUE\n\n\n#Create view\n\npostgres=# create view v\nAS WITH (\npostgres=# create or replace view v\nAS WITH (\n\npostgres=# create view v with (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# create or replace view v with (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\npostgres=# create view v with (*)<tab>AS\npostgres=# create or replace view v with (*)<tab>AS\n\npostgres=# create view v as <tab>SELECT\npostgres=# create or replace view v as <tab>SELECT\n\n\nFor below changes,\n\n else if (TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"AS\") ||\n- TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \n\"AS\"))\n+ TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\", \n\"AS\") ||\n+ TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \n\"AS\") ||\n+ TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \n\"WITH\", \"(*)\", \"AS\"))\n\nit would be great to switch the order of the 3rd and the 4th line to \nmake a better match for \"CREATE\" and \"CREATE OR REPLACE\" .\n\n\nSince it supports <tab> in the middle for below case,\npostgres=# alter view v set ( security_<tab>\nsecurity_barrier security_invoker\n\nand during view reset it can also provide all the options list,\npostgres=# alter view v reset (\nCHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n\nbut not sure if it is a good idea or possible to autocomplete 
the reset \noptions after seeing one of the options showing up with \",\" for example,\npostgres=# alter view v reset ( CHECK_OPTION, <tab>\nSECURITY_BARRIER SECURITY_INVOKER\n\n\nThank you,\n\nDavid\n\n\n\n",
"msg_date": "Fri, 11 Aug 2023 12:48:17 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "\nOn Fri, Aug 11, 2023 at 12:48:17PM -0700, David Zhang wrote:\n>\n> Applied v3 patch to master and verified it with below commands,\nThanks for testing!\n\n> [..]\n>\n> For below changes,\n>\n> ���� else if (TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"AS\") ||\n> -��� ��� ��� �TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny,\n> \"AS\"))\n> +��� ��� ��� �TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\", \"AS\")\n> ||\n> +��� ��� ��� �TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \"AS\")\n> ||\n> +��� ��� ��� �TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny,\n> \"WITH\", \"(*)\", \"AS\"))\n>\n> it would be great to switch the order of the 3rd and the 4th line to make a\n> better match for \"CREATE\" and \"CREATE OR REPLACE\" .\n\nI don't see how it would effect matching in any way - or am I\noverlooking something here?\n\npostgres=# CREATE VIEW v <tab>\nAS WITH (\n\npostgres=# CREATE VIEW v AS <tab>\n.. autocompletes with \"SELECT\"\n\npostgres=# CREATE VIEW v WITH ( security_invoker = true ) <tab>\n.. autocompletes with \"AS\" and so on\n\nAnd the same for CREATE OR REPLACE VIEW.\n\nCan you provide an example case that would benefit from that?\n\n>\n> Since it supports <tab> in the middle for below case,\n> postgres=# alter view v set ( security_<tab>\n> security_barrier� security_invoker\n>\n> and during view reset it can also provide all the options list,\n> postgres=# alter view v reset (\n> CHECK_OPTION����� SECURITY_BARRIER� SECURITY_INVOKER\n>\n> but not sure if it is a good idea or possible to autocomplete the reset\n> options after seeing one of the options showing up with \",\" for example,\n> postgres=# alter view v reset ( CHECK_OPTION, <tab>\n> SECURITY_BARRIER� SECURITY_INVOKER\n\nI'd rather not add this and leave it as-is. 
It doesn't add any real\nvalue - how often does this case really come up, especially with ALTER\nVIEW only having three options?\n\nThanks,\nChristoph\n\n\n",
"msg_date": "Sat, 12 Aug 2023 20:09:52 +0200",
"msg_from": "Christoph Heiss <christoph@c8h4.io>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "\n>> [..]\n>>\n>> For below changes,\n>>\n>> else if (TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"AS\") ||\n>> - TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny,\n>> \"AS\"))\n>> + TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\", \"AS\")\n>> ||\n>> + TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \"AS\")\n>> ||\n>> + TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny,\n>> \"WITH\", \"(*)\", \"AS\"))\n>>\n>> it would be great to switch the order of the 3rd and the 4th line to make a\n>> better match for \"CREATE\" and \"CREATE OR REPLACE\" .\n> I don't see how it would effect matching in any way - or am I\n> overlooking something here?\n\nIt won't affect the SQL matching. What I was trying to say is that using \n'CREATE OR REPLACE ...' after 'CREATE ...' can enhance code structure, \nmaking it more readable. For instance,\n\n/* Complete CREATE [ OR REPLACE ] VIEW <name> WITH ( ... ) with \"AS\" */\nelse if (TailMatches(\"CREATE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\") ||\n TailMatches(\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", \nMatchAny, \"WITH\", \"(*)\"))\n COMPLETE_WITH(\"AS\");\n\n\"CREATE\", \"OR\", \"REPLACE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\" follows \n\"CREATE\", \"VIEW\", MatchAny, \"WITH\", \"(*)\") immediately.\n\nbest regards,\n\nDavid\n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 10:34:11 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "I noticed that on this commitfest entry \n(https://commitfest.postgresql.org/44/4491/), the reviewers were \nassigned by the patch author (presumably because they had previously \ncontributed to this thread). Unless these individuals know about that, \nthis is unlikely to work out. It's better to remove the reviewer \nentries and let people sign up on their own. (You can nudge potential \ncandidates, of course.)\n\n\nOn 07.08.23 20:49, Christoph Heiss wrote:\n> \n> Hi all,\n> sorry for the long delay.\n> \n> On Mon, Jan 09, 2023 at 04:32:09PM +0100, Jim Jones wrote:\n>> However, an \"ALTER TABLE <name> S<tab>\" does not complete the open\n>> parenthesis \"(\" from \"SET (\", as suggested in \"ALTER VIEW <name> <tab>\".\n>>\n>> postgres=# ALTER VIEW w SET\n>> Display all 187 possibilities? (y or n)\n>>\n>> Is it intended to behave like this? If so, an \"ALTER VIEW <name>\n>> RES<tab>\" does complete the open parenthesis -> \"RESET (\".\n> \n> On Sun, Jan 29, 2023 at 10:19:12AM +0000, Mikhail Gribkov wrote:\n>> The patch have a potential, although I have to agree with Jim Jones,\n>> it obviously have a problem with \"alter view <name> set<tab>\"\n>> handling.\n>> [..]\n>> I think it may worth looking at \"alter materialized view\" completion\n>> tree and making \"alter view\" the same way.\n> \n> Thank you both for reviewing/testing and the suggestions. Yeah,\n> definitively, sounds very sensible.\n> \n> I've attached a new revision, rebased and addressing the above by\n> aligning it with how \"ALTER MATERIALIZED VIEW\" works, such that \"SET (\"\n> and \"SET SCHEMA\" won't compete anymore. 
So that should now work more\n> like expected.\n> \n> postgres=# ALTER MATERIALIZED VIEW m\n> ALTER COLUMN CLUSTER ON DEPENDS ON EXTENSION\n> NO DEPENDS ON EXTENSION OWNER TO RENAME\n> RESET ( SET\n> \n> postgres=# ALTER MATERIALIZED VIEW m SET\n> ( ACCESS METHOD SCHEMA TABLESPACE\n> WITHOUT CLUSTER\n> \n> postgres=# ALTER VIEW v\n> ALTER COLUMN OWNER TO RENAME RESET ( SET\n> \n> postgres=# ALTER VIEW v SET\n> ( SCHEMA\n> \n> postgres=# ALTER VIEW v SET (\n> CHECK_OPTION SECURITY_BARRIER SECURITY_INVOKER\n> \n> On Fri, Jan 06, 2023 at 12:18:44PM +0000, Dean Rasheed wrote:\n>> Hmm, I don't think we should be offering \"check_option\" as a tab\n>> completion for CREATE VIEW at all, since that would encourage users to\n>> use non-SQL-standard syntax, rather than CREATE VIEW ... WITH\n>> [CASCADED|LOCAL] CHECK OPTION.\n> \n> Left that part in for now. I would argue that it is a well-documented\n> combination and as such users would expect it to turn up in the\n> tab-complete as well. OTOH not against removing it either, if there are\n> others voicing the same opinion ..\n> \n> Thanks,\n> Christoph\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 08:52:14 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Mon, 14 Aug 2023 at 18:34, David Zhang <david.zhang@highgo.ca> wrote:\n>\n> it would be great to switch the order of the 3rd and the 4th line to make a\n> better match for \"CREATE\" and \"CREATE OR REPLACE\" .\n>\n\nI took a look at this, and I think it's probably neater to keep the\n\"AS SELECT\" completion for CREATE [OR REPLACE] VIEW xxx WITH (*)\nseparate from the already existing support for \"AS SELECT\" without\nWITH.\n\nA couple of other points:\n\n1. It looks slightly neater, and works better, to complete one word at\na time -- e.g., \"WITH\" then \"(\", instead of \"WITH (\", since the latter\ndoesn't work if the user has already typed \"WITH\".\n\n2. It should also complete with \"=\" after the option, where appropriate.\n\n3. CREATE VIEW should offer \"local\" and \"cascaded\" after\n\"check_option\" (though there's no point in doing likewise for the\nboolean options, since they default to true, if present, and false\notherwise).\n\nAttached is an updated patch, incorporating those comments.\n\nBarring any further comments, I think this is ready for commit.\n\nRegards,\nDean",
"msg_date": "Thu, 23 Nov 2023 11:07:33 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 4:37 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 14 Aug 2023 at 18:34, David Zhang <david.zhang@highgo.ca> wrote:\n> >\n> > it would be great to switch the order of the 3rd and the 4th line to make a\n> > better match for \"CREATE\" and \"CREATE OR REPLACE\" .\n> >\n>\n> I took a look at this, and I think it's probably neater to keep the\n> \"AS SELECT\" completion for CREATE [OR REPLACE] VIEW xxx WITH (*)\n> separate from the already existing support for \"AS SELECT\" without\n> WITH.\n>\n> A couple of other points:\n>\n> 1. It looks slightly neater, and works better, to complete one word at\n> a time -- e.g., \"WITH\" then \"(\", instead of \"WITH (\", since the latter\n> doesn't work if the user has already typed \"WITH\".\n>\n> 2. It should also complete with \"=\" after the option, where appropriate.\n>\n> 3. CREATE VIEW should offer \"local\" and \"cascaded\" after\n> \"check_option\" (though there's no point in doing likewise for the\n> boolean options, since they default to true, if present, and false\n> otherwise).\n>\n> Attached is an updated patch, incorporating those comments.\n>\n> Barring any further comments, I think this is ready for commit.\n\nI reviewed the given Patch and it is working fine.\n\nThanks and Regards,\nShubham Khanna.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 09:12:34 +0530",
"msg_from": "Shubham Khanna <khannashubham1197@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
},
{
"msg_contents": "On Tue, 28 Nov 2023 at 03:42, Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> I reviewed the given Patch and it is working fine.\n>\n\nThanks for checking. Patch pushed.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 28 Nov 2023 10:00:44 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] psql: Add tab-complete for optional view parameters"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile looking into other opportunities for per-table permissions, I noticed\na weird discrepancy in CLUSTER. When evaluating whether the current user\nhas permission to CLUSTER a table, we ordinarily just check for ownership.\nHowever, the database owner is also allowed to CLUSTER all partitions that\nare not shared. This was added in 3f19e17, and I didn't see any discussion\nabout it in the corresponding thread [0].\n\nMy first instinct is that we should just remove the database ownership\ncheck, which is what I've done in the attached patch. I don't see any\nstrong reason to complicate matters with special\ndatabase-owner-but-not-shared checks like other commands (e.g., VACUUM).\nBut perhaps we should do so just for consistency's sake. Thoughts?\n\nIt was also noted elsewhere [1] that the privilege requirements for CLUSTER\nare not documented. The attached patch adds such documentation.\n\n[0] https://postgr.es/m/20220411140609.GF26620%40telsasoft.com\n[1] https://postgr.es/m/661148f4-c7f1-dec1-2bc8-29f3bd58e242%40postgrespro.ru\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 7 Dec 2022 14:39:24 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 02:39:24PM -0800, Nathan Bossart wrote:\n> Hi hackers,\n> \n> While looking into other opportunities for per-table permissions, I noticed\n> a weird discrepancy in CLUSTER. When evaluating whether the current user\n> has permission to CLUSTER a table, we ordinarily just check for ownership.\n> However, the database owner is also allowed to CLUSTER all partitions that\n> are not shared. This was added in 3f19e17, and I didn't see any discussion\n> about it in the corresponding thread [0].\n> \n> My first instinct is that we should just remove the database ownership\n> check, which is what I've done in the attached patch. I don't see any\n> strong reason to complicate matters with special\n> database-owner-but-not-shared checks like other commands (e.g., VACUUM).\n> But perhaps we should do so just for consistency's sake. Thoughts?\n\nYour patch makes it inconsistent with vacuum full, which is strange\nbecause vacuum full calls cluster.\n\npostgres=> VACUUM FULL t;\nVACUUM\npostgres=> CLUSTER t;\nERROR: must be owner of table t\n\nBTW, it'd be helpful to copy the relevant parties on this kind of\nmessage, especially if there's a new thread dedicated just to this.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:25:59 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Dec 07, 2022 at 08:25:59PM -0600, Justin Pryzby wrote:\n> Your patch makes it inconsistent with vacuum full, which is strange\n> because vacuum full calls cluster.\n> \n> postgres=> VACUUM FULL t;\n> VACUUM\n> postgres=> CLUSTER t;\n> ERROR: must be owner of table t\n\nThis is the existing behavior on HEAD. I think it has been this way for a\nwhile. Granted, that doesn't mean it's ideal, but AFAICT it's intentional.\n\nLooking closer, the database ownership check in\nget_tables_to_cluster_partitioned() appears to have no meaningful effect.\nIn this code path, cluster_rel() will always be called with CLUOPT_RECHECK,\nand that function only checks for table ownership. Anything gathered in\nget_tables_to_cluster_partitioned() that the user doesn't own will be\nskipped. So, I don't think my patch changes the behavior in any meaningful\nway, but I still think it's worthwhile to make the checks consistent.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 7 Dec 2022 20:13:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On 08.12.2022 01:39, Nathan Bossart wrote:\n> It was also noted elsewhere [1] that the privilege requirements for CLUSTER\n> are not documented. The attached patch adds such documentation.\n> [1] https://postgr.es/m/661148f4-c7f1-dec1-2bc8-29f3bd58e242%40postgrespro.ru\n\nThanks for the patch. It correctly states the existing behavior.\n\nBut perhaps we should wait for the decision in discussion [1] (link above),\nsince this decision may affect the documentation on the CLUSTER command.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 14:15:58 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "\nOn 2022-12-07 We 23:13, Nathan Bossart wrote:\n> On Wed, Dec 07, 2022 at 08:25:59PM -0600, Justin Pryzby wrote:\n>> Your patch makes it inconsistent with vacuum full, which is strange\n>> because vacuum full calls cluster.\n>>\n>> postgres=> VACUUM FULL t;\n>> VACUUM\n>> postgres=> CLUSTER t;\n>> ERROR: must be owner of table t\n> This is the existing behavior on HEAD. I think it has been this way for a\n> while. Granted, that doesn't mean it's ideal, but AFAICT it's intentional.\n>\n> Looking closer, the database ownership check in\n> get_tables_to_cluster_partitioned() appears to have no meaningful effect.\n> In this code path, cluster_rel() will always be called with CLUOPT_RECHECK,\n> and that function only checks for table ownership. Anything gathered in\n> get_tables_to_cluster_partitioned() that the user doesn't own will be\n> skipped. So, I don't think my patch changes the behavior in any meaningful\n> way, but I still think it's worthwhile to make the checks consistent.\n\n\n\nWe should probably talk about what the privileges should be, though. I\nthink there's a case to be made that CLUSTER should be governed by the\nVACUUM privileges, given how VACUUM FULL is now implemented.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 07:20:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 07:20:28AM -0500, Andrew Dunstan wrote:\n> We should probably talk about what the privileges should be, though. I\n> think there's a case to be made that CLUSTER should be governed by the\n> VACUUM privileges, given how VACUUM FULL is now implemented.\n\nCurrently, CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX (minus REINDEX\nSCHEMA|DATABASE|SYSTEM) require ownership of the relation or superuser. In\nfact, all three use the same RangeVarCallbackOwnsTable() callback function.\nMy current thinking is that this is good enough. I don't sense any strong\ndemand for allowing database owners to run these commands on all non-shared\nrelations, and there's ongoing work to break out the privileges to GRANT\nand predefined roles. However, I don't have a strong opinion about this.\n\nIf we do want to change the permissions model for CLUSTER, it might make\nsense to change all three of the aforementioned commands to look more like\nVACUUM/ANALYZE.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:13:24 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 1:13 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Thu, Dec 08, 2022 at 07:20:28AM -0500, Andrew Dunstan wrote:\n> > We should probably talk about what the privileges should be, though. I\n> > think there's a case to be made that CLUSTER should be governed by the\n> > VACUUM privileges, given how VACUUM FULL is now implemented.\n>\n> Currently, CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX (minus REINDEX\n> SCHEMA|DATABASE|SYSTEM) require ownership of the relation or superuser. In\n> fact, all three use the same RangeVarCallbackOwnsTable() callback function.\n> My current thinking is that this is good enough. I don't sense any strong\n> demand for allowing database owners to run these commands on all non-shared\n> relations, and there's ongoing work to break out the privileges to GRANT\n> and predefined roles.\n\n+1.\n\nI don't see why being the database owner should give you the right to\nrun a random subset of commands on any table in the database. Tables\nhave their own system for access privileges; we should use that, or\nextend it as required.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 16:08:40 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 04:08:40PM -0500, Robert Haas wrote:\n> On Thu, Dec 8, 2022 at 1:13 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Currently, CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX (minus REINDEX\n>> SCHEMA|DATABASE|SYSTEM) require ownership of the relation or superuser. In\n>> fact, all three use the same RangeVarCallbackOwnsTable() callback function.\n>> My current thinking is that this is good enough. I don't sense any strong\n>> demand for allowing database owners to run these commands on all non-shared\n>> relations, and there's ongoing work to break out the privileges to GRANT\n>> and predefined roles.\n> \n> +1.\n> \n> I don't see why being the database owner should give you the right to\n> run a random subset of commands on any table in the database. Tables\n> have their own system for access privileges; we should use that, or\n> extend it as required.\n\nHere is a rebased version of the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 14 Dec 2022 09:34:35 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Here is a new version of the patch. I've moved the privilege checks to a\nnew function, and I added a note in the docs about clustering partitioned\ntables in a transaction block (it's not allowed).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Dec 2022 20:57:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Le 16/12/2022 à 05:57, Nathan Bossart a écrit :\n> Here is a new version of the patch. I've moved the privilege checks to a\n> new function, and I added a note in the docs about clustering partitioned\n> tables in a transaction block (it's not allowed).\n>\n\nGetting into review of this patch I wonder why the CLUSTER command do \nnot react as VACUUM FULL command when there is insuffisant privileges. \nFor example with a partitioned table (ptnowner) and two partitions \n(ptnowner1 and ptnowner2) with the second partition owned by another \nuser, let' say usr2. We have the following report when executing vacuum \nas usr2:\n\ntestdb=> VACUUM FULL ptnowner;\nWARNING: permission denied to vacuum \"ptnowner\", skipping it\nWARNING: permission denied to vacuum \"ptnowner1\", skipping it\nVACUUM\n\nHere only ptnowner2 have been vacuumed which is correct and expected.\n\nFor the cluster command:\n\ntestdb=> CLUSTER;\nCLUSTER\n\n\nI would have expected something like:\n\ntestdb=> CLUSTER;\nWARNING: permission denied to cluster \"ptnowner1\", skipping it\nCLUSTER\n\nI mean that the silent behavior is not very helpful.\n\n\nThis is the current behavior of the CLUSTER command and current patch \nadds a sentence about the silent behavior in the documentation. This is \ngood but I just want to ask if we could want to fix this behavior too or \njust keep things like that with the lack of noise.\n\n\nBest regards,\n\n-- \nGilles Darold\n\n\n\n\n\n\nLe 16/12/2022 à 05:57, Nathan Bossart a\n écrit :\n\n\nHere is a new version of the patch. I've moved the privilege checks to a\nnew function, and I added a note in the docs about clustering partitioned\ntables in a transaction block (it's not allowed).\n\n\n\n\n\nGetting into review of this patch I wonder why the CLUSTER\n command do not react as VACUUM FULL command when there is\n insuffisant privileges. 
For example with a partitioned table\n (ptnowner) and two partitions (ptnowner1 and ptnowner2) with the\n second partition owned by another user, let' say usr2. We have the\n following report when executing vacuum as usr2:\ntestdb=> VACUUM FULL ptnowner;\n WARNING: permission denied to vacuum \"ptnowner\", skipping it\n WARNING: permission denied to vacuum \"ptnowner1\", skipping it\n VACUUM\nHere only ptnowner2 have been vacuumed which is correct and\n expected.\nFor the cluster command:\ntestdb=> CLUSTER;\n CLUSTER\n\n\nI would have expected something like:\ntestdb=> CLUSTER;\n WARNING: permission denied to cluster \"ptnowner1\", skipping it\n CLUSTER\n\n\nI mean that the silent behavior is not very helpful.\n\n\nThis is the current behavior of the CLUSTER command and current\n patch adds a sentence about the silent behavior in the\n documentation. This is good but I just want to ask if we could\n want to fix this behavior too or just keep things like that with the lack of noise.\n \n\n\n\nBest regards,\n\n-- \nGilles Darold",
"msg_date": "Wed, 4 Jan 2023 14:25:13 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Jan 04, 2023 at 02:25:13PM +0100, Gilles Darold wrote:\n> This is the current behavior of the CLUSTER command and current patch adds a\n> sentence about the silent behavior in the documentation. This is good but I\n> just want to ask if we could want to fix this behavior too or just keep\n> things like that with the lack of noise.\n\nI've proposed something like what you are describing in another thread [0].\nI intended to simply fix and document the current behavior in this thread\nand to take up any new changes in the other one.\n\n[0] https://commitfest.postgresql.org/41/4070/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:18:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Le 04/01/2023 à 19:18, Nathan Bossart a écrit :\n> On Wed, Jan 04, 2023 at 02:25:13PM +0100, Gilles Darold wrote:\n>> This is the current behavior of the CLUSTER command and current patch adds a\n>> sentence about the silent behavior in the documentation. This is good but I\n>> just want to ask if we could want to fix this behavior too or just keep\n>> things like that with the lack of noise.\n> I've proposed something like what you are describing in another thread [0].\n> I intended to simply fix and document the current behavior in this thread\n> and to take up any new changes in the other one.\n>\n> [0] https://commitfest.postgresql.org/41/4070/\n\n\nGot it, this is patch add_cluster_skip_messages.patch . IMHO this patch \nshould be part of this commitfest as it is directly based on this one. \nYou could create a second patch here that adds the warning message so \nthat committers can decide here if it should be applied.\n\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Wed, 4 Jan 2023 23:27:05 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Jan 04, 2023 at 11:27:05PM +0100, Gilles Darold wrote:\n> Got it, this is patch add_cluster_skip_messages.patch . IMHO this patch\n> should be part of this commitfest as it is directly based on this one. You\n> could create a second patch here that adds the warning message so that\n> committers can decide here if it should be applied.\n\nThat's fine with me. I added the warning messages in v4.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 4 Jan 2023 21:12:35 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Le 05/01/2023 à 06:12, Nathan Bossart a écrit :\n> On Wed, Jan 04, 2023 at 11:27:05PM +0100, Gilles Darold wrote:\n>> Got it, this is patch add_cluster_skip_messages.patch . IMHO this patch\n>> should be part of this commitfest as it is directly based on this one. You\n>> could create a second patch here that adds the warning message so that\n>> committers can decide here if it should be applied.\n> That's fine with me. I added the warning messages in v4.\n\n\nThis is a bit confusing, this commitfest entry patch is also included in \nan other commitfest entry [1] into patch \nv3-0001-fix-maintain-privs.patch with some additional conditions.\n\n\nCommitters should be aware that this commitfest entry must be withdrawn \nif [1] is committed first. There is no status or dependency field that \nI can use, I could move this one to \"Ready for Committer\" status but \nthis is not exact until [1] has been committed or withdrawn.\n\n\n[1] https://commitfest.postgresql.org/41/4070/\n\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Thu, 5 Jan 2023 14:38:58 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Thu, Jan 05, 2023 at 02:38:58PM +0100, Gilles Darold wrote:\n> This is a bit confusing, this commitfest entry patch is also included in an\n> other commitfest entry [1] into patch v3-0001-fix-maintain-privs.patch with\n> some additional conditions.\n> \n> Committers should be aware that this commitfest entry must be withdrawn if\n> [1] is committed first.� There is no status or dependency field that I can\n> use, I could move this one to \"Ready for Committer\" status but this is not\n> exact until [1] has been committed or withdrawn.\n\nI will either rebase the other patch or discard this one as needed. I'm\nnot positive that we'll proceed with the proposed approach for the other\none, but the patch tracked here should still be worthwhile regardless.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 09:11:42 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Apparently I forgot to run all the tests before posting v4. Here is a new\nversion of the patch that should pass all tests.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 5 Jan 2023 16:26:46 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Le 06/01/2023 à 01:26, Nathan Bossart a écrit :\n> Apparently I forgot to run all the tests before posting v4. Here is a new\n> version of the patch that should pass all tests.\n\nReview status:\n\n\nThe patch applies and compiles without issues, make check and \ncheckinstall tests are running without error.\n\nIt aim to limit the permission check to run the CLUSTER command on a \npartition to ownership and the MAINTAIN privilege. Which it actually does.\n\nIn commit 3f19e17, to have CLUSTER ignore partitions not owned by \ncaller, there was still a useless check of database ownership or shared \nrelation in get_tables_to_cluster_partitioned().\n\n\nDocumentation have been updated to explain the conditions of a \nsuccessful execution of the CLUSTER command.\n\n\nAdditionally this patch also adds a warning when a partition is skipped \ndue to lack of permission just like VACUUM is doing:\n\n WARNING: permission denied to vacuum \"ptnowner2\", skipping it\n\nwith CLUSTER now we have the same message:\n\n WARNING: permission denied to cluster \"ptnowner2\", skipping it\n\nPrevious behavior was to skip the partition silently.\n\n\nTests on the CLUSTER command have been modified to skip warning messages \nexcept partially in src/test/regress/sql/cluster.sql to validate the \npresence of the warning.\n\n\nI'm moving this commitfest entry to Ready for Committers.\n\n\nRegards,\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 14:22:26 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 02:22:26PM +0100, Gilles Darold wrote:\n> I'm moving this commitfest entry to Ready for Committers.\n\nThank you for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 11 Jan 2023 09:54:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "Le 11/01/2023 à 18:54, Nathan Bossart a écrit :\n> On Wed, Jan 11, 2023 at 02:22:26PM +0100, Gilles Darold wrote:\n>> I'm moving this commitfest entry to Ready for Committers.\n> Thank you for reviewing.\n>\nI have changed the status to \"Returned with feedback\" as per commit \nff9618e8 this patch might not be applied anymore if I have well understood.\n\n\nNathan, please confirm and fix the status of this commit fest entry.\n\n\nBest regards,\n\n-- \nGilles Darold\n\n\n\n",
"msg_date": "Sat, 14 Jan 2023 10:40:40 +0100",
"msg_from": "Gilles Darold <gilles@migops.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Sat, Jan 14, 2023 at 10:40:40AM +0100, Gilles Darold wrote:\n> Nathan, please confirm and fix the status of this commit fest entry.\n\nYes, thank you for taking care of this. I believe the only changes in this\npatch that didn't make it into ff9618e are the following documentation\nadjustments. I've added Jeff to get his thoughts.\n\ndiff --git a/doc/src/sgml/ref/cluster.sgml b/doc/src/sgml/ref/cluster.sgml\nindex b9f2acb1de..29f0f1fd90 100644\n--- a/doc/src/sgml/ref/cluster.sgml\n+++ b/doc/src/sgml/ref/cluster.sgml\n@@ -67,7 +67,8 @@ CLUSTER [VERBOSE]\n </para>\n \n <para>\n- <command>CLUSTER</command> without any parameter reclusters all the\n+ <command>CLUSTER</command> without a\n+ <replaceable class=\"parameter\">table_name</replaceable> reclusters all the\n previously-clustered tables in the current database that the calling user\n has privileges for. This form of <command>CLUSTER</command> cannot be\n executed inside a transaction block.\n@@ -211,7 +212,8 @@ CLUSTER [VERBOSE]\n <para>\n Clustering a partitioned table clusters each of its partitions using the\n partition of the specified partitioned index. When clustering a partitioned\n- table, the index may not be omitted.\n+ table, the index may not be omitted. <command>CLUSTER</command> on a\n+ partitioned table cannot be executed inside a transaction block.\n </para>\n \n </refsect1>\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 14 Jan 2023 14:40:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Sat, 2023-01-14 at 14:40 -0800, Nathan Bossart wrote:\n> On Sat, Jan 14, 2023 at 10:40:40AM +0100, Gilles Darold wrote:\n> > Nathan, please confirm and fix the status of this commit fest\n> > entry.\n> \n> Yes, thank you for taking care of this. I believe the only changes\n> in this\n> patch that didn't make it into ff9618e are the following\n> documentation\n> adjustments. I've added Jeff to get his thoughts.\n\nCommitted these extra clarifications. Thank you.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 25 Jan 2023 20:27:57 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: fix and document CLUSTER privileges"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 08:27:57PM -0800, Jeff Davis wrote:\n> Committed these extra clarifications. Thank you.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 10:01:16 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: fix and document CLUSTER privileges"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe cost of an Aggregate node seems to be the same regardless of the \ninput being pre-sorted or not. If however the input is not sorted, the \nAggregate node must additionally perform a sort which can impact runtime \nsignificantly. Here is an example:\n\nCREATE TABLE foo(col0 INT, col1 TEXT);\nINSERT INTO foo SELECT generate_series(1, 100000), \n'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' || md5(random()::TEXT);\nCREATE INDEX foo_idx ON foo(col1);\nSET max_parallel_workers_per_gather = 0;\nSET enable_bitmapscan = FALSE;\n\n-- Unsorted input. Aggregate node must additionally sort all rows.\nEXPLAIN ANALYZE SELECT COUNT(DISTINCT(col1)) FROM foo;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2584.00..2584.01 rows=1 width=8) (actual \ntime=1684.705..1684.809 rows=1 loops=1)\n -> Seq Scan on foo (cost=0.00..2334.00 rows=100000 width=71) \n(actual time=0.018..343.280 rows=100000 loops=1)\n Planning Time: 0.317 ms\n Execution Time: 1685.543 ms\n\n\n-- Pre-sorted input. Aggregate node doesn't have to sort all rows.\nSET enable_seqscan = FALSE;\nEXPLAIN ANALYZE SELECT COUNT(DISTINCT(col1)) FROM foo;\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=6756.42..6756.43 rows=1 width=8) (actual \ntime=819.028..819.041 rows=1 loops=1)\n -> Index Only Scan using foo_idx on foo (cost=6506.42..6506.42 \nrows=100000 width=71) (actual time=0.046..404.260 rows=100000 loops=1)\n Heap Fetches: 100000\n Heap Prefetches: 1334\n Planning Time: 0.438 ms\n Execution Time: 819.515 ms\n\nThe cost of the Aggregate node is in both cases the same (250.0) while \nits runtime is 1.3s in the unsorted case and 0.4s in the pre-sorted case.\n\nAlso, why does the Aggregate node sort itself? 
Why don't we instead emit \nan explicit Sort node in the plan so that it's visible to the user what \nis happening? As soon as there's also a GROUP BY in the query, a Sort \nnode occurs in the plan. This seems inconsistent.\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 10:06:22 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Aggregate node doesn't include cost for sorting"
},
{
"msg_contents": "On Thu, 8 Dec 2022 at 22:06, David Geier <geidav.pg@gmail.com> wrote:\n> The cost of an Aggregate node seems to be the same regardless of the\n> input being pre-sorted or not. If however the input is not sorted, the\n> Aggregate node must additionally perform a sort which can impact runtime\n> significantly. Here is an example:\n>\n> -- Unsorted input. Aggregate node must additionally sort all rows.\n> EXPLAIN ANALYZE SELECT COUNT(DISTINCT(col1)) FROM foo;\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=2584.00..2584.01 rows=1 width=8) (actual\n> time=1684.705..1684.809 rows=1 loops=1)\n> -> Seq Scan on foo (cost=0.00..2334.00 rows=100000 width=71)\n\nThis output surely must be from a version of PostgreSQL prior to\n1349d279? I can't quite figure out why you've added a \"SET\nenable_seqscan = FALSE;\". That makes it look like you've used the same\nversion of PostgreSQL to produce both of these plans. The 2nd plan\nyou've shown must be post 1349d279.\n\nSo, with the assumption that you've used 2 different versions to show\nthis output, for post 1349d279, there does not seem to be much choice\non how the aggregate is executed. What's your concern about the\ncostings having to be accurate given there's no other plan choice?\n\nThe new pre-sorted aggregate code will always request the sort order\nthat suits the largest number of ORDER BY / DISTINCT aggregates. When\nthere are multiple ORDER BY / DISTINCT aggregates and they have\ndifferent sort requirements then there certainly are competing ways\nthat the aggregation portion of the plan can be executed. I opted to\nmake the choice just based on the number of aggregates that could\nbecome presorted. nodeAgg.c is currently not very smart about sharing\nsorts between multiple aggregates with the same sort requirements. If\nthat was made better, there might be more motivation to have better\ncosting code in make_pathkeys_for_groupagg(). However, the reasons\nfor the reversion of db0d67db seemed to be mainly around the lack of\nability to accurately cost multiple competing sort orders. We'd need\nto come up with some better way to do that if we were to want to give\nmake_pathkeys_for_groupagg() similar abilities.\n\n> Also, why does the Aggregate node sort itself? Why don't we instead emit\n> an explicit Sort node in the plan so that it's visible to the user what\n> is happening? As soon as there's also a GROUP BY in the query, a Sort\n> node occurs in the plan. This seems inconsistent.\n\nPost 1349d279 we do that, but it can only do it for 1 sort order.\nThere can be any number of aggregate functions which require a sort\nand they don't all have to have the same sort order requirements. We\ncan't do the same as WindowAgg does by chaining nodes together either\nbecause the aggregate node aggregates the results and we'd need all\nthe same input rows to be available at each step.\n\nThe only other way would be to have it so an Aggregate node could be\nfed by multiple different input nodes and then it would only work on\nthe aggregates that suit the given input before reading the next input\nand doing the other aggregates. Making it work like that would cause\nquite a bit of additional effort during planning (not to mention the\nexecutor). We'd have to run the join search once per required order,\nwhich is one of the slowest parts of planning. Right now, you could\nprobably make that work by just writing the SQL to have a subquery per\nsort requirement.\n\nDavid\n\n\n",
"msg_date": "Thu, 8 Dec 2022 23:40:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate node doesn't include cost for sorting"
},
{
"msg_contents": "Hi David,\n\nThanks for the explanations and awesome that this was improved on already.\nI didn't have this change on my radar.\n\nOn 12/8/22 11:40, David Rowley wrote:\n>\n> This output surely must be from a version of PostgreSQL prior to\n> 1349d279? I can't quite figure out why you've added a \"SET\n> enable_seqscan = FALSE;\". That makes it look like you've used the same\n> version of PostgreSQL to produce both of these plans. The 2nd plan\n> you've shown must be post 1349d279.\n\n\nBoth plans were captured on 14.5, which is indeed prior to 1349d279.\n\nI disabled sequential scan to show that there's an alternative plan \nwhich is superior to the chosen plan: Index Only Scan is more expensive \nand takes longer than the Seq Scan, but the subsequent Aggregate runs \nmuch faster as it doesn't have to sort, making the plan overall superior.\n\n>\n> So, with the assumption that you've used 2 different versions to show\n> this output, for post 1349d279, there does not seem to be much choice\n> on how the aggregate is executed. What's your concern about the\n> costings having to be accurate given there's no other plan choice?\n\nThere's another plan choice which is using the index to get pre-sorted \ninput rows, see previous comment.\n\n>\n> The new pre-sorted aggregate code will always request the sort order\n> that suits the largest number of ORDER BY / DISTINCT aggregates. When\n> there are multiple ORDER BY / DISTINCT aggregates and they have\n> different sort requirements then there certainly are competing ways\n> that the aggregation portion of the plan can be executed. I opted to\n> make the choice just based on the number of aggregates that could\n> become presorted. nodeAgg.c is currently not very smart about sharing\n> sorts between multiple aggregates with the same sort requirements. If\n> that was made better, there might be more motivation to have better\n> costing code in make_pathkeys_for_groupagg(). However, the reasons\n> for the reversion of db0d67db seemed to be mainly around the lack of\n> ability to accurately cost multiple competing sort orders. We'd need\n> to come up with some better way to do that if we were to want to give\n> make_pathkeys_for_groupagg() similar abilities.\n>\n>> Also, why does the Aggregate node sort itself? Why don't we instead emit\n>> an explicit Sort node in the plan so that it's visible to the user what\n>> is happening? As soon as there's also a GROUP BY in the query, a Sort\n>> node occurs in the plan. This seems inconsistent.\n> Post 1349d279 we do that, but it can only do it for 1 sort order.\n> There can be any number of aggregate functions which require a sort\n> and they don't all have to have the same sort order requirements. We\n> can't do the same as WindowAgg does by chaining nodes together either\n> because the aggregate node aggregates the results and we'd need all\n> the same input rows to be available at each step.\n>\n> The only other way would be to have it so an Aggregate node could be\n> fed by multiple different input nodes and then it would only work on\n> the aggregates that suit the given input before reading the next input\n> and doing the other aggregates. Making it work like that would cause\n> quite a bit of additional effort during planning (not to mention the\n> executor). We'd have to run the join search once per required order,\n> which is one of the slowest parts of planning. Right now, you could\n> probably make that work by just writing the SQL to have a subquery per\n> sort requirement.\n\nThanks for the explanation!\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Thu, 8 Dec 2022 13:11:59 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Aggregate node doesn't include cost for sorting"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> So, with the assumption that you've used 2 different versions to show\n> this output, for post 1349d279, there does not seem to be much choice\n> on how the aggregate is executed. What's your concern about the\n> costings having to be accurate given there's no other plan choice?\n\nIt's true that the cost attributed to the Agg node won't impact any\nlive decisions in the plan level in which it appears. However, if\nthat's a subquery, then the total cost attributed to the subplan\ncould in principle affect plan choices in the outer query. So there\nis a valid argument for wanting to try to get it right.\n\nHaving said that, I think that the actual impact on outer-level choices\nis usually minimal. So it didn't bother me that we swept this under\nthe rug before --- and I'm pretty sure that we're sweeping similar\nthings under the rug elsewhere in top-of-query planning. However,\ngiven 1349d279 it should surely be true that the planner knows how many\nsorts it's left to be done at runtime (a number we did not have at hand\nbefore). So it seems like it ought to be simple enough to account for\nthis effect more accurately. I'd be in favor of doing so if it's\nsimple and cheap to get the numbers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Dec 2022 09:38:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate node doesn't include cost for sorting"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 01:12, David Geier <geidav.pg@gmail.com> wrote:\n> Both plans were captured on 14.5, which is indeed prior to 1349d279.\n>\n> I disabled sequential scan to show that there's an alternative plan\n> which is superior to the chosen plan: Index Only Scan is more expensive\n> and takes longer than the Seq Scan, but the subsequent Aggregate runs\n> much faster as it doesn't have to sort, making the plan overall superior.\n\nAha, 14.5. What's going on there is that it's still doing the sort.\nThe aggregate code in that version does not skip the sort because of\nthe presorted input. A likely explanation for the performance increase\nis due to the presorted check in our qsort implementation. The\nsuccessful presort check is O(N), whereas an actual sort is O(N *\nlogN).\n\nIt's true that if we had been doing proper costing on these ORDER BY /\nDISTINCT aggregates that we could have noticed that the input path's\npathkeys indicate that no sort is required and costed accordingly, but\nif we'd gone to the trouble of factoring that into the costs, then it\nwould also have made sense to make nodeAgg.c not sort on presorted\ninput. We got the latter in 1349d279. It's just we didn't do anything\nabout the costings in that commit.\n\nAnyway, in the next version of Postgres, the planner is highly likely\nto choose the 2nd plan in your original email. It'll also be even\nfaster than you've shown due to the aggregate code not having to store\nand read tuples in the tuplesort object. Also, no O(N) presort check\neither. The performance should be much closer to what it would be if\nyou disabled seqscan and dropped the DISTINCT out of your aggregate.\n\nDavid\n\n\n",
"msg_date": "Fri, 9 Dec 2022 09:56:43 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate node doesn't include cost for sorting"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 03:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's true that the cost attributed to the Agg node won't impact any\n> live decisions in the plan level in which it appears. However, if\n> that's a subquery, then the total cost attributed to the subplan\n> could in principle affect plan choices in the outer query. So there\n> is a valid argument for wanting to try to get it right.\n\nI guess the jit thresholds are another reason to try to make the costs\na reflection of the expected run-time too.\n\n> Having said that, I think that the actual impact on outer-level choices\n> is usually minimal. So it didn't bother me that we swept this under\n> the rug before --- and I'm pretty sure that we're sweeping similar\n> things under the rug elsewhere in top-of-query planning. However,\n> given 1349d279 it should surely be true that the planner knows how many\n> sorts it's left to be done at runtime (a number we did not have at hand\n> before). So it seems like it ought to be simple enough to account for\n> this effect more accurately. I'd be in favor of doing so if it's\n> simple and cheap to get the numbers.\n\nOk, probably Heikki's work in 0a2bc5d61 is a more useful piece of work\nto get us closer to that goal. I think all that's required to make it\nwork is adding on the costs in the final foreach loop in\nget_agg_clause_costs(). The Aggrefs have already been marked as\naggpresorted by that time, so it should be a matter of:\n\nif ((aggref->aggorder != NIL || aggref->aggdistinct != NIL) &&\n!aggref->aggpresorted)\n // add costs for sort\n\nDavid\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:24:00 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Aggregate node doesn't include cost for sorting"
}
]
[
{
"msg_contents": "I have been playing around with making updatable views support MERGE,\nand it looks to be fairly straightforward.\n\nI'm intending to support auto-updatable views, WITH CHECK OPTION, and\ntrigger-updatable views, but not views with rules, as I think that\nwould be more trouble than it's worth.\n\nPer the SQL standard, if the view isn't auto-updatable, it requires\nthe appropriate INSTEAD OF INSERT/UPDATE/DELETE triggers to perform\nthe merge actions. One limitation with the current patch is that it\nwill only work if the view is either auto-updatable with no INSTEAD OF\ntriggers, or it has a full set of INSTEAD OF triggers for all\nINSERT/UPDATE/DELETE actions mentioned in the MERGE command. It\ndoesn't support a mix of those 2 cases (i.e., a partial set of INSTEAD\nOF triggers, such as an INSTEAD OF INSERT trigger only, on an\notherwise auto-updatable view). Perhaps it will be possible to\novercome that limitation in the future, but I think that it will be\nhard.\n\nIn practice though, I think that this shouldn't be very limiting -- I\nthink it's uncommon for people to define INSTEAD OF triggers on\nauto-updatable views, and if they do, they just need to be sure to\nprovide a full set.\n\nAttached is a WIP patch, which I'll add to the next CF. I still need\nto do more testing, and update the docs, but so far, everything\nappears to work.\n\nRegards,\nDean",
"msg_date": "Thu, 8 Dec 2022 10:03:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Supporting MERGE on updatable views"
},
{
"msg_contents": "On Thu, 8 Dec 2022 at 10:03, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Attached is a WIP patch, which I'll add to the next CF. I still need\n> to do more testing, and update the docs, but so far, everything\n> appears to work.\n>\n\nNew patch attached with doc updates and a few other, mostly cosmetic, changes.\n\nOne notable change is that I realised that the check for rules on the\ntarget table needs to be done in the rewriter, rather than the parser,\nin case expanding a view hierarchy leads to base relations with rules\nthat the parser wouldn't notice.\n\nRegards,\nDean",
"msg_date": "Wed, 21 Dec 2022 20:04:26 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 20:04, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> New patch attached with doc updates and a few other, mostly cosmetic, changes.\n>\n\nNew version fixing a bug in preprocess_targetlist() -- given a simple\nauto-updatable view that also has INSTEAD OF triggers, subquery pullup\nof the target may produce PlaceHolderVars in MERGE WHEN clauses, which\nthe former code wasn't expecting to find.\n\nRegards,\nDean",
"msg_date": "Fri, 30 Dec 2022 12:47:23 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Fri, 30 Dec 2022 at 12:47, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> New version fixing a bug in preprocess_targetlist().\n>\n\nRebased version, following 5d29d525ff.\n\nRegards,\nDean",
"msg_date": "Sat, 21 Jan 2023 11:03:08 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Sat, 21 Jan 2023 at 11:03, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version, following 5d29d525ff.\n>\n\nUpdated version attached.\n\nThis needed a little extra tweaking to work following the change to\nmake Vars outer-join aware, so it's worth checking that I understood\nthat properly -- when merging into a trigger-updatable view, the\nwhole-row Var added to the targetlist by the rewriter is nullable by\nthe join added by transform_MERGE_to_join().\n\nThe make-Vars-outer-join-aware patch has possibly made this patch's\nchange to preprocess_targetlist() unnecessary, but I left it in just\nin case, even though I can no longer trigger that failure mode. It\nfeels safer, and more consistent with the code later on in that\nfunction.\n\nRegards,\nDean",
"msg_date": "Tue, 7 Feb 2023 10:03:46 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Tue, 7 Feb 2023 at 10:03, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Updated version attached.\n>\n\nRebased version attached.\n\nRegards,\nDean",
"msg_date": "Fri, 24 Feb 2023 05:43:53 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Fri, 24 Feb 2023 at 05:43, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached.\n>\n\nAnother rebase.\n\nRegards,\nDean",
"msg_date": "Mon, 13 Mar 2023 13:00:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Mon, 13 Mar 2023 at 13:00, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Another rebase.\n>\n\nAnd another rebase.\n\nRegards,\nDean",
"msg_date": "Sun, 19 Mar 2023 09:11:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Sun, 19 Mar 2023 at 09:11, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> And another rebase.\n>\n\nRebased version attached.\n\nAside from a little minor tidying up, I have renamed the new field on\nthe Query to \"mergeTargetRelation\", which is a little more consistent\nwith the naming of existing fields, and with the \"mergeSourceRelation\"\nfield that the \"WHEN NOT MATCHED BY SOURCE\" patch adds.\n\nRegards,\nDean",
"msg_date": "Sun, 2 Jul 2023 11:29:29 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "hi.\nExcellent work! regress test passed! The code looks so intuitive!\n\ndoc/src/sgml/ref/create_view.sgml.\nDo we need to add <command>MERGE</command> for the following sentence?\n\nIf the view or any of its base\n relations has an <literal>INSTEAD</literal> rule that causes the\n <command>INSERT</command> or <command>UPDATE</command> command\nto be rewritten, then\n all check options will be ignored in the rewritten query, including any\n checks from automatically updatable views defined on top of the relation\n with the <literal>INSTEAD</literal> rule.\n\n\nin src/backend/executor/nodeModifyTable.c line 3800: ExecModifyTable\n`\ndatum = ExecGetJunkAttribute(slot,resultRelInfo->ri_RowIdAttNo,&isNull);\n.....\noldtupdata.t_data = DatumGetHeapTupleHeader(datum);\noldtupdata.t_len = HeapTupleHeaderGetDatumLength(oldtupdata.t_data);\n`\nIn ExecGetJunkAttribute(slot,resultRelInfo->ri_RowIdAttNo,&isNull);\n\nmust resultRelInfo->ri_RowIdAttNo be 1 to make sure\nDatumGetHeapTupleHeader(datum) works?\n(I am not familiar with this part.....)\n\n\n",
"msg_date": "Sat, 28 Oct 2023 16:34:51 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Sat, 28 Oct 2023 at 09:35, jian he <jian.universality@gmail.com> wrote:\n>\n> hi.\n> Excellent work! regress test passed! The code looks so intuitive!\n>\n\nThanks for looking!\n\n> doc/src/sgml/ref/create_view.sgml.\n> Do we need to add <<command>MERGE</command> for the following sentence?\n>\n> If the view or any of its base\n> relations has an <literal>INSTEAD</literal> rule that causes the\n> <command>INSERT</command> or <command>UPDATE</command> command\n> to be rewritten, then\n> all check options will be ignored in the rewritten query, including any\n> checks from automatically updatable views defined on top of the relation\n> with the <literal>INSTEAD</literal> rule.\n>\n\nWe don't want to include MERGE in that sentence, because MERGE isn't\nsupported on views or tables with rules, but I guess we could add\nanother sentence after that one, to make that clear.\n\n> in src/backend/executor/nodeModifyTable.c line 3800: ExecModifyTable\n> `\n> datum = ExecGetJunkAttribute(slot,resultRelInfo->ri_RowIdAttNo,&isNull);\n> .....\n> oldtupdata.t_data = DatumGetHeapTupleHeader(datum);\n> oldtupdata.t_len = HeapTupleHeaderGetDatumLength(oldtupdata.t_data);\n> `\n> In ExecGetJunkAttribute(slot,resultRelInfo->ri_RowIdAttNo,&isNull);\n>\n> does resultRelInfo->ri_RowIdAttNo must be 1 to make sure\n> DatumGetHeapTupleHeader(datum) works?\n> (I am not familiar with this part.....)\n\nWell, it's not necessarily 1. It's whatever the attribute number of\nthe \"wholerow\" attribute is, which can vary. \"datum\" is then set to\nthe value of the \"wholerow\" attribute, which is the OLD tuple, so\nusing DatumGetHeapTupleHeader() makes sense. This relies on the code\nin ExecInitModifyTable(), which sets up ri_RowIdAttNo.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 29 Oct 2023 17:17:34 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Sun, 29 Oct 2023 at 17:17, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sat, 28 Oct 2023 at 09:35, jian he <jian.universality@gmail.com> wrote:\n> >\n> > Do we need to add <<command>MERGE</command> for the following sentence?\n> >\n> We don't want to include MERGE in that sentence, because MERGE isn't\n> supported on views or tables with rules, but I guess we could add\n> another sentence after that one, to make that clear.\n>\n\nHere's an updated patch doing that, plus another couple of minor\nupdates to that page.\n\nI also noticed that the code to detect rules ought to ignore disabled\nrules, so I've updated it to do so, and added a new regression test to\ncover that case.\n\nArguably that's a pre-existing bug, so the fix could be extracted and\napplied separately, but I'm not sure that it's worth the effort.\n\nRegards,\nDean",
"msg_date": "Mon, 30 Oct 2023 09:33:53 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Mon, 30 Oct 2023 at 15:04, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sun, 29 Oct 2023 at 17:17, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > On Sat, 28 Oct 2023 at 09:35, jian he <jian.universality@gmail.com> wrote:\n> > >\n> > > Do we need to add <<command>MERGE</command> for the following sentence?\n> > >\n> > We don't want to include MERGE in that sentence, because MERGE isn't\n> > supported on views or tables with rules, but I guess we could add\n> > another sentence after that one, to make that clear.\n> >\n>\n> Here's an updated patch doing that, plus another couple of minor\n> updates to that page.\n>\n> I also noticed that the code to detect rules ought to ignore disabled\n> rules, so I've updated it to do so, and added a new regression test to\n> cover that case.\n>\n> Arguably that's a pre-existing bug, so the fix could be extracted and\n> applied separately, but I'm not sure that it's worth the effort.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n\n=== Applying patches on top of PostgreSQL commit ID\n55627ba2d334ce98e1f5916354c46472d414bda6 ===\n=== applying patch ./support-merge-into-view-v10.patch\n....\npatching file src/backend/executor/nodeModifyTable.c\n...\nHunk #7 FAILED at 2914.\n...\n1 out of 15 hunks FAILED -- saving rejects to file\nsrc/backend/executor/nodeModifyTable.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4076.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 20:31:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 15:02, vignesh C <vignesh21@gmail.com> wrote:\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n>\n\nRebased version attached.\n\nRegards,\nDean",
"msg_date": "Fri, 26 Jan 2024 15:51:24 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "Thanks for working on this. The patch looks well finished. I didn't\ntry to run it, though. I gave it a read and found nothing to complain\nabout, just these two pretty minor comments:\n\nOn 2024-Jan-26, Dean Rasheed wrote:\n\n> diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c\n> new file mode 100644\n> index 13a9b7d..5a99e4a\n> --- a/src/backend/executor/execMain.c\n> +++ b/src/backend/executor/execMain.c\n> @@ -1021,11 +1021,13 @@ InitPlan(QueryDesc *queryDesc, int eflag\n> * CheckValidRowMarkRel.\n> */\n> void\n> -CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation)\n> +CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation,\n> +\t\t\t\t\tList *mergeActions)\n> {\n\nI suggest to add commentary explaining the new argument of this function.\n\n> @@ -1080,6 +1083,51 @@ CheckValidResultRel(ResultRelInfo *resul\n> \t\t\t\t\t\t\t\t\t\tRelationGetRelationName(resultRel)),\n> \t\t\t\t\t\t\t\t errhint(\"To enable deleting from the view, provide an INSTEAD OF DELETE trigger or an unconditional ON DELETE DO INSTEAD rule.\")));\n> \t\t\t\t\tbreak;\n> +\t\t\t\tcase CMD_MERGE:\n> +\n> +\t\t\t\t\t/*\n> +\t\t\t\t\t * Must have a suitable INSTEAD OF trigger for each MERGE\n> +\t\t\t\t\t * action. Note that the error hints here differ from\n> +\t\t\t\t\t * above, since MERGE doesn't support rules.\n> +\t\t\t\t\t */\n> +\t\t\t\t\tforeach(lc, mergeActions)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tMergeAction *action = (MergeAction *) lfirst(lc);\n> +\n> +\t\t\t\t\t\tswitch (action->commandType)\n> +\t\t\t\t\t\t{\n> +\t\t\t\t\t\t\tcase CMD_INSERT:\n> +\t\t\t\t\t\t\t\tif (!trigDesc || !trigDesc->trig_insert_instead_row)\n> +\t\t\t\t\t\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t\t\t\t\t\t(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> +\t\t\t\t\t\t\t\t\t\t\t errmsg(\"cannot insert into view \\\"%s\\\"\",\n> +\t\t\t\t\t\t\t\t\t\t\t\t\tRelationGetRelationName(resultRel)),\n> +\t\t\t\t\t\t\t\t\t\t\t errhint(\"To enable inserting into the view using MERGE, provide an INSTEAD OF INSERT trigger.\")));\n\nLooking at the comments in ereport_view_not_updatable and the code\ncoverage reports, it appears that these checks here are dead code? If\nso, maybe it would be better to turn them into elog() calls? I don't\nmean to touch the existing code, just that I don't see that it makes\nsense to introduce the new ones as ereport().\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La persona que no quería pecar / estaba obligada a sentarse\n en duras y empinadas sillas / desprovistas, por cierto\n de blandos atenuantes\" (Patricio Vogel)\n\n\n",
"msg_date": "Fri, 26 Jan 2024 17:48:02 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 16:48, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Thanks for working on this. The patch looks well finished. I didn't\n> try to run it, though. I gave it a read and found nothing to complain\n> about, just these two pretty minor comments:\n>\n\nThanks for reviewing.\n\n> > -CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation)\n> > +CheckValidResultRel(ResultRelInfo *resultRelInfo, CmdType operation,\n> > + List *mergeActions)\n> > {\n>\n> I suggest to add commentary explaining the new argument of this function.\n>\n\nOK.\n\n> > @@ -1080,6 +1083,51 @@ CheckValidResultRel(ResultRelInfo *resul\n> > RelationGetRelationName(resultRel)),\n> > errhint(\"To enable deleting from the view, provide an INSTEAD OF DELETE trigger or an unconditional ON DELETE DO INSTEAD rule.\")));\n> > break;\n> > + case CMD_MERGE:\n> > +\n> > + /*\n> > + * Must have a suitable INSTEAD OF trigger for each MERGE\n> > + * action. Note that the error hints here differ from\n> > + * above, since MERGE doesn't support rules.\n> > + */\n> > + foreach(lc, mergeActions)\n> > + {\n> > + MergeAction *action = (MergeAction *) lfirst(lc);\n> > +\n> > + switch (action->commandType)\n> > + {\n> > + case CMD_INSERT:\n> > + if (!trigDesc || !trigDesc->trig_insert_instead_row)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"cannot insert into view \\\"%s\\\"\",\n> > + RelationGetRelationName(resultRel)),\n> > + errhint(\"To enable inserting into the view using MERGE, provide an INSTEAD OF INSERT trigger.\")));\n>\n> Looking at the comments in ereport_view_not_updatable and the code\n> coverage reports, it appears that these checks here are dead code? If\n> so, maybe it would be better to turn them into elog() calls? I don't\n> mean to touch the existing code, just that I don't see that it makes\n> sense to introduce the new ones as ereport().\n>\n\nYeah, for all practical purposes, that check in CheckValidResultRel()\nhas been dead code since d751ba5235, but I think it's still worth\ndoing, and if we're going to do it, we should do it properly. I don't\nlike using elog() in some cases and ereport() in others, but I also\ndon't like having that much duplicated code between this and the\nrewriter (and this patch doubles the size of that block).\n\nA neater solution is to export the rewriter functions and use them in\nCheckValidResultRel(). All these checks can then be reduced to\n\n if (!view_has_instead_trigger(...))\n error_view_not_updatable(...)\n\nwhich eliminates a lot of duplicated code and means that we now have\njust one place that throws these errors.\n\nRegards,\nDean",
"msg_date": "Mon, 29 Jan 2024 09:44:21 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On 2024-Jan-29, Dean Rasheed wrote:\n\n> Yeah, for all practical purposes, that check in CheckValidResultRel()\n> has been dead code since d751ba5235, but I think it's still worth\n> doing, and if we're going to do it, we should do it properly. I don't\n> like using elog() in some cases and ereport() in others, but I also\n> don't like having that much duplicated code between this and the\n> rewriter (and this patch doubles the size of that block).\n> \n> A neater solution is to export the rewriter functions and use them in\n> CheckValidResultRel(). All these checks can then be reduced to\n> \n> if (!view_has_instead_trigger(...))\n> error_view_not_updatable(...)\n> \n> which eliminates a lot of duplicated code and means that we now have\n> just one place that throws these errors.\n\nThis looks quite nice, thanks. LGTM.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n",
"msg_date": "Tue, 30 Jan 2024 12:58:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On Tue, 30 Jan 2024 at 11:58, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> This looks quite nice, thanks. LGTM.\n>\n\nGoing over this again, I spotted a bug -- the UPDATE path in\nExecMergeMatched() wasn't calling ExecUpdateEpilogue() for\ntrigger-updatable views, because it wasn't setting updateCxt.updated\nto true. (This matters if you have an auto-updatable view WITH CHECK\nOPTION on top of a trigger-updatable view, so I've added a new test\nthere.)\n\nRather than setting updateCxt.updated to true, making the\ntrigger-invoking code in ExecMergeMatched() diverge from ExecUpdate(),\na better fix is to simply remove the UpdateContext.updated flag\nentirely. The only place that reads it is this code in\nExecMergeMatched():\n\n if (result == TM_Ok && updateCxt.updated)\n {\n ExecUpdateEpilogue(context, &updateCxt, resultRelInfo,\n tupleid, NULL, newslot);\n\nwhere result is the result from ExecUpdateAct(). However, all paths\nthrough ExecUpdateAct() that return TM_Ok also set updateCxt.updated\nto true, so the flag is redundant. It looks like that has always been\nthe case, ever since it was introduced. Getting rid of it is a useful\nsimplification, and brings the UPDATE path in ExecMergeMatched() more\ninto line with ExecUpdate(), which always calls ExecUpdateEpilogue()\nif ExecUpdateAct() returns TM_Ok.\n\nAside from that, I've done a little more copy-editing and added a few\nmore test cases, and I think this is pretty-much good to go, though I\nthink I'll split this up into separate commits, since removing\nUpdateContext.updated isn't really related to the MERGE INTO view\nfeature.\n\nRegards,\nDean",
"msg_date": "Thu, 29 Feb 2024 09:36:09 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
{
"msg_contents": "On 2024-Feb-29, Dean Rasheed wrote:\n\n> Going over this again, I spotted a bug -- the UPDATE path in\n> ExecMergeMatched() wasn't calling ExecUpdateEpilogue() for\n> trigger-updatable views, because it wasn't setting updateCxt.updated\n> to true. (This matters if you have an auto-updatable view WITH CHECK\n> OPTION on top of a trigger-updatable view,\n\nOh, right ... glad you found that. It sounds a bit convoluted and it\nwould have been a pain if users had found out afterwards.\n\n> [...], so I've added a new test there.)\n\nGreat, thanks.\n\n> Rather than setting updateCxt.updated to true, making the\n> trigger-invoking code in ExecMergeMatched() diverge from ExecUpdate(),\n> a better fix is to simply remove the UpdateContext.updated flag\n> entirely. The only place that reads it is this code in\n> ExecMergeMatched():\n> \n> if (result == TM_Ok && updateCxt.updated)\n> {\n> ExecUpdateEpilogue(context, &updateCxt, resultRelInfo,\n> tupleid, NULL, newslot);\n> \n> where result is the result from ExecUpdateAct(). However, all paths\n> through ExecUpdateAct() that return TM_Ok also set updateCxt.updated\n> to true, so the flag is redundant. It looks like that has always been\n> the case, ever since it was introduced. Getting rid of it is a useful\n> simplification, and brings the UPDATE path in ExecMergeMatched() more\n> into line with ExecUpdate(), which always calls ExecUpdateEpilogue()\n> if ExecUpdateAct() returns TM_Ok.\n\nThis is a great find! I agree with getting rid of\nUpdateContext->updated as a separate commit.\n\n> Aside from that, I've done a little more copy-editing and added a few\n> more test cases, and I think this is pretty-much good to go, though I\n> think I'll split this up into separate commits, since removing\n> UpdateContext.updated isn't really related to the MERGE INTO view\n> feature.\n\nBy all means let's get the feature out there. 
It's not a frequently\nrequested thing but it does seem to come up.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n",
"msg_date": "Thu, 29 Feb 2024 10:50:05 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Supporting MERGE on updatable views"
},
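The scenario behind Dean's new test, an auto-updatable view WITH CHECK OPTION stacked on a trigger-updatable view, can be sketched roughly as follows. All table, view, and function names here are invented for illustration, and the MERGE statement assumes the MERGE-on-views support discussed in this thread is applied:

```sql
CREATE TABLE t (a int, b text);

-- Trigger-updatable view: updates go through an INSTEAD OF trigger.
CREATE VIEW v1 AS SELECT a, b FROM t;

CREATE FUNCTION v1_upd() RETURNS trigger AS $$
BEGIN
    UPDATE t SET b = NEW.b WHERE a = OLD.a;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER v1_instead INSTEAD OF UPDATE ON v1
    FOR EACH ROW EXECUTE FUNCTION v1_upd();

-- Auto-updatable view on top, with a check option that must be
-- re-verified after the trigger fires; this is why ExecUpdateEpilogue()
-- has to run on MERGE's UPDATE path.
CREATE VIEW v2 AS SELECT a, b FROM v1 WHERE a > 0
    WITH CHECK OPTION;

MERGE INTO v2 USING (VALUES (1, 'new')) AS s(a, b) ON v2.a = s.a
    WHEN MATCHED THEN UPDATE SET b = s.b;
```

Without the fix Dean describes, the UPDATE action of the MERGE would skip the epilogue and hence the check-option re-verification on v2.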
{
"msg_contents": "On Thu, 29 Feb 2024 at 09:50, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> By all means let's get the feature out there. It's not a frequently\n> requested thing but it does seem to come up.\n>\n\nPushed. Thanks for reviewing.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 29 Feb 2024 16:37:28 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting MERGE on updatable views"
}
] |
[
{
"msg_contents": "Hello,\n\nI propose to add a new value \"no_data_found\" for the \nplpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n\nWith plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found \nexception when the result set is empty. With \nplpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves \nlike SELECT INTO STRICT [2]. This could simplify migration from PL/SQL \nand may be just more convenient.\n\nOne potential downside is that plpgsql.extra_errors=no_data_found could \nbreak existing functions expecting to get null or checking IF found \nexplicitly. This is also true for the too_many_rows exception, but \narguably it's a programmer error, while no_data_found switches to a \ndifferent convention for handling (or not handling) an empty result with \nSELECT INTO.\n\nOtherwise the patch is straightforward.\n\nWhat do you think?\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n[1] \nhttps://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n[2] \nhttps://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW",
"msg_date": "Thu, 8 Dec 2022 14:29:20 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Add PL/pgSQL extra check no_data_found"
},
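For reference, the proposal in PL/pgSQL terms, as a minimal sketch. The table and function names are invented, and `no_data_found` is only the proposed setting value, not one that exists in released PostgreSQL:

```sql
SET plpgsql.extra_errors = 'no_data_found';   -- proposed value

CREATE FUNCTION get_price(p_id int) RETURNS numeric AS $$
DECLARE
    v_price numeric;
BEGIN
    -- With the proposed check this raises NO_DATA_FOUND when no row
    -- matches, instead of silently leaving v_price NULL.
    SELECT price INTO v_price FROM products WHERE id = p_id;
    RETURN v_price;
END;
$$ LANGUAGE plpgsql;
```

With both `too_many_rows` and `no_data_found` enabled, the plain `SELECT INTO` above would behave like `SELECT INTO STRICT`.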
{
"msg_contents": "čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <\ns.shinderuk@postgrespro.ru> napsal:\n\n> Hello,\n>\n> I propose to add a new value \"no_data_found\" for the\n> plpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n>\n> With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found\n> exception when the result set is empty. With\n> plpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves\n> like SELECT INTO STRICT [2]. This could simplify migration from PL/SQL\n> and may be just more convenient.\n>\n> One potential downside is that plpgsql.extra_errors=no_data_found could\n> break existing functions expecting to get null or checking IF found\n> explicitly. This is also true for the too_many_rows exception, but\n> arguably it's a programmer error, while no_data_found switches to a\n> different convention for handling (or not handling) an empty result with\n> SELECT INTO.\n>\n> Otherwise the patch is straightforward.\n>\n> What do you think?\n>\n\nI am not against it. It makes sense.\n\nI don't like the idea about possible replacement of INTO STRICT by INTO +\nextra warnings.\n\nHandling exceptions is significantly more expensive than in Oracle, and\nusing INTO without STRICT with the next test IF NOT FOUND THEN can save one\nsafepoint and one handling an exception. It should be mentioned in the\ndocumentation. Using this very common Oracle's pattern can have a very\nnegative impact on performance in Postgres. If somebody does port from\nOracle, and wants compatible behavior then he should use INTO STRICT. 
I\nthink it is counterproductive to hide syntax differences when there is a\nsignificant difference in performance (and will be).\n\nRegards\n\nPavel\n\n\n\n\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n>\n> [1]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n> [2]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW\n\nčt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <s.shinderuk@postgrespro.ru> napsal:Hello,\n\nI propose to add a new value \"no_data_found\" for the \nplpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n\nWith plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found \nexception when the result set is empty. With \nplpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves \nlike SELECT INTO STRICT [2]. This could simplify migration from PL/SQL \nand may be just more convenient.\n\nOne potential downside is that plpgsql.extra_errors=no_data_found could \nbreak existing functions expecting to get null or checking IF found \nexplicitly. This is also true for the too_many_rows exception, but \narguably it's a programmer error, while no_data_found switches to a \ndifferent convention for handling (or not handling) an empty result with \nSELECT INTO.\n\nOtherwise the patch is straightforward.\n\nWhat do you think?I am not against it. It makes sense. I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings.Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. 
I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be).RegardsPavel\n\n-- \nSergey Shinderuk https://postgrespro.com/\n\n\n[1] \nhttps://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n[2] \nhttps://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW",
"msg_date": "Fri, 9 Dec 2022 07:46:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add PL/pgSQL extra check no_data_found"
},
{
"msg_contents": "On 09.12.2022 09:46, Pavel Stehule wrote:\n> I don't like the idea about possible replacement of INTO STRICT by INTO \n> + extra warnings.\n> \n> Handling exceptions is significantly more expensive than in Oracle, and \n> using INTO without STRICT with the next test IF NOT FOUND THEN can save \n> one safepoint and one handling an exception. It should be mentioned in \n> the documentation. Using this very common Oracle's pattern can have a \n> very negative impact on performance in Postgres. If somebody does port \n> from Oracle, and wants compatible behavior then he should use INTO \n> STRICT. I think it is counterproductive to hide syntax differences when \n> there is a significant difference in performance (and will be).\n\nFair enough. Thank you, Pavel.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 11:27:56 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add PL/pgSQL extra check no_data_found"
},
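The two idioms Pavel contrasts, sketched with hypothetical table and variable names; the second avoids the subtransaction (savepoint) that an exception block would need in order to catch NO_DATA_FOUND:

```sql
-- Strict form: raises NO_DATA_FOUND / TOO_MANY_ROWS automatically.
SELECT price INTO STRICT v_price FROM products WHERE id = p_id;

-- Plain form: no exception machinery; test FOUND explicitly.
SELECT price INTO v_price FROM products WHERE id = p_id;
IF NOT FOUND THEN
    RAISE EXCEPTION 'product % not found', p_id;
END IF;
```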
{
"msg_contents": "Hi!\n \nThis new feature will be in demand for customers who migrate their largeapplications (having millions of lines of PL/SQL code) from Oracle to PostreSQL.\nIt will reduce the amount of work on rewriting the code will provide an opportunity to reduce budgets for the migration project.\n \nYes, in case the part of the code that handles no_data_found is executed very often, this will cause performance loss.\nDuring the testing phase, this will be discovered and the customer will rewrite these problem areas of the code - add the phrase STRICT.\nHe will not need to change all the code at the very beginning, as it happens now, without this feature.\n \nI am convinced that this functionality will attract even more customers to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS.\n \nThank you!\n \nBest Regards\nIgor Melnikov\n\n \n>Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <pavel.stehule@gmail.com>:\n> \n> \n>čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk < s.shinderuk@postgrespro.ru > napsal: \n>>Hello,\n>>\n>>I propose to add a new value \"no_data_found\" for the\n>>plpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n>>\n>>With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found\n>>exception when the result set is empty. With\n>>plpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves\n>>like SELECT INTO STRICT [2]. This could simplify migration from PL/SQL\n>>and may be just more convenient.\n>>\n>>One potential downside is that plpgsql.extra_errors=no_data_found could\n>>break existing functions expecting to get null or checking IF found\n>>explicitly. This is also true for the too_many_rows exception, but\n>>arguably it's a programmer error, while no_data_found switches to a\n>>different convention for handling (or not handling) an empty result with\n>>SELECT INTO.\n>>\n>>Otherwise the patch is straightforward.\n>>\n>>What do you think?\n> \n>I am not against it. 
It makes sense.\n> \n>I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings.\n> \n>Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be).\n> \n>Regards\n> \n>Pavel\n> \n> \n> \n>>--\n>>Sergey Shinderuk https://postgrespro.com/\n>>\n>>\n>>[1]\n>>https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n>>[2]\n>>https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW \n \n \nС уважением,\nМельников Игорь\nmelnikov_ii@mail.ru\n \nHi! This new feature will be in demand for customers who migrate their largeapplications (having millions of lines of PL/SQL code) from Oracle to PostreSQL.It will reduce the amount of work on rewriting the code will provide an opportunity to reduce budgets for the migration project. Yes, in case the part of the code that handles no_data_found is executed very often, this will cause performance loss.During the testing phase, this will be discovered and the customer will rewrite these problem areas of the code - add the phrase STRICT.He will not need to change all the code at the very beginning, as it happens now, without this feature. I am convinced that this functionality will attract even more customers to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS. Thank you! Best RegardsIgor Melnikov Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <pavel.stehule@gmail.com>: čt 8. 12. 
2022 v 12:29 odesílatel Sergey Shinderuk <s.shinderuk@postgrespro.ru> napsal:Hello,I propose to add a new value \"no_data_found\" for theplpgsql.extra_errors and plpgsql.extra_warnings parameters [1].With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_foundexception when the result set is empty. Withplpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaveslike SELECT INTO STRICT [2]. This could simplify migration from PL/SQLand may be just more convenient.One potential downside is that plpgsql.extra_errors=no_data_found couldbreak existing functions expecting to get null or checking IF foundexplicitly. This is also true for the too_many_rows exception, butarguably it's a programmer error, while no_data_found switches to adifferent convention for handling (or not handling) an empty result withSELECT INTO.Otherwise the patch is straightforward.What do you think? I am not against it. It makes sense. I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings. Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be). Regards Pavel --Sergey Shinderuk https://postgrespro.com/[1]https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS[2]https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW С уважением,Мельников Игорьmelnikov_ii@mail.ru",
"msg_date": "Mon, 12 Dec 2022 15:36:58 +0300",
"msg_from": "=?UTF-8?B?0JzQtdC70YzQvdC40LrQvtCyINCY0LPQvtGA0Yw=?=\n <melnikov_ii@mail.ru>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IEFkZCBQTC9wZ1NRTCBleHRyYSBjaGVjayBub19kYXRhX2ZvdW5k?="
},
{
"msg_contents": "Hi\n\npo 12. 12. 2022 v 13:37 odesílatel Мельников Игорь <melnikov_ii@mail.ru>\nnapsal:\n\n> Hi!\n>\n> This new feature will be in demand for customers who migrate their\n> largeapplications (having millions of lines of PL/SQL code) from Oracle to\n> PostreSQL.\n> It will reduce the amount of work on rewriting the code will provide an\n> opportunity to reduce budgets for the migration project.\n>\n> Yes, in case the part of the code that handles no_data_found is executed\n> very often, this will cause performance loss.\n> During the testing phase, this will be discovered and the customer will\n> rewrite these problem areas of the code - add the phrase STRICT.\n> He will not need to change all the code at the very beginning, as it\n> happens now, without this feature.\n>\n\nora2pg does this work by default. It is great tool and reduces lot of work\n\nhttps://ora2pg.darold.net/\n\nRegards\n\nPavel\n\n\n\n>\n> *I am convinced that this functionality will attract even more customers\n> to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS.*\n>\n> Thank you!\n>\n> Best Regards\n> Igor Melnikov\n>\n>\n>\n> Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <\n> pavel.stehule@gmail.com>:\n>\n>\n>\n> čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <\n> s.shinderuk@postgrespro.ru\n> <//e.mail.ru/compose/?mailto=mailto%3as.shinderuk@postgrespro.ru>> napsal:\n>\n> Hello,\n>\n> I propose to add a new value \"no_data_found\" for the\n> plpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n>\n> With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found\n> exception when the result set is empty. With\n> plpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves\n> like SELECT INTO STRICT [2]. 
This could simplify migration from PL/SQL\n> and may be just more convenient.\n>\n> One potential downside is that plpgsql.extra_errors=no_data_found could\n> break existing functions expecting to get null or checking IF found\n> explicitly. This is also true for the too_many_rows exception, but\n> arguably it's a programmer error, while no_data_found switches to a\n> different convention for handling (or not handling) an empty result with\n> SELECT INTO.\n>\n> Otherwise the patch is straightforward.\n>\n> What do you think?\n>\n>\n> I am not against it. It makes sense.\n>\n> I don't like the idea about possible replacement of INTO STRICT by INTO +\n> extra warnings.\n>\n> Handling exceptions is significantly more expensive than in Oracle, and\n> using INTO without STRICT with the next test IF NOT FOUND THEN can save one\n> safepoint and one handling an exception. It should be mentioned in the\n> documentation. Using this very common Oracle's pattern can have a very\n> negative impact on performance in Postgres. If somebody does port from\n> Oracle, and wants compatible behavior then he should use INTO STRICT. I\n> think it is counterproductive to hide syntax differences when there is a\n> significant difference in performance (and will be).\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n>\n> [1]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n> [2]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW\n>\n>\n>\n> С уважением,\n> Мельников Игорь\n> melnikov_ii@mail.ru\n>\n>\n\nHipo 12. 12. 2022 v 13:37 odesílatel Мельников Игорь <melnikov_ii@mail.ru> napsal:\nHi! 
This new feature will be in demand for customers who migrate their largeapplications (having millions of lines of PL/SQL code) from Oracle to PostreSQL.It will reduce the amount of work on rewriting the code will provide an opportunity to reduce budgets for the migration project. Yes, in case the part of the code that handles no_data_found is executed very often, this will cause performance loss.During the testing phase, this will be discovered and the customer will rewrite these problem areas of the code - add the phrase STRICT.He will not need to change all the code at the very beginning, as it happens now, without this feature.ora2pg does this work by default. It is great tool and reduces lot of workhttps://ora2pg.darold.net/RegardsPavel I am convinced that this functionality will attract even more customers to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS. Thank you! Best RegardsIgor Melnikov Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <pavel.stehule@gmail.com>: čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <s.shinderuk@postgrespro.ru> napsal:Hello,I propose to add a new value \"no_data_found\" for theplpgsql.extra_errors and plpgsql.extra_warnings parameters [1].With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_foundexception when the result set is empty. Withplpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaveslike SELECT INTO STRICT [2]. This could simplify migration from PL/SQLand may be just more convenient.One potential downside is that plpgsql.extra_errors=no_data_found couldbreak existing functions expecting to get null or checking IF foundexplicitly. This is also true for the too_many_rows exception, butarguably it's a programmer error, while no_data_found switches to adifferent convention for handling (or not handling) an empty result withSELECT INTO.Otherwise the patch is straightforward.What do you think? I am not against it. It makes sense. 
I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings. Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be). Regards Pavel --Sergey Shinderuk https://postgrespro.com/[1]https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS[2]https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW С уважением,Мельников Игорьmelnikov_ii@mail.ru",
"msg_date": "Mon, 12 Dec 2022 14:00:24 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add PL/pgSQL extra check no_data_found"
},
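A sketch of the pattern Pavel alludes to, with illustrative names: `FOUND` covers the empty-result case directly, while detecting surplus rows by hand takes a `GET DIAGNOSTICS` plus an `IF`. Whether `ROW_COUNT` can actually exceed one after a plain `SELECT INTO` depends on version details, so treat this as an outline of the shape of the check rather than a guaranteed recipe:

```sql
SELECT price INTO v_price FROM products WHERE id = p_id;
IF NOT FOUND THEN                      -- the no_data_found case
    RAISE EXCEPTION 'no product with id %', p_id;
END IF;

GET DIAGNOSTICS v_rows = ROW_COUNT;    -- the too_many_rows case
IF v_rows > 1 THEN
    RAISE EXCEPTION 'more than one product with id %', p_id;
END IF;
```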
{
"msg_contents": "I know, know.\nBut ora2pg NOT convert source code in application tier anonymouse block and dynamic SQL in server side pl/sql.\nThis part of application need to be rewrite manually.\n \n\"no_data_found\" for the plpgsql.extra_errors and plpgsql.extra_warnings will be reduce this part of work.\n \nAlso, in my opinion, it looks strange that there too_many_rows is in plpgsql.extra_errors and plpgsql.extra_warnings, but no_data_found NOT.\nWhy?\n \nThanx\n \nBest Regards\nIgor Melnikov\n\n \n>Понедельник, 12 декабря 2022, 16:01 +03:00 от Pavel Stehule < pavel.stehule@gmail.com >:\n> \n>Hi \n>po 12. 12. 2022 v 13:37 odesílatel Мельников Игорь < melnikov_ii@mail.ru > napsal:\n>>Hi!\n>> \n>>This new feature will be in demand for customers who migrate their largeapplications (having millions of lines of PL/SQL code) from Oracle to PostreSQL.\n>>It will reduce the amount of work on rewriting the code will provide an opportunity to reduce budgets for the migration project.\n>> \n>>Yes, in case the part of the code that handles no_data_found is executed very often, this will cause performance loss.\n>>During the testing phase, this will be discovered and the customer will rewrite these problem areas of the code - add the phrase STRICT.\n>>He will not need to change all the code at the very beginning, as it happens now, without this feature.\n> \n>ora2pg does this work by default. It is great tool and reduces lot of work\n> \n>https://ora2pg.darold.net/\n> \n>Regards\n> \n>Pavel\n> \n> \n>> \n>>I am convinced that this functionality will attract even more customers to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS.\n>> \n>>Thank you!\n>> \n>>Best Regards\n>>Igor Melnikov\n>>\n>> \n>>>Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule < pavel.stehule@gmail.com >:\n>>> \n>>> \n>>>čt 8. 12. 
2022 v 12:29 odesílatel Sergey Shinderuk < s.shinderuk@postgrespro.ru > napsal: \n>>>>Hello,\n>>>>\n>>>>I propose to add a new value \"no_data_found\" for the\n>>>>plpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n>>>>\n>>>>With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found\n>>>>exception when the result set is empty. With\n>>>>plpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves\n>>>>like SELECT INTO STRICT [2]. This could simplify migration from PL/SQL\n>>>>and may be just more convenient.\n>>>>\n>>>>One potential downside is that plpgsql.extra_errors=no_data_found could\n>>>>break existing functions expecting to get null or checking IF found\n>>>>explicitly. This is also true for the too_many_rows exception, but\n>>>>arguably it's a programmer error, while no_data_found switches to a\n>>>>different convention for handling (or not handling) an empty result with\n>>>>SELECT INTO.\n>>>>\n>>>>Otherwise the patch is straightforward.\n>>>>\n>>>>What do you think?\n>>> \n>>>I am not against it. It makes sense.\n>>> \n>>>I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings.\n>>> \n>>>Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. 
I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be).\n>>> \n>>>Regards\n>>> \n>>>Pavel\n>>> \n>>> \n>>> \n>>>>--\n>>>>Sergey Shinderuk https://postgrespro.com/\n>>>>\n>>>>\n>>>>[1]\n>>>>https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n>>>>[2]\n>>>>https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW \n>> \n>> \n>>С уважением,\n>>Мельников Игорь\n>>melnikov_ii@mail.ru\n>> \n \n \nС уважением,\nМельников Игорь\nmelnikov_ii@mail.ru\n \nI know, know.But ora2pg NOT convert source code in application tier anonymouse block and dynamic SQL in server side pl/sql.This part of application need to be rewrite manually. \"no_data_found\" for the plpgsql.extra_errors and plpgsql.extra_warnings will be reduce this part of work. Also, in my opinion, it looks strange that there too_many_rows is in plpgsql.extra_errors and plpgsql.extra_warnings, but no_data_found NOT.Why? Thanx Best RegardsIgor Melnikov Понедельник, 12 декабря 2022, 16:01 +03:00 от Pavel Stehule <pavel.stehule@gmail.com>: Hi po 12. 12. 2022 v 13:37 odesílatel Мельников Игорь <melnikov_ii@mail.ru> napsal:Hi! This new feature will be in demand for customers who migrate their largeapplications (having millions of lines of PL/SQL code) from Oracle to PostreSQL.It will reduce the amount of work on rewriting the code will provide an opportunity to reduce budgets for the migration project. Yes, in case the part of the code that handles no_data_found is executed very often, this will cause performance loss.During the testing phase, this will be discovered and the customer will rewrite these problem areas of the code - add the phrase STRICT.He will not need to change all the code at the very beginning, as it happens now, without this feature. ora2pg does this work by default. 
It is great tool and reduces lot of work https://ora2pg.darold.net/ Regards Pavel I am convinced that this functionality will attract even more customers to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS. Thank you! Best RegardsIgor Melnikov Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <pavel.stehule@gmail.com>: čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <s.shinderuk@postgrespro.ru> napsal:Hello,I propose to add a new value \"no_data_found\" for theplpgsql.extra_errors and plpgsql.extra_warnings parameters [1].With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_foundexception when the result set is empty. Withplpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaveslike SELECT INTO STRICT [2]. This could simplify migration from PL/SQLand may be just more convenient.One potential downside is that plpgsql.extra_errors=no_data_found couldbreak existing functions expecting to get null or checking IF foundexplicitly. This is also true for the too_many_rows exception, butarguably it's a programmer error, while no_data_found switches to adifferent convention for handling (or not handling) an empty result withSELECT INTO.Otherwise the patch is straightforward.What do you think? I am not against it. It makes sense. I don't like the idea about possible replacement of INTO STRICT by INTO + extra warnings. Handling exceptions is significantly more expensive than in Oracle, and using INTO without STRICT with the next test IF NOT FOUND THEN can save one safepoint and one handling an exception. It should be mentioned in the documentation. Using this very common Oracle's pattern can have a very negative impact on performance in Postgres. If somebody does port from Oracle, and wants compatible behavior then he should use INTO STRICT. I think it is counterproductive to hide syntax differences when there is a significant difference in performance (and will be). 
Regards Pavel --Sergey Shinderuk https://postgrespro.com/[1]https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS[2]https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW С уважением,Мельников Игорьmelnikov_ii@mail.ru С уважением,Мельников Игорьmelnikov_ii@mail.ru",
"msg_date": "Mon, 12 Dec 2022 16:16:21 +0300",
"msg_from": "=?UTF-8?B?0JzQtdC70YzQvdC40LrQvtCyINCY0LPQvtGA0Yw=?=\n <melnikov_ii@mail.ru>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IEFkZCBQTC9wZ1NRTCBleHRyYSBjaGVjayBub19kYXRhX2ZvdW5k?="
},
{
"msg_contents": "po 12. 12. 2022 v 14:16 odesílatel Мельников Игорь <melnikov_ii@mail.ru>\nnapsal:\n\n> I know, know.\n> But ora2pg NOT convert source code in application tier anonymouse block\n> and dynamic SQL in server side pl/sql.\n> This part of application need to be rewrite manually.\n>\n> \"no_data_found\" for the plpgsql.extra_errors and plpgsql.extra_warnings\n> will be reduce this part of work.\n>\n> Also, in my opinion, it looks strange that there too_many_rows is in plpgsql.extra_errors\n> and plpgsql.extra_warnings, but no_data_found NOT.\n> Why?\n>\n\nThe extra checks are not designed for compatibility with Oracle. It is\ndesigned to implement some common checks that are harder or slower to\nimplement in plpgsql.\n\nno_data_found issue can be simply checked by variable FOUND. On the second\nhand, too many rows is more complex (a little bit). You need to use the GET\nDIAGNOSTICS command and IF.\n\nExtra checks were designed to check some less frequent but nasty errors to\nwrite safer code. It is not designed for better portability from Oracle.\n\nRegards\n\nPavel\n\n\n\n> Thanx\n>\n> Best Regards\n> Igor Melnikov\n>\n>\n>\n> Понедельник, 12 декабря 2022, 16:01 +03:00 от Pavel Stehule <\n> pavel.stehule@gmail.com <http:///compose?To=pavel.stehule@gmail.com>>:\n>\n> Hi\n>\n> po 12. 12. 
2022 v 13:37 odesílatel Мельников Игорь <melnikov_ii@mail.ru\n> <http://e.mail.ru/compose/?mailto=mailto%3amelnikov_ii@mail.ru>> napsal:\n>\n> Hi!\n>\n> This new feature will be in demand for customers who migrate their\n> largeapplications (having millions of lines of PL/SQL code) from Oracle to\n> PostreSQL.\n> It will reduce the amount of work on rewriting the code will provide an\n> opportunity to reduce budgets for the migration project.\n>\n> Yes, in case the part of the code that handles no_data_found is executed\n> very often, this will cause performance loss.\n> During the testing phase, this will be discovered and the customer will\n> rewrite these problem areas of the code - add the phrase STRICT.\n> He will not need to change all the code at the very beginning, as it\n> happens now, without this feature.\n>\n>\n> ora2pg does this work by default. It is great tool and reduces lot of work\n>\n> https://ora2pg.darold.net/\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n> *I am convinced that this functionality will attract even more customers\n> to PostgreSQL - it will increase the popularity of the PostgeSQL DBMS.*\n>\n> Thank you!\n>\n> Best Regards\n> Igor Melnikov\n>\n>\n>\n> Понедельник, 12 декабря 2022, 15:23 +03:00 от Pavel Stehule <\n> pavel.stehule@gmail.com\n> <http://e.mail.ru/compose/?mailto=mailto%3apavel.stehule@gmail.com>>:\n>\n>\n>\n> čt 8. 12. 2022 v 12:29 odesílatel Sergey Shinderuk <\n> s.shinderuk@postgrespro.ru\n> <http://e.mail.ru/compose/?mailto=mailto%3as.shinderuk@postgrespro.ru>>\n> napsal:\n>\n> Hello,\n>\n> I propose to add a new value \"no_data_found\" for the\n> plpgsql.extra_errors and plpgsql.extra_warnings parameters [1].\n>\n> With plpgsql.extra_errors=no_data_found SELECT INTO raises no_data_found\n> exception when the result set is empty. With\n> plpgsql.extra_errors=too_many_rows,no_data_found SELECT INTO behaves\n> like SELECT INTO STRICT [2]. 
This could simplify migration from PL/SQL\n> and may be just more convenient.\n>\n> One potential downside is that plpgsql.extra_errors=no_data_found could\n> break existing functions expecting to get null or checking IF found\n> explicitly. This is also true for the too_many_rows exception, but\n> arguably it's a programmer error, while no_data_found switches to a\n> different convention for handling (or not handling) an empty result with\n> SELECT INTO.\n>\n> Otherwise the patch is straightforward.\n>\n> What do you think?\n>\n>\n> I am not against it. It makes sense.\n>\n> I don't like the idea about a possible replacement of INTO STRICT by INTO +\n> extra warnings.\n>\n> Handling exceptions is significantly more expensive than in Oracle, and\n> using INTO without STRICT with the next test IF NOT FOUND THEN can save one\n> savepoint and the handling of an exception. It should be mentioned in the\n> documentation. Using this very common Oracle pattern can have a very\n> negative impact on performance in Postgres. If somebody ports from\n> Oracle and wants compatible behavior, then they should use INTO STRICT. I\n> think it is counterproductive to hide syntax differences when there is a\n> significant difference in performance (and will be).\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n>\n> [1]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-development-tips.html#PLPGSQL-EXTRA-CHECKS\n> [2]\n>\n> https://www.postgresql.org/docs/devel/plpgsql-statements.html#PLPGSQL-STATEMENTS-SQL-ONEROW\n>\n>\n>\n> Best regards,\n> Мельников Игорь\n> melnikov_ii@mail.ru\n> <http://e.mail.ru/compose/?mailto=mailto%3amelnikov_ii@mail.ru>\n>\n>\n>\n>\n> Best regards,\n> Мельников Игорь\n> melnikov_ii@mail.ru <http:///compose?To=melnikov_ii@mail.ru>\n>\n>",
"msg_date": "Mon, 12 Dec 2022 15:37:06 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add PL/pgSQL extra check no_data_found"
}
] |
[
{
"msg_contents": "While testing MERGE, I noticed that it supports inheritance\nhierarchies and the ONLY keyword, but that isn't documented. Attached\nis a patch to merge.sgml, borrowing text from update.sgml and\ndelete.sgml.\n\nI note that there are also a couple of places early in the manual\n(advanced.sgml and ddl.sgml) that also discuss inheritance, citing\nSELECT, UPDATE and DELETE as examples of (already-discussed) commands\nthat support ONLY. However, since MERGE isn't mentioned until much\nlater in the manual, it's probably best to leave those as-is. They\ndon't claim to be complete lists of commands supporting ONLY, and it\nwould be a pain to make them that.\n\nRegards,\nDean",
"msg_date": "Thu, 8 Dec 2022 15:26:52 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Documenting MERGE INTO ONLY ..."
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 03:26:52PM +0000, Dean Rasheed wrote:\n> While testing MERGE, I noticed that it supports inheritance\n> hierarchies and the ONLY keyword, but that isn't documented. Attached\n> is a patch to merge.sgml, borrowing text from update.sgml and\n> delete.sgml.\n\nLGTM. I didn't see any tests for this in merge.sql or inherit.sql. Do you\nthink we should add some?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 8 Dec 2022 11:08:08 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documenting MERGE INTO ONLY ..."
},
{
"msg_contents": "On 2022-Dec-08, Nathan Bossart wrote:\n\n> On Thu, Dec 08, 2022 at 03:26:52PM +0000, Dean Rasheed wrote:\n> > While testing MERGE, I noticed that it supports inheritance\n> > hierarchies and the ONLY keyword, but that isn't documented. Attached\n> > is a patch to merge.sgml, borrowing text from update.sgml and\n> > delete.sgml.\n> \n> LGTM.\n\nLGTM2.\n\n> I didn't see any tests for this in merge.sql or inherit.sql. Do you\n> think we should add some?\n\nOuch! We should definitely have some.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 9 Dec 2022 11:02:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Documenting MERGE INTO ONLY ..."
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 10:02, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Dec-08, Nathan Bossart wrote:\n>\n> > On Thu, Dec 08, 2022 at 03:26:52PM +0000, Dean Rasheed wrote:\n> > > While testing MERGE, I noticed that it supports inheritance\n> > > hierarchies and the ONLY keyword, but that isn't documented. Attached\n> > > is a patch to merge.sgml, borrowing text from update.sgml and\n> > > delete.sgml.\n> >\n> > LGTM.\n>\n> LGTM2.\n>\n> > I didn't see any tests for this in merge.sql or inherit.sql. Do you\n> > think we should add some?\n>\n> Ouch! We should definitely have some.\n>\n\nAgreed. I just pushed this, with some additional tests.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:09:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Documenting MERGE INTO ONLY ..."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThis is meant as a continuation of the work to make VACUUM and ANALYZE\ngrantable privileges [0]. As noted there, the primary motivation for this\nis to continue chipping away at things that require special privileges or\neven superuser. I've attached two patches. 0001 makes it possible to\ngrant CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX. 0002 adds\npredefined roles that allow performing these commands on all relations.\nAfter applying these patches, there are 13 privilege bits remaining for\nfuture use.\n\nThere is an ongoing discussion in another thread [1] about how these\nprivileges should be divvied up. Should each command get it's own\nprivilege bit (as I've done in the attached patches), or should the\nprivileges be grouped in some fashion (e.g., adding a MAINTAIN bit that\ngoverns all of them, splitting out exclusive-lock operations from\nnon-exclusive-lock ones)?\n\nMost of the changes in the attached patches are rather mechanical, and like\nVACUUM/ANALYZE, there is room for future enhancement, such as granting the\nprivileges on databases/schemas instead of just tables.\n\n[0] https://postgr.es/m/20220722203735.GB3996698%40nathanxps13\n[1] https://postgr.es/m/20221206193606.GB3078082%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 8 Dec 2022 10:37:07 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, 2022-12-08 at 10:37 -0800, Nathan Bossart wrote:\n> 0001 makes it possible to\n> grant CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX. 0002 adds\n> predefined roles that allow performing these commands on all\n> relations.\n\nRegarding the pg_refresh_all_matview predefined role, I don't think\nit's a good idea. Refreshing a materialized view doesn't seem like an\nadministrative action to me.\n\nFirst, it's unbounded in time, so the admin would need to be careful to\nhave a timeout. Second, the freshness of a materialized view seems very\nspecific to the application, rather than something that an admin would\nhave a blanket policy about. Thirdly, there's not a lot of information\nthe admin could use to make decisions about when to refresh (as opposed\nto VACUUM/CLUSTER/REINDEX, where the stats are helpful).\n\nBut I'm fine with having a grantable privilege to refresh a\nmaterialized view.\n\nIt seems like the discussion on VACUUM/CLUSTER/REINDEX privileges is\nhappening in the other thread. What would you like to accomplish in\nthis thread?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sat, 10 Dec 2022 12:07:12 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 12:07:12PM -0800, Jeff Davis wrote:\n> It seems like the discussion on VACUUM/CLUSTER/REINDEX privileges is\n> happening in the other thread. What would you like to accomplish in\n> this thread?\n\nGiven the feedback in the other thread [0], I was planning to rewrite this\npatch to create a MAINTAIN privilege and a pg_maintain_all_tables\npredefined role that allowed VACUUM, ANALYZE, CLUSTER, REFRESH MATERIALIZED\nVIEW, and REINDEX.\n\n[0] https://postgr.es/m/20221206193606.GB3078082%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 10 Dec 2022 12:41:09 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 12:41:09PM -0800, Nathan Bossart wrote:\n> On Sat, Dec 10, 2022 at 12:07:12PM -0800, Jeff Davis wrote:\n>> It seems like the discussion on VACUUM/CLUSTER/REINDEX privileges is\n>> happening in the other thread. What would you like to accomplish in\n>> this thread?\n> \n> Given the feedback in the other thread [0], I was planning to rewrite this\n> patch to create a MAINTAIN privilege and a pg_maintain_all_tables\n> predefined role that allowed VACUUM, ANALYZE, CLUSTER, REFRESH MATERIALIZED\n> VIEW, and REINDEX.\n\nPatch attached. I ended up reverting some parts of the VACUUM/ANALYZE\npatch that were no longer needed (i.e., if the user doesn't have permission\nto VACUUM, we don't need to separately check whether the user has\npermission to ANALYZE). Otherwise, I don't think there's anything\ntremendously different between v1 and v2 besides the fact that all the\nprivileges are grouped together.\n\nSince there are only 15 privilege bits used after this patch is applied,\npresumably we could revert widening AclMode to 64 bits. However, I imagine\nthat will still be necessary at some point in the near future, so I don't\nsee a strong reason to revert it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Dec 2022 12:04:27 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 12:04:27PM -0800, Nathan Bossart wrote:\n> Patch attached. I ended up reverting some parts of the VACUUM/ANALYZE\n> patch that were no longer needed (i.e., if the user doesn't have permission\n> to VACUUM, we don't need to separately check whether the user has\n> permission to ANALYZE). Otherwise, I don't think there's anything\n> tremendously different between v1 and v2 besides the fact that all the\n> privileges are grouped together.\n\nHere is a v3 of the patch that fixes a typo in the docs.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Dec 2022 13:01:36 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, 2022-12-12 at 13:01 -0800, Nathan Bossart wrote:\n> On Mon, Dec 12, 2022 at 12:04:27PM -0800, Nathan Bossart wrote:\n> > Patch attached. I ended up reverting some parts of the\n> > VACUUM/ANALYZE\n> > patch that were no longer needed (i.e., if the user doesn't have\n> > permission\n> > to VACUUM, we don't need to separately check whether the user has\n> > permission to ANALYZE). Otherwise, I don't think there's anything\n> > tremendously different between v1 and v2 besides the fact that all\n> > the\n> > privileges are grouped together.\n> \n> Here is a v3 of the patch that fixes a typo in the docs.\n\nCommitted.\n\nThe only significant change is that it also allows LOCK TABLE if you\nhave the MAINTAIN privilege.\n\nI noticed a couple issues unrelated to your patch, and started separate\npatch threads[1][2].\n\n[1] \nhttps://www.postgresql.org/message-id/c0a85c2e83158560314b576b6241c8ed0aea1745.camel@j-davis.com\n[2]\nhttps://www.postgresql.org/message-id/9550c76535404a83156252b25a11babb4792ea1e.camel@j-davis.com\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 19:05:10 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 07:05:10PM -0800, Jeff Davis wrote:\n> Committed.\n> \n> The only significant change is that it also allows LOCK TABLE if you\n> have the MAINTAIN privilege.\n\nThanks!\n\n> I noticed a couple issues unrelated to your patch, and started separate\n> patch threads[1][2].\n\nWill take a look.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Dec 2022 19:23:32 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "After a fresh install, including the patch for \\dpS [1],\nI found that granting MAINTAIN privilege does not allow the TOAST table \nto be vacuumed.\n\npostgres@postgres(16.0)=# GRANT MAINTAIN ON pg_type TO alice;\nGRANT\npostgres@postgres(16.0)=# \\c - alice\nYou are now connected to database \"postgres\" as user \"alice\".\nalice@postgres(16.0)=> \\dpS pg_type\n Access privileges\n Schema | Name | Type | Access privileges | Column \nprivileges | Policies\n------------+---------+-------+----------------------------+-------------------+----------\n pg_catalog | pg_type | table | \npostgres=arwdDxtm/postgres+| |\n | | | =r/postgres +| |\n | | | alice=m/postgres | |\n(1 row)\n\nSo, the patch for \\dpS works as expected and can be committed.\n\nalice@postgres(16.0)=> VACUUM pg_type;\nWARNING: permission denied to vacuum \"pg_toast_1247\", skipping it\nVACUUM\n\n[1] \nhttps://www.postgresql.org/message-id/20221206193606.GB3078082%40nathanxps13\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 12:07:13 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 12:07:13PM +0300, Pavel Luzanov wrote:\n> I found that granting MAINTAIN privilege does not allow the TOAST table to\n> be vacuumed.\n\nHm. My first thought is that this is the appropriate behavior. WDYT?\n\n> So, the patch for \\dpS works as expected and can be committed.\n\nThanks for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 09:13:36 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On 2022-Dec-14, Nathan Bossart wrote:\n\n> On Wed, Dec 14, 2022 at 12:07:13PM +0300, Pavel Luzanov wrote:\n> > I found that granting MAINTAIN privilege does not allow the TOAST table to\n> > be vacuumed.\n> \n> Hm. My first thought is that this is the appropriate behavior. WDYT?\n\nIt seems wrong to me. If you can vacuum a table, surely you can also\nvacuum its toast table. If you can vacuum all normal tables, you should\nbe able to vacuum all toast tables too.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Wed, 14 Dec 2022 19:05:34 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 07:05:34PM +0100, Alvaro Herrera wrote:\n> On 2022-Dec-14, Nathan Bossart wrote:\n>> On Wed, Dec 14, 2022 at 12:07:13PM +0300, Pavel Luzanov wrote:\n>> > I found that granting MAINTAIN privilege does not allow the TOAST table to\n>> > be vacuumed.\n>> \n>> Hm. My first thought is that this is the appropriate behavior. WDYT?\n> \n> It seems wrong to me. If you can vacuum a table, surely you can also\n> vacuum its toast table. If you can vacuum all normal tables, you should\n> be able to vacuum all toast tables too.\n\nOkay. Should all the privileges governed by MAINTAIN apply to a relation's\nTOAST table as well? I would think so, but I don't want to assume too much\nbefore writing the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 10:16:59 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 2022-12-14 at 10:16 -0800, Nathan Bossart wrote:\n> Okay. Should all the privileges governed by MAINTAIN apply to a\n> relation's\n> TOAST table as well?\n\nYes, I agree.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 11:05:13 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 2022-12-14 at 12:07 +0300, Pavel Luzanov wrote:\n> After a fresh install, including the patch for \\dpS [1],\n> I found that granting MAINTAIN privilege does not allow the TOAST\n> table \n> to be vacuumed.\n\nI wanted to also mention partitioning. The behavior is that MAINTAIN\nprivileges on the partitioned table does not imply MAINTAIN privileges\non the partitions. I believe that's fine and it's consistent with other\nprivileges on partitioned tables, such as SELECT and INSERT. In the\ncase of an admin maintaining users' tables, they'd be a member of\npg_maintain anyway.\n\nFurthermore, MAINTAIN privileges on the partitioned table do not grant\nthe ability to create new partitions. There's a comment in tablecmds.c\nalluding to a possible \"UNDER\" privilege:\n\n /* \n * We should have an UNDER permission flag for this, but for now, \n * demand that creator of a child table own the parent. \n */\n\nPerhaps there's something we want to do there, but it's a different use\ncase than the MAINTAIN privilege, so I don't see a reason it should be\ngrouped. Also, there's a bit of weirdness to think about in cases where\nanother user creates (and owns) a partition of your table (currently\nthis is only possible if the other user is a superuser).\n\nI am not suggesting a change here, just posting in case someone has a\ndifferent opinion.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 11:46:54 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 14 Dec 2022 at 14:47, Jeff Davis <pgsql@j-davis.com> wrote:\n\nFurthermore, MAINTAIN privileges on the partitioned table do not grant\n> the ability to create new partitions. There's a comment in tablecmds.c\n> alluding to a possible \"UNDER\" privilege:\n>\n> /*\n> * We should have an UNDER permission flag for this, but for now,\n> * demand that creator of a child table own the parent.\n> */\n>\n> Perhaps there's something we want to do there, but it's a different use\n> case than the MAINTAIN privilege, so I don't see a reason it should be\n> grouped. Also, there's a bit of weirdness to think about in cases where\n> another user creates (and owns) a partition of your table (currently\n> this is only possible if the other user is a superuser).\n>\n\nI strongly agree. MAINTAIN is for actions that leave the schema the same.\nConceptually, running MAINTAIN shouldn't affect the result of pg_dump. That\nmay not be strictly true, but adding a table is definitely not something\nthat MAINTAIN should allow.\n\nIs there a firm decision on the issue of changing the cluster index of a\ntable? Re-clustering a table on the same index is clearly something that\nshould be granted by MAINTAIN as I imagine it, but changing the cluster\nindex, strictly speaking, changes the schema and could be considered\noutside of the scope of what should be allowed. On the other hand, I can\nsee simplicity in having CLUSTER check the same permissions whether or not\nthe cluster index is being updated.\n\nOn Wed, 14 Dec 2022 at 14:47, Jeff Davis <pgsql@j-davis.com> wrote:\nFurthermore, MAINTAIN privileges on the partitioned table do not grant\nthe ability to create new partitions. There's a comment in tablecmds.c\nalluding to a possible \"UNDER\" privilege:\n\n /* \n * We should have an UNDER permission flag for this, but for now, \n * demand that creator of a child table own the parent. 
\n */\n\nPerhaps there's something we want to do there, but it's a different use\ncase than the MAINTAIN privilege, so I don't see a reason it should be\ngrouped. Also, there's a bit of weirdness to think about in cases where\nanother user creates (and owns) a partition of your table (currently\nthis is only possible if the other user is a superuser).I strongly agree. MAINTAIN is for actions that leave the schema the same. Conceptually, running MAINTAIN shouldn't affect the result of pg_dump. That may not be strictly true, but adding a table is definitely not something that MAINTAIN should allow.Is there a firm decision on the issue of changing the cluster index of a table? Re-clustering a table on the same index is clearly something that should be granted by MAINTAIN as I imagine it, but changing the cluster index, strictly speaking, changes the schema and could be considered outside of the scope of what should be allowed. On the other hand, I can see simplicity in having CLUSTER check the same permissions whether or not the cluster index is being updated.",
"msg_date": "Wed, 14 Dec 2022 15:32:45 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 2022-12-14 at 15:32 -0500, Isaac Morland wrote:\n\n> Is there a firm decision on the issue of changing the cluster index\n> of a table? Re-clustering a table on the same index is clearly\n> something that should be granted by MAINTAIN as I imagine it, but\n> changing the cluster index, strictly speaking, changes the schema and\n> could be considered outside of the scope of what should be allowed.\n> On the other hand, I can see simplicity in having CLUSTER check the\n> same permissions whether or not the cluster index is being updated.\n\nIn both the case of CLUSTER and REFRESH, I don't have a strong enough\nopinion to break them out into separate privileges. There's some\nargument that can be made; but at the same time it's hard for me to\nimagine someone really making use of the privileges separately.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 12:56:59 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On 14.12.2022 22:46, Jeff Davis wrote:\n> The behavior is that MAINTAIN\n> privileges on the partitioned table does not imply MAINTAIN privileges\n> on the partitions. I believe that's fine and it's consistent with other\n> privileges on partitioned tables, such as SELECT and INSERT.\n\nSorry, I may have missed something, but here's what I see:\n\npostgres@postgres(16.0)=# create table p (id int) partition by list (id);\npostgres@postgres(16.0)=# create table p1 partition of p for values in (1);\npostgres@postgres(16.0)=# create table p2 partition of p for values in (2);\n\npostgres@postgres(16.0)=# grant select, insert, maintain on p to alice ;\n\npostgres@postgres(16.0)=# \\c - alice\nYou are now connected to database \"postgres\" as user \"alice\".\n\nalice@postgres(16.0)=> insert into p values (1);\nINSERT 0 1\nalice@postgres(16.0)=> select * from p;\n id\n----\n 1\n(1 row)\n\nalice@postgres(16.0)=> vacuum p;\nWARNING: permission denied to vacuum \"p1\", skipping it\nWARNING: permission denied to vacuum \"p2\", skipping it\nVACUUM\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 01:02:39 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 01:02:39AM +0300, Pavel Luzanov wrote:\n> On 14.12.2022 22:46, Jeff Davis wrote:\n> > The behavior is that MAINTAIN\n> > privileges on the partitioned table does not imply MAINTAIN privileges\n> > on the partitions. I believe that's fine and it's consistent with other\n> > privileges on partitioned tables, such as SELECT and INSERT.\n> \n> Sorry, I may have missed something, but here's what I see:\n> \n> postgres@postgres(16.0)=# create table p (id int) partition by list (id);\n> postgres@postgres(16.0)=# create table p1 partition of p for values in (1);\n> postgres@postgres(16.0)=# create table p2 partition of p for values in (2);\n> \n> postgres@postgres(16.0)=# grant select, insert, maintain on p to alice ;\n> \n> postgres@postgres(16.0)=# \\c - alice\n> You are now connected to database \"postgres\" as user \"alice\".\n> \n> alice@postgres(16.0)=> insert into p values (1);\n> INSERT 0 1\n> alice@postgres(16.0)=> select * from p;\n> �id\n> ----\n> � 1\n> (1 row)\n> \n> alice@postgres(16.0)=> vacuum p;\n> WARNING:� permission denied to vacuum \"p1\", skipping it\n> WARNING:� permission denied to vacuum \"p2\", skipping it\n> VACUUM\n\nYeah, but:\n\nregression=> insert into p1 values (1);\nERROR: permission denied for table p1\nregression=> select * from p1;\nERROR: permission denied for table p1\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:11:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Wed, 14 Dec 2022 at 15:57, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2022-12-14 at 15:32 -0500, Isaac Morland wrote:\n>\n> > Is there a firm decision on the issue of changing the cluster index\n> > of a table? Re-clustering a table on the same index is clearly\n> > something that should be granted by MAINTAIN as I imagine it, but\n> > changing the cluster index, strictly speaking, changes the schema and\n> > could be considered outside of the scope of what should be allowed.\n> > On the other hand, I can see simplicity in having CLUSTER check the\n> > same permissions whether or not the cluster index is being updated.\n>\n> In both the case of CLUSTER and REFRESH, I don't have a strong enough\n> opinion to break them out into separate privileges. There's some\n> argument that can be made; but at the same time it's hard for me to\n> imagine someone really making use of the privileges separately.\n>\n\nThanks, that makes a lot of sense. I wanted to make sure the question was\nconsidered. I'm very pleased this is happening and appreciate all the work\nyou're doing. I have a few places where I want to be able to grant MAINTAIN\nso I'll be using this as soon as it's available on our production database.",
"msg_date": "Wed, 14 Dec 2022 17:40:14 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Wed, Dec 14, 2022 at 11:05:13AM -0800, Jeff Davis wrote:\n> On Wed, 2022-12-14 at 10:16 -0800, Nathan Bossart wrote:\n>> Okay. Should all the privileges governed by MAINTAIN apply to a\n>> relation's\n>> TOAST table as well?\n> \n> Yes, I agree.\n\nThis might be tricky, because AFAICT you have to scan pg_class to find a\nTOAST table's main relation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 15:29:39 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 03:29:39PM -0800, Nathan Bossart wrote:\n> On Wed, Dec 14, 2022 at 11:05:13AM -0800, Jeff Davis wrote:\n>> On Wed, 2022-12-14 at 10:16 -0800, Nathan Bossart wrote:\n>>> Okay. Should all the privileges governed by MAINTAIN apply to a\n>>> relation's\n>>> TOAST table as well?\n>> \n>> Yes, I agree.\n> \n> This might be tricky, because AFAICT you have to scan pg_class to find a\n> TOAST table's main relation.\n\nUgh, yeah. Are we talking about a case where we know the toast\ninformation but need to look back at some information of its parent to\ndo a decision? I don't recall a case where we do that. CLUSTER,\nREINDEX and VACUUM lock first the parent when working on it, and no\nAEL is taken on the parent if doing directly a VACUUM or a REINDEX on\nthe toast table, so that could lead to deadlock scenarios. Shouldn't\nMAINTAIN be sent down to the toast table as well if that's not done\nthis way?\n\nFWIW, I have briefly poked at that here:\nhttps://www.postgresql.org/message-id/YZI+aNEnnpBASxNU@paquier.xyz\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 09:12:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 2022-12-14 at 16:11 -0600, Justin Pryzby wrote:\n> Yeah, but:\n> \n> regression=> insert into p1 values (1);\n> ERROR: permission denied for table p1\n> regression=> select * from p1;\n> ERROR: permission denied for table p1\n\nRight, that's what I had in mind: a user is only granted operations on\nthe partitioned table, not the partitions.\n\nIt happens that an INSERT or SELECT on the partitioned table flows\nthrough to the partitions, whereas the VACUUM ends up skipping them, so\nI guess the analogy could be interpreted either way. Hmmm...\n\nThinking about it another way: logical partitioning is about making the\ntable logically one table, but physically many tables. That would imply\nthat the privileges should apply per-partition. But then that doesn't\nmake a lot of sense, because what maintenance can you do on the\npartitioned table (which itself has no data)?\n\nThere's definitely a problem with this patch and partitioning, because\nREINDEX affects the partitions, CLUSTER is a no-op, and VACUUM/ANALYZE\nskip them.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:18:05 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Thu, Dec 15, 2022 at 09:12:26AM +0900, Michael Paquier wrote:\n> On Wed, Dec 14, 2022 at 03:29:39PM -0800, Nathan Bossart wrote:\n>> On Wed, Dec 14, 2022 at 11:05:13AM -0800, Jeff Davis wrote:\n>>> On Wed, 2022-12-14 at 10:16 -0800, Nathan Bossart wrote:\n>>>> Okay. Should all the privileges governed by MAINTAIN apply to a\n>>>> relation's\n>>>> TOAST table as well?\n>>> \n>>> Yes, I agree.\n>> \n>> This might be tricky, because AFAICT you have to scan pg_class to find a\n>> TOAST table's main relation.\n> \n> Ugh, yeah. Are we talking about a case where we know the toast\n> information but need to look back at some information of its parent to\n> do a decision? I don't recall a case where we do that. CLUSTER,\n> REINDEX and VACUUM lock first the parent when working on it, and no\n> AEL is taken on the parent if doing directly a VACUUM or a REINDEX on\n> the toast table, so that could lead to deadlock scenarios. Shouldn't\n> MAINTAIN be sent down to the toast table as well if that's not done\n> this way?\n\nAnother option I'm looking at is skipping the privilege checks when VACUUM\nrecurses to a TOAST table. This won't allow you to VACUUM the TOAST table\ndirectly, but it would at least address the originally-reported issue [0].\n\nSince you can't ANALYZE, REFRESH, or LOCK TOAST tables, this isn't a\nproblem for those commands. CLUSTER and REINDEX seem to process relations'\nTOAST tables without extra privilege checks already. So with the attached\npatch applied, you wouldn't be able to VACUUM, CLUSTER, and REINDEX TOAST\ntables directly (unless you were given MAINTAIN or pg_maintain), but you\ncould indirectly process them by specifying the main relation.\n\nI don't know if this is good enough. It seems like ideally you should be\nable to VACUUM a TOAST table directly if you have MAINTAIN on its main\nrelation.\n\n[0] https://postgr.es/m/b572d238-0de2-9cad-5f34-4741dc627834%40postgrespro.ru\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 14 Dec 2022 16:27:05 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On 15.12.2022 03:18, Jeff Davis wrote:\n> Right, that's what I had in mind: a user is only granted operations on\n> the partitioned table, not the partitions.\n\nIt's all clear now.\n\n> There's definitely a problem with this patch and partitioning, because\n> REINDEX affects the partitions, CLUSTER is a no-op, and VACUUM/ANALYZE\n> skip them.\n\nI think the approach that Nathan implemented [1] for TOAST tables\nin the latest version can be used for partitioned tables as well.\nSkipping the privilege check for partitions while working with\na partitioned table. In that case we would get exactly the same behavior\nas for INSERT, SELECT, etc privileges - the MAINTAIN privilege would \nwork for\nthe whole partitioned table, but not for individual partitions.\n\n[1] \nhttps://www.postgresql.org/message-id/20221215002705.GA889413%40nathanxps13\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:31:00 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On 15.12.2022 03:27, Nathan Bossart wrote:\n> Another option I'm looking at is skipping the privilege checks when VACUUM\n> recurses to a TOAST table. This won't allow you to VACUUM the TOAST table\n> directly, but it would at least address the originally-reported issue\n\nThis approach can be implemented for partitioned tables too. Skipping\nthe privilege checks when VACUUM/ANALYZE recurses to partitions.\n\n> I don't know if this is good enough.\n\nAt least it's better than before.\n\n> It seems like ideally you should be\n> able to VACUUM a TOAST table directly if you have MAINTAIN on its main\n> relation.\n\nI agree, that would be ideally.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:42:00 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Thu, 2022-12-15 at 12:31 +0300, Pavel Luzanov wrote:\n> I think the approach that Nathan implemented [1] for TOAST tables\n> in the latest version can be used for partitioned tables as well.\n> Skipping the privilege check for partitions while working with\n> a partitioned table. In that case we would get exactly the same\n> behavior\n> as for INSERT, SELECT, etc privileges - the MAINTAIN privilege would \n> work for\n> the whole partitioned table, but not for individual partitions.\n\nThere is some weirdness in 15, too:\n\n create user foo;\n create user su superuser;\n grant all privileges on schema public to foo;\n\n \\c - foo\n create table p(i int) partition by range (i);\n create index p_idx on p (i);\n create table p0 partition of p for values from (0) to (10);\n\n \\c - su\n create table p1 partition of p for values from (10) to (20);\n\n \\c - foo\n\n -- possibly weird because the 15 inserts into p1 (owned by su)\n insert into p values (5), (15);\n\n -- all these are as expected:\n select * from p; -- returns 5 & 15\n insert into p1 values(16); -- permission denied\n select * from p1; -- permission denied\n \n -- the following commands seem inconsistent to me:\n vacuum p; -- skips p1 with warning\n analyze p; -- skips p1 with warning\n cluster p using p_idx; -- silently skips p1\n reindex table p; -- reindexes p0 and p1 (owned by su)\n\n -- RLS is also bypassed\n \\c - su\n grant select, insert on p1 to foo;\n alter table p1 enable row level security;\n create policy odd on p1 using (i % 2 = 1) with check (i % 2 = 1);\n \\c - foo\n insert into p1 values (16); -- RLS error\n insert into p values (16); -- succeeds\n select * from p1; -- returns only 15\n select * from p; -- returns 5, 15, 16\n \nThe proposal to skip privilege checks for partitions would be\nconsistent with INSERT, SELECT, REINDEX that flow through to the\nunderlying partitions regardless of permissions/ownership (and even\nRLS). It would be very minor behavior change on 15 for this weird case\nof superuser-owned partitions, but I doubt anyone would be relying on\nthat.\n\n+1.\n\nI do have some lingering doubts about whether we should even allow\ninconsistent ownership/permissions. But I don't think we need to settle\nthat question now.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:10:43 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Wed, 2022-12-14 at 16:27 -0800, Nathan Bossart wrote:\n> I don't know if this is good enough. It seems like ideally you\n> should be\n> able to VACUUM a TOAST table directly if you have MAINTAIN on its\n> main\n> relation.\n\nRight now, targeting the toast table directly requires the USAGE\nprivilege on the toast schema, and you have to look up the name first,\nright? As it is, that's not a great UI.\n\nHow about if we add a VACUUM option like TOAST_ONLY (or combine it with\nthe PROCESS_TOAST option)? Then, you're always looking at the parent\ntable first so there's no deadlock, do the permission checks on the\nparent, and then expand to the toast table with no check. This can be a\nfollow-up patch; for now, the idea of skipping the privilege checks\nwhen expanding looks like an improvement.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:42:15 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Thu, Dec 15, 2022 at 10:42:15AM -0800, Jeff Davis wrote:\n> Right now, targeting the toast table directly requires the USAGE\n> privilege on the toast schema, and you have to look up the name first,\n> right? As it is, that's not a great UI.\n> \n> How about if we add a VACUUM option like TOAST_ONLY (or combine it with\n> the PROCESS_TOAST option)? Then, you're always looking at the parent\n> table first so there's no deadlock, do the permission checks on the\n> parent, and then expand to the toast table with no check. This can be a\n> follow-up patch; for now, the idea of skipping the privilege checks\n> when expanding looks like an improvement.\n\nI originally suggested an option to allow specifying whether to process the\nmain relation, but we ended up only adding PROCESS_TOAST [0]. FWIW I still\nthink that such an option would be useful for the reasons you describe.\n\n[0] https://postgr.es/m/BA8951E9-1524-48C5-94AF-73B1F0D7857F%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 11:12:46 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:10:43AM -0800, Jeff Davis wrote:\n> On Thu, 2022-12-15 at 12:31 +0300, Pavel Luzanov wrote:\n> > I think the approach that Nathan implemented [1] for TOAST tables\n> > in the latest version can be used for partitioned tables as well.\n> > Skipping the privilege check for partitions while working with\n> > a partitioned table. In that case we would get exactly the same\n> > behavior\n> > as for INSERT, SELECT, etc privileges - the MAINTAIN privilege would \n> > work for\n> > the whole partitioned table, but not for individual partitions.\n> \n> There is some weirdness in 15, too:\n\nI gather you mean postgresql v15.1 and master ?\n\n> -- the following commands seem inconsistent to me:\n> vacuum p; -- skips p1 with warning\n> analyze p; -- skips p1 with warning\n> cluster p using p_idx; -- silently skips p1\n> reindex table p; -- reindexes p0 and p1 (owned by su)\n\nClustering on a partitioned table is new in v15, and this behavior is\nfrom 3f19e176ae0 and cfdd03f45e6, which added\nget_tables_to_cluster_partitioned(), borrowing from expand_vacuum_rel()\nand get_tables_to_cluster().\n\nvacuum initially calls vacuum_is_permitted_for_relation() only for the\npartitioned table, and *later* locks the partition and then checks its\npermissions, which is when the message is output.\n\nSince v15, cluster calls get_tables_to_cluster_partitioned(), which\nsilently discards partitions failing ACL.\n\nWe could change it to emit a message, which would seem to behave like\nvacuum, except that the check is happening earlier, and (unlike vacuum)\npartitions skipped later during CLUOPT_RECHECK wouldn't have any message\noutput.\n\nOr we could change cluster_rel() to output a message when skipping. But\nthese patches hardly touched that function at all. I suppose we could\nchange to emit a message during RECHECK (maybe only in master branch).\nIf need be, that could be controlled by a new CLUOPT_*.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 15 Dec 2022 13:48:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 01:48:13PM -0600, Justin Pryzby wrote:\n> vacuum initially calls vacuum_is_permitted_for_relation() only for the\n> partitioned table, and *later* locks the partition and then checks its\n> permissions, which is when the message is output.\n> \n> Since v15, cluster calls get_tables_to_cluster_partitioned(), which\n> silently discards partitions failing ACL.\n> \n> We could change it to emit a message, which would seem to behave like\n> vacuum, except that the check is happening earlier, and (unlike vacuum)\n> partitions skipped later during CLUOPT_RECHECK wouldn't have any message\n> output.\n> \n> Or we could change cluster_rel() to output a message when skipping. But\n> these patches hardly touched that function at all. I suppose we could\n> change to emit a message during RECHECK (maybe only in master branch).\n> If need be, that could be controlled by a new CLUOPT_*.\n\nYeah, VACUUM/ANALYZE works differently. For one, you can specify multiple\nrelations in the command. Also, VACUUM/ANALYZE only takes an\nAccessShareLock when first assessing permissions (which actually skips\npartitions). CLUSTER immediately takes an AccessExclusiveLock, so the\npermissions are checked up front. This is done \"to avoid lock-upgrade\nhazard in the single-transaction case,\" which IIUC is what allows CLUSTER\non a single table to run within a transaction block (unlike VACUUM). I\ndon't know if running CLUSTER in a transaction block is important. IMO we\nshould consider making CLUSTER look a lot more like VACUUM/ANALYZE in this\nregard. The attached patch adds WARNING messages, but we're still far from\nconsistency with VACUUM/ANALYZE.\n\nSide note: I see that CLUSTER on a partitioned table is disallowed in a\ntransaction block, which should probably be added to my documentation patch\n[0].\n\n[0] https://commitfest.postgresql.org/41/4054/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Dec 2022 17:19:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, 2022-12-15 at 17:19 -0800, Nathan Bossart wrote:\n> The attached patch adds WARNING messages, but we're still far from\n> consistency with VACUUM/ANALYZE.\n\nBut why make CLUSTER more like VACUUM? Shouldn't we make\nVACUUM/CLUSTER/ANALYZE more like INSERT/SELECT/REINDEX?\n\nAs far as I can tell, the only way you can get in this situation in 15\nis by having a superuser create partitions in a non-superuser's table.\nI don't think that was an intended use case, so it's probably just\nhistorical reasons that VACUUM/CLUSTER/ANALYZE ended up that way, not\nprecedent we should necessarily follow.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 20:35:53 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 08:35:53PM -0800, Jeff Davis wrote:\n> But why make CLUSTER more like VACUUM? Shouldn't we make\n> VACUUM/CLUSTER/ANALYZE more like INSERT/SELECT/REINDEX?\n\nHm. Since VACUUM may happen across multiple transactions, it is careful to\nre-check the privileges for each relation. For example, vacuum_rel()\ncontains this comment:\n\n\t/*\n\t * Check if relation needs to be skipped based on privileges. This check\n\t * happens also when building the relation list to vacuum for a manual\n\t * operation, and needs to be done additionally here as VACUUM could\n\t * happen across multiple transactions where privileges could have changed\n\t * in-between. Make sure to only generate logs for VACUUM in this case.\n\t */\n\nI do wonder whether this is something we really need to be concerned about.\nIn the worst case, you might be able to VACUUM a table with a concurrent\nowner change.\n\nThe logic for gathering all relations to process (i.e.,\nget_all_vacuum_rels() and get_tables_to_cluster()) would also need to be\nadjusted to handle partitioned tables. Right now, we gather all the\npartitions and checks privileges on each. I think this would be easy to\nchange.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 21:13:54 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 09:13:54PM -0800, Nathan Bossart wrote:\n> I do wonder whether this is something we really need to be concerned about.\n> In the worst case, you might be able to VACUUM a table with a concurrent\n> owner change.\n\nI suppose we might want to avoid running the index functions as the new\nowner.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 21:20:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Thu, Dec 15, 2022 at 10:10:43AM -0800, Jeff Davis wrote:\n> The proposal to skip privilege checks for partitions would be\n> consistent with INSERT, SELECT, REINDEX that flow through to the\n> underlying partitions regardless of permissions/ownership (and even\n> RLS). It would be very minor behavior change on 15 for this weird case\n> of superuser-owned partitions, but I doubt anyone would be relying on\n> that.\n\nI've attached a work-in-progress patch that aims to accomplish this.\nInstead of skipping the privilege checks, I added logic to trawl through\npg_inherits and pg_class to check whether the user has privileges for the\npartitioned table or for the main relation of a TOAST table. This means\nthat MAINTAIN on a partitioned table is enough to execute maintenance\ncommands on all the partitions, and MAINTAIN on a main relation is enough\nto execute maintenance commands on its TOAST table. Also, the maintenance\ncommands that flow through to the partitions or the TOAST table should no\nlonger error due to permissions when the user only has MAINTAIN on the\npartitioned table or main relation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Dec 2022 22:04:08 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Fri, Dec 16, 2022 at 10:04 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Thu, Dec 15, 2022 at 10:10:43AM -0800, Jeff Davis wrote:\n> > The proposal to skip privilege checks for partitions would be\n> > consistent with INSERT, SELECT, REINDEX that flow through to the\n> > underlying partitions regardless of permissions/ownership (and even\n> > RLS). It would be very minor behavior change on 15 for this weird case\n> > of superuser-owned partitions, but I doubt anyone would be relying on\n> > that.\n>\n> I've attached a work-in-progress patch that aims to accomplish this.\n> Instead of skipping the privilege checks, I added logic to trawl through\n> pg_inherits and pg_class to check whether the user has privileges for the\n> partitioned table or for the main relation of a TOAST table. This means\n> that MAINTAIN on a partitioned table is enough to execute maintenance\n> commands on all the partitions, and MAINTAIN on a main relation is enough\n> to execute maintenance commands on its TOAST table. Also, the maintenance\n> commands that flow through to the partitions or the TOAST table should no\n> longer error due to permissions when the user only has MAINTAIN on the\n> paritioned table or main relation.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\nHi,\n\n+cluster_is_permitted_for_relation(Oid relid, Oid userid)\n+{\n+ return pg_class_aclcheck(relid, userid, ACL_MAINTAIN) ==\nACLCHECK_OK ||\n+ has_parent_privs(relid, userid, ACL_MAINTAIN);\n\nSince the func only contains one statement, it seems this can be defined as\na macro instead.\n\n+ List *ancestors = get_partition_ancestors(relid);\n+ Oid root = InvalidOid;\n\nnit: it would be better if the variable `root` can be aligned with variable\n`ancestors`.\n\nCheers",
"msg_date": "Sat, 17 Dec 2022 04:39:29 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "Here is a new version of the patch. Besides some cleanup, I added an index\non reltoastrelid for the toast-to-main-relation lookup. Before I bother\nadjusting the tests and documentation, I'm curious to hear thoughts on\nwhether this seems like a viable approach.\n\nOn Sat, Dec 17, 2022 at 04:39:29AM -0800, Ted Yu wrote:\n> +cluster_is_permitted_for_relation(Oid relid, Oid userid)\n> +{\n> + return pg_class_aclcheck(relid, userid, ACL_MAINTAIN) ==\n> ACLCHECK_OK ||\n> + has_parent_privs(relid, userid, ACL_MAINTAIN);\n> \n> Since the func only contains one statement, it seems this can be defined as\n> a macro instead.\n\nIn the new version, there is a bit more to this function, so I didn't\nconvert it to a macro.\n\n> + List *ancestors = get_partition_ancestors(relid);\n> + Oid root = InvalidOid;\n> \n> nit: it would be better if the variable `root` can be aligned with variable\n> `ancestors`.\n\nHm. It looked alright on my machine. In any case, I'll be sure to run\npgindent at some point. This patch is still in early stages.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 18 Dec 2022 15:30:18 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 04:39:29AM -0800, Ted Yu wrote:\n> + List *ancestors = get_partition_ancestors(relid);\n> + Oid root = InvalidOid;\n> \n> nit: it would be better if the variable `root` can be aligned with variable\n> `ancestors`.\n\nIt is aligned, but only after configuring one's editor/pager/mail client\nto display tabs in the manner assumed by postgres' coding style.\n\n\n",
"msg_date": "Sun, 18 Dec 2022 17:38:03 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
    "msg_contents": "On Sun, Dec 18, 2022 at 3:30 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> Here is a new version of the patch. Besides some cleanup, I added an index\n> on reltoastrelid for the toast-to-main-relation lookup. Before I bother\n> adjusting the tests and documentation, I'm curious to hear thoughts on\n> whether this seems like a viable approach.\n>\n> On Sat, Dec 17, 2022 at 04:39:29AM -0800, Ted Yu wrote:\n> > +cluster_is_permitted_for_relation(Oid relid, Oid userid)\n> > +{\n> > + return pg_class_aclcheck(relid, userid, ACL_MAINTAIN) ==\n> > ACLCHECK_OK ||\n> > + has_parent_privs(relid, userid, ACL_MAINTAIN);\n> >\n> > Since the func only contains one statement, it seems this can be defined\n> as\n> > a macro instead.\n>\n> In the new version, there is a bit more to this function, so I didn't\n> convert it to a macro.\n>\n> > + List *ancestors = get_partition_ancestors(relid);\n> > + Oid root = InvalidOid;\n> >\n> > nit: it would be better if the variable `root` can be aligned with\n> variable\n> > `ancestors`.\n>\n> Hm. It looked alright on my machine. In any case, I'll be sure to run\n> pgindent at some point. This patch is still in early stages.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\nHi,\n\n+ * Note: Because this function assumes that the realtion whose OID is\npassed as\n\nTypo: realtion -> relation\n\nCheers",
"msg_date": "Sun, 18 Dec 2022 16:25:15 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 04:25:15PM -0800, Ted Yu wrote:\n> + * Note: Because this function assumes that the realtion whose OID is\n> passed as\n> \n> Typo: realtion -> relation\n\nThanks, fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 18 Dec 2022 16:31:35 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sun, 2022-12-18 at 17:38 -0600, Justin Pryzby wrote:\n> On Sat, Dec 17, 2022 at 04:39:29AM -0800, Ted Yu wrote:\n> > + List *ancestors = get_partition_ancestors(relid);\n> > + Oid root = InvalidOid;\n> > \n> > nit: it would be better if the variable `root` can be aligned with\n> > variable\n> > `ancestors`.\n> \n> It is aligned, but only after configuring one's editor/pager/mail\n> client\n> to display tabs in the manner assumed by postgres' coding style.\n\nIf you use emacs or vim, there are editor config samples in\nsrc/tools/editors/\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 14:43:05 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 03:30:18PM -0800, Nathan Bossart wrote:\n> Here is a new version of the patch. Besides some cleanup, I added an index\n> on reltoastrelid for the toast-to-main-relation lookup. Before I bother\n> adjusting the tests and documentation, I'm curious to hear thoughts on\n> whether this seems like a viable approach.\n\nI'd like to get this fixed, but I have yet to hear thoughts on the\nsuggested approach. I'll proceed with adjusting the tests and\ndocumentation shortly unless someone objects.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 15:45:49 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 03:45:49PM -0800, Nathan Bossart wrote:\n> I'd like to get this fixed, but I have yet to hear thoughts on the\n> suggested approach. I'll proceed with adjusting the tests and\n> documentation shortly unless someone objects.\n\nAs promised, here is a new version of the patch with adjusted tests and\ndocumentation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 9 Jan 2023 14:51:57 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, 2023-01-09 at 14:51 -0800, Nathan Bossart wrote:\n> On Tue, Jan 03, 2023 at 03:45:49PM -0800, Nathan Bossart wrote:\n> > I'd like to get this fixed, but I have yet to hear thoughts on the\n> > suggested approach. I'll proceed with adjusting the tests and\n> > documentation shortly unless someone objects.\n> \n> As promised, here is a new version of the patch with adjusted tests\n> and\n> documentation.\n\nThe current patch doesn't handle the case properly where you have\npartitions that have toast tables. An easy fix by recursing, but I\nthink we might want to do things differently anyway.\n\nI'm hesitant to add an index to pg_class just for the privilege checks\non toast tables, and I don't think we need to. Instead, we can just\nskip the privilege check on a toast table if it's not referenced\ndirectly, because we already checked the privileges on the parent, and\nwe still hold the session lock so nothing strange should have happened.\n\nI'll look into that, but so far it looks like it should come out\ncleanly for toast tables. The way you're checking privileges on the\npartitioned tables is fine.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 11:56:03 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 11:56:03AM -0800, Jeff Davis wrote:\n> I'm hesitant to add an index to pg_class just for the privilege checks\n> on toast tables, and I don't think we need to.\n\nI bet this index will be useful for more than just these privilege checks\n(e.g., autovacuum currently creates a hash table for the\ntoast-to-main-relation mapping), but I do understand the hesitation.\n\n> Instead, we can just\n> skip the privilege check on a toast table if it's not referenced\n> directly, because we already checked the privileges on the parent, and\n> we still hold the session lock so nothing strange should have happened.\n\nThat would fix the problem in the original complaint, but it wouldn't allow\nfor vacuuming toast tables directly if you only have MAINTAIN privileges on\nthe main relation. If you can vacuum the toast table indirectly via the\nmain relation, shouldn't it be possible to vacuum it directly?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Jan 2023 12:33:34 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Fri, 2023-01-13 at 12:33 -0800, Nathan Bossart wrote:\n> That would fix the problem in the original complaint, but it wouldn't\n> allow\n> for vacuuming toast tables directly if you only have MAINTAIN\n> privileges on\n> the main relation. If you can vacuum the toast table indirectly via\n> the\n> main relation, shouldn't it be possible to vacuum it directly?\n\nPerhaps, but that's barely supported today: you have to awkwardly find\nthe internal toast table name yourself, and you need the admin to grant\nyou USAGE on the pg_toast schema. I don't think we're obligated to also\nsupport this hackery for non-owners with a new MAINTAIN privilege.\n\nIf we care about that use case, let's do it right and have forms of\nVACUUM/CLUSTER/REINDEX that check permissions on the main table, skip\nthe work on the main table, and descend directly to the toast tables.\nThat doesn't seem hard, but it's a separate patch.\n\nRight now, we should simply fix the problem.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 13:30:28 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 01:30:28PM -0800, Jeff Davis wrote:\n> If we care about that use case, let's do it right and have forms of\n> VACUUM/CLUSTER/REINDEX that check permissions on the main table, skip\n> the work on the main table, and descend directly to the toast tables.\n> That doesn't seem hard, but it's a separate patch.\n\nYou may be interested in https://commitfest.postgresql.org/41/4088/.\n\n> Right now, we should simply fix the problem.\n\nOkay. Here is a new patch set. I've split the partition work out to a\nseparate patch, and I've removed the main relation lookups for TOAST tables\nin favor of adding a skip_privs flag to vacuum_rel(). The latter patch\nprobably needs some additional commentary and tests, which I'll go ahead\nand add if we want to proceed with this approach. I'm assuming the session\nlock should be sufficient for avoiding the case where the TOAST table's OID\nis reused by the time we get to it, but I'm not sure if it's sufficient to\nprevent vacuuming if the privileges on the main relation have changed\nacross transactions. Even if it's not, I'm not sure that case is worth\nworrying about too much.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 13 Jan 2023 14:56:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 02:56:26PM -0800, Nathan Bossart wrote:\n> Okay. Here is a new patch set.\n\nAnd here is a rebased patch set (c44f633 changed the same LOCK TABLE docs).\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 13 Jan 2023 15:13:39 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "I've been reviewing ff9618e lately, and I'm wondering whether it has the\nsame problem that 19de0ab solved. Specifically, ff9618e introduces\nhas_partition_ancestor_privs(), which is used to check whether a user has\nMAINTAIN on any partition ancestors. This involves syscache lookups, and\npresently this function does not take any relation locks. I did spend some\ntime trying to induce cache lookup errors, but I didn't have any luck.\nHowever, unless this can be made safe without too much trouble, I think I'm\ninclined to partially revert ff9618e, leaving the TOAST-related parts\nintact.\n\nBy reverting the partition-related parts of ff9618e, users would need to\nhave MAINTAIN on the partition itself to perform the maintenance command.\nMAINTAIN on the partitioned table would no longer be sufficient. This is\nmore like how things work on supported versions today. Privileges are\nchecked for each partition, so a command that flows down to all partitions\nmight refuse to process a partition (e.g., if the current user doesn't own\nthe partition).\n\nIn the future, perhaps we could reevaluate adding these partition ancestor\nprivilege checks, but I'd rather leave it out for now instead of\nintroducing behavior in v16 that is potentially buggy and difficult to\nremove post-GA.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 14:12:46 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 02:12:46PM -0700, Nathan Bossart wrote:\n> I've been reviewing ff9618e lately, and I'm wondering whether it has the\n> same problem that 19de0ab solved. Specifically, ff9618e introduces\n> has_partition_ancestor_privs(), which is used to check whether a user has\n> MAINTAIN on any partition ancestors. This involves syscache lookups, and\n> presently this function does not take any relation locks. I did spend some\n> time trying to induce cache lookup errors, but I didn't have any luck.\n> However, unless this can be made safe without too much trouble, I think I'm\n> inclined to partially revert ff9618e, leaving the TOAST-related parts\n> intact.\n\nHmm. get_rel_relispartition() and pg_class_aclcheck() are rather\nreliable when it comes to that as far as it goes. Still\nget_partition_ancestors() is your problem, isn't it? Indeed, it seems\nlike a bad idea to do partition tree lookups without at least an\nAccessShareLock as you may finish with a list that makes\npg_class_aclcheck() complain on a missing relation. The race is\npretty narrow, but a stop point in get_partition_ancestors() with some\npartition tree manipulation should be enough to make operations like a\nschema-wide REINDEX less transparent with missing relations at least.\n\nhas_partition_ancestor_privs() is used in\nRangeVarCallbackMaintainsTable(), on top of that. As written, it\nencourages incorrect use patterns.\n\n> By reverting the partition-related parts of ff9618e, users would need to\n> have MAINTAIN on the partition itself to perform the maintenance command.\n> MAINTAIN on the partitioned table would no longer be sufficient. This is\n> more like how things work on supported versions today. 
Privileges are\n> checked for each partition, so a command that flows down to all partitions\n> might refuse to process a partition (e.g., if the current user doesn't own\n> the partition).\n> \n> In the future, perhaps we could reevaluate adding these partition ancestor\n> privilege checks, but I'd rather leave it out for now instead of\n> introducing behavior in v16 that is potentially buggy and difficult to\n> remove post-GA.\n\nWhile on it, this buzzes me:\n static bool\n-vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)\n+vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)\n\nVacuumParams has been originally introduced to avoid extending\nvacuum_rel() with a bunch of arguments, no?\n\nSo, yes, agreed about the removal of has_partition_ancestor_privs().\nI am adding an open item assigned to you and Jeff.\n--\nMichael",
"msg_date": "Wed, 14 Jun 2023 08:16:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 08:16:15AM +0900, Michael Paquier wrote:\n> While on it, this buzzes me:\n> static bool\n> -vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params)\n> +vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)\n> \n> VacuumParams has been originally introduced to avoid extending\n> vacuum_rel() with a bunch of arguments, no?\n\nYeah, that could probably be moved into VacuumParams.\n\n> So, yes, agreed about the removal of has_partition_ancestor_privs().\n> I am adding an open item assigned to you and Jeff.\n\nThanks. I suspect there's more discussion incoming, but I'm hoping to\nclose this item one way or another by 16beta2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Jun 2023 16:54:42 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 04:54:42PM -0700, Nathan Bossart wrote:\n> On Wed, Jun 14, 2023 at 08:16:15AM +0900, Michael Paquier wrote:\n>> So, yes, agreed about the removal of has_partition_ancestor_privs().\n>> I am adding an open item assigned to you and Jeff.\n> \n> Thanks. I suspect there's more discussion incoming, but I'm hoping to\n> close this item one way or another by 16beta2.\n\nConcretely, I am proposing something like the attached patches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 14 Jun 2023 11:17:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, 2023-06-13 at 14:12 -0700, Nathan Bossart wrote:\n> I've been reviewing ff9618e lately, and I'm wondering whether it has\n> the\n> same problem that 19de0ab solved. Specifically, ff9618e introduces\n> has_partition_ancestor_privs(), which is used to check whether a user\n> has\n> MAINTAIN on any partition ancestors. This involves syscache lookups,\n> and\n> presently this function does not take any relation locks. I did\n> spend some\n> time trying to induce cache lookup errors, but I didn't have any\n> luck.\n> However, unless this can be made safe without too much trouble, I\n> think I'm\n> inclined to partially revert ff9618e, leaving the TOAST-related parts\n> intact.\n\nAgreed. Having it work on partition hierarchies is a nice-to-have, but\nnot central to the usability of the feature. If it's causing problems,\nbest to take that out and reconsider in 17 if worthwhile.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 14 Jun 2023 11:30:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 11:17:11AM -0700, Nathan Bossart wrote:\n> On Tue, Jun 13, 2023 at 04:54:42PM -0700, Nathan Bossart wrote:\n> > On Wed, Jun 14, 2023 at 08:16:15AM +0900, Michael Paquier wrote:\n> >> So, yes, agreed about the removal of has_partition_ancestor_privs().\n> >> I am adding an open item assigned to you and Jeff.\n> > \n> > Thanks. I suspect there's more discussion incoming, but I'm hoping to\n> > close this item one way or another by 16beta2.\n> \n> Concretely, I am proposing something like the attached patches.\n\nThe result after 0001 is applied is that a couple of\nobject_ownercheck() calls that existed before ff9618e are removed from\nsome ACL checks in the REINDEX, CLUSTER and VACUUM paths. Is that OK\nfor shared relations and shouldn't cluster_is_permitted_for_relation()\ninclude that? vacuum_is_permitted_for_relation() is consistent on\nthis side.\n\nHere are the paths that now differ:\ncluster_rel\nget_tables_to_cluster\nget_tables_to_cluster_partitioned\nRangeVarCallbackForReindexIndex\nReindexMultipleTables\n\n0002 looks OK to retain the skip check for toast relations in the\nVACUUM case.\n--\nMichael",
"msg_date": "Thu, 15 Jun 2023 09:46:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 09:46:33AM +0900, Michael Paquier wrote:\n> The result after 0001 is applied is that a couple of\n> object_ownercheck() calls that existed before ff9618e are removed from\n> some ACL checks in the REINDEX, CLUSTER and VACUUM paths. Is that OK\n> for shared relations and shouldn't cluster_is_permitted_for_relation()\n> include that? vacuum_is_permitted_for_relation() is consistent on\n> this side.\n\nThese object_ownercheck() calls were removed because they were redundant,\nas owners have all privileges by default. Privileges can be revoked from\nthe owner, so an extra ownership check would effectively bypass the\nrelation's ACL in that case. I looked around and didn't see any other\nexamples of a combined ownership and ACL check like we were doing for\nMAINTAIN. The only thing that gives me pause is that the docs call out\nownership as sufficient for some maintenance commands. With these patches,\nthat's only true as long as no one revokes privileges from the owner. IMO\nwe should update the docs and leave out the ownership checks since MAINTAIN\nis now a grantable privilege like any other. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Jun 2023 21:10:44 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 14, 2023 at 09:10:44PM -0700, Nathan Bossart wrote:\n> IMO\n> we should update the docs and leave out the ownership checks since MAINTAIN\n> is now a grantable privilege like any other. WDYT?\n\nHere's an attempt at adjusting the documentation as I proposed yesterday.\nI think this is a good opportunity to simplify the privilege-related\nsections for these maintenance commands.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Jun 2023 16:57:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 04:57:00PM -0700, Nathan Bossart wrote:\n> Here's an attempt at adjusting the documentation as I proposed yesterday.\n> I think this is a good opportunity to simplify the privilege-related\n> sections for these maintenance commands.\n\nI noticed that it was possible to make the documentation changes in 0001\neasier to read. Apologies for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Jun 2023 22:20:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 15, 2023 at 10:20:25PM -0700, Nathan Bossart wrote:\n> I noticed that it was possible to make the documentation changes in 0001\n> easier to read. Apologies for the noise.\n\nYet more noise...\n\nIn v4 of the patch set, I moved the skip_privs flag refactoring to 0001. I\nintend to commit this tomorrow unless there is additional feedback.\n\nI split out the documentation simplifications to 0003 since it seemed\nindependent. Also, I adjusted some ACL-related error messages in 0002 to\nappropriately use \"permission denied\" messages instead of \"must be owner\"\nmessages. I'm hoping to commit 0002 and 0003 by the end of the week so\nthat these fixes are available in 16beta2.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Jun 2023 14:55:34 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, Jun 19, 2023 at 02:55:34PM -0700, Nathan Bossart wrote:\n> In v4 of the patch set, I moved the skip_privs flag refactoring to 0001. I\n> intend to commit this tomorrow unless there is additional feedback.\n\nFine by me. 0001 looks OK seen from here.\n\n> These object_ownercheck() calls were removed because they were redundant,\n> as owners have all privileges by default. Privileges can be revoked from\n> the owner, so an extra ownership check would effectively bypass the\n> relation's ACL in that case. I looked around and didn't see any other\n> examples of a combined ownership and ACL check like we were doing for\n> MAINTAIN. The only thing that gives me pause is that the docs call out\n> ownership as sufficient for some maintenance commands. With these patches,\n> that's only true as long as no one revokes privileges from the owner. IMO\n> we should update the docs and leave out the ownership checks since MAINTAIN\n> is now a grantable privilege like any other. WDYT?\n\nTBH, I have a mixed feeling about this line of reasoning because\nMAINTAIN is much broader and less specific than TRUNCATE, for\ninstance, being spawned across so much more operations. As you say,\nowners of a relation have the MAINTAIN right by default, but they\nwould not be able to run any maintenance operations if somebody has\nrevoked their MAINTAIN right to do so, even if they are the owners of\nthe so-said relation. Perhaps that's OK in the long run, still I have\nmixed feeling about whether that's the best decision we can take here, \nespecially because MAINTAIN impacts VACUUM, ANALYZE, CLUSTER, REFRESH\nMATVIEW, REINDEX and LOCK. Some users may find that surprising as they\nused to have more control over these operations as owners of the\nrelations worked on.\n--\nMichael",
"msg_date": "Tue, 20 Jun 2023 14:26:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, 2023-06-20 at 14:26 +0900, Michael Paquier wrote:\n> TBH, I have a mixed feeling about this line of reasoning because\n> MAINTAIN is much broader and less specific than TRUNCATE, for\n> instance, being spawned across so much more operations.\n\n...\n\n> Some users may find that surprising as they\n> used to have more control over these operations as owners of the\n> relations worked on.\n\nIt seems like the user shouldn't be surprised if they can carry out the\naction; nor should they be surprised if they can't carry out the\naction. Having privileges revoked on a table from the table's owner is\nan edge case in behavior and both make sense to me.\n\nIn the absense of a use case, I'd be inclined towards just being\nconsistent with the other privileges.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 09:16:59 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, 2023-06-19 at 14:55 -0700, Nathan Bossart wrote:\n> In v4 of the patch set, I moved the skip_privs flag refactoring to\n> 0001. I\n> intend to commit this tomorrow unless there is additional feedback.\n\nI think v4-0001 broke the handling of toast tables? It looks like you\nremoved the check for !skip_privs but need to add it to the flags in\nvacuum_is_permitted_for_relation(). \n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:04:37 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:04:37AM -0700, Jeff Davis wrote:\n> I think v4-0001 broke the handling of toast tables? It looks like you\n> removed the check for !skip_privs but need to add it to the flags in\n> vacuum_is_permitted_for_relation(). \n\nGood catch. I'm not sure why some of the calls to\nvacuum_is_permitted_for_relation() are masking the options. AFAICT we can\nsimply remove the masks. I've done so in the attached patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Jun 2023 10:40:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 09:16:59AM -0700, Jeff Davis wrote:\n> On Tue, 2023-06-20 at 14:26 +0900, Michael Paquier wrote:\n>> TBH, I have a mixed feeling about this line of reasoning because\n>> MAINTAIN is much broader and less specific than TRUNCATE, for\n>> instance, being spawned across so much more operations.\n> \n> ...\n> \n>> Some users may find that surprising as they\n>> used to have more control over these operations as owners of the\n>> relations worked on.\n> \n> It seems like the user shouldn't be surprised if they can carry out the\n> action; nor should they be surprised if they can't carry out the\n> action. Having privileges revoked on a table from the table's owner is\n> an edge case in behavior and both make sense to me.\n> \n> In the absense of a use case, I'd be inclined towards just being\n> consistent with the other privileges.\n\nAgreed, I think we should make MAINTAIN consistent with the other grantable\nprivileges.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:42:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:40:32AM -0700, Nathan Bossart wrote:\n> On Tue, Jun 20, 2023 at 10:04:37AM -0700, Jeff Davis wrote:\n>> I think v4-0001 broke the handling of toast tables? It looks like you\n>> removed the check for !skip_privs but need to add it to the flags in\n>> vacuum_is_permitted_for_relation(). \n> \n> Good catch. I'm not sure why some of the calls to\n> vacuum_is_permitted_for_relation() are masking the options. AFAICT we can\n> simply remove the masks. I've done so in the attached patch.\n\nOh, I think I see why. This appears to be used to control which WARNING\nmessage is emitted. If you lose permissions before you get to analyzing in\na VACUUM (ANALYZE) command, you'll get a \"permission denied to vacuum\"\nmessage instead of a \"permission denied to analyze\" message. IMO a better\nway to do that would be to control only those two bits (VACOPT_VACUUM and\nVACOPT_ANALYZE) in calls to vacuum_is_permitted_for_relation(), and to\nleave the rest untouched.\n\nPatch incoming...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:49:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:49:27AM -0700, Nathan Bossart wrote:\n> Patch incoming...\n\nAttached.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Jun 2023 10:56:34 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Mon, 2023-06-19 at 14:55 -0700, Nathan Bossart wrote:\n> I'm hoping to commit 0002 and 0003 by the end of the week so\n> that these fixes are available in 16beta2.\n\nA few observations for the case where a user does have the MAINTAIN\nprivilege on a partitioned table but not the partitions:\n\n * they can LOCK TABLE on the partitioned table\n * ANALYZE works on the inheritance tree but not the individual\npartitions\n * CLUSTER and VACUUM are useless because they skip all of the\npartitions. That's consistent with the purpose of this thread -- to\navoid the locking problems trying to support those operations on\npartitioned tables.\n * REINDEX TABLE applies to all indexes in all partitions, which seems\na bit inconsistent.\n\nThe only behavior I'm worried about is REINDEX. I'm not sure what we\nshould do about it, or if we even want to do something about it. If we\nwant REINDEX to fail in this case, we should be sure to check\npermissions on everything up-front to avoid doing a lot of work. The\nonly other option I can think of is to REINDEX only those indexes\ndeclared on the partitioned table (not the individual partitions),\nwhich seems consistent but might be confusing to users.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 11:43:05 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, 2023-06-20 at 10:56 -0700, Nathan Bossart wrote:\n> On Tue, Jun 20, 2023 at 10:49:27AM -0700, Nathan Bossart wrote:\n> > Patch incoming...\n> \n> Attached.\n\nLooks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 20 Jun 2023 11:49:36 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 11:49:36AM -0700, Jeff Davis wrote:\n> On Tue, 2023-06-20 at 10:56 -0700, Nathan Bossart wrote:\n>> On Tue, Jun 20, 2023 at 10:49:27AM -0700, Nathan Bossart wrote:\n>> > Patch incoming...\n>> \n>> Attached.\n> \n> Looks good to me.\n\nThanks, committed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 15:20:24 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "I've attached rebased versions of the remaining two patches.\n\nOn Tue, Jun 20, 2023 at 11:43:05AM -0700, Jeff Davis wrote:\n> * REINDEX TABLE applies to all indexes in all partitions, which seems\n> a bit inconsistent.\n> \n> The only behavior I'm worried about is REINDEX. I'm not sure what we\n> should do about it, or if we even want to do something about it. If we\n> want REINDEX to fail in this case, we should be sure to check\n> permissions on everything up-front to avoid doing a lot of work. The\n> only other option I can think of is to REINDEX only those indexes\n> declared on the partitioned table (not the individual partitions),\n> which seems consistent but might be confusing to users.\n\nAt the moment, I think I'm inclined to call this \"existing behavior\" since\nwe didn't check privileges for each partition in this case even before\nMAINTAIN was introduced. IIUC we still process the individual partitions\nin v15 regardless of whether the calling user owns the partition.\n\nHowever, I do agree that it feels inconsistent. Besides the options you\nproposed, we might also consider making REINDEX work a bit more like VACUUM\nand ANALYZE and emit a WARNING for any relations that the user is not\npermitted to process. But this probably deserves its own thread, and it\nmight even need to wait until v17.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Jun 2023 15:52:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 11:43:05AM -0700, Jeff Davis wrote:\n> The only behavior I'm worried about is REINDEX. I'm not sure what we\n> should do about it, or if we even want to do something about it. If we\n> want REINDEX to fail in this case, we should be sure to check\n> permissions on everything up-front to avoid doing a lot of work.\n\nYes, that feels a bit inconsistent to only check the partitioned table\nin RangeVarCallbackForReindexIndex() and let all the partitions\nprocess as a user may not have the permissions to work on the\npartitions themselves. We'd need something close to\nexpand_vacuum_rel() for this work. I am not sure that this level of\nchange is required, TBH, still it could be discussed for v17~.\n\n> The\n> only other option I can think of is to REINDEX only those indexes\n> declared on the partitioned table (not the individual partitions),\n> which seems consistent but might be confusing to users.\n\nI am not sure to understand this last sentence. REINDEX on a\npartitioned table builds a list of the indexes to work on in the first\ntransaction processing the command in ReindexPartitions(), and there\nis no need to process partitioned indexes as these have no storage, so\nyour suggestion is a no-op?\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 07:53:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 03:52:57PM -0700, Nathan Bossart wrote:\n> However, I do agree that it feels inconsistent. Besides the options you\n> proposed, we might also consider making REINDEX work a bit more like VACUUM\n> and ANALYZE and emit a WARNING for any relations that the user is not\n> permitted to process. But this probably deserves its own thread, and it\n> might even need to wait until v17.\n\nLooking at 0001..\n\n-step s2_auth { SET ROLE regress_cluster_part; }\n+step s2_auth { SET ROLE regress_cluster_part; SET client_min_messages = ERROR; }\n\nIs this change necessary because the ordering of the WARNING messages\ngenerated for denied permissions is not guaranteed?\n\nFrom the generated vacuum.out:\n-- Only one partition owned by other user.\nALTER TABLE vacowned_parted OWNER TO CURRENT_USER;\nSET ROLE regress_vacuum;\nVACUUM vacowned_parted;\nWARNING: permission denied to vacuum \"vacowned_parted\", skipping it\nWARNING: permission denied to vacuum \"vacowned_part2\", skipping it\n\nThis is interesting. In this case, regress_vacuum owns only one\npartition, but we would be able to vacuum it even when querying\nvacowned_parted. Seeing from [1], this is intentional as per the\nargument that VACUUM/ANALYZE can take multiple relations. Am I\ngetting that right? That's different from CLUSTER or REINDEX, where\nnot owning the partitioned table fails immediately.\n\nI think that there is a testing gap with the coverage of CLUSTER.\n\"Ownership of partitions is checked\" is a test that looks for the case\nwhere regress_ptnowner owns the partitioned table and one of its\npartitions, checking that the leaf not owned is skipped, but we don't\nhave a test where we attempt a CLUSTER on the partitioned table with\nregress_ptnowner *not* owning the partitioned table, only one or more\nof its partitions owned by regress_ptnowner. In this case, the\ncommand would fail.\n\n- privilege on the catalog. If a role has permission to\n- <command>REINDEX</command> a partitioned table, it is also permitted to\n- <command>REINDEX</command> each of its partitions, regardless of whether the\n- role has the aforementioned privileges on the partition. Of course,\n- superusers can always reindex anything.\n+ privilege on the catalog. Of course, superusers can always reindex anything.\n\nWith 0001 applied, if a user is marked as an owner of a partitioned\ntable, all the partitions are reindexed even if this user does not own\na portion of them, making this change incorrect while the former is\nmore correct?\n\n[1]: https://www.postgresql.org/message-id/20221216011926.GA771496@nathanxps13\n--\nMichael",
"msg_date": "Wed, 21 Jun 2023 10:21:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 10:21:04AM +0900, Michael Paquier wrote:\n> Looking at 0001..\n\nThanks for taking a look.\n\n> -step s2_auth { SET ROLE regress_cluster_part; }\n> +step s2_auth { SET ROLE regress_cluster_part; SET client_min_messages = ERROR; }\n> \n> Is this change necessary because the ordering of the WARNING messages\n> generated for denied permissions is not guaranteed?\n\nYes.\n\n> From the generated vacuum.out:\n> -- Only one partition owned by other user.\n> ALTER TABLE vacowned_parted OWNER TO CURRENT_USER;\n> SET ROLE regress_vacuum;\n> VACUUM vacowned_parted;\n> WARNING: permission denied to vacuum \"vacowned_parted\", skipping it\n> WARNING: permission denied to vacuum \"vacowned_part2\", skipping it\n> \n> This is interesting. In this case, regress_vacuum owns only one\n> partition, but we would be able to vacuum it even when querying\n> vacowned_parted. Seeing from [1], this is intentional as per the\n> argument that VACUUM/ANALYZE can take multiple relations. Am I\n> getting that right? That's different from CLUSTER or REINDEX, where\n> not owning the partitioned table fails immediately.\n\nYes.\n\n> I think that there is a testing gap with the coverage of CLUSTER.\n> \"Ownership of partitions is checked\" is a test that looks for the case\n> where regress_ptnowner owns the partitioned table and one of its\n> partitions, checking that the leaf not owned is skipped, but we don't\n> have a test where we attempt a CLUSTER on the partitioned table with\n> regress_ptnowner *not* owning the partitioned table, only one or more\n> of its partitions owned by regress_ptnowner. In this case, the\n> command would fail.\n\nWe could add something for this, but it'd really just exercise the checks\nin RangeVarCallbackMaintainsTable(), which already has a decent amount of\ncoverage.\n\n> - privilege on the catalog. If a role has permission to\n> - <command>REINDEX</command> a partitioned table, it is also permitted to\n> - <command>REINDEX</command> each of its partitions, regardless of whether the\n> - role has the aforementioned privileges on the partition. Of course,\n> - superusers can always reindex anything.\n> + privilege on the catalog. Of course, superusers can always reindex anything.\n> \n> With 0001 applied, if a user is marked as an owner of a partitioned\n> table, all the partitions are reindexed even if this user does not own\n> a portion of them, making this change incorrect while the former is\n> more correct?\n\nThe former wording would be true from the perspective that REINDEX on a\npartitioned table will flow down to its partitions and skip privilege\nchecks on them, but it's incomplete because REINDEX on the individual\npartitions might still fail due to privileges (even if the user has\nprivileges to REINDEX the partitioned table). After both patches are\napplied, the privilege documentation is distilled down to\n\n\tReindexing a single index or table requires having the MAINTAIN\n\tprivilege on the table.\n\nplus some assorted notes about REINDEX DATABASE/SCHEMA/SYSTEM. I think the\nproposed wording is accurate, but I can see the argument that it leaves\nsome ambiguity for the partitioned table case. Perhaps we should add\nsomething like\n\n\tNote that while REINDEX on a partitioned index or table requires\n\tMAINTAIN on the partitioned table, such commands skip the privilege\n\tchecks when processing the individual partitions.\n\nThoughts? I'm trying to keep the privilege documentation for maintenance\ncommands as simple as possible, so I'm hoping to avoid adding too much text\ndedicated to these special cases.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Jun 2023 21:15:18 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, 2023-06-21 at 07:53 +0900, Michael Paquier wrote:\n> I am not sure to understand this last sentence. REINDEX on a\n> partitioned table builds a list of the indexes to work on in the\n> first\n> transaction processing the command in ReindexPartitions(), and there\n> is no need to process partitioned indexes as these have no storage,\n> so\n> your suggestion is a no-op?\n\nWhat I meant is that if you do:\n\n CREATE TABLE p(i INT, j INT) PARTITION BY RANGE (i);\n CREATE TABLE p0 PARTITION OF p FOR VALUES FROM (00) TO (10);\n CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (10) TO (20);\n CREATE INDEX p_idx ON p (i);\n CREATE INDEX special_idx ON p0 (j);\n GRANT MAINTAIN ON p TO foo;\n \\c - foo\n REINDEX TABLE p;\n\nThat would reindex p0_i_idx and p1_i_idx, but skip special_idx. That\nmight be too confusing, but feels a bit more consistent permissions-\nwise.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:26:09 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, 2023-06-20 at 15:52 -0700, Nathan Bossart wrote:\n> At the moment, I think I'm inclined to call this \"existing behavior\"\n> since\n> we didn't check privileges for each partition in this case even\n> before\n> MAINTAIN was introduced. IIUC we still process the individual\n> partitions\n> in v15 regardless of whether the calling user owns the partition.\n\nThat's fine with me. I just wanted to bring it up in case someone else\nthought it was a problem.\n\n> However, I do agree that it feels inconsistent. Besides the options\n> you\n> proposed, we might also consider making REINDEX work a bit more like\n> VACUUM\n> and ANALYZE and emit a WARNING for any relations that the user is not\n> permitted to process. But this probably deserves its own thread, and\n> it\n> might even need to wait until v17.\n\nYes, we can revisit for 17.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 21 Jun 2023 09:29:54 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 09:15:18PM -0700, Nathan Bossart wrote:\n> Perhaps we should add something like\n> \n> \tNote that while REINDEX on a partitioned index or table requires\n> \tMAINTAIN on the partitioned table, such commands skip the privilege\n> \tchecks when processing the individual partitions.\n> \n> Thoughts? I'm trying to keep the privilege documentation for maintenance\n> commands as simple as possible, so I'm hoping to avoid adding too much text\n> dedicated to these special cases.\n\nHere is a new patch set that includes this new sentence.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Jun 2023 10:16:24 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 09:26:09AM -0700, Jeff Davis wrote:\n> What I meant is that if you do:\n> \n> CREATE TABLE p(i INT, j INT) PARTITION BY RANGE (i);\n> CREATE TABLE p0 PARTITION OF p FOR VALUES FROM (00) TO (10);\n> CREATE TABLE p1 PARTITION OF p FOR VALUES FROM (10) TO (20);\n> CREATE INDEX p_idx ON p (i);\n> CREATE INDEX special_idx ON p0 (j);\n> GRANT MAINTAIN ON p TO foo;\n> \\c - foo\n> REINDEX TABLE p;\n> \n> That would reindex p0_i_idx and p1_i_idx, but skip special_idx. That\n> might be too confusing, but feels a bit more consistent permissions-\n> wise.\n\nFWIW, the current behavior to reindex special_idx in this case feels\nmore natural to me, as the user requests a REINDEX at table-level, not\nat index-level. This would mean to me that all the indexes of all the\npartitions should be rebuilt on, not just the partitioned indexes that\nare defined in the partitioned table requested for rebuild.\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 09:50:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 10:16:24AM -0700, Nathan Bossart wrote:\n>> I think that there is a testing gap with the coverage of CLUSTER.\n>> \"Ownership of partitions is checked\" is a test that looks for the case\n>> where regress_ptnowner owns the partitioned table and one of its\n>> partitions, checking that the leaf not owned is skipped, but we don't\n>> have a test where we attempt a CLUSTER on the partitioned table with\n>> regress_ptnowner *not* owning the partitioned table, only one or more\n>> of its partitions owned by regress_ptnowner. In this case, the\n>> command would fail.\n> \n> We could add something for this, but it'd really just exercise the checks\n> in RangeVarCallbackMaintainsTable(), which already has a decent amount of\n> coverage.\n\nIt seems to me that this has some value for the CLUSTER path, so I\nwould add a small thing for it.\n\n> On Tue, Jun 20, 2023 at 09:15:18PM -0700, Nathan Bossart wrote:\n>> Perhaps we should add something like\n>> \n>> \tNote that while REINDEX on a partitioned index or table requires\n>> \tMAINTAIN on the partitioned table, such commands skip the privilege\n>> \tchecks when processing the individual partitions.\n>> \n>> Thoughts? I'm trying to keep the privilege documentation for maintenance\n>> commands as simple as possible, so I'm hoping to avoid adding too much text\n>> dedicated to these special cases.\n> \n> Here is a new patch set that includes this new sentence.\n\n- aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_INDEX,\n- relation->relname);\nInteresting that the previous code assumed ACLCHECK_NOT_OWNER all the\ntime in the reindex RangeVar callback.\n\n- /*\n- * We already checked that the user has privileges to CLUSTER the\n- * partitioned table when we locked it earlier, so there's no need to\n- * check the privileges again here.\n- */\n+ if (!cluster_is_permitted_for_relation(relid, GetUserId()))\n+ continue;\nI would add a comment here that this ACL recheck for the leaves is an\nimportant thing to keep around as it impacts the case where the leaves\nhave a different owner than the parent, and the owner of the parent\nclusters it. The only place in the tests where this has an influence\nis the isolation test cluster-conflict-partition.\n\nThe documentation changes seem in line with the code changes,\nparticularly for VACUUM and REINDEX where we have some special\nhandling for shared catalogs with ownership.\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 10:46:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 10:46:41AM +0900, Michael Paquier wrote:\n> On Wed, Jun 21, 2023 at 10:16:24AM -0700, Nathan Bossart wrote:\n>>> I think that there is a testing gap with the coverage of CLUSTER.\n>>> \"Ownership of partitions is checked\" is a test that looks for the case\n>>> where regress_ptnowner owns the partitioned table and one of its\n>>> partitions, checking that the leaf not owned is skipped, but we don't\n>>> have a test where we attempt a CLUSTER on the partitioned table with\n>>> regress_ptnowner *not* owning the partitioned table, only one or more\n>>> of its partitions owned by regress_ptnowner. In this case, the\n>>> command would fail.\n>> \n>> We could add something for this, but it'd really just exercise the checks\n>> in RangeVarCallbackMaintainsTable(), which already has a decent amount of\n>> coverage.\n> \n> It seems to me that this has some value for the CLUSTER path, so I\n> would add a small thing for it.\n\nDone.\n\n> - /*\n> - * We already checked that the user has privileges to CLUSTER the\n> - * partitioned table when we locked it earlier, so there's no need to\n> - * check the privileges again here.\n> - */\n> + if (!cluster_is_permitted_for_relation(relid, GetUserId()))\n> + continue;\n> I would add a comment here that this ACL recheck for the leaves is an\n> important thing to keep around as it impacts the case where the leaves\n> have a different owner than the parent, and the owner of the parent\n> clusters it. The only place in the tests where this has an influence\n> is the isolation test cluster-conflict-partition.\n\nDone.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Jun 2023 20:06:06 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 08:06:06PM -0700, Nathan Bossart wrote:\n> On Thu, Jun 22, 2023 at 10:46:41AM +0900, Michael Paquier wrote:\n>> - /*\n>> - * We already checked that the user has privileges to CLUSTER the\n>> - * partitioned table when we locked it earlier, so there's no need to\n>> - * check the privileges again here.\n>> - */\n>> + if (!cluster_is_permitted_for_relation(relid, GetUserId()))\n>> + continue;\n>> I would add a comment here that this ACL recheck for the leaves is an\n>> important thing to keep around as it impacts the case where the leaves\n>> have a different owner than the parent, and the owner of the parent\n>> clusters it. The only place in the tests where this has an influence\n>> is the isolation test cluster-conflict-partition.\n> \n> Done.\n\n+ /*\n+ * It's possible that the user does not have privileges to CLUSTER the\n+ * leaf partition despite having such privileges on the partitioned\n+ * table. We skip any partitions which the user is not permitted to\n+ * CLUSTER.\n+ */\n\nSounds good to me. Thanks.\n--\nMichael",
"msg_date": "Thu, 22 Jun 2023 16:11:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 04:11:08PM +0900, Michael Paquier wrote:\n> Sounds good to me. Thanks.\n\nI plan to commit these patches later today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Jun 2023 08:43:01 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
},
{
"msg_contents": "On Thu, Jun 22, 2023 at 08:43:01AM -0700, Nathan Bossart wrote:\n> I plan to commit these patches later today.\n\nCommitted. I've also marked the related open item for v16 as resolved.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 22 Jun 2023 16:01:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow granting CLUSTER, REFRESH MATERIALIZED VIEW, and REINDEX"
}
] |
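As an aside on the thread above: the two permission behaviors being contrasted — VACUUM/ANALYZE's warn-and-skip per relation versus CLUSTER/REINDEX's upfront failure on the partitioned table — can be sketched as a small model. The Python sketch below is purely illustrative; the function name, relation names, and the `owned` set are invented for this example and do not correspond to PostgreSQL internals.

```python
def maintain_partitioned(parent, partitions, owned, policy):
    """Model two permission policies for a maintenance command run
    against a partitioned table.

    policy == "upfront": fail immediately unless the parent is owned
        (the CLUSTER/REINDEX behavior described above); partitions are
        then processed without further per-partition checks.
    policy == "skip": check every relation individually and skip any
        relation the caller does not own, collecting a warning for each
        (the VACUUM/ANALYZE behavior).

    Returns (processed, warnings).
    """
    processed, warnings = [], []
    if policy == "upfront":
        if parent not in owned:
            raise PermissionError("must be owner of " + parent)
        processed.extend(partitions)
    elif policy == "skip":
        for rel in [parent] + partitions:
            if rel in owned:
                # The partitioned table itself has no storage, so only
                # leaf partitions end up in the processed list.
                if rel != parent:
                    processed.append(rel)
            else:
                warnings.append(
                    "permission denied to process %s, skipping it" % rel)
    return processed, warnings
```

Under the "skip" policy, owning only one leaf still gets that leaf processed (with warnings for the rest), matching the vacuum.out excerpt quoted in the thread; under "upfront", not owning the parent fails before any work is done.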
[
{
"msg_contents": "When checking something else in the base backup code, I've noticed that\nsendFileWithContent() does not advance the 'content' pointer. The sink buffer\nis large enough (32kB) so that the first iteration usually processes the whole\nfile (only special files are processed by this function), and thus that the\nproblem is hidden.\n\nHowever it's possible to hit the issue: if there are too many tablespaces,\npg_basebackup generates corrupted tablespace_map. Instead of writing all the\ntablespace paths it writes only some and then starts to write the contents\nfrom the beginning again.\n\nThe attached script generates scripts to create many tablespaces as well as\nthe underlying directories. Fix is attached here as well.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Thu, 08 Dec 2022 20:44:05 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "sendFileWithContent() does not advance the source pointer"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-08 20:44:05 +0100, Antonin Houska wrote:\n> When checking something else in the base backup code, I've noticed that\n> sendFileWithContent() does not advance the 'content' pointer.\n\nOof. Luckily it looks like that is a relatively recent issue, introduced in\nbef47ff85df, which is only in 15.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Dec 2022 13:53:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: sendFileWithContent() does not advance the source pointer"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 2:43 PM Antonin Houska <ah@cybertec.at> wrote:\n> When checking something else in the base backup code, I've noticed that\n> sendFileWithContent() does not advance the 'content' pointer. The sink buffer\n> is large enough (32kB) so that the first iteration usually processes the whole\n> file (only special files are processed by this function), and thus that the\n> problem is hidden.\n>\n> However it's possible to hit the issue: if there are too many tablespaces,\n> pg_basebackup generates corrupted tablespace_map. Instead of writing all the\n> tablespace paths it writes only some and then starts to write the contents\n> from the beginning again.\n\nThanks for the report, analysis, and fix. I have committed your patch\nand back-patched to v15.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Dec 2022 10:40:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sendFileWithContent() does not advance the source pointer"
}
] |
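The bug fixed in the thread above — a chunked send loop that never advances its source pointer — is easy to model outside of C. The sketch below is a hypothetical Python reconstruction of the failure pattern (it is not the actual sendFileWithContent() code); it also shows why content smaller than one sink buffer hides the bug, as noted in the report.

```python
def send_with_content_buggy(content: bytes, sink_buffer_size: int) -> bytes:
    """Model of the broken loop: the source is never advanced, so any
    payload larger than one sink buffer repeats its beginning."""
    out = bytearray()
    remaining = len(content)
    while remaining > 0:
        nbytes = min(remaining, sink_buffer_size)
        out += content[:nbytes]  # always copies from the start of content
        remaining -= nbytes
    return bytes(out)


def send_with_content_fixed(content: bytes, sink_buffer_size: int) -> bytes:
    """Model of the fix: advance the source offset after each chunk."""
    out = bytearray()
    offset = 0
    remaining = len(content)
    while remaining > 0:
        nbytes = min(remaining, sink_buffer_size)
        out += content[offset:offset + nbytes]
        offset += nbytes  # the missing advancement
        remaining -= nbytes
    return bytes(out)
```

A payload that fits in a single buffer takes one loop iteration and is copied correctly either way, which is why the bug only surfaced once a tablespace_map grew larger than the sink buffer.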
[
{
"msg_contents": "Just a utility function to generate random numbers from a normal\ndistribution. I find myself doing this several times a year, and I am\nsure I must not be the only one.\n\nrandom_normal(stddev float8 DEFAULT 1.0, mean float8 DEFAULT 0.0)",
"msg_date": "Thu, 8 Dec 2022 13:53:23 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "[PATCH] random_normal function"
},
{
"msg_contents": "On Thu, Dec 8, 2022 at 2:53 PM Paul Ramsey <pramsey@cleverelephant.ca>\nwrote:\n\n>\n> random_normal(stddev float8 DEFAULT 1.0, mean float8 DEFAULT 0.0)\n>\n\nAny particular justification for placing stddev before mean? A brief\nsurvey seems to indicate other libraries, as well as (at least for me)\nlearned convention, has the mean be supplied first, then the standard\ndeviation. The implementation/commentary seems to use that convention as\nwell.\n\nSome suggestions:\n\n/* Apply optional user parameters */ - that isn't important or even what is\nhappening though, and the body of the function shouldn't care about the\nsource of the values for the variables it uses.\n\nInstead:\n/* Transform the normal standard variable (z) using the target normal\ndistribution parameters */\n\nPersonally I'd probably make that even more explicit:\n\n+ float8 z\n...\n* z = pg_prng_double_normal(&drandom_seed)\n+ /* ... */\n* result = (stddev * z) + mean\n\nAnd a possible micro-optimization...\n\n+ bool rescale = true\n+ if (PG_NARGS() = 0)\n+ rescale = false\n...\n+ if (rescale)\n ... result = (stddev * z) + mean\n+ else\n+ result = z\n\nDavid J.",
"msg_date": "Thu, 8 Dec 2022 15:40:59 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 01:53:23PM -0800, Paul Ramsey wrote:\n> Just a utility function to generate random numbers from a normal\n> distribution. I find myself doing this several times a year, and I am\n> sure I must not be the only one.\n> \n> random_normal(stddev float8 DEFAULT 1.0, mean float8 DEFAULT 0.0)\n\n+++ b/src/backend/catalog/system_functions.sql\n@@ -620,6 +620,13 @@ CREATE OR REPLACE FUNCTION\n STABLE PARALLEL SAFE\n AS 'sql_localtimestamp';\n \n+CREATE OR REPLACE FUNCTION\n+ random_normal(stddev float8 DEFAULT 1.0, mean float8 DEFAULT 0.0)\n+RETURNS float8\n+LANGUAGE INTERNAL\n+STRICT VOLATILE PARALLEL SAFE\n+AS 'make_interval';\n\nI guess make_interval is a typo ?\n\nThis is causing it to fail tests:\nhttp://cfbot.cputube.org/paul-ramsey.html\n\nBTW you can run the same tests as CFBOT does from your own github\naccount; see:\nhttps://www.postgresql.org/message-id/20221116232507.GO26337@telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 8 Dec 2022 16:46:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> On Dec 8, 2022, at 2:40 PM, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Thu, Dec 8, 2022 at 2:53 PM Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n> random_normal(stddev float8 DEFAULT 1.0, mean float8 DEFAULT 0.0)\n> \n> Any particular justification for placing stddev before mean? A brief survey seems to indicate other libraries, as well as (at least for me) learned convention, has the mean be supplied first, then the standard deviation. The implementation/commentary seems to use that convention as well.\n\nNo, I'm not sure what was going through my head, but I'm sure it made sense at the time (maybe something like \"people will tend to jimmy with the stddev more frequently than the mean\"). I've reversed the order\n\n> Some suggestions:\n\nThanks! Taken :)\n\n> And a possible micro-optimization...\n> \n> + bool rescale = true\n> + if (PG_NARGS() = 0)\n> + rescale = false\n> ...\n> + if (rescale)\n> ... result = (stddev * z) + mean\n> + else\n> + result = z\n\nFeels a little too micro to me (relative to the overall cost of the transform from uniform to normal distribution). I'm going to leave it out unless you violently want it.\n\nRevised patch attached.\n\nThanks!\n\nP",
"msg_date": "Thu, 8 Dec 2022 14:57:34 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n\n> On Dec 8, 2022, at 2:46 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> I guess make_interval is a typo ?\n> \n> This is causing it to fail tests:\n> http://cfbot.cputube.org/paul-ramsey.html\n> \n\nYep, dumb typo, thanks! This bot is amazeballs, thank you!\n\nP.\n\n\n",
"msg_date": "Thu, 8 Dec 2022 14:58:02 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> \n> Revised patch attached.\n\nAnd again, because I think I missed one change in the last one.",
"msg_date": "Thu, 8 Dec 2022 15:25:24 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> On Dec 8, 2022, at 3:25 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n>> \n>> Revised patch attached.\n> \n> And again, because I think I missed one change in the last one.\n> \n> <random_normal_03.patch>\n\nFinal tme, with fixes from cirrusci.",
"msg_date": "Thu, 8 Dec 2022 16:44:56 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 04:44:56PM -0800, Paul Ramsey wrote:\n> Final time, with fixes from cirrusci.\n\nWell, why not. Seems like you would use that a lot with PostGIS.\n\n #include <math.h> /* for ldexp() */\n+#include <float.h> /* for DBL_EPSILON */\nAnd be careful with the order here.\n\n+static void\n+drandom_check_default_seed()\nWe always use (void) rather than empty parenthesis sets.\n\nI would not leave that unchecked, so I think that you should add\nsomething in random.sql. Or would you prefer that some of\nthe regression tests be switched so that they use the new normal\nfunction?\n\n(Ahem. Bonus points for a random_string() returning a bytea, based on\npg_strong_random().)\n--\nMichael",
"msg_date": "Fri, 9 Dec 2022 13:29:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n> On Dec 8, 2022, at 8:29 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Dec 08, 2022 at 04:44:56PM -0800, Paul Ramsey wrote:\n>> Final time, with fixes from cirrusci.\n> \n> Well, why not. Seems like you would use that a lot with PostGIS.\n> \n> #include <math.h> /* for ldexp() */\n> +#include <float.h> /* for DBL_EPSILON */\n> And be careful with the order here.\n\nShould be ... alphabetical?\n\n> +static void\n> +drandom_check_default_seed()\n> We always use (void) rather than empty parenthesis sets.\n\nOK\n\n> I would not leave that unchecked, so I think that you should add\n> something in random.sql. Or would you prefer that some of\n> the regression tests be switched so that they use the new normal\n> function?\n\nReading through those tests... seems like they will (rarely) fail. Is that... OK? \nThe tests seem to be mostly worried that random() starts returning constants, which seems like a good thing to test for (is the random number generator returning randomness).\nAn obvious test for this function is that the mean and stddev converge on the supplied parameters, given enough inputs, which is actually kind of the opposite test. I use the same random number generator as the uniform distribution, so that aspect is already covered by the existing tests.\n\n> (Ahem. Bonus points for a random_string() returning a bytea, based on\n> pg_strong_random().)\n\nWould love to. Separate patch or bundled into this one?\n\nP\n\n\n> --\n> Michael",
"msg_date": "Fri, 9 Dec 2022 09:17:03 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n\n> On Dec 8, 2022, at 1:53 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n> Just a utility function to generate random numbers from a normal\n> distribution. I find myself doing this several times a year, and I am\n> sure I must not be the only one.\n\nThanks for the patch. What do you think about these results?\n\n+-- The semantics of a negative stddev are not well defined\n+SELECT random_normal(mean := 0, stddev := -1);\n+ random_normal \n+---------------------\n+ -1.0285744583010896\n+(1 row)\n+\n+SELECT random_normal(mean := 0, stddev := '-Inf');\n+ random_normal \n+---------------\n+ Infinity\n+(1 row)\n+\n+-- This result may be defensible...\n+SELECT random_normal(mean := '-Inf', stddev := 'Inf');\n+ random_normal \n+---------------\n+ -Infinity\n+(1 row)\n+\n+-- but if so, why is this NaN?\n+SELECT random_normal(mean := 'Inf', stddev := 'Inf');\n+ random_normal \n+---------------\n+ NaN\n+(1 row)\n+\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:39:22 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n\n> On Dec 9, 2022, at 10:39 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> On Dec 8, 2022, at 1:53 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n>> \n>> Just a utility function to generate random numbers from a normal\n>> distribution. I find myself doing this several times a year, and I am\n>> sure I must not be the only one.\n> \n> Thanks for the patch. What do you think about these results?\n\nAngels on pins time! :)\n\n> +-- The semantics of a negative stddev are not well defined\n> +SELECT random_normal(mean := 0, stddev := -1);\n> + random_normal \n> +---------------------\n> + -1.0285744583010896\n> +(1 row)\n\nQuestion is does a negative stddev make enough sense? It is functionally using fabs(stddev), \n\nSELECT avg(random_normal(mean := 0, stddev := -1)) from generate_series(1,1000);\n avg \n---------------------\n 0.03156106778729526\n\nSo could toss an invalid parameter on negative? Not sure if that's more helpful than just being mellow about this input.\n\n\n> +\n> +SELECT random_normal(mean := 0, stddev := '-Inf');\n> + random_normal \n> +---------------\n> + Infinity\n> +(1 row)\n\nThe existing logic around means and stddevs and Inf is hard to tease out:\n\nSELECT avg(v),stddev(v) from (VALUES ('Inf'::float8, '-Inf'::float8)) a(v);\n avg | stddev \n----------+--------\n Infinity | \n\nThe return of NULL of stddev would seem to argue that null in this case means \"does not compute\" at some level. So return NULL on Inf stddev?\n\n> +\n> +-- This result may be defensible...\n> +SELECT random_normal(mean := '-Inf', stddev := 'Inf');\n> + random_normal \n> +---------------\n> + -Infinity\n> +(1 row)\n> +\n> +-- but if so, why is this NaN?\n> +SELECT random_normal(mean := 'Inf', stddev := 'Inf');\n> + random_normal \n> +---------------\n> + NaN\n> +(1 row)\n\nAn Inf mean only implies that one value in the distribution is Inf, but running the function in reverse (generating values) and only generating one value from the distribution implies we have to always return Inf (except in this case stddev is also Inf), so I'd go with NULL, assuming we accept the NULL premise above.\n\nHow do you read the tea leaves?\n\nP.\n\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:51:53 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
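The NaN versus -Infinity results Mark reports are consistent with plain IEEE-754 arithmetic on `mean + stddev * val`, assuming that is the shape of the computation (the patch's exact code is not shown in this exchange). A small sketch with a hypothetical helper, where `val` stands for the finite normal draw that is negative about half the time:

```c
#include <math.h>

/* IEEE-754 arithmetic behind the surprising results above: the returned
 * sample is mean + stddev * val, where val is a finite normal draw whose
 * sign is random.  With infinite mean and stddev, the sign of val decides
 * between an infinity and Inf + (-Inf) = NaN. */
static double
scale_and_shift(double mean, double stddev, double val)
{
    return mean + stddev * val;
}
```

So `random_normal('Inf', 'Inf')` yields NaN whenever the draw is negative (Inf + (-Inf)), and `random_normal('-Inf', 'Inf')` yields -Infinity whenever the draw is negative (-Inf + (-Inf)) — which matches both of the observed rows.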
{
"msg_contents": "On 12/9/22 13:51, Paul Ramsey wrote:\n>> On Dec 9, 2022, at 10:39 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> On Dec 8, 2022, at 1:53 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n>>> \n>>> Just a utility function to generate random numbers from a normal\n>>> distribution. I find myself doing this several times a year, and I am\n>>> sure I must not be the only one.\n>> \n>> Thanks for the patch. What do you think about these results?\n> \n> Angels on pins time! :)\n\nI just noticed this thread -- what is lacking in the normal_rand() \nfunction in the tablefunc contrib?\n\nhttps://www.postgresql.org/docs/current/tablefunc.html#id-1.11.7.52.5\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 14:01:20 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n\n> On Dec 9, 2022, at 11:01 AM, Joe Conway <mail@joeconway.com> wrote:\n> \n> On 12/9/22 13:51, Paul Ramsey wrote:\n>>> On Dec 9, 2022, at 10:39 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>>> On Dec 8, 2022, at 1:53 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n>>>> Just a utility function to generate random numbers from a normal\n>>>> distribution. I find myself doing this several times a year, and I am\n>>>> sure I must not be the only one.\n>>> Thanks for the patch. What do you think about these results?\n>> Angels on pins time! :)\n> \n> I just noticed this thread -- what is lacking in the normal_rand() function in the tablefunc contrib?\n> \n> https://www.postgresql.org/docs/current/tablefunc.html#id-1.11.7.52.5\n\nSimplicity I guess mostly. random_normal() has a direct analogue in random() which is also a core function. I mean it could equally be pointed out that a user can implement their own Box-Muller calculation pretty trivially. Part of this submission is a personal wondering to what extent the community values convenience vs composibility. The set-returning nature of normal_rand() may be a bit of a red herring to people who just want one value (even though normal_rand (1, 0.0, 1.0) does exactly what they want).\n\nP.\n\n",
"msg_date": "Fri, 9 Dec 2022 11:10:25 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "\n\n> On Dec 9, 2022, at 11:10 AM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n> \n> \n>> On Dec 9, 2022, at 11:01 AM, Joe Conway <mail@joeconway.com> wrote:\n>> \n>> On 12/9/22 13:51, Paul Ramsey wrote:\n>>>> On Dec 9, 2022, at 10:39 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>>>> On Dec 8, 2022, at 1:53 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n>>>>> Just a utility function to generate random numbers from a normal\n>>>>> distribution. I find myself doing this several times a year, and I am\n>>>>> sure I must not be the only one.\n>>>> Thanks for the patch. What do you think about these results?\n>>> Angels on pins time! :)\n>> \n>> I just noticed this thread -- what is lacking in the normal_rand() function in the tablefunc contrib?\n>> \n>> https://www.postgresql.org/docs/current/tablefunc.html#id-1.11.7.52.5\n> \n> Simplicity I guess mostly. random_normal() has a direct analogue in random() which is also a core function. I mean it could equally be pointed out that a user can implement their own Box-Muller calculation pretty trivially. Part of this submission is a personal wondering to what extent the community values convenience vs composibility. The set-returning nature of normal_rand() may be a bit of a red herring to people who just want one value (even though normal_rand (1, 0.0, 1.0) does exactly what they want).\n\nNo related to the \"reason to exist\", but normal_rand() has some interesting behaviour under Mark's test cases!\n\nselect normal_rand (1, 'Inf', 'Inf'), a from generate_series(1,2) a;\n normal_rand | a \n-------------+---\n NaN | 1\n Infinity | 2\n(2 rows)\n\n\n\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 11:11:34 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> On Dec 9, 2022, at 9:17 AM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n> \n>> On Dec 8, 2022, at 8:29 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>> \n>> On Thu, Dec 08, 2022 at 04:44:56PM -0800, Paul Ramsey wrote:\n>>> Final tme, with fixes from cirrusci.\n>> \n>> Well, why not. Seems like you would use that a lot with PostGIS.\n>> \n>> #include <math.h> /* for ldexp() */\n>> +#include <float.h> /* for DBL_EPSILON */\n>> And be careful with the order here.\n> \n> Should be ... alphabetical?\n> \n>> +static void\n>> +drandom_check_default_seed()\n>> We always use (void) rather than empty parenthesis sets.\n> \n> OK\n> \n>> I would not leave that unchecked, so I think that you should add\n>> something in ramdom.sql. Or would you prefer switching some of\n>> the regression tests be switched so as they use the new normal\n>> function?\n> \n> Reading through those tests... seems like they will (rarely) fail. Is that... OK? \n> The tests seem to be mostly worried that random() starts returning constants, which seems like a good thing to test for (is the random number generating returning randomness).\n> An obvious test for this function is that the mean and stddev converge on the supplied parameters, given enough inputs, which is actually kind of the opposite test. I use the same random number generator as the uniform distribution, so that aspect is already covered by the existing tests.\n> \n>> (Ahem. Bonus points for a random_string() returning a bytea, based on\n>> pg_strong_random().)\n> \n> Would love to. Separate patch of bundled into this one?\n\nHere's the original with suggestions applied and a random_string that applies on top of it.\n\nThanks!\n\nP",
"msg_date": "Fri, 9 Dec 2022 15:20:35 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> On Dec 9, 2022, at 3:20 PM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n> \n> \n> \n>> On Dec 9, 2022, at 9:17 AM, Paul Ramsey <pramsey@cleverelephant.ca> wrote:\n>> \n>> \n>>> On Dec 8, 2022, at 8:29 PM, Michael Paquier <michael@paquier.xyz> wrote:\n>>> \n>>> On Thu, Dec 08, 2022 at 04:44:56PM -0800, Paul Ramsey wrote:\n>>>> Final tme, with fixes from cirrusci.\n>>> \n>>> Well, why not. Seems like you would use that a lot with PostGIS.\n>>> \n>>> #include <math.h> /* for ldexp() */\n>>> +#include <float.h> /* for DBL_EPSILON */\n>>> And be careful with the order here.\n>> \n>> Should be ... alphabetical?\n>> \n>>> +static void\n>>> +drandom_check_default_seed()\n>>> We always use (void) rather than empty parenthesis sets.\n>> \n>> OK\n>> \n>>> I would not leave that unchecked, so I think that you should add\n>>> something in ramdom.sql. Or would you prefer switching some of\n>>> the regression tests be switched so as they use the new normal\n>>> function?\n>> \n>> Reading through those tests... seems like they will (rarely) fail. Is that... OK? \n>> The tests seem to be mostly worried that random() starts returning constants, which seems like a good thing to test for (is the random number generating returning randomness).\n>> An obvious test for this function is that the mean and stddev converge on the supplied parameters, given enough inputs, which is actually kind of the opposite test. I use the same random number generator as the uniform distribution, so that aspect is already covered by the existing tests.\n>> \n>>> (Ahem. Bonus points for a random_string() returning a bytea, based on\n>>> pg_strong_random().)\n>> \n>> Would love to. Separate patch of bundled into this one?\n> \n> Here's the original with suggestions applied and a random_string that applies on top of it.\n> \n> Thanks!\n> \n> P\n\nClearing up one CI failure.",
"msg_date": "Tue, 13 Dec 2022 15:51:11 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 03:51:11PM -0800, Paul Ramsey wrote:\n> Clearing up one CI failure.\n\n+-- normal values converge on stddev == 2.0\n+SELECT round(stddev(random_normal(2, 2)))\n+ FROM generate_series(1, 10000);\n\nI am not sure that it is a good idea to make a test based on a random\nbehavior that should tend to a normalized value. This is costly in\ncycles, requiring a lot of work just for generate_series(). You could\ndo the same kind of thing as random() a few lines above?\n\n+SELECT bool_and(random_string(16) != random_string(16)) AS same\n+ FROM generate_series(1,8);\nThat should be fine in terms of impossible chances :)\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"return size must be non-negative\")))\nThis could have a test, same for 0.\n\n+#ifndef M_PI\n+#define M_PI 3.14159265358979323846\n+#endif\nPostgres' float.h includes one version of that.\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 14:17:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "> On Dec 14, 2022, at 9:17 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Dec 13, 2022 at 03:51:11PM -0800, Paul Ramsey wrote:\n>> Clearing up one CI failure.\n> \n> +-- normal values converge on stddev == 2.0\n> +SELECT round(stddev(random_normal(2, 2)))\n> + FROM generate_series(1, 10000);\n> \n> I am not sure that it is a good idea to make a test based on a random\n> behavior that should tend to a normalized value. This is costly in\n> cycles, requiring a lot of work just for generate_series(). You could\n> do the same kind of thing as random() a few lines above?\n> \n> +SELECT bool_and(random_string(16) != random_string(16)) AS same\n> + FROM generate_series(1,8);\n> That should be fine in terms of impossible chances :)\n> \n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"return size must be non-negative\")))\n> This could have a test, same for 0.\n> \n> +#ifndef M_PI\n> +#define M_PI 3.14159265358979323846\n> +#endif\n> Postgres' float.h includes one version of that.\n\nThanks again!\n\nP",
"msg_date": "Thu, 15 Dec 2022 12:53:40 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Hello Paul,\n\nMy 0.02€ about the patch:\n\nPatch did not apply with \"git apply\", I had to \"patch -p1 <\" and there was \na bunch of warnings.\n\nPatches compile and make check is okay.\n\nThe first patch adds empty lines at the end of \"sql/random.sql\", I think \nthat it should not.\n\n# About random_normal:\n\nI'm fine with providing a random_normal implementation at prng and SQL \nlevels.\n\nThere is already such an implementation in \"pgbench.c\", which outputs \nintegers, I'd suggest that it should also use the new prng function, there \nshould not be two box-muller transformations in pg.\n\n# About pg_prng_double_normal:\n\nOn the comment, I'd write \"mean + stddev * val\" instead of starting with \nthe stddev part.\n\nUsually there is an empty line between the variable declarations and the\nfirst statement.\n\nThere should be a comment about why it needs u1 \nlarger than some epsilon. This constraint seems to generate a small bias?\n\nI'd suggest to add the UNLIKELY() compiler hint on the loop.\n\n# About random_string:\n\nShould the doc about random_string tell that the output bytes are expected \nto be uniformly distributed? Does it return \"random values\" or \"pseudo \nrandom values\"?\n\nI do not understand why the \"drandom_string\" function is in \"float.c\", as \nit is not really related to floats. Also it does not return a string but a \nbytea, so why call it \"_string\" in the first place? I'm do not think that \nit should use pg_strong_random which may be very costly on some platform. \nAlso, pg_strong_random does not use the seed, so I do not understand why \nit needs to be checked. I'd suggest that the output should really be \nuniform pseudo-random, possibly based on the drandom() state, or maybe \nnot.\n\nOverall, I think that there should be a clearer discussion and plan about \nwhich random functionS postgres should provide to complement the standard \ninstead of going there… randomly:-)\n\n-- \nFabien.",
"msg_date": "Sat, 17 Dec 2022 17:49:15 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 05:49:15PM +0100, Fabien COELHO wrote:\n> Overall, I think that there should be a clearer discussion and plan about\n> which random functionS postgres should provide to complement the standard\n> instead of going there… randomly:-)\n\nSo, what does the specification tells about seeds, normal and random\nfunctions? A bunch of DBMSs implement RAND, sometimes RANDOM, SEED or\neven NORMAL using from time to time specific SQL keywords to do the\nwork.\n\nNote that SQLValueFunction made the addition of more returning data\ntypes a bit more complicated (not much, still) than the new\nCOERCE_SQL_SYNTAX by going through a mapping function, so the\nkeyword/function mapping is straight-forward.\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 12:36:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> Overall, I think that there should be a clearer discussion and plan about\n>> which random functionS postgres should provide to complement the standard\n>> instead of going there… randomly:-)\n>\n> So, what does the specification tells about seeds, normal and random\n> functions? A bunch of DBMSs implement RAND, sometimes RANDOM, SEED or\n> even NORMAL using from time to time specific SQL keywords to do the\n> work.\n\nI do not have the SQL standard, so I have no idea about what is in there.\n\n From a typical use case point of view, I'd say uniform, normal and \nexponential would make sense for floats. I'm also okay with generating a \nuniform bytes pseudo-randomly.\n\nI'd be more at ease to add simple functions rather than a special \nheavy-on-keywords syntax, even if standard.\n\n> Note that SQLValueFunction made the addition of more returning data\n> types a bit more complicated (not much, still) than the new\n> COERCE_SQL_SYNTAX by going through a mapping function, so the\n> keyword/function mapping is straight-forward.\n\nI'm unclear about why this paragraph is here.\n\n-- \nFabien.",
"msg_date": "Wed, 21 Dec 2022 08:47:32 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On 12/19/22 04:36, Michael Paquier wrote:\n> On Sat, Dec 17, 2022 at 05:49:15PM +0100, Fabien COELHO wrote:\n>> Overall, I think that there should be a clearer discussion and plan about\n>> which random functionS postgres should provide to complement the standard\n>> instead of going there… randomly:-)\n> \n> So, what does the specification tells about seeds, normal and random\n> functions?\n\n\nNothing at all.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 21 Dec 2022 09:18:54 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 08:47:32AM +0100, Fabien COELHO wrote:\n> From a typical use case point of view, I'd say uniform, normal and\n> exponential would make sense for floats. I'm also okay with generating a\n> uniform bytes pseudo-randomly.\n\nI'd agree with this set.\n\n> I'd be more at ease to add simple functions rather than a special\n> heavy-on-keywords syntax, even if standard.\n\nOkay.\n\n>> Note that SQLValueFunction made the addition of more returning data\n>> types a bit more complicated (not much, still) than the new\n>> COERCE_SQL_SYNTAX by going through a mapping function, so the\n>> keyword/function mapping is straight-forward.\n> \n> I'm unclear about why this paragraph is here.\n\nJust saying that using COERCE_SQL_SYNTAX for SQL keywords is easier\nthan the older style. If the SQL specification mentions no SQL\nkeywords for such things, this is irrelevant, of course :)\n--\nMichael",
"msg_date": "Thu, 22 Dec 2022 11:38:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Thu, Dec 08, 2022 at 02:58:02PM -0800, Paul Ramsey wrote:\n> > On Dec 8, 2022, at 2:46 PM, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > \n> > I guess make_interval is a typo ?\n> > \n> > This is causing it to fail tests:\n> > http://cfbot.cputube.org/paul-ramsey.html\n> > \n> \n> Yep, dumb typo, thanks! This bot is amazeballs, thank you!\n\nThis is still failing tests - did you enable cirrusci on your own github\naccount to run available checks on the patch ?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 30 Dec 2022 21:58:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 09:58:04PM -0600, Justin Pryzby wrote:\n> This is still failing tests - did you enable cirrusci on your own github\n> account to run available checks on the patch ?\n\nFYI, here is the failure:\n[21:23:10.814] In file included from pg_prng.c:27:\n[21:23:10.814] ../../src/include/utils/float.h:46:16: error: ‘struct\nNode’ declared inside parameter list will not be visible outside of\nthis definition or declaration [-Werror] \n[21:23:10.814] 46 | struct Node *escontext);\n\nAnd a link to it, from the CF bot:\nhttps://cirrus-ci.com/task/5969961391226880?logs=gcc_warning#L452\n--\nMichael",
"msg_date": "Tue, 3 Jan 2023 11:24:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> FYI, here is the failure:\n> [21:23:10.814] In file included from pg_prng.c:27:\n> [21:23:10.814] ../../src/include/utils/float.h:46:16: error: ‘struct\n> Node’ declared inside parameter list will not be visible outside of\n> this definition or declaration [-Werror] \n> [21:23:10.814] 46 | struct Node *escontext);\n\nHmm ... this looks an awful lot like it is the fault of ccff2d20e\nnot of the random_normal patch; that is, we probably need a\n\"struct Node\" stub declaration in float.h. However, why are we\nnot seeing any reports of this from elsewhere? I'm concerned now\nthat there are more places also needing stub declarations, but\nmy test process didn't expose it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Jan 2023 21:38:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> FYI, here is the failure:\n>> [21:23:10.814] In file included from pg_prng.c:27:\n>> [21:23:10.814] ../../src/include/utils/float.h:46:16: error: ‘struct\n>> Node’ declared inside parameter list will not be visible outside of\n>> this definition or declaration [-Werror] \n>> [21:23:10.814] 46 | struct Node *escontext);\n\n> Hmm ... this looks an awful lot like it is the fault of ccff2d20e\n> not of the random_normal patch; that is, we probably need a\n> \"struct Node\" stub declaration in float.h.\n\n[ ... some head-scratching later ... ]\n\nNo, we don't per our normal headerscheck rules, which are that\nheaders such as utils/float.h need to be compilable after including\njust postgres.h. The \"struct Node\" stub declaration in elog.h will\nbe visible, making the declaration of float8in_internal kosher.\n\nSo the problem in this patch is that it's trying to include\nutils/float.h in a src/common file, where we have not included\npostgres.h. Question is, why did you do that? I see nothing in\npg_prng_double_normal() that looks like it should require that header.\nIf it did, it'd be questionable whether it's okay to be in src/common.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 11:41:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "I wrote:\n> So the problem in this patch is that it's trying to include\n> utils/float.h in a src/common file, where we have not included\n> postgres.h. Question is, why did you do that?\n\n(Ah, for M_PI ... but our practice is just to duplicate that #define\nwhere needed outside the backend.)\n\nI spent some time reviewing this patch. I'm on board with\ninventing random_normal(): the definition seems solid and\nthe use-case for it seems reasonably well established.\nI'm not necessarily against inventing similar functions for\nother distributions, but this patch is not required to do so.\nWe can leave that discussion until somebody is motivated to\nsubmit a patch for one.\n\nOn the other hand, I'm much less on board with inventing\nrandom_string(): we don't have any clear field demand for it\nand the appropriate definitional details are a lot less obvious\n(for example, whether it needs to be based on pg_strong_random()\nrather than the random() sequence). I think we should leave that\nout, and I have done so in the attached updated patch.\n\nI noted several errors in the submitted patch. It was creating\nthe function as PARALLEL SAFE which is just wrong, and the whole\nbusiness with checking PG_NARGS is useless because it will always\nbe 2. (That's not how default arguments work.)\n\nThe business with checking against DBL_EPSILON seems wrong too.\nAll we need is to ensure that u1 > 0 so that log(u1) will not\nchoke; per spec, log() is defined for any positive input. I see that\nthat seems to have been modeled on the C++ code in the Wikipedia\npage, but I'm not sure that C++'s epsilon means the same thing, and\nif it does then their example code is just wrong. See the discussion\nabout \"tails truncation\" immediately above it: artificially\nconstraining the range of u1 just limits how much of the tail\nof the distribution we can reproduce. 
So that led me to doing\nit the same way as in the existing Box-Muller code in pgbench,\nwhich I then deleted per Fabien's advice.\n\nBTW, the pgbench code was using sin() not cos(), which I duplicated\nbecause using cos() causes the expected output of the pgbench tests\nto change. I'm not sure whether there was any hard reason to prefer\none or the other, and we can certainly change the expected output\nif there's some reason to prefer cos().\n\nI concur with not worrying about the Inf/NaN cases that Mark\npointed out. It's not obvious that the results the proposed code\nproduces are wrong, and it's even less obvious that anyone will\never care.\n\nAlso, I tried running the new random.sql regression cases over\nand over, and found that the \"not all duplicates\" test fails about\none time in 100000 or so. We could probably tolerate that given\nthat the random test is marked \"ignore\" in parallel_schedule, but\nI thought it best to add one more iteration so we could knock the\nodds down. Also I changed the test iterations so they weren't\nall invoking random_normal() in exactly the same way.\n\nThis version seems committable to me, barring objections.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Jan 2023 19:20:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "I wrote:\n> Also, I tried running the new random.sql regression cases over\n> and over, and found that the \"not all duplicates\" test fails about\n> one time in 100000 or so. We could probably tolerate that given\n> that the random test is marked \"ignore\" in parallel_schedule, but\n> I thought it best to add one more iteration so we could knock the\n> odds down.\n\nHmm ... it occurred to me to try the same check on the existing\nrandom() tests (attached), and darn if they don't fail even more\noften, usually within 50K iterations. So maybe we should rethink\nthat whole thing.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 08 Jan 2023 21:17:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 00:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> This version seems committable to me, barring objections.\n>\n\nWhilst I have no objection to adding random_normal(), ISTM that we're\nat risk of adding an arbitrary set of random functions without a clear\nidea of where we'll end up, and how they'll affect one another (shared\nstate or not).\n\nFor example, random_normal() uses a PRNG and it shares the same state\nas random()/setseed(). That's fine, and presumably if we add other\ncontinuous distributions like exponential, they'll follow suit.\n\nBut what if we add something like random_int() to return a random\ninteger in a range (something that has been suggested before, and I\nthink would be very useful), or other discrete random functions? Would\nthey use a PRNG and share the same state? If so, having that state in\nfloat.c wouldn't make sense anymore. If not, will they have their own\nseed-setting functions, and how many seed-setting functions will we\nend up with?\n\nOver on [1] we're currently heading towards adding array_shuffle() and\narray_sample() using a PRNG with a separate state shared just by those\n2 functions, with no way to seed it, and not marking them as PARALLEL\nRESTRICTED, so they can't be made deterministic.\n\nISTM that random functions should fall into 1 of 2 clearly defined\ncategories -- strong random functions and pseudorandom functions. IMO\nall pseudorandom functions should be PARALLEL RESTRICTED and have a\nway to set their seed, so that they can be made deterministic --\nsomething that's very useful for users writing tests.\n\nNow maybe having multiple seed-setting functions (one per datatype?)\nis OK, but it seems unnecessary and cumbersome to me. Why would you\never want to seed the random_int() sequence and the random() sequence\ndifferently? 
No other library of random functions I know of behaves\nthat way.\n\nSo IMO all pseudorandom functions should share the same PRNG state and\nseed-setting functions. That would mean they should all be in the same\n(new) C file, so that the PRNG state can be kept private to that file.\n\nI think it would also make sense to add a new \"Random Functions\"\nsection to the docs, and move the descriptions of random(),\nrandom_normal() and setseed() there. That way, all the functions\naffected by setseed() can be kept together on one page (future random\nfunctions may not all be sensibly classified as \"mathematical\").\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/9d160a44-7675-51e8-60cf-6d64b76db831@aboutsource.net\n\n\n",
"msg_date": "Mon, 9 Jan 2023 12:03:34 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
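A minimal sketch of the arrangement Dean argues for: one translation-unit-private PRNG state shared by the whole family of pseudorandom functions, with a single seed setter. All names here are hypothetical, and a xorshift64 step stands in for pg_prng; this is an illustration of the layout, not PostgreSQL's actual code.

```c
#include <stdint.h>

/* File-private PRNG state shared by every pseudorandom function below,
 * so one setseed call makes the whole family deterministic. */
static uint64_t prng_state = 88172645463325252ULL;

static void
shared_setseed(uint64_t seed)
{
    prng_state = seed ? seed : 1;   /* xorshift state must be nonzero */
}

/* Marsaglia xorshift64 step, standing in for pg_prng. */
static uint64_t
next_u64(void)
{
    uint64_t x = prng_state;

    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    prng_state = x;
    return x;
}

/* Both "SQL functions" draw from the same shared sequence. */
static double
shared_random(void)
{
    /* take the top 53 bits and scale into [0, 1) */
    return (next_u64() >> 11) * (1.0 / 9007199254740992.0);
}

static int64_t
shared_random_int(int64_t lo, int64_t hi)
{
    /* modulo bias ignored for brevity */
    return lo + (int64_t) (next_u64() % (uint64_t) (hi - lo + 1));
}
```

Because the state is static to this one file, there is no way for a second, divergent seed-setting function to creep in — the property the message above is asking for.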
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> So IMO all pseudorandom functions should share the same PRNG state and\n> seed-setting functions. That would mean they should all be in the same\n> (new) C file, so that the PRNG state can be kept private to that file.\n\nI agree with this in principle, but I feel no need to actually reshuffle\nthe code before we accept a proposal for such a function that wouldn't\nlogically belong in float.c.\n\n> I think it would also make sense to add a new \"Random Functions\"\n> section to the docs, and move the descriptions of random(),\n> random_normal() and setseed() there.\n\nLikewise, this'll just add confusion in the short run. A <sect1>\nwith only three functions in it is going to look mighty odd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 09 Jan 2023 10:26:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 15:26, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > So IMO all pseudorandom functions should share the same PRNG state and\n> > seed-setting functions. That would mean they should all be in the same\n> > (new) C file, so that the PRNG state can be kept private to that file.\n>\n> I agree with this in principle, but I feel no need to actually reshuffle\n> the code before we accept a proposal for such a function that wouldn't\n> logically belong in float.c.\n>\n> > I think it would also make sense to add a new \"Random Functions\"\n> > section to the docs, and move the descriptions of random(),\n> > random_normal() and setseed() there.\n>\n> Likewise, this'll just add confusion in the short run. A <sect1>\n> with only three functions in it is going to look mighty odd.\n>\n\nOK, that's fair enough, while we're just adding random_normal().\n\nBTW, \"UUID Functions\" only has 1 function in it :-)\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 9 Jan 2023 16:12:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "I wrote:\n> Hmm ... it occurred to me to try the same check on the existing\n> random() tests (attached), and darn if they don't fail even more\n> often, usually within 50K iterations. So maybe we should rethink\n> that whole thing.\n\nI pushed Paul's patch with the previously-discussed tests, but\nthe more I look at random.sql the less I like it. I propose\nthat we nuke the existing tests from orbit and replace with\nsomething more or less as attached. This is faster than what\nwe have, removes the unnecessary dependency on the \"onek\" table,\nand I believe it to be a substantially more thorough test of the\nrandom functions' properties. (We could probably go further\nthan this, like trying to verify distribution properties. But\nit's been too long since college statistics for me to want to\nwrite such tests myself, and I'm not real sure we need them.)\n\nBTW, if this does bring the probability of failure down to the\none-in-a-billion range, I think we could also nuke the whole\n\"ignore:\" business, simplifying pg_regress and allowing the\nrandom test to be run in parallel with others.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 09 Jan 2023 13:52:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 18:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I pushed Paul's patch with the previously-discussed tests, but\n> the more I look at random.sql the less I like it. I propose\n> that we nuke the existing tests from orbit and replace with\n> something more or less as attached.\n\nLooks like a definite improvement.\n\n> (We could probably go further\n> than this, like trying to verify distribution properties. But\n> it's been too long since college statistics for me to want to\n> write such tests myself, and I'm not real sure we need them.)\n\nI played around with the Kolmogorov–Smirnov test, to test random() for\nuniformity. The following passes roughly 99.9% of the time:\n\nCREATE OR REPLACE FUNCTION ks_test_uniform_random()\nRETURNS boolean AS\n$$\nDECLARE\n n int := 1000; -- Number of samples\n c float8 := 1.94947; -- Critical value for 99.9% confidence\n/* c float8 := 1.62762; -- Critical value for 99% confidence */\n/* c float8 := 1.22385; -- Critical value for 90% confidence */\n ok boolean;\nBEGIN\n ok := (\n WITH samples AS (\n SELECT random() r FROM generate_series(1, n) ORDER BY 1\n ), indexed_samples AS (\n SELECT (row_number() OVER())-1.0 i, r FROM samples\n )\n SELECT max(abs(i/n-r)) < c / sqrt(n) FROM indexed_samples\n );\n RETURN ok;\nEND\n$$\nLANGUAGE plpgsql;\n\nand is very fast. 
So that gives decent confidence that random() is\nindeed uniform.\n\nWith a one-in-a-thousand chance of failing, if you wanted something\nwith around a one-in-a-billion chance of failing, you could just try\nit 3 times:\n\nSELECT ks_test_uniform_random() OR\n ks_test_uniform_random() OR\n ks_test_uniform_random();\n\nbut it feels pretty hacky, and probably not really necessary.\n\nRigorous tests for other distributions are harder, but also probably\nnot necessary if we have confidence that the underlying PRNG is\nuniform.\n\n> BTW, if this does bring the probability of failure down to the\n> one-in-a-billion range, I think we could also nuke the whole\n> \"ignore:\" business, simplifying pg_regress and allowing the\n> random test to be run in parallel with others.\n\nI didn't check the one-in-a-billion claim, but +1 for that.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 9 Jan 2023 22:38:36 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Mon, 9 Jan 2023 at 18:52, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (We could probably go further\n>> than this, like trying to verify distribution properties. But\n>> it's been too long since college statistics for me to want to\n>> write such tests myself, and I'm not real sure we need them.)\n\n> I played around with the Kolmogorov–Smirnov test, to test random() for\n> uniformity. The following passes roughly 99.9% of the time:\n\nAh, cool, thanks for this code.\n\n> With a one-in-a-thousand chance of failing, if you wanted something\n> with around a one-in-a-billion chance of failing, you could just try\n> it 3 times:\n> SELECT ks_test_uniform_random() OR\n> ks_test_uniform_random() OR\n> ks_test_uniform_random();\n> but it feels pretty hacky, and probably not really necessary.\n\nThat seems like a good way, because I'm not satisfied with\none-in-a-thousand odds if we want to remove the \"ignore\" marker.\nIt's still plenty fast enough: on my machine, the v2 patch below\ntakes about 19ms, versus 22ms for the script as it stands in HEAD.\n\n> Rigorous tests for other distributions are harder, but also probably\n> not necessary if we have confidence that the underlying PRNG is\n> uniform.\n\nAgreed.\n\n>> BTW, if this does bring the probability of failure down to the\n>> one-in-a-billion range, I think we could also nuke the whole\n>> \"ignore:\" business, simplifying pg_regress and allowing the\n>> random test to be run in parallel with others.\n\n> I didn't check the one-in-a-billion claim, but +1 for that.\n\nI realized that we do already run random in a parallel group;\nthe \"ignore: random\" line in parallel_schedule just marks it\nas failure-ignorable, it doesn't schedule it. 
(The comment is a\nbit misleading about this, but I want to remove that not rewrite it.)\nNonetheless, nuking the whole ignore-failures mechanism seems like\ngood cleanup to me.\n\nAlso, I tried this on some 32-bit big-endian hardware (NetBSD on macppc)\nto verify my thesis that the results of random() are now machine\nindependent. That part works, but the random_normal() tests didn't;\nI saw low-order-bit differences from the results on x86_64 Linux.\nPresumably, that's because one or more of sqrt()/log()/sin() are\nrounding off a bit differently there. v2 attached deals with this by\nbacking off to \"extra_float_digits = 0\" for that test. Once it hits the\nbuildfarm we might find we have to reduce extra_float_digits some more,\nbut that was enough to make NetBSD/macppc happy.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 09 Jan 2023 18:38:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 23:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I tried this on some 32-bit big-endian hardware (NetBSD on macppc)\n> to verify my thesis that the results of random() are now machine\n> independent. That part works, but the random_normal() tests didn't;\n\nAh yes, I wondered about that.\n\n> I saw low-order-bit differences from the results on x86_64 Linux.\n> Presumably, that's because one or more of sqrt()/log()/sin() are\n> rounding off a bit differently there. v2 attached deals with this by\n> backing off to \"extra_float_digits = 0\" for that test.\n\nMakes sense.\n\nI double-checked the one-in-a-billion claim, and it looks about right\nfor each test.\n\nThe one I wasn't sure about was the chance of duplicates for\nrandom_normal(). Analysing it more closely, it actually has a smaller\nchance of duplicates, since the difference between 2 standard normal\ndistributions is another normal distribution with a standard deviation\nof sqrt(2), and so the probability of a pair of random_normal()'s\nbeing the same is about 2*sqrt(pi) ~ 3.5449 times lower than for\nrandom(). So you can call random_normal() around 5600 times (rather\nthan 3000 times) before having a 1e-9 chance of duplicates. So, as\nwith the random() duplicates test, the probability of failure with\njust 1000 values should be well below 1e-9. Intuitively, that was\nalways going to be true, but I wanted to know the details.\n\nThe rest looks good to me, except there's a random non-ASCII character\ninstead of a hyphen in \"Kolmogorov-Smirnov\" (because I copy-pasted the\nname from some random website).\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 10 Jan 2023 08:33:53 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 08:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> The rest looks good to me, except there's a random non-ASCII character\n> instead of a hyphen in \"Kolmogorov-Smirnov\" (because I copy-pasted the\n> name from some random website).\n>\n\nOh, never mind. I see you already fixed that.\nI should finish reading all my mail before hitting reply.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 10 Jan 2023 10:20:00 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> I double-checked the one-in-a-billion claim, and it looks about right\n> for each test.\n\nThanks for double-checking my arithmetic.\n\n> The rest looks good to me, except there's a random non-ASCII character\n> instead of a hyphen in \"Kolmogorov-Smirnov\" (because I copy-pasted the\n> name from some random website).\n\nYeah, I caught that before committing.\n\nThe AIX buildfarm members were still showing low-order diffs in the\nrandom_normal results at extra_float_digits = 0, but they seem OK\nafter reducing it to -1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 09:34:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 6:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > I double-checked the one-in-a-billion claim, and it looks about right\n> > for each test.\n>\n> Thanks for double-checking my arithmetic.\n>\n> > The rest looks good to me, except there's a random non-ASCII character\n> > instead of a hyphen in \"Kolmogorov-Smirnov\" (because I copy-pasted the\n> > name from some random website).\n>\n> Yeah, I caught that before committing.\n>\n> The AIX buildfarm members were still showing low-order diffs in the\n> random_normal results at extra_float_digits = 0, but they seem OK\n> after reducing it to -1.\n\nI should leave the country more often... thanks for cleaning up my\npatch and committing it Tom! It's a Christmas miracle (at least, for\nme :)\nP.\n\n\n",
"msg_date": "Tue, 10 Jan 2023 08:11:33 -0800",
"msg_from": "Paul Ramsey <pramsey@cleverelephant.ca>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On 1/9/23 23:52, Tom Lane wrote:\n> BTW, if this does bring the probability of failure down to the\n> one-in-a-billion range, I think we could also nuke the whole\n> \"ignore:\" business, simplifying pg_regress and allowing the\n> random test to be run in parallel with others.\nWe have been using the 'ignore' option to cover some time-dependent \nfeatures with tests, such as \"statement_timeout\", \n\"idle_in_transaction_session_timeout\", usage of user timeouts in \nextensions, and so on.\n\nWe have used the pg_sleep() function to interrupt a query at certain \nexecution phase. But on some platforms, especially in containers, the \nquery can vary execution time in so widely that the pg_sleep() timeout, \nrequired to get rid of dependency on a query execution time, has become \nunacceptable. So, the \"ignore\" option was the best choice.\n\nFor now, do we only have the \"isolation tests\" option for creating \nstable execution-time-dependent tests? Or am I not aware of some test \nmachinery?\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 10:39:27 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> On 1/9/23 23:52, Tom Lane wrote:\n>> BTW, if this does bring the probability of failure down to the\n>> one-in-a-billion range, I think we could also nuke the whole\n>> \"ignore:\" business, simplifying pg_regress and allowing the\n>> random test to be run in parallel with others.\n\n> We have used the pg_sleep() function to interrupt a query at certain \n> execution phase. But on some platforms, especially in containers, the \n> query can vary execution time in so widely that the pg_sleep() timeout, \n> required to get rid of dependency on a query execution time, has become \n> unacceptable. So, the \"ignore\" option was the best choice.\n\nBut does such a test have any actual value? If your test infrastructure\nignores the result, what makes you think you'd notice if the test did\nindeed detect a problem?\n\nI think \"ignore:\" was a kluge we put in twenty-plus years ago when our\ntesting standards were a lot lower, and it's way past time we got\nrid of it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Jan 2023 01:01:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
},
{
"msg_contents": "On 1/19/23 11:01, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> On 1/9/23 23:52, Tom Lane wrote:\n>>> BTW, if this does bring the probability of failure down to the\n>>> one-in-a-billion range, I think we could also nuke the whole\n>>> \"ignore:\" business, simplifying pg_regress and allowing the\n>>> random test to be run in parallel with others.\n> \n>> We have used the pg_sleep() function to interrupt a query at certain\n>> execution phase. But on some platforms, especially in containers, the\n>> query can vary execution time in so widely that the pg_sleep() timeout,\n>> required to get rid of dependency on a query execution time, has become\n>> unacceptable. So, the \"ignore\" option was the best choice.\n> \n> But does such a test have any actual value? If your test infrastructure\n> ignores the result, what makes you think you'd notice if the test did\n> indeed detect a problem?\nYes, it is good for catching SEGFAULTs and assertion failures, which may \nbe frequent because of the logic complexity in the case of timeouts.\n\n> \n> I think \"ignore:\" was a kluge we put in twenty-plus years ago when our\n> testing standards were a lot lower, and it's way past time we got\n> rid of it.\nOk, I will try to invent an alternative way to do deep (and stable) \ntesting of timeouts. Thank you for the answer.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 11:48:20 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] random_normal function"
}
] |
[
{
"msg_contents": "Hi,\n\nSince\n\ncommit 3f0e786ccbf\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2022-12-07 12:13:35 -0800\n\n meson: Add 'running' test setup, as a replacement for installcheck\n\nCI tests the pg_regress/isolationtester tests that support doing so against a\nrunning server.\n\n\nUnfortunately cfbot shows that that doesn't work entirely reliably.\n\nThe most frequent case is postgres_fdw, which somewhat regularly fails with a\nregression.diff like this:\n\ndiff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\n--- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\t2022-12-08 20:35:24.772888000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\t2022-12-08 20:43:38.199450000 +0000\n@@ -9911,8 +9911,7 @@\n \tWHERE application_name = 'fdw_retry_check';\n pg_terminate_backend\n ----------------------\n- t\n-(1 row)\n+(0 rows)\n\n -- This query should detect the broken connection when starting new remote\n -- transaction, reestablish new connection, and then succeed.\n\n\nSee e.g.\nhttps://cirrus-ci.com/task/5925540020879360\nhttps://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/runningcheck.log\n\n\nThe following comment in the test provides a hint what might be happening:\n\n-- If debug_discard_caches is active, it results in\n-- dropping remote connections after every transaction, making it\n-- impossible to test termination meaningfully. So turn that off\n-- for this test.\nSET debug_discard_caches = 0;\n\n\nI guess that a cache reset message arrives and leads to the connection being\nterminated. 
Unfortunately that's hard to see right now, as the relevant log\nmessages are output with DEBUG3 - it's quite verbose, so enabling it for all\ntests will be painful.\n\nI *think* I have seen this failure locally at least once, when running the\ntest normally.\n\n\nI'll try to reproduce this locally for a bit. If I can't, the only other idea\nI have for debugging this is to change log_min_messages in that section of the\npostgres_fdw test to DEBUG3.\n\n\n\nThe second failure case I observed, at a lower frequency, is in the main\nregression tests:\nhttps://cirrus-ci.com/task/5640584912699392\nhttps://api.cirrus-ci.com/v1/artifact/task/5640584912699392/testrun/build/testrun/regress-running/regress/regression.diffs\nhttps://api.cirrus-ci.com/v1/artifact/task/5640584912699392/testrun/build/testrun/runningcheck.log\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out\t2022-12-08 16:49:28.239508000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\t2022-12-08 16:57:17.286650000 +0000\n@@ -1910,11 +1910,15 @@\n SELECT unique1 FROM tenk1\n WHERE unique1 IN (1,42,7)\n ORDER BY unique1;\n- QUERY PLAN\n--------------------------------------------------------\n- Index Only Scan using tenk1_unique1 on tenk1\n- Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n-(2 rows)\n+ QUERY PLAN\n+-------------------------------------------------------------------\n+ Sort\n+ Sort Key: unique1\n+ -> Bitmap Heap Scan on tenk1\n+ Recheck Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+ -> Bitmap Index Scan on tenk1_unique1\n+ Index Cond: (unique1 = ANY ('{1,42,7}'::integer[]))\n+(6 rows)\n\n SELECT unique1 FROM tenk1\n WHERE unique1 IN (1,42,7)\n\n\nWhich I think we've seen a number of times before, even in the temp-install\ncase. 
We fixed one source of this issue in this thread:\nhttps://www.postgresql.org/message-id/1346227.1649887693%40sss.pgh.pa.us\nbut it looks like there's some remaining instability.\n\nAccording to the server log (link above), there's no autovacuum on\ntenk1.\n\nUnfortunately we don't log non-automatic vacuums unless they are run with\nverbose, so we can't see what horizon was used (see heap_vacuum_rel()'s\ncomputation of 'instrument').\n\nI don't have a better idea than to either change the above, or to revert\n91998539b227dfc6dd091714da7d106f2c95a321.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Dec 2022 16:15:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "tests against running server occasionally fail, postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-08 16:15:11 -0800, Andres Freund wrote:\n> commit 3f0e786ccbf\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2022-12-07 12:13:35 -0800\n> \n> meson: Add 'running' test setup, as a replacement for installcheck\n> \n> CI tests the pg_regress/isolationtester tests that support doing so against a\n> running server.\n> \n> \n> Unfortunately cfbot shows that that doesn't work entirely reliably.\n> \n> The most frequent case is postgres_fdw, which somewhat regularly fails with a\n> regression.diff like this:\n> \n> diff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\n> --- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\t2022-12-08 20:35:24.772888000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\t2022-12-08 20:43:38.199450000 +0000\n> @@ -9911,8 +9911,7 @@\n> \tWHERE application_name = 'fdw_retry_check';\n> pg_terminate_backend\n> ----------------------\n> - t\n> -(1 row)\n> +(0 rows)\n> \n> -- This query should detect the broken connection when starting new remote\n> -- transaction, reestablish new connection, and then succeed.\n> \n> \n> See e.g.\n> https://cirrus-ci.com/task/5925540020879360\n> https://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n> https://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/runningcheck.log\n> \n> \n> The following comment in the test provides a hint what might be happening:\n> \n> -- If debug_discard_caches is active, it results in\n> -- dropping remote connections after every transaction, making it\n> -- impossible to test termination meaningfully. 
So turn that off\n> -- for this test.\n> SET debug_discard_caches = 0;\n> \n> \n> I guess that a cache reset message arrives and leads to the connection being\n> terminated. Unfortunately that's hard to see right now, as the relevant log\n> messages are output with DEBUG3 - it's quite verbose, so enabling it for all\n> tests will be painful.\n> \n> I *think* I have seen this failure locally at least once, when running the\n> test normally.\n> \n> \n> I'll try to reproduce this locally for a bit. If I can't, the only other idea\n> I have for debugging this is to change log_min_messages in that section of the\n> postgres_fdw test to DEBUG3.\n\nOh. I tried to reproduce the issue, without success so far, but eventually my\ntest loop got stuck in something I reported previously and forgot about since:\nhttps://www.postgresql.org/message-id/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de\n\nI wonder if the timing on the freebsd CI task works out to hitting a \"smaller\nversion\" of the problem that eventually resolves itself, which then leads to a\nsinval reset getting sent, causing the observable problem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 8 Dec 2022 16:36:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-08 16:36:07 -0800, Andres Freund wrote:\n> On 2022-12-08 16:15:11 -0800, Andres Freund wrote:\n> > Unfortunately cfbot shows that that doesn't work entirely reliably.\n> >\n> > The most frequent case is postgres_fdw, which somewhat regularly fails with a\n> > regression.diff like this:\n> >\n> > diff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\n> > --- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\t2022-12-08 20:35:24.772888000 +0000\n> > +++ /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\t2022-12-08 20:43:38.199450000 +0000\n> > @@ -9911,8 +9911,7 @@\n> > \tWHERE application_name = 'fdw_retry_check';\n> > pg_terminate_backend\n> > ----------------------\n> > - t\n> > -(1 row)\n> > +(0 rows)\n> >\n> > -- This query should detect the broken connection when starting new remote\n> > -- transaction, reestablish new connection, and then succeed.\n\n> >\n> > I guess that a cache reset message arrives and leads to the connection being\n> > terminated. Unfortunately that's hard to see right now, as the relevant log\n> > messages are output with DEBUG3 - it's quite verbose, so enabling it for all\n> > tests will be painful.\n> >\n> > I *think* I have seen this failure locally at least once, when running the\n> > test normally.\n> >\n> >\n> > I'll try to reproduce this locally for a bit. If I can't, the only other idea\n> > I have for debugging this is to change log_min_messages in that section of the\n> > postgres_fdw test to DEBUG3.\n>\n> Oh. 
I tried to reproduce the issue, without success so far, but eventually my\n> test loop got stuck in something I reported previously and forgot about since:\n> https://www.postgresql.org/message-id/20220925232237.p6uskba2dw6fnwj2%40awork3.anarazel.de\n>\n> I wonder if the timing on the freebsd CI task works out to hitting a \"smaller\n> version\" of the problem that eventually resolves itself, which then leads to a\n> sinval reset getting sent, causing the observable problem.\n\nThe issue referenced above is now fixed, and I haven't seen instances of it\nsince then. I also just now fixed the issue that often lead to failing to\nupload logs.\n\nUnfortunately the fdw_retry_check issue from above has re-occurred since then:\n\nhttps://cirrus-ci.com/task/4901157940756480\nhttps://api.cirrus-ci.com/v1/artifact/task/4901157940756480/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n\n\nAnother run hit an issue we've been fighting repeatedly on the buildfarm / CI:\nhttps://cirrus-ci.com/task/5527490404286464\nhttps://api.cirrus-ci.com/v1/artifact/task/5527490404286464/testrun/build/testrun/regress-running/regress/regression.diffs\n\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out\t2023-02-06 23:52:43.627604000 +0000\n+++ /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\t2023-02-07 00:03:04.535232000 +0000\n@@ -1930,12 +1930,13 @@\n SELECT thousand, tenthous FROM tenk1\n WHERE thousand < 2 AND tenthous IN (1001,3000)\n ORDER BY thousand;\n- QUERY PLAN\n--------------------------------------------------------\n- Index Only Scan using tenk1_thous_tenthous on tenk1\n- Index Cond: (thousand < 2)\n- Filter: (tenthous = ANY ('{1001,3000}'::integer[]))\n-(3 rows)\n+ QUERY 
PLAN\n+--------------------------------------------------------------------------------------\n+ Sort\n+ Sort Key: thousand\n+ -> Index Only Scan using tenk1_thous_tenthous on tenk1\n+ Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n+(4 rows)\n\n SELECT thousand, tenthous FROM tenk1\n WHERE thousand < 2 AND tenthous IN (1001,3000)\n\n\nI'd be tempted to disable the test, but it also found genuine issues in a\nbunch of CF entries, and all these test instabilities seem like ones we'd also\nsee, with a lower occurrence on the buildfarm.\n\n\nWRT the fdw_retry_check: I wonder if we should increase the log level of\na) pgfdw_inval_callback deciding to disconnect\nb) ReceiveSharedInvalidMessages() deciding to reset\n\nto DEBUG1, at least temporarily?\n\nAlternatively we could add a\nSET log_min_messages=debug4;\n...\nRESET log_min_messages;\n\naround the postgres_fdw connection re-establishment test?\n\n\nOne thing nudging me towards the more global approach is that I have the vague\nfeelign that there's a wider issue around hitting more sinval resets than we\nshould - and it'd right now be very hard to know about that.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 17:53:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 17:53:00 -0800, Andres Freund wrote:\n> WRT the fdw_retry_check: I wonder if we should increase the log level of\n> a) pgfdw_inval_callback deciding to disconnect\n> b) ReceiveSharedInvalidMessages() deciding to reset\n>\n> to DEBUG1, at least temporarily?\n>\n> Alternatively we could add a\n> SET log_min_messages=debug4;\n> ...\n> RESET log_min_messages;\n>\n> around the postgres_fdw connection re-establishment test?\n>\n>\n> One thing nudging me towards the more global approach is that I have the vague\n> feelign that there's a wider issue around hitting more sinval resets than we\n> should - and it'd right now be very hard to know about that.\n\nLuckily it proved to be easy enough to reproduce on a private branch, by\nsetting the test to repeat a couple times.\n\nI set the aforementioned log messages to LOG. And indeed:\n\n2023-02-07 02:54:18.773 UTC [10800][client backend] [pg_regress/postgres_fdw][7/10526:0] LOG: cache state reset\n2023-02-07 02:54:18.773 UTC [10800][client backend] [pg_regress/postgres_fdw][7/10526:0] LOG: discarding connection 0x802251f00\n\nthat was preceded by another log message less than 200 ms before:\n2023-02-07 02:54:18.588 UTC [10800][client backend] [pg_regress/postgres_fdw][7/10523:55242] STATEMENT: ALTER SERVER loopback OPTIONS (application_name 'fdw_retry_check');\n\nThe log statements inbetween are about isolation/reindex-concurrently-toast\nand pg_regress/indexing.\n\nSo the problem is indeed that we somehow quickly overflow the sinval queue. I\nguess we need a bit more logging around the size of the sinval queue and its\n\"fullness\"?\n\n\nI'm a bit surprised that MAXNUMMESSAGES is a hardcoded 4096. It's not\nparticularly surprising that that's quickly overflown?\n\n\nThere's something off. 
Isolationtester's control connection emits *loads* of\ninvalidation messages:\n2023-02-06 19:29:06.430 PST [2125297][client backend][6/0:121864][isolation/receipt-report/control connection] LOG: previously emitted 7662 messages, 21 this time\n2023-02-06 19:29:06.566 PST [2125297][client backend][6/0:121873][isolation/receipt-report/control connection] LOG: previously emitted 8155 messages, 99 this time\n2023-02-06 19:29:06.655 PST [2125297][client backend][6/0:121881][isolation/receipt-report/control connection] LOG: previously emitted 8621 messages, 99 this time\n2023-02-06 19:29:06.772 PST [2125297][client backend][6/0:121892][isolation/receipt-report/control connection] LOG: previously emitted 9208 messages, 85 this time\n2023-02-06 19:29:06.867 PST [2125297][client backend][6/0:121900][isolation/receipt-report/control connection] LOG: previously emitted 9674 messages, 85 this time\n\nand this happens with lots of other tests.\n\nGreetings,\n\nAndres Freund\n\n\nPS: The reindex-concurrently-toast output seems to indicate something is\nbroken in the test... There's lots of non-existing table references in the\nexpected file, without that immediately making sense.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 19:29:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 19:29:46 -0800, Andres Freund wrote:\n> There's something off. Isolationtester's control connection emits *loads* of\n> invalidation messages:\n> 2023-02-06 19:29:06.430 PST [2125297][client backend][6/0:121864][isolation/receipt-report/control connection] LOG: previously emitted 7662 messages, 21 this time\n> 2023-02-06 19:29:06.566 PST [2125297][client backend][6/0:121873][isolation/receipt-report/control connection] LOG: previously emitted 8155 messages, 99 this time\n> 2023-02-06 19:29:06.655 PST [2125297][client backend][6/0:121881][isolation/receipt-report/control connection] LOG: previously emitted 8621 messages, 99 this time\n> 2023-02-06 19:29:06.772 PST [2125297][client backend][6/0:121892][isolation/receipt-report/control connection] LOG: previously emitted 9208 messages, 85 this time\n> 2023-02-06 19:29:06.867 PST [2125297][client backend][6/0:121900][isolation/receipt-report/control connection] LOG: previously emitted 9674 messages, 85 this time\n\nAh, that's just due to setup-teardown happening in the control connection.\n\n\nFWIW, I see plenty of sinval resets even if I increase the sinval queue size\nsubstantially. I suspect we've increased the number of sinval messages we sent\na good bit over time, due to additional syscaches.\n\nAs we only process catchup interrupts while in ReadCommand(), it's not\nsurprising that we very quickly get behind. A single statement suffices.\n\nAnyway, that's not really a correctness issue, just a performance issue.\n\n\nBut the postgres_fdw.sql vulnerability to syscache resets seems not\ngreat. Perhaps pgfdw_inval_callback() could check if the definition of the\nforeign server actually changed, before dropping the connection?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 20:47:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 17:53:00 -0800, Andres Freund wrote:\n> Another run hit an issue we've been fighting repeatedly on the buildfarm / CI:\n> https://cirrus-ci.com/task/5527490404286464\n> https://api.cirrus-ci.com/v1/artifact/task/5527490404286464/testrun/build/testrun/regress-running/regress/regression.diffs\n>\n> diff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\n> --- /tmp/cirrus-ci-build/src/test/regress/expected/create_index.out\t2023-02-06 23:52:43.627604000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/regress-running/regress/results/create_index.out\t2023-02-07 00:03:04.535232000 +0000\n> @@ -1930,12 +1930,13 @@\n> SELECT thousand, tenthous FROM tenk1\n> WHERE thousand < 2 AND tenthous IN (1001,3000)\n> ORDER BY thousand;\n> - QUERY PLAN\n> --------------------------------------------------------\n> - Index Only Scan using tenk1_thous_tenthous on tenk1\n> - Index Cond: (thousand < 2)\n> - Filter: (tenthous = ANY ('{1001,3000}'::integer[]))\n> -(3 rows)\n> + QUERY PLAN\n> +--------------------------------------------------------------------------------------\n> + Sort\n> + Sort Key: thousand\n> + -> Index Only Scan using tenk1_thous_tenthous on tenk1\n> + Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[])))\n> +(4 rows)\n>\n> SELECT thousand, tenthous FROM tenk1\n> WHERE thousand < 2 AND tenthous IN (1001,3000)\n>\n>\n> I'd be tempted to disable the test, but it also found genuine issues in a\n> bunch of CF entries, and all these test instabilities seem like ones we'd also\n> see, with a lower occurrence on the buildfarm.\n\nThe last occasion we hit this was at: https://www.postgresql.org/message-id/1346227.1649887693%40sss.pgh.pa.us\n\nI'm working on cleaning up the patch used for debugging in that thread, to\nmake VACUUM log to the server log if VERBOSE isn't used.\n\nOne thing I'm not quite sure what to do 
about is that we atm use a hardcoded\nDEBUG2 (not controlled by VERBOSE) in a bunch of places:\n\n\tereport(DEBUG2,\n\t\t\t(errmsg(\"table \\\"%s\\\": removed %lld dead item identifiers in %u pages\",\n\t\t\t\t\tvacrel->relname, (long long) index, vacuumed_pages)));\n\n    ivinfo.message_level = DEBUG2;\n\nI find DEBUG2 hard to use to run the entire regression tests, it results in a\nlot of output. Lots of it far less important than these kinds of details\nhere. So I'd like to use a different log level for them - but without further\ncomplications that'd mean they'd show up in VACUUM VERBOSE.\n\nI made them part of VERBOSE for now, but not because I think that's\nnecessarily the right answer, but because it could be useful for debugging\nthis stupid flapping test.\n\n\nI right now set instrument = true when\nmessage_level_is_interesting(DEBUG1). But that probably should be false? I set\nit to true because of starttime, but it'd probably be better to just move it\nout of the if (instrument). Also would require re-jiggering the condition of\nthe \"main block\" doing the logging.\n\n\nFWIW, running all regression tests that support doing so against a running\nserver with DEBUG1 results in 8.1MB, DEBUG2 in 17MB.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 7 Feb 2023 18:47:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 6:47 PM Andres Freund <andres@anarazel.de> wrote:\n> One thing I'm not quite sure what to do about is that we atm use a hardcoded\n> DEBUG2 (not controlled by VERBOSE) in a bunch of places:\n>\n> ereport(DEBUG2,\n> (errmsg(\"table \\\"%s\\\": removed %lld dead item identifiers in %u pages\",\n> vacrel->relname, (long long) index, vacuumed_pages)));\n>\n> ivinfo.message_level = DEBUG2;\n>\n> I find DEBUG2 hard to use to run the entire regression tests, it results in a\n> lot of output. Lots of it far less important than these kinds of details\n> here. So I'd like to use a different log level for them - but without further\n> complications that'd mean they'd show up in VACUUUM VERBOSE.\n\nI think that these DEBUG2 traces are of limited value, even for\nexperts. I personally never use them -- I just use VACUUM\nVERBOSE/autovacuum logging for everything, since it's far easier to\nread, and isn't missing something that the DEBUG2 traces have. In fact\nI wonder if we should even have them at all.\n\nI generally don't care about the details when VACUUM runs out of space\nfor dead_items -- these days the important thing is to just avoid it\ncompletely. I also generally don't care about how many index tuples\nwere deleted by each index's ambulkdelete() call, since VACUUM VERBOSE\nalready shows me the number of LP_DEAD stubs encountered/removed in\nthe heap. I can see the size of indexes and information about page\ndeletion in VACUUM VERBOSE these days, too.\n\nDon't get me wrong. It *would* be useful to see more information about\neach index in VACUUM VERBOSE output -- just not the number of tuples\ndeleted. Tuples really don't matter much at this level. But seeing\nsomething about the number of WAL records written while vacuuming each\nindex is another story. 
That's a cost that is likely to vary in\npossibly-interesting ways amongst indexes on the table, unlike\nIndexBulkDeleteResult.tuples_removed, which is very noisy, and\nsignifies almost nothing important on its own.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Feb 2023 14:03:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 14:03:49 -0800, Peter Geoghegan wrote:\n> On Tue, Feb 7, 2023 at 6:47 PM Andres Freund <andres@anarazel.de> wrote:\n> > One thing I'm not quite sure what to do about is that we atm use a hardcoded\n> > DEBUG2 (not controlled by VERBOSE) in a bunch of places:\n> >\n> > ereport(DEBUG2,\n> > (errmsg(\"table \\\"%s\\\": removed %lld dead item identifiers in %u pages\",\n> > vacrel->relname, (long long) index, vacuumed_pages)));\n> >\n> > ivinfo.message_level = DEBUG2;\n> >\n> > I find DEBUG2 hard to use to run the entire regression tests, it results in a\n> > lot of output. Lots of it far less important than these kinds of details\n> > here. So I'd like to use a different log level for them - but without further\n> > complications that'd mean they'd show up in VACUUUM VERBOSE.\n>\n> I think that these DEBUG2 traces are of limited value, even for\n> experts. I personally never use them -- I just use VACUUM\n> VERBOSE/autovacuum logging for everything, since it's far easier to\n> read, and isn't missing something that the DEBUG2 traces have. In fact\n> I wonder if we should even have them at all.\n\nI find it useful information when debugging problems. Without it, the log\ndoesn't tell you which index was processed when a problem started to occur. Or\neven that we were scanning indexes at all.\n\nIOW, I care a lot more about log entries denoting the start / end of an index\nvacuum than attached numbers. Although I also care about those, to some\ndegree.\n\n\n\n> I generally don't care about the details when VACUUM runs out of space\n> for dead_items -- these days the important thing is to just avoid it\n> completely.\n\nI wonder if it'd possibly make sense to log more verbosely if we do end up\nrunning out of space, but not otherwise? Particularly because it's important\nto avoid multi-pass vacuums? 
Right now there's basically no log output until\nthe vacuum finished, which in a bad situation could take days, with many index\nscan cycles. Logging enough to be able to see such things happening IMO is\nimportant.\n\n\n\n> I also generally don't care about how many index tuples\n> were deleted by each index's ambulkdelete() call, since VACUUM VERBOSE\n> already shows me the number of LP_DEAD stubs encountered/removed in\n> the heap.\n\nIsn't it actually quite useful to see how many entries were removed from the\nindex relative to the number of LP_DEAD removed in the heap? And relative to\nother indexes? E.g. to understand how effective killtuple style logic is?\n\n\nOne annoyance with the multiple logs as part of a [auto]vacuum is that we end\nup logging the context / statement repeatedly, leading to pointless output\nlike this:\n\n2023-02-08 15:55:01.117 PST [3904676][client backend][2/55:0][psql] LOG: vacuuming \"postgres.public.large\"\n2023-02-08 15:55:01.117 PST [3904676][client backend][2/55:0][psql] STATEMENT: VACUUM (PARALLEL 0) large ;\n2023-02-08 15:55:02.671 PST [3904676][client backend][2/55:0][psql] LOG: scanned index \"large_pkey\" to remove 499994 row versions\n2023-02-08 15:55:02.671 PST [3904676][client backend][2/55:0][psql] CONTEXT: while vacuuming index \"large_pkey\" of relation \"public.large\"\n2023-02-08 15:55:02.671 PST [3904676][client backend][2/55:0][psql] STATEMENT: VACUUM (PARALLEL 0) large ;\n...\n2023-02-08 15:55:03.496 PST [3904676][client backend][2/55:0][psql] STATEMENT: VACUUM (PARALLEL 0) large ;\n2023-02-08 15:55:03.498 PST [3904676][client backend][2/56:0][psql] LOG: vacuuming \"postgres.pg_toast.pg_toast_3370138\"\n2023-02-08 15:55:03.498 PST [3904676][client backend][2/56:0][psql] STATEMENT: VACUUM (PARALLEL 0) large ;\n2023-02-08 15:55:03.498 PST [3904676][client backend][2/56:0][psql] LOG: finished vacuuming \"postgres.pg_toast.pg_toast_3370138\": index scans: 0\n\n\nI think we should emit most of the non-error/warning 
messages in vacuum with\nerrhidestmt(true), errhidecontext(true) to avoid that. The error context is\nquite helpful for error messages due to corruption, cancellations and such,\nbut not for messages where we already are careful to include relation names.\n\n\nI generally like the improved log output for [auto]vacuum, but the fact that\nyou can't see anymore when index scans start is imo problematic. Right now you\ncan't even infer how long the first index scan takes, which really isn't\ngreat.\n\n\nI'd thus like to:\n\n1) Use errhidestmt(true), errhidecontext(true) for vacuum\n   ereport(non-error-or-warning)\n\n2) Add a message to lazy_vacuum() or lazy_vacuum_all_indexes(), that includes\n   - num_index_scans\n   - how many indexes we'll scan\n   - how many dead tids we're working on removing\n\n3) Add a log at the end of lazy_vacuum_heap_rel() that's logged only (or more\n   verbosely) when lazy_vacuum() was run due to running out of space\n\n   If we just do the heap scan once, this can be easily inferred from the\n   other messages.\n\n4) When we run out of space for dead tids, increase the log level for the rest\n   of vacuum. It's sufficiently bad if that happens that we really ought to\n   include it in the log by default.\n\n5) Remove the row versions from vac_bulkdel_one_index()'s message, it's\n   already included in 2).\n\n   Instead we could emit the content from vac_cleanup_one_index(), imo that's\n   a lot more useful when separated for each scan.\n\n6) Possibly remove the log output from vac_cleanup_one_index()?\n\n\n2) and 3) together allow figuring out how long individual scan / vacuum\nphases are taking. 1) should reduce log verbosity sufficiently to make it\neasier to actually read the output.\n\n\nFWIW, I'm not proposing to do all of that in one patch, once I understand a\nbit more what we have consensus on and what not I can split it into steps.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 16:29:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 4:29 PM Andres Freund <andres@anarazel.de> wrote:\n> I find it useful information when debugging problems. Without it, the log\n> doesn't tell you which index was processed when a problem started to occur. Or\n> even that we were scanning indexes at all.\n\nI guess it might have some limited value when doing some sort of\ntop-down debugging of the regression tests, where (say) some random\nbuildfarm issue is the starting point, and you don't really know what\nyou're looking for at all. I cannot remember ever approaching things\nthat way myself, though.\n\n> > I generally don't care about the details when VACUUM runs out of space\n> > for dead_items -- these days the important thing is to just avoid it\n> > completely.\n>\n> I wonder if it'd possibly make sense to log more verbosely if we do end up\n> running out of space, but not otherwise? Particularly because it's important\n> to avoid multi-pass vacuums? Right now there's basically no log output until\n> the vacuum finished, which in a bad situation could take days, with many index\n> scan cycles. Logging enough to be able to see such things happening IMO is\n> important.\n\nThat definitely could make sense, but ISTM that it should be some\ntotally orthogonal thing.\n\n> > I also generally don't care about how many index tuples\n> > were deleted by each index's ambulkdelete() call, since VACUUM VERBOSE\n> > already shows me the number of LP_DEAD stubs encountered/removed in\n> > the heap.\n>\n> Isn't it actually quite useful to see how many entries were removed from the\n> index relative to the number of LP_DEAD removed in the heap? And relative to\n> other indexes? E.g. to understand how effective killtuple style logic is?\n\nWay less than you'd think. I'd even go so far as to say that showing\nusers the number of index tuples deleted by VACUUM at the level of\nindividual indexes could be very misleading. 
The simplest way to see\nthat this is true is with an example:\n\nAssume that we have two indexes on the same table: a UUID v4 index,\nand a traditional serial/identity column style primary key index.\nFurther assume that index tuple deletion does a perfect job with both\nindexes, in the sense that no leaf page is ever split while the\npre-split leaf page has even a single delete-safe index\ntuple remaining. So the index deletion stuff is equally effective in\nboth indexes, in a way that could be measured exactly (by custom\ninstrumentation code, say). What does the instrumentation of the\nnumber of index tuples deleted by VACUUM reveal here?\n\nI would expect the UUID index to have *many* more index tuples deleted\nby VACUUM than the traditional serial/identity column style primary\nkey index did in the same period (unless perhaps there was an\nunrealistically uniform distribution of updates across the PK's key\nspace). We're talking about a 3x difference here, possibly much more.\nIn the case of the UUID index, we'll have needed fewer opportunistic\nindex deletion passes because there was naturally more free space on\neach leaf page due to generic B-Tree overhead -- allowing\nopportunistic index tuple deletion to be much more lazy overall,\nrelative to how things went with the other index. In reality we get\nthe same optimal outcome for each index, but\nIndexBulkDeleteResult.tuples_removed suggests that just the opposite\nhas happened. That's just perverse.\n\nThis isn't just a cute example. If anything it *understates* the\nextent to which these kinds of differences are possible. I could come\nup with a case where the difference was far larger still, just by\nadding a few more details. Users ought to focus on the picture over\ntime, and the space utilization for remaining live tuples. 
To a large\ndegree it doesn't actually matter whether it's VACUUM or opportunistic\ndeletion that does most of the deleting, provided it happens and is\nreasonably efficient. They're two sides of the same coin.\n\nSpace utilization over time for live tuples matters most. Ideally it\ncan be normalized to account for the effects of these workload\ndifferences, and things like nbtree deduplication. But even just\ndividing the size of the index in pages by the number of live tuples\nin the index tells me plenty, with no noise from VACUUM implementation\ndetails.\n\nWe care about signal to noise ratio. Managing the noise is no less\nimportant than increasing the signal. It might even be more important.\n\n> I think we should emit most of the non-error/warning messages in vacuum with\n> errhidestmt(true), errhidecontext(true) to avoid that. The error context is\n> quite helpful for error messages due to corruption, cancellations and such,\n> but not for messages where we already are careful to include relation names.\n\nAgreed.\n\n> I'd thus like to:\n>\n> 1) Use errhidestmt(true), errhidecontext(true) for vacuum\n> ereport(non-error-or-warning)\n\nMakes sense.\n\n> 2) Add a message to lazy_vacuum() or lazy_vacuum_all_indexes(), that includes\n> - num_index_scans\n> - how many indexes we'll scan\n> - how many dead tids we're working on removing\n\nIt's not obvious how you can know the number of index scans at this\npoint. Well, it depends on how you define \"index scan\". It's true that\nthe number shown as \"index scans\" by VACUUM VERBOSE could be shown\nhere instead, earlier on. However, there are cases where VACUUM\nVERBOSE shows 0 index scans, but also shows that it has scanned one or\nmore indexes (usually not all the indexes, just a subset). 
This\nhappens whenever an amvacuumcleanup() routine decides it needs to scan\nan index to do stuff like recycle previously deleted pages.\n\nAfter 14, nbtree does a pretty good job of avoiding that when it\ndoesn't really make sense. But it's still possible. It's also quite\ncommon with GIN indexes, I think -- in fact it can be quite painful\nthere. This is a good thing for performance, of course, but it also\nmakes VACUUM VERBOSE show information that makes sense to users, since\nthings actually happen in a way that makes a lot more sense. I'm quite\nhappy about the fact that the new VACUUM VERBOSE allows users to\nmostly ignore obscure details like whether an index was scanned by\namvacuumcleanup() or by ambulkdelete() -- stuff that basically nobody\nunderstands. That seems worth preserving.\n\n> 3) Add a log at the end of lazy_vacuum_heap_rel() that's logged only (or more\n> verbosely) when lazy_vacuum() was run due to running out of space\n>\n> If we just do the heap scan once, this can be easily inferred from the\n> other messages.\n\nI don't mind adding something that makes it easier to notice the\nnumber of index scans early. However, the ambulkdelete vs\namvacuumcleanup index scan situation needs more thought.\n\n> 4) When we run out of space for dead tids, increase the log level for the rest\n> of vacuum. It's sufficiently bad if that happens that we really ought to\n> include it in the log by default.\n\nThat makes sense. Same could be done when the failsafe triggers.\n\n> 2) and 3) together allow to figure out how long individual scan / vacuum\n> phases are taking. 1) should reduce log verbosity sufficiently to make it\n> easier to actually read the output.\n\nIt's not just verbosity. It's also showing the same details\nconsistently for the same table over time, so that successive VACUUMs\ncan be compared to each other easily. 
The worst thing about the old\nVACUUM VERBOSE was that it was inconsistent about how much it showed\nin a way that made little sense, based on low level details like the\norder that things happen in, not the order that actually made sense.\n\nAs I said, I don't mind making VACUUM VERBOSE behave a little bit more\nlike a progress indicator, which is how it used to work. Maybe I went\na little too far in the direction of neatly summarizing the whole\nVACUUM operation in one go. But I doubt that I went too far with it by\nall that much. Overall, the old VACUUM VERBOSE was extremely hard to\nuse, and was poorly maintained -- let's not go back to that. (See\ncommit ec196930 for evidence of how sloppily it was maintained.)\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 8 Feb 2023 18:37:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 18:37:41 -0800, Peter Geoghegan wrote:\n> On Wed, Feb 8, 2023 at 4:29 PM Andres Freund <andres@anarazel.de> wrote:\n> > 2) Add a message to lazy_vacuum() or lazy_vacuum_all_indexes(), that includes\n> > - num_index_scans\n> > - how many indexes we'll scan\n> > - how many dead tids we're working on removing\n>\n> It's not obvious how you can know the number of index scans at this\n> point. Well, it depends on how you define \"index scan\".\n\nWhat I mean is to show the number of times we've done lazy_vacuum() so far,\nalthough probably 1 based. Particularly if we do implement my proposal to\nturn up verbosity once we're doing more than one pass, that'll allow at least\nsome insight to how bad things are from the log.\n\n\n\n> This is a good thing for performance, of course, but it also makes VACUUM\n> VERBOSE show information that makes sense to users, since things actually\n> happen in a way that makes a lot more sense. I'm quite happy about the fact\n> that the new VACUUM VERBOSE allows users to mostly ignore obscure details\n> like whether an index was scanned by amvacuumcleanup() or by ambulkdelete()\n> -- stuff that basically nobody understands. That seems worth preserving.\n\nI don't mind making the messages as similar as possible, but I do mind if I as\na postgres hacker, or an expert consultant, can't parse that detail out. We\nneed to be able to debug things like amvacuumcleanup() doing too much work too\noften.\n\n\n> As I said, I don't mind making VACUUM VERBOSE behave a little bit more\n> like a progress indicator, which is how it used to work. Maybe I went\n> a little too far in the direction of neatly summarizing the whole\n> VACUUM operation in one go. But I doubt that I went too far with it by\n> all that much. Overall, the old VACUUM VERBOSE was extremely hard to\n> use, and was poorly maintained -- let's not go back to that. 
(See\n> commit ec196930 for evidence of how sloppily it was maintained.)\n\nI don't want to go back to that either, as I said I mostly like the new\noutput.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 19:18:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 7:18 PM Andres Freund <andres@anarazel.de> wrote:\n> > This is a good thing for performance, of course, but it also makes VACUUM\n> > VERBOSE show information that makes sense to users, since things actually\n> > happen in a way that makes a lot more sense. I'm quite happy about the fact\n> > that the new VACUUM VERBOSE allows users to mostly ignore obscure details\n> > like whether an index was scanned by amvacuumcleanup() or by ambulkdelete()\n> > -- stuff that basically nobody understands. That seems worth preserving.\n>\n> I don't mind making the messages as similar as possible, but I do mind if I as\n> a postgres hacker, or an expert consultant, can't parse that detail out. We\n> need to be able to debug things like amvacuumcleanup() doing too much work too\n> often.\n\nFWIW you can tell even today. You can observe that the number of index\nscans is 0, and that one or more indexes have their size reported --\nthat indicates that an amvacuumcleanup()-only scan took place, say\nbecause we needed to put some preexisting deleted pages in the FSM.\n\nThere is also another detail that strongly hints that VACUUM VERBOSE\nhad to scan an index during its call to amvacuumcleanup(), which is\natypical: it only shows details for that particular index, which is\nreally noticeable. It won't report anything about those indexes that\nhad no-op calls to amvacuumcleanup().\n\nIt kind of makes sense that we report 0 index scans when there were 0\ncalls to ambulkdelete(), even though there may still have been some\nindex scans during a call to some amvacuumcleanup() routine. The\ncommon case is that they're no-op calls for every index, but even when\nthey're not there is still probably only one or two indexes that have\nto do a noticeable amount of I/O. It makes sense to \"round down to 0\".\n\nGranted, there are some notable exceptions. For example,\ngistvacuumcleanup() doesn't even try to avoid scanning the index. 
But\nthat's really a problem in gistvacuumcleanup() -- since it really\ndoesn't make very much sense, even from a GiST point of view. It can\nfollow exactly the same approach as B-Tree here, since its approach to\npage deletion is already directly based on nbtree.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:28:46 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-08 16:15:11 -0800, Andres Freund wrote:\n> The most frequent case is postgres_fdw, which somewhat regularly fails with a\n> regression.diff like this:\n> \n> diff -U3 /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\n> --- /tmp/cirrus-ci-build/contrib/postgres_fdw/expected/postgres_fdw.out\t2022-12-08 20:35:24.772888000 +0000\n> +++ /tmp/cirrus-ci-build/build/testrun/postgres_fdw-running/regress/results/postgres_fdw.out\t2022-12-08 20:43:38.199450000 +0000\n> @@ -9911,8 +9911,7 @@\n> \tWHERE application_name = 'fdw_retry_check';\n> pg_terminate_backend\n> ----------------------\n> - t\n> -(1 row)\n> +(0 rows)\n> \n> -- This query should detect the broken connection when starting new remote\n> -- transaction, reestablish new connection, and then succeed.\n> \n> \n> See e.g.\n> https://cirrus-ci.com/task/5925540020879360\n> https://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n> https://api.cirrus-ci.com/v1/artifact/task/5925540020879360/testrun/build/testrun/runningcheck.log\n> \n> \n> The following comment in the test provides a hint what might be happening:\n> \n> -- If debug_discard_caches is active, it results in\n> -- dropping remote connections after every transaction, making it\n> -- impossible to test termination meaningfully. So turn that off\n> -- for this test.\n> SET debug_discard_caches = 0;\n> \n> \n> I guess that a cache reset message arrives and leads to the connection being\n> terminated. 
Unfortunately that's hard to see right now, as the relevant log\n> messages are output with DEBUG3 - it's quite verbose, so enabling it for all\n> tests will be painful.\n\nDownthread I reported that I was able to pinpoint that the source of the issue\nindeed is a cache inval message arriving at the wrong moment.\n\n\nWe've had trouble with this test for years by now. We added workarounds, like\n\ncommit 1273a15bf91fa322915e32d3b6dc6ec916397268\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate:   2021-05-04 13:36:26 -0400\n\n    Disable cache clobber to avoid breaking postgres_fdw termination test.\n\nBut that didn't suffice to make it reliable. Not entirely surprising, given\nthere are cache reset sources other than clobber cache.\n\nUnless somebody comes up with a way to make the test more reliable pretty\nsoon, I think we should just remove it. It's one of the most frequently\nflapping tests at the moment.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:43:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-08 16:15:11 -0800, Andres Freund wrote:\n>> The most frequent case is postgres_fdw, which somewhat regularly fails with a\n>> regression.diff like this:\n>> WHERE application_name = 'fdw_retry_check';\n>> pg_terminate_backend\n>> ----------------------\n>> - t\n>> -(1 row)\n>> +(0 rows)\n\n> Unless somebody comes up with a way to make the test more reliable pretty\n> soon, I think we should just remove it. It's one of the most frequently\n> flapping tests at the moment.\n\nIf that's the only diff, we could just hide it, say by writing\n\ndo $$ begin\nPERFORM pg_terminate_backend(pid, 180000) FROM pg_stat_activity\nWHERE application_name = 'fdw_retry_check';\nend $$;\n\nThe actually important thing is the failure check after this;\nwe don't care that much whether the initially-created connection\nis still live at this point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 14:51:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 14:51:45 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-12-08 16:15:11 -0800, Andres Freund wrote:\n> >> The most frequent case is postgres_fdw, which somewhat regularly fails with a\n> >> regression.diff like this:\n> >> WHERE application_name = 'fdw_retry_check';\n> >> pg_terminate_backend\n> >> ----------------------\n> >> - t\n> >> -(1 row)\n> >> +(0 rows)\n> \n> > Unless somebody comes up with a way to make the test more reliable pretty\n> > soon, I think we should just remove it. It's one of the most frequently\n> > flapping tests at the moment.\n> \n> If that's the only diff, we could just hide it, say by writing\n> \n> do $$ begin\n> PERFORM pg_terminate_backend(pid, 180000) FROM pg_stat_activity\n> WHERE application_name = 'fdw_retry_check';\n> end $$;\n> \n> The actually important thing is the failure check after this;\n> we don't care that much whether the initially-created connection\n> is still live at this point.\n\nHm, yea, that should work. It's indeed the entirety of the diff\nhttps://api.cirrus-ci.com/v1/artifact/task/4718859714822144/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n\nIf we go that way we can remove the debug_discard muckery as well, I think?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 12:06:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-26 14:51:45 -0500, Tom Lane wrote:\n>> If that's the only diff, we could just hide it, say by writing\n\n> Hm, yea, that should work. It's indeed the entirety of the diff\n> https://api.cirrus-ci.com/v1/artifact/task/4718859714822144/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n\n> If we go that way we can remove the debug_discard muckery as well, I think?\n\nPerhaps. I'll check to see if that stanza passes with debug_discard on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 15:36:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm, yea, that should work. It's indeed the entirety of the diff\n> https://api.cirrus-ci.com/v1/artifact/task/4718859714822144/testrun/build/testrun/postgres_fdw-running/regress/regression.diffs\n\n> If we go that way we can remove the debug_discard muckery as well, I think?\n\nOkay, so that seems to work for the \"reestablish new connection\" test:\nas coded here, it passes with or without debug_discard_caches enabled,\nand I believe it's testing what it intends to either way. So that's\ngood.\n\nHowever, the other stanza with debug_discard_caches muckery is the\none about \"test postgres_fdw.application_name GUC\", and in that case\nignoring the number of terminated connections would destroy the\npoint of the test entirely; because without that, you're proving\nnothing about what the remote's application_name actually looks like.\n\nI'm inclined to think we should indeed just nuke that test. It's\novercomplicated and it expends a lot of test cycles on a pretty\nmarginal feature.\n\nSo I propose the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 26 Feb 2023 15:57:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "I wrote:\n> I'm inclined to think we should indeed just nuke that test. It's\n> overcomplicated and it expends a lot of test cycles on a pretty\n> marginal feature.\n\nPerhaps a better idea: at the start of the test, set\npostgres_fdw.application_name to something that exercises all the\navailable escape sequences, but don't try to verify what the\nresult looks like. That at least gives us code coverage for the\nescape sequence processing code, even if it doesn't prove that\nthe output is desirable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:03:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 15:57:01 -0500, Tom Lane wrote:\n> However, the other stanza with debug_discard_caches muckery is the\n> one about \"test postgres_fdw.application_name GUC\", and in that case\n> ignoring the number of terminated connections would destroy the\n> point of the test entirely; because without that, you're proving\n> nothing about what the remote's application_name actually looks like.\n> \n> I'm inclined to think we should indeed just nuke that test. It's\n> overcomplicated and it expends a lot of test cycles on a pretty\n> marginal feature.\n\nIt does seem fairly complicated...\n\n*If* we wanted to rescue it, we probably could just use a transaction around\nthe SELECT and the termination, which ought to prevent sinval issues.\n\nNot that I understand why that tries to terminate connections, instead of just\nlooking at application name.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 13:09:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Not that I understand why that tries to terminate connections, instead of just\n> looking at application name.\n\nThe test is trying to verify the application name reported by the\n\"remote\" session, which isn't constant, so we can't just do \"select\napplication_name from pg_stat_activity\". I agree that terminating the\nconnection seems like kind of a strange thing to do --- maybe it's to\nensure that we get a new session with the updated application name\nfor the next test case? If not, maybe we could do \"select 1 from\npg_stat_activity where application_name = computed-pattern\", but that\nhas the same problem that a cache flush might have terminated the\nremote session.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 18:59:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "I wrote:\n> ... maybe we could do \"select 1 from\n> pg_stat_activity where application_name = computed-pattern\", but that\n> has the same problem that a cache flush might have terminated the\n> remote session.\n\nHah - I thought of a solution. We can avoid this race condition if\nwe make the remote session itself inspect pg_stat_activity and\nreturn its displayed application_name. Just need a foreign table\nthat maps onto pg_stat_activity. Of course, this'd add yet another\nlayer of baroque-ness to a test section that I already don't think\nis worth the trouble. Should we go that way, or just rip it out?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Feb 2023 10:01:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "I wrote:\n> Hah - I thought of a solution. We can avoid this race condition if\n> we make the remote session itself inspect pg_stat_activity and\n> return its displayed application_name. Just need a foreign table\n> that maps onto pg_stat_activity.\n\nI went ahead and coded it that way, and it doesn't look too awful.\nAny objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 27 Feb 2023 12:42:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-27 12:42:00 -0500, Tom Lane wrote:\n> I wrote:\n> > Hah - I thought of a solution. We can avoid this race condition if\n> > we make the remote session itself inspect pg_stat_activity and\n> > return its displayed application_name. Just need a foreign table\n> > that maps onto pg_stat_activity.\n\nSounds reasonable. I guess you could also do it with a function that is\nallowed to be pushed down. But given that you already solved it this way...\n\nI think it's worth having an example for checks like this in the postgres_fdw\ntests, even if it's perhaps not worth it for the application_name GUC on its\nown. We saw that the GUC test copied the debug_discard_caches use of another\ntest...\n\n\n> I went ahead and coded it that way, and it doesn't look too awful.\n> Any objections?\n\nLooks good to me.\n\nI think it'd be an indication of a bug around the invalidation handling if the\nterminations were required. So even leaving other things aside, I prefer this\nversion.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:50:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: tests against running server occasionally fail, postgres_fdw &\n tenk1"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-02-27 12:42:00 -0500, Tom Lane wrote:\n>> I went ahead and coded it that way, and it doesn't look too awful.\n>> Any objections?\n\n> Looks good to me.\n\n> I think it'd be an indication of a bug around the invalidation handling if the\n> terminations were required. So even leaving other things aside, I prefer this\n> version.\n\nSounds good. I'll work on getting this back-patched.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Feb 2023 15:46:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tests against running server occasionally fail,\n postgres_fdw & tenk1"
}
] |
[
{
"msg_contents": "A number of static assertions could be moved to better places.\n\nWe first added StaticAssertStmt() in 2012, which required all static \nassertions to be inside function bodies. We then added \nStaticAssertDecl() in 2020, which enabled static assertions on file \nlevel. We have a number of calls that were stuck in not-really-related \nfunctions for this historical reason. This patch set cleans that up.\n\n0001-Update-static-assert-usage-comment.patch\n\nThis updates the usage information in c.h to be more current and precise.\n\n0002-Move-array-size-related-static-assertions.patch\n\nThis moves some obviously poorly placed ones.\n\n0003-Move-some-static-assertions-to-better-places.patch\n\nThis moves some that I thought were suboptimally placed but it could be \ndebated or refined.\n\n0004-Use-StaticAssertDecl-where-possible.patch\n\nThis just changes some StaticAssertStmt() to StaticAssertDecl() where \nappropriate. It's optional.",
"msg_date": "Fri, 9 Dec 2022 08:46:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "static assert cleanup"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 2:47 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n>\n> 0003-Move-some-static-assertions-to-better-places.patch\n>\n> This moves some that I thought were suboptimally placed but it could be\n> debated or refined.\n\n+ * We really want ItemPointerData to be exactly 6 bytes. This is rather a\n+ * random place to check, but there is no better place.\n\nSince the assert is no longer in a random function body, it seems we can\nremove the second sentence.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Dec 2022 17:01:19 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: static assert cleanup"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 6:47 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> A number of static assertions could be moved to better places.\n>\n> We first added StaticAssertStmt() in 2012, which required all static\n> assertions to be inside function bodies. We then added\n> StaticAssertDecl() in 2020, which enabled static assertions on file\n> level. We have a number of calls that were stuck in not-really-related\n> functions for this historical reason. This patch set cleans that up.\n>\n> 0001-Update-static-assert-usage-comment.patch\n>\n> This updates the usage information in c.h to be more current and precise.\n>\n> 0002-Move-array-size-related-static-assertions.patch\n>\n> This moves some obviously poorly placed ones.\n>\n> 0003-Move-some-static-assertions-to-better-places.patch\n>\n> This moves some that I thought were suboptimally placed but it could be\n> debated or refined.\n>\n> 0004-Use-StaticAssertDecl-where-possible.patch\n>\n> This just changes some StaticAssertStmt() to StaticAssertDecl() where\n> appropriate. It's optional.\n\nPatch 0002\n\ndiff --git a/src/backend/utils/cache/syscache.c\nb/src/backend/utils/cache/syscache.c\nindex eec644ec84..bb3dd6f4d2 100644\n--- a/src/backend/utils/cache/syscache.c\n+++ b/src/backend/utils/cache/syscache.c\n@@ -1040,6 +1040,9 @@ static const struct cachedesc cacheinfo[] = {\n }\n };\n\n+StaticAssertDecl(SysCacheSize == (int) lengthof(cacheinfo),\n+ \"SysCacheSize does not match syscache.c's array\");\n+\n static CatCache *SysCache[SysCacheSize];\n\nIn almost every example I found of StaticAssertXXX, the lengthof(arr)\npart came first in the condition. 
Since you are modifying this anyway,\nshould this one also be reversed for consistency?\n\n======\n\nPatch 0004\n\ndiff --git a/src/backend/executor/execExprInterp.c\nb/src/backend/executor/execExprInterp.c\nindex 1dab2787b7..ec26ae506f 100644\n--- a/src/backend/executor/execExprInterp.c\n+++ b/src/backend/executor/execExprInterp.c\n@@ -496,7 +496,7 @@ ExecInterpExpr(ExprState *state, ExprContext\n*econtext, bool *isnull)\n &&CASE_EEOP_LAST\n };\n\n- StaticAssertStmt(EEOP_LAST + 1 == lengthof(dispatch_table),\n+ StaticAssertDecl(EEOP_LAST + 1 == lengthof(dispatch_table),\n \"dispatch_table out of whack with ExprEvalOp\");\n\nDitto the previous comment.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 12 Dec 2022 09:18:07 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: static assert cleanup"
},
{
"msg_contents": "On 11.12.22 23:18, Peter Smith wrote:\n> +StaticAssertDecl(SysCacheSize == (int) lengthof(cacheinfo),\n> + \"SysCacheSize does not match syscache.c's array\");\n> +\n> static CatCache *SysCache[SysCacheSize];\n> \n> In almost every example I found of StaticAssertXXX, the lengthof(arr)\n> part came first in the condition. Since you are modifying this anyway,\n> should this one also be reversed for consistency?\n\nMakes sense. I have pushed this separately.\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:15:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: static assert cleanup"
},
{
"msg_contents": "On 09.12.22 11:01, John Naylor wrote:\n> \n> On Fri, Dec 9, 2022 at 2:47 PM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > 0003-Move-some-static-assertions-to-better-places.patch\n> >\n> > This moves some that I thought were suboptimally placed but it could be\n> > debated or refined.\n> \n> + * We really want ItemPointerData to be exactly 6 bytes. This is rather a\n> + * random place to check, but there is no better place.\n> \n> Since the assert is no longer in a random function body, it seems we can \n> remove the second sentence.\n\nCommitted with the discussed adjustments.\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:32:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: static assert cleanup"
}
] |
[
{
"msg_contents": "Hi,\n\nWALRead() currently reads WAL from the WAL file on the disk, which\nmeans, the walsenders serving streaming and logical replication\n(callers of WALRead()) will have to hit the disk/OS's page cache for\nreading the WAL. This may increase the amount of read IO required for\nall the walsenders put together as one typically maintains many\nstandbys/subscribers on production servers for high availability,\ndisaster recovery, read-replicas and so on. Also, it may increase\nreplication lag if all the WAL reads are always hitting the disk.\n\nIt may happen that WAL buffers contain the requested WAL, if so, the\nWALRead() can attempt to read from the WAL buffers first before\nreading from the file. If the read hits the WAL buffers, then reading\nfrom the file on disk is avoided. This mainly reduces the read IO/read\nsystem calls. It also enables us to do other features specified\nelsewhere [1].\n\nI'm attaching a patch that implements the idea which is also noted\nelsewhere [2]. I've run some tests [3]. The WAL buffers hit ratio with\nthe patch stood at 95%, in other words, the walsenders avoided 95% of\nthe time reading from the file. The benefit, if measured in terms of\nthe amount of data - 79% (13.5GB out of total 17GB) of the requested\nWAL is read from the WAL buffers as opposed to 21% from the file. Note\nthat the WAL buffers hit ratio can be very low for write-heavy\nworkloads, in which case, file reads are inevitable.\n\nThe patch introduces concurrent readers for the WAL buffers, so far\nonly there are concurrent writers. 
In the patch, WALRead() takes just\none lock (WALBufMappingLock) in shared mode to enable concurrent\nreaders and does minimal things - checks if the requested WAL page is\npresent in WAL buffers, if so, copies the page and releases the lock.\nI think taking just WALBufMappingLock is enough here as the concurrent\nwriters depend on it to initialize and replace a page in WAL buffers.\n\nI'll add this to the next commitfest.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACXCSM%2BsTR%3D5NNRtmSQr3g1Vnr-yR91azzkZCaCJ7u4d4w%40mail.gmail.com\n\n[2]\n * XXX probably this should be improved to suck data directly from the\n * WAL buffers when possible.\n */\nbool\nWALRead(XLogReaderState *state,\n\n[3]\n1 primary, 1 sync standby, 1 async standby\n./pgbench --initialize --scale=300 postgres\n./pgbench --jobs=16 --progress=300 --client=32 --time=900\n--username=ubuntu postgres\n\nPATCHED:\n-[ RECORD 1 ]----------+----------------\napplication_name | assb1\nwal_read | 31005\nwal_read_bytes | 3800607104\nwal_read_time | 779.402\nwal_read_buffers | 610611\nwal_read_bytes_buffers | 14493226440\nwal_read_time_buffers | 3033.309\nsync_state | async\n-[ RECORD 2 ]----------+----------------\napplication_name | ssb1\nwal_read | 31027\nwal_read_bytes | 3800932712\nwal_read_time | 696.365\nwal_read_buffers | 610580\nwal_read_bytes_buffers | 14492900832\nwal_read_time_buffers | 2989.507\nsync_state | sync\n\nHEAD:\n-[ RECORD 1 ]----+----------------\napplication_name | assb1\nwal_read | 705627\nwal_read_bytes | 18343480640\nwal_read_time | 7607.783\nsync_state | async\n-[ RECORD 2 ]----+------------\napplication_name | ssb1\nwal_read | 705625\nwal_read_bytes | 18343480640\nwal_read_time | 4539.058\nsync_state | sync\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 9 Dec 2022 14:33:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "At Fri, 9 Dec 2022 14:33:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> The patch introduces concurrent readers for the WAL buffers, so far\n> only there are concurrent writers. In the patch, WALRead() takes just\n> one lock (WALBufMappingLock) in shared mode to enable concurrent\n> readers and does minimal things - checks if the requested WAL page is\n> present in WAL buffers, if so, copies the page and releases the lock.\n> I think taking just WALBufMappingLock is enough here as the concurrent\n> writers depend on it to initialize and replace a page in WAL buffers.\n> \n> I'll add this to the next commitfest.\n> \n> Thoughts?\n\nThis adds copying of the whole page (at least) at every WAL *record*\nread, fighting all WAL writers by taking WALBufMappingLock on a very\nbusy page while the copying. I'm a bit doubtful that it results in an\noverall improvement. It seems to me almost all pread()s here happens\non file buffer so it is unclear to me that copying a whole WAL page\n(then copying the target record again) wins over a pread() call that\ncopies only the record to read. Do you have an actual number of how\nfrequent WAL reads go to disk, or the actual number of performance\ngain or real I/O reduction this patch offers?\n\nThis patch copies the bleeding edge WAL page without recording the\n(next) insertion point nor checking whether all in-progress insertion\nbehind the target LSN have finished. Thus the copied page may have\nholes. That being said, the sequential-reading nature and the fact\nthat WAL buffers are zero-initialized may make it work for recovery,\nbut I don't think this also works for replication.\n\nI remember that the one of the advantage of reading the on-memory WAL\nrecords is that that allows walsender to presend the unwritten\nrecords. 
So perhaps we should manage how far the buffer is filled with\nvalid content (or how far we can presend) in this feature.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 12 Dec 2022 11:57:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "At Mon, 12 Dec 2022 11:57:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> This patch copies the bleeding edge WAL page without recording the\n> (next) insertion point nor checking whether all in-progress insertion\n> behind the target LSN have finished. Thus the copied page may have\n> holes. That being said, the sequential-reading nature and the fact\n> that WAL buffers are zero-initialized may make it work for recovery,\n> but I don't think this also works for replication.\n\nMmm. I'm a bit dim. Recovery doesn't read concurrently-written\nrecords. Please forget about recovery.\n\n> I remember that the one of the advantage of reading the on-memory WAL\n> records is that that allows walsender to presend the unwritten\n> records. So perhaps we should manage how far the buffer is filled with\n> valid content (or how far we can presend) in this feature.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 12 Dec 2022 12:06:36 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Sorry for the confusion.\n\nAt Mon, 12 Dec 2022 12:06:36 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 12 Dec 2022 11:57:17 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > This patch copies the bleeding edge WAL page without recording the\n> > (next) insertion point nor checking whether all in-progress insertion\n> > behind the target LSN have finished. Thus the copied page may have\n> > holes. That being said, the sequential-reading nature and the fact\n> > that WAL buffers are zero-initialized may make it work for recovery,\n> > but I don't think this also works for replication.\n> \n> Mmm. I'm a bit dim. Recovery doesn't read concurrently-written\n> records. Please forget about recovery.\n\nNO... Logical walsenders do that. So, please forget about this...\n\n> > I remember that the one of the advantage of reading the on-memory WAL\n> > records is that that allows walsender to presend the unwritten\n> > records. So perhaps we should manage how far the buffer is filled with\n> > valid content (or how far we can presend) in this feature.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 12 Dec 2022 12:08:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 8:27 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n\nThanks for providing thoughts.\n\n> At Fri, 9 Dec 2022 14:33:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > The patch introduces concurrent readers for the WAL buffers, so far\n> > only there are concurrent writers. In the patch, WALRead() takes just\n> > one lock (WALBufMappingLock) in shared mode to enable concurrent\n> > readers and does minimal things - checks if the requested WAL page is\n> > present in WAL buffers, if so, copies the page and releases the lock.\n> > I think taking just WALBufMappingLock is enough here as the concurrent\n> > writers depend on it to initialize and replace a page in WAL buffers.\n> >\n> > I'll add this to the next commitfest.\n> >\n> > Thoughts?\n>\n> This adds copying of the whole page (at least) at every WAL *record*\n> read,\n\nIn the worst case yes, but that may not always be true. On a typical\nproduction server with decent write traffic, it happens that the\ncallers of WALRead() read a full WAL page of size XLOG_BLCKSZ bytes or\nMAX_SEND_SIZE bytes.\n\n> fighting all WAL writers by taking WALBufMappingLock on a very\n> busy page while the copying. I'm a bit doubtful that it results in an\n> overall improvement.\n\nWell, the tests don't reflect that [1], I've run an insert work load\n[2]. The WAL is being read from WAL buffers 99% of the time, which is\npretty cool. If you have any use-cases in mind, please share them\nand/or feel free to run at your end.\n\n> It seems to me almost all pread()s here happens\n> on file buffer so it is unclear to me that copying a whole WAL page\n> (then copying the target record again) wins over a pread() call that\n> copies only the record to read.\n\nThat's not always guaranteed. 
Imagine a typical production server with\ndecent write traffic and heavy analytical queries (which fills OS page\ncache with the table pages accessed for the queries), the WAL pread()\ncalls turn to IOPS. Despite the WAL being present in WAL buffers,\ncustomers will be paying unnecessarily for these IOPS too. With the\npatch, we are basically avoiding the pread() system calls which may\nturn into IOPS on production servers (99% of the time for the insert\nuse case [1][2], 95% of the time for pgbench use case specified\nupthread). With the patch, WAL buffers can act as L1 cache, if one\ncalls OS page cache as L2 cache (of course this illustration is not\nrelated to the typical processor L1 and L2 ... caches).\n\n> Do you have an actual number of how\n> frequent WAL reads go to disk, or the actual number of performance\n> gain or real I/O reduction this patch offers?\n\nIt might be a bit tough to generate such heavy traffic. An idea is to\nensure the WAL page/file goes out of the OS page cache before\nWALRead() - these might help here - 0002 patch from\nhttps://www.postgresql.org/message-id/CA%2BhUKGLmeyrDcUYAty90V_YTcoo5kAFfQjRQ-_1joS_%3DX7HztA%40mail.gmail.com\nand tool https://github.com/klando/pgfincore.\n\n> This patch copies the bleeding edge WAL page without recording the\n> (next) insertion point nor checking whether all in-progress insertion\n> behind the target LSN have finished. Thus the copied page may have\n> holes. That being said, the sequential-reading nature and the fact\n> that WAL buffers are zero-initialized may make it work for recovery,\n> but I don't think this also works for replication.\n\nWALRead() callers are smart enough to take the flushed bytes only.\nAlthough they read the whole WAL page, they calculate the valid bytes.\n\n> I remember that the one of the advantage of reading the on-memory WAL\n> records is that that allows walsender to presend the unwritten\n> records. 
So perhaps we should manage how far the buffer is filled with\n> valid content (or how far we can presend) in this feature.\n\nYes, the non-flushed WAL can be read and sent across if one wishes to\nto make replication faster and parallel flushing on primary and\nstandbys at the cost of a bit of extra crash handling, that's\nmentioned here https://www.postgresql.org/message-id/CALj2ACXCSM%2BsTR%3D5NNRtmSQr3g1Vnr-yR91azzkZCaCJ7u4d4w%40mail.gmail.com.\nHowever, this can be a separate discussion.\n\nI also want to reiterate that the patch implemented a TODO item:\n\n * XXX probably this should be improved to suck data directly from the\n * WAL buffers when possible.\n */\nbool\nWALRead(XLogReaderState *state,\n\n[1]\nPATCHED:\n1 1470.329907\n2 1437.096329\n4 2966.096948\n8 5978.441040\n16 11405.538255\n32 22933.546058\n64 43341.870038\n128 73623.837068\n256 104754.248661\n512 115746.359530\n768 106106.691455\n1024 91900.079086\n2048 84134.278589\n4096 62580.875507\n\n-[ RECORD 1 ]----------+-----------\napplication_name | assb1\nsent_lsn | 0/1B8106A8\nwrite_lsn | 0/1B8106A8\nflush_lsn | 0/1B8106A8\nreplay_lsn | 0/1B8106A8\nwrite_lag |\nflush_lag |\nreplay_lag |\nwal_read | 104\nwal_read_bytes | 10733008\nwal_read_time | 1.845\nwal_read_buffers | 76662\nwal_read_bytes_buffers | 383598808\nwal_read_time_buffers | 205.418\nsync_state | async\n\nHEAD:\n1 1312.054496\n2 1449.429321\n4 2717.496207\n8 5913.361540\n16 10762.978907\n32 19653.449728\n64 41086.124269\n128 68548.061171\n256 104468.415361\n512 114328.943598\n768 91751.279309\n1024 96403.736757\n2048 82155.140270\n4096 66160.659511\n\n-[ RECORD 1 ]----+-----------\napplication_name | assb1\nsent_lsn | 0/1AB5BCB8\nwrite_lsn | 0/1AB5BCB8\nflush_lsn | 0/1AB5BCB8\nreplay_lsn | 0/1AB5BCB8\nwrite_lag |\nflush_lag |\nreplay_lag |\nwal_read | 71967\nwal_read_bytes | 381009080\nwal_read_time | 243.616\nsync_state | async\n\n[2] Test details:\n./configure --prefix=$PWD/inst/ CFLAGS=\"-O3\" > install.log && make -j\n8 install 
> install.log 2>&1 &\n1 primary, 1 async standby\ncd inst/bin\n./pg_ctl -D data -l logfile stop\n./pg_ctl -D assbdata -l logfile1 stop\nrm -rf data assbdata\nrm logfile logfile1\nfree -m\nsudo su -c 'sync; echo 3 > /proc/sys/vm/drop_caches'\nfree -m\n./initdb -D data\nrm -rf /home/ubuntu/archived_wal\nmkdir /home/ubuntu/archived_wal\ncat << EOF >> data/postgresql.conf\nshared_buffers = '8GB'\nwal_buffers = '1GB'\nmax_wal_size = '16GB'\nmax_connections = '5000'\narchive_mode = 'on'\narchive_command='cp %p /home/ubuntu/archived_wal/%f'\ntrack_wal_io_timing = 'on'\nEOF\n./pg_ctl -D data -l logfile start\n./psql -c \"select\npg_create_physical_replication_slot('assb1_repl_slot', true, false)\"\npostgres\n./pg_ctl -D data -l logfile restart\n./pg_basebackup -D assbdata\n./pg_ctl -D data -l logfile stop\ncat << EOF >> assbdata/postgresql.conf\nport=5433\nprimary_conninfo='host=localhost port=5432 dbname=postgres user=ubuntu\napplication_name=assb1'\nprimary_slot_name='assb1_repl_slot'\nrestore_command='cp /home/ubuntu/archived_wal/%f %p'\nEOF\ntouch assbdata/standby.signal\n./pg_ctl -D data -l logfile start\n./pg_ctl -D assbdata -l logfile1 start\n./pgbench -i -s 1 -d postgres\n./psql -d postgres -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT\npgbench_accounts_pkey;\"\ncat << EOF >> insert.sql\n\\set aid random(1, 10 * :scale)\n\\set delta random(1, 100000 * :scale)\nINSERT INTO pgbench_accounts (aid, bid, abalance) VALUES (:aid, :aid, :delta);\nEOF\nulimit -S -n 5000\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -f insert.sql -c$c\n-j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Dec 2022 15:45:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 3:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Dec 12, 2022 at 8:27 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n>\n> Thanks for providing thoughts.\n>\n> > At Fri, 9 Dec 2022 14:33:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > The patch introduces concurrent readers for the WAL buffers, so far\n> > > only there are concurrent writers. In the patch, WALRead() takes just\n> > > one lock (WALBufMappingLock) in shared mode to enable concurrent\n> > > readers and does minimal things - checks if the requested WAL page is\n> > > present in WAL buffers, if so, copies the page and releases the lock.\n> > > I think taking just WALBufMappingLock is enough here as the concurrent\n> > > writers depend on it to initialize and replace a page in WAL buffers.\n> > >\n> > > I'll add this to the next commitfest.\n> > >\n> > > Thoughts?\n> >\n> > This adds copying of the whole page (at least) at every WAL *record*\n> > read,\n>\n> In the worst case yes, but that may not always be true. On a typical\n> production server with decent write traffic, it happens that the\n> callers of WALRead() read a full WAL page of size XLOG_BLCKSZ bytes or\n> MAX_SEND_SIZE bytes.\n\nI agree with this.\n\n> > This patch copies the bleeding edge WAL page without recording the\n> > (next) insertion point nor checking whether all in-progress insertion\n> > behind the target LSN have finished. Thus the copied page may have\n> > holes. 
That being said, the sequential-reading nature and the fact\n> > that WAL buffers are zero-initialized may make it work for recovery,\n> > but I don't think this also works for replication.\n>\n> WALRead() callers are smart enough to take the flushed bytes only.\n> Although they read the whole WAL page, they calculate the valid bytes.\n\nRight\n\nOn first read the patch looks good, although it needs some more\nthoughts on 'XXX' comments in the patch.\n\nAnd also I do not like that XLogReadFromBuffers() is using 3 bools\nhit/partial hit/miss, instead of this we can use an enum or some\ntristate variable, I think that will be cleaner.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 25 Dec 2022 16:55:26 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sun, Dec 25, 2022 at 4:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > This adds copying of the whole page (at least) at every WAL *record*\n> > > read,\n> >\n> > In the worst case yes, but that may not always be true. On a typical\n> > production server with decent write traffic, it happens that the\n> > callers of WALRead() read a full WAL page of size XLOG_BLCKSZ bytes or\n> > MAX_SEND_SIZE bytes.\n>\n> I agree with this.\n>\n> > > This patch copies the bleeding edge WAL page without recording the\n> > > (next) insertion point nor checking whether all in-progress insertion\n> > > behind the target LSN have finished. Thus the copied page may have\n> > > holes. That being said, the sequential-reading nature and the fact\n> > > that WAL buffers are zero-initialized may make it work for recovery,\n> > > but I don't think this also works for replication.\n> >\n> > WALRead() callers are smart enough to take the flushed bytes only.\n> > Although they read the whole WAL page, they calculate the valid bytes.\n>\n> Right\n>\n> On first read the patch looks good, although it needs some more\n> thoughts on 'XXX' comments in the patch.\n\nThanks a lot for reviewing.\n\nHere are some open points that I mentioned in v1 patch:\n\n1.\n+ * XXX: Perhaps, measuring the immediate lock availability and its impact\n+ * on concurrent WAL writers is a good idea here.\n\nIt was shown in my testing upthread [1] that the patch does no harm in\nthis regard. It will be great if other members try testing in their\nrespective environments and use cases.\n\n2.\n+ * XXX: Perhaps, returning if lock is not immediately available a good idea\n+ * here. The caller can then go ahead with reading WAL from WAL file.\n\nAfter thinking a bit more on this, ISTM that doing the above is right\nto not cause any contention when the lock is busy. I've done so in the\nv2 patch.\n\n3.\n+ * XXX: Perhaps, quickly finding if the given WAL record is in WAL buffers\n+ * a good idea here. 
This avoids unnecessary lock acquire-release cycles.\n+ * One way to do that is by maintaining oldest WAL record that's currently\n+ * present in WAL buffers.\n\nI think by doing the above we might end up creating a new point of\ncontention. Because shared variables to track min and max available\nLSNs in the WAL buffers will need to be protected against all the\nconcurrent writers. Also, with the change that's done in (2) above,\nthat is, quickly exiting if the lock was busy, this comment seems\nunnecessary to worry about. Hence, I decided to leave it there.\n\n4.\n+ * XXX: Perhaps, we can further go and validate the found page header,\n+ * record header and record at least in assert builds, something like\n+ * the xlogreader.c does and return if any of those validity checks\n+ * fail. Having said that, we stick to the minimal checks for now.\n\nI was being over-cautious initially. The fact that we acquire\nWALBufMappingLock while reading the needed WAL buffer page itself\nguarantees that no one else initializes it/makes it ready for next use\nin AdvanceXLInsertBuffer(). The checks that we have for page header\n(xlp_magic, xlp_pageaddr and xlp_tli) in the patch are enough for us\nto ensure that we're not reading a page that got just initialized. The\ncallers will anyway perform extensive checks on page and record in\nXLogReaderValidatePageHeader() and ValidXLogRecordHeader()\nrespectively. If any such failures occur after reading WAL from WAL\nbuffers, then that must be treated as a bug IMO. 
Hence, I don't think\nwe need to do the above.\n\n> And also I do not like that XLogReadFromBuffers() is using 3 bools\n> hit/partial hit/miss, instead of this we can use an enum or some\n> tristate variable, I think that will be cleaner.\n\nYeah, that seems more verbose, all that information can be deduced\nfrom requested bytes and read bytes, I've done so in the v2 patch.\n\nPlease review the attached v2 patch further.\n\nI'm also attaching two helper patches (as .txt files) herewith for\ntesting that basically adds WAL read stats -\nUSE-ON-HEAD-Collect-WAL-read-from-file-stats.txt - apply on HEAD and\nmonitor pg_stat_replication for per-walsender WAL read from WAL file\nstats. USE-ON-PATCH-Collect-WAL-read-from-buffers-and-file-stats.txt -\napply on v2 patch and monitor pg_stat_replication for per-walsender\nWAL read from WAL buffers and WAL file stats.\n\n[1] https://www.postgresql.org/message-id/CALj2ACXUbvON86vgwTkum8ab3bf1%3DHkMxQ5hZJZS3ZcJn8NEXQ%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
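As a side note for readers following the hit/partial-hit/miss discussion above: once the API reports how many bytes were actually read, the tri-state can be deduced with trivial arithmetic on the requested and returned counts. A minimal sketch of that deduction (the enum and function names here are illustrative, not identifiers from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper showing how hit/partial-hit/miss can be deduced
 * from requested vs. actually-read byte counts, as discussed above.
 * These names are illustrative, not the patch's actual identifiers. */
typedef enum
{
    WAL_READ_MISS,        /* nothing was found in WAL buffers */
    WAL_READ_PARTIAL_HIT, /* a prefix was found; the rest must come from file */
    WAL_READ_HIT          /* everything came from WAL buffers */
} WALReadResult;

static WALReadResult
classify_read(size_t requested, size_t read_bytes)
{
    if (read_bytes == 0)
        return WAL_READ_MISS;
    if (read_bytes < requested)
        return WAL_READ_PARTIAL_HIT;
    return WAL_READ_HIT;
}
```

This is why the three separate bools can be dropped: the caller already holds both counts.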
"msg_date": "Mon, 26 Dec 2022 14:20:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> Please review the attached v2 patch further.\n\nI'm still unclear on the performance goals of this patch. I see that it\nwill reduce syscalls, which sounds good, but to what end?\n\nDoes it allow a greater number of walsenders? Lower replication\nlatency? Less IO bandwidth? All of the above?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sat, 14 Jan 2023 00:48:52 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > Please review the attached v2 patch further.\n> \n> I'm still unclear on the performance goals of this patch. I see that it\n> will reduce syscalls, which sounds good, but to what end?\n> \n> Does it allow a greater number of walsenders? Lower replication\n> latency? Less IO bandwidth? All of the above?\n\nOne benefit would be that it'd make it more realistic to use direct IO for WAL\n- for which I have seen significant performance benefits. But when we\nafterwards have to re-read it from disk to replicate, it's less clearly a win.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 14 Jan 2023 12:34:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Jan 14, 2023 at 12:34 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> > On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > > Please review the attached v2 patch further.\n> >\n> > I'm still unclear on the performance goals of this patch. I see that it\n> > will reduce syscalls, which sounds good, but to what end?\n> >\n> > Does it allow a greater number of walsenders? Lower replication\n> > latency? Less IO bandwidth? All of the above?\n>\n> One benefit would be that it'd make it more realistic to use direct IO for\n> WAL\n> - for which I have seen significant performance benefits. But when we\n> afterwards have to re-read it from disk to replicate, it's less clearly a\n> win.\n>\n\n +1. Archive modules rely on reading the wal files for PITR. Direct IO for\nWAL requires reading these files from disk anyways for archival. However,\nArchiving using utilities like pg_receivewal can take advantage of this\npatch together with direct IO for WAL.\n\nThanks,\nSatya\n\nOn Sat, Jan 14, 2023 at 12:34 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > Please review the attached v2 patch further.\n> \n> I'm still unclear on the performance goals of this patch. I see that it\n> will reduce syscalls, which sounds good, but to what end?\n> \n> Does it allow a greater number of walsenders? Lower replication\n> latency? Less IO bandwidth? All of the above?\n\nOne benefit would be that it'd make it more realistic to use direct IO for WAL\n- for which I have seen significant performance benefits. But when we\nafterwards have to re-read it from disk to replicate, it's less clearly a win. +1. Archive modules rely on reading the wal files for PITR. Direct IO for WAL requires reading these files from disk anyways for archival. 
However, Archiving using utilities like pg_receivewal can take advantage of this patch together with direct IO for WAL.Thanks,Satya",
"msg_date": "Wed, 25 Jan 2023 12:27:30 -0800",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-14 12:34:03 -0800, Andres Freund wrote:\n> On 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> > On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > > Please review the attached v2 patch further.\n> > \n> > I'm still unclear on the performance goals of this patch. I see that it\n> > will reduce syscalls, which sounds good, but to what end?\n> > \n> > Does it allow a greater number of walsenders? Lower replication\n> > latency? Less IO bandwidth? All of the above?\n> \n> One benefit would be that it'd make it more realistic to use direct IO for WAL\n> - for which I have seen significant performance benefits. But when we\n> afterwards have to re-read it from disk to replicate, it's less clearly a win.\n\nSatya's email just now reminded me of another important reason:\n\nEventually we should add the ability to stream out WAL *before* it has locally\nbeen written out and flushed. Obviously the relevant positions would have to\nbe noted in the relevant message in the streaming protocol, and we couldn't\ngenerally allow standbys to apply that data yet.\n\nThat'd allow us to significantly reduce the overhead of synchronous\nreplication, because instead of commonly needing to send out all the pending\nWAL at commit, we'd just need to send out the updated flush position. The\nreason this would lower the overhead is that:\n\na) The reduced amount of data to be transferred reduces latency - it's easy to\n accumulate a few TCP packets worth of data even in a single small OLTP\n transaction\nb) The remote side can start to write out data earlier\n\n\nOf course this would require additional infrastructure on the receiver\nside. E.g. 
some persistent state indicating up to where WAL is allowed to be\napplied, to avoid the standby getting ahead of the primary, in case the\nprimary crash-restarts (or has more severe issues).\n\n\nWith a bit of work we could perform WAL replay on standby without waiting for\nthe fdatasync of the received WAL - that only needs to happen when a) we need\nto confirm a flush position to the primary b) when we need to write back pages\nfrom the buffer pool (and some other things).\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 13:15:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 2:45 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-14 12:34:03 -0800, Andres Freund wrote:\n> > On 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> > > On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > > > Please review the attached v2 patch further.\n> > >\n> > > I'm still unclear on the performance goals of this patch. I see that it\n> > > will reduce syscalls, which sounds good, but to what end?\n> > >\n> > > Does it allow a greater number of walsenders? Lower replication\n> > > latency? Less IO bandwidth? All of the above?\n> >\n> > One benefit would be that it'd make it more realistic to use direct IO for WAL\n> > - for which I have seen significant performance benefits. But when we\n> > afterwards have to re-read it from disk to replicate, it's less clearly a win.\n>\n> Satya's email just now reminded me of another important reason:\n>\n> Eventually we should add the ability to stream out WAL *before* it has locally\n> been written out and flushed. Obviously the relevant positions would have to\n> be noted in the relevant message in the streaming protocol, and we couldn't\n> generally allow standbys to apply that data yet.\n>\n> That'd allow us to significantly reduce the overhead of synchronous\n> replication, because instead of commonly needing to send out all the pending\n> WAL at commit, we'd just need to send out the updated flush position. The\n> reason this would lower the overhead is that:\n>\n> a) The reduced amount of data to be transferred reduces latency - it's easy to\n> accumulate a few TCP packets worth of data even in a single small OLTP\n> transaction\n> b) The remote side can start to write out data earlier\n>\n>\n> Of course this would require additional infrastructure on the receiver\n> side. E.g. 
some persistent state indicating up to where WAL is allowed to be\n> applied, to avoid the standby getting ahead of the primary, in case the\n> primary crash-restarts (or has more severe issues).\n>\n>\n> With a bit of work we could perform WAL replay on standby without waiting for\n> the fdatasync of the received WAL - that only needs to happen when a) we need\n> to confirm a flush position to the primary b) when we need to write back pages\n> from the buffer pool (and some other things).\n\nThanks Andres, Jeff and Satya for taking a look at the thread. Andres\nis right, the eventual plan is to do a bunch of other stuff as\ndescribed above and we've discussed this in another thread (see\nbelow). I would like to once again clarify motivation behind this\nfeature:\n\n1. It enables WAL readers (callers of WALRead() - wal senders,\npg_walinspect etc.) to use WAL buffers as first level cache which\nmight reduce number of IOPS at a peak load especially when the pread()\nresults in a disk read (WAL isn't available in OS page cache). I had\nearlier presented the buffer hit ratio/amount of pread() system calls\nreduced with wal senders in the first email of this thread (95% of the\ntime wal senders are able to read from WAL buffers without impacting\nanybody). Now, here are the results with the WAL DIO patch [1] - where\nWAL pread() turns into a disk read, see the results [2] and attached\ngraph.\n\n2. As Andres rightly mentioned, it helps WAL DIO; since there's no OS\npage cache, using WAL buffers as read cache helps a lot. It is clearly\nevident from my experiment with WAL DIO patch [1], see the results [2]\nand attached graph. As expected, WAL DIO brings down the TPS, whereas\nWAL buffers read i.e. this patch brings it up.\n\n3. As Andres rightly mentioned above, it enables flushing WAL in\nparallel on primary and all standbys [3]. I haven't yet started work\non this, I will aim for PG 17.\n\n4. 
It will make the work on - disallow async standbys or subscribers\ngetting ahead of the sync standbys [3] possible. I haven't yet started\nwork on this, I will aim for PG 17.\n\n5. It implements the following TODO item specified near WALRead():\n * XXX probably this should be improved to suck data directly from the\n * WAL buffers when possible.\n */\nbool\nWALRead(XLogReaderState *state,\n\nThat said, this feature is separately reviewable and perhaps can go\nseparately as it has its own benefits.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGLmeyrDcUYAty90V_YTcoo5kAFfQjRQ-_1joS_%3DX7HztA%40mail.gmail.com\n\n[2] Test case is an insert pgbench workload.\nclients HEAD WAL DIO WAL DIO & WAL BUFFERS READ WAL BUFFERS READ\n1 1404 1070 1424 1375\n2 1487 796 1454 1517\n4 3064 1743 3011 3019\n8 6114 3556 6026 5954\n16 11560 7051 12216 12132\n32 23181 13079 23449 23561\n64 43607 26983 43997 45636\n128 80723 45169 81515 81911\n256 110925 90185 107332 114046\n512 119354 109817 110287 117506\n768 112435 105795 106853 111605\n1024 107554 105541 105942 109370\n2048 88552 79024 80699 90555\n4096 61323 54814 58704 61743\n\n[3]\nhttps://www.postgresql.org/message-id/20220309020123.sneaoijlg3rszvst@alap3.anarazel.de\nhttps://www.postgresql.org/message-id/CALj2ACXCSM%2BsTR%3D5NNRtmSQr3g1Vnr-yR91azzkZCaCJ7u4d4w%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 26 Jan 2023 11:03:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 2:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 26, 2023 at 2:45 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-01-14 12:34:03 -0800, Andres Freund wrote:\n> > > On 2023-01-14 00:48:52 -0800, Jeff Davis wrote:\n> > > > On Mon, 2022-12-26 at 14:20 +0530, Bharath Rupireddy wrote:\n> > > > > Please review the attached v2 patch further.\n> > > >\n> > > > I'm still unclear on the performance goals of this patch. I see that it\n> > > > will reduce syscalls, which sounds good, but to what end?\n> > > >\n> > > > Does it allow a greater number of walsenders? Lower replication\n> > > > latency? Less IO bandwidth? All of the above?\n> > >\n> > > One benefit would be that it'd make it more realistic to use direct IO for WAL\n> > > - for which I have seen significant performance benefits. But when we\n> > > afterwards have to re-read it from disk to replicate, it's less clearly a win.\n> >\n> > Satya's email just now reminded me of another important reason:\n> >\n> > Eventually we should add the ability to stream out WAL *before* it has locally\n> > been written out and flushed. Obviously the relevant positions would have to\n> > be noted in the relevant message in the streaming protocol, and we couldn't\n> > generally allow standbys to apply that data yet.\n> >\n> > That'd allow us to significantly reduce the overhead of synchronous\n> > replication, because instead of commonly needing to send out all the pending\n> > WAL at commit, we'd just need to send out the updated flush position. 
The\n> > reason this would lower the overhead is that:\n> >\n> > a) The reduced amount of data to be transferred reduces latency - it's easy to\n> > accumulate a few TCP packets worth of data even in a single small OLTP\n> > transaction\n> > b) The remote side can start to write out data earlier\n> >\n> >\n> > Of course this would require additional infrastructure on the receiver\n> > side. E.g. some persistent state indicating up to where WAL is allowed to be\n> > applied, to avoid the standby getting ahead of the primary, in case the\n> > primary crash-restarts (or has more severe issues).\n> >\n> >\n> > With a bit of work we could perform WAL replay on standby without waiting for\n> > the fdatasync of the received WAL - that only needs to happen when a) we need\n> > to confirm a flush position to the primary b) when we need to write back pages\n> > from the buffer pool (and some other things).\n>\n> Thanks Andres, Jeff and Satya for taking a look at the thread. Andres\n> is right, the eventual plan is to do a bunch of other stuff as\n> described above and we've discussed this in another thread (see\n> below). I would like to once again clarify motivation behind this\n> feature:\n>\n> 1. It enables WAL readers (callers of WALRead() - wal senders,\n> pg_walinspect etc.) to use WAL buffers as first level cache which\n> might reduce number of IOPS at a peak load especially when the pread()\n> results in a disk read (WAL isn't available in OS page cache). I had\n> earlier presented the buffer hit ratio/amount of pread() system calls\n> reduced with wal senders in the first email of this thread (95% of the\n> time wal senders are able to read from WAL buffers without impacting\n> anybody). Now, here are the results with the WAL DIO patch [1] - where\n> WAL pread() turns into a disk read, see the results [2] and attached\n> graph.\n>\n> 2. As Andres rightly mentioned, it helps WAL DIO; since there's no OS\n> page cache, using WAL buffers as read cache helps a lot. 
It is clearly\n> evident from my experiment with WAL DIO patch [1], see the results [2]\n> and attached graph. As expected, WAL DIO brings down the TPS, whereas\n> WAL buffers read i.e. this patch brings it up.\n>\n> [2] Test case is an insert pgbench workload.\n> clients HEAD WAL DIO WAL DIO & WAL BUFFERS READ WAL BUFFERS READ\n> 1 1404 1070 1424 1375\n> 2 1487 796 1454 1517\n> 4 3064 1743 3011 3019\n> 8 6114 3556 6026 5954\n> 16 11560 7051 12216 12132\n> 32 23181 13079 23449 23561\n> 64 43607 26983 43997 45636\n> 128 80723 45169 81515 81911\n> 256 110925 90185 107332 114046\n> 512 119354 109817 110287 117506\n> 768 112435 105795 106853 111605\n> 1024 107554 105541 105942 109370\n> 2048 88552 79024 80699 90555\n> 4096 61323 54814 58704 61743\n\nIf I'm understanding this result correctly, it seems to me that your\npatch works well with the WAL DIO patch (WALDIO vs. WAL DIO & WAL\nBUFFERS READ), but there seems no visible performance gain with only\nyour patch (HEAD vs. WAL BUFFERS READ). So it seems to me that your\npatch should be included in the WAL DIO patch rather than applying it\nalone. Am I missing something?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 14:24:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-27 14:24:51 +0900, Masahiko Sawada wrote:\n> If I'm understanding this result correctly, it seems to me that your\n> patch works well with the WAL DIO patch (WALDIO vs. WAL DIO & WAL\n> BUFFERS READ), but there seems no visible performance gain with only\n> your patch (HEAD vs. WAL BUFFERS READ). So it seems to me that your\n> patch should be included in the WAL DIO patch rather than applying it\n> alone. Am I missing something?\n\nWe already support using DIO for WAL - it's just restricted in a way that\nmakes it practically not usable. And the reason for that is precisely that\nwalsenders need to read the WAL. See get_sync_bit():\n\n\t/*\n\t * Optimize writes by bypassing kernel cache with O_DIRECT when using\n\t * O_SYNC and O_DSYNC. But only if archiving and streaming are disabled,\n\t * otherwise the archive command or walsender process will read the WAL\n\t * soon after writing it, which is guaranteed to cause a physical read if\n\t * we bypassed the kernel cache. We also skip the\n\t * posix_fadvise(POSIX_FADV_DONTNEED) call in XLogFileClose() for the same\n\t * reason.\n\t *\n\t * Never use O_DIRECT in walreceiver process for similar reasons; the WAL\n\t * written by walreceiver is normally read by the startup process soon\n\t * after it's written. Also, walreceiver performs unaligned writes, which\n\t * don't work with O_DIRECT, so it is required for correctness too.\n\t */\n\tif (!XLogIsNeeded() && !AmWalReceiverProcess())\n\t\to_direct_flag = PG_O_DIRECT;\n\n\nEven if that weren't the case, splitting up bigger commits in incrementally\ncommittable chunks is a good idea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 22:17:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 3:17 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-27 14:24:51 +0900, Masahiko Sawada wrote:\n> > If I'm understanding this result correctly, it seems to me that your\n> > patch works well with the WAL DIO patch (WALDIO vs. WAL DIO & WAL\n> > BUFFERS READ), but there seems no visible performance gain with only\n> > your patch (HEAD vs. WAL BUFFERS READ). So it seems to me that your\n> > patch should be included in the WAL DIO patch rather than applying it\n> > alone. Am I missing something?\n>\n> We already support using DIO for WAL - it's just restricted in a way that\n> makes it practically not usable. And the reason for that is precisely that\n> walsenders need to read the WAL. See get_sync_bit():\n>\n> /*\n> * Optimize writes by bypassing kernel cache with O_DIRECT when using\n> * O_SYNC and O_DSYNC. But only if archiving and streaming are disabled,\n> * otherwise the archive command or walsender process will read the WAL\n> * soon after writing it, which is guaranteed to cause a physical read if\n> * we bypassed the kernel cache. We also skip the\n> * posix_fadvise(POSIX_FADV_DONTNEED) call in XLogFileClose() for the same\n> * reason.\n> *\n> * Never use O_DIRECT in walreceiver process for similar reasons; the WAL\n> * written by walreceiver is normally read by the startup process soon\n> * after it's written. Also, walreceiver performs unaligned writes, which\n> * don't work with O_DIRECT, so it is required for correctness too.\n> */\n> if (!XLogIsNeeded() && !AmWalReceiverProcess())\n> o_direct_flag = PG_O_DIRECT;\n>\n>\n> Even if that weren't the case, splitting up bigger commits in incrementally\n> committable chunks is a good idea.\n\nAgreed. I was wondering about the fact that the test result doesn't\nshow things to satisfy the first motivation of this patch, which is to\nimprove performance by reducing disk I/O and system calls regardless\nof the DIO patch. 
But it makes sense to me that this patch is a part\nof the DIO patch series.\n\nI'd like to confirm whether there is any performance regression caused\nby this patch in some cases, especially when not using DIO.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 15:46:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 12:16 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I'd like to confirm whether there is any performance regression caused\n> by this patch in some cases, especially when not using DIO.\n\nThanks. I ran some insert tests with primary and 1 async standby.\nPlease see the numbers below and attached graphs. I've not noticed a\nregression as such, in fact, with patch, there's a slight improvement.\nNote that there's no WAL DIO involved here.\n\ntest-case 1:\nclients HEAD PATCHED\n1 139 156\n2 624 599\n4 3113 3410\n8 6194 6433\n16 11255 11722\n32 22455 21658\n64 46072 47103\n128 80255 85970\n256 110067 111488\n512 114043 118094\n768 109588 111892\n1024 106144 109361\n2048 85808 90745\n4096 55911 53755\n\ntest-case 2:\nclients HEAD PATCHED\n1 177 128\n2 186 425\n4 2114 2946\n8 5835 5840\n16 10654 11199\n32 14071 13959\n64 18092 17519\n128 27298 28274\n256 24600 24843\n512 17139 19450\n768 16778 20473\n1024 18294 20209\n2048 12898 13920\n4096 6399 6815\n\ntest-case 3:\nclients HEAD PATCHED\n1 148 191\n2 302 317\n4 3415 3243\n8 5864 6193\n16 9573 10267\n32 14069 15819\n64 17424 18453\n128 24493 29192\n256 33180 38250\n512 35568 36551\n768 29731 30317\n1024 32291 32124\n2048 27964 28933\n4096 13702 15034\n\n[1]\ncat << EOF >> data/postgresql.conf\nshared_buffers = '8GB'\nwal_buffers = '1GB'\nmax_wal_size = '16GB'\nmax_connections = '5000'\narchive_mode = 'on'\narchive_command='cp %p /home/ubuntu/archived_wal/%f'\nEOF\n\ntest-case 1:\n./pgbench -i -s 300 -d postgres\n./psql -d postgres -c \"ALTER TABLE pgbench_accounts DROP CONSTRAINT\npgbench_accounts_pkey;\"\ncat << EOF >> insert.sql\n\\set aid random(1, 10 * :scale)\n\\set delta random(1, 100000 * :scale)\nINSERT INTO pgbench_accounts (aid, bid, abalance) VALUES (:aid, :aid, :delta);\nEOF\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -f insert.sql -c$c\n-j$c -T5 2>&1|grep '^tps'|awk '{print 
$3}';done\n\ntest-case 2:\n./pgbench --initialize --scale=300 postgres\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -b tpcb-like -c$c\n-j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\ntest-case 3:\n./pgbench --initialize --scale=300 postgres\nfor c in 1 2 4 8 16 32 64 128 256 512 768 1024 2048 4096; do echo -n\n\"$c \";./pgbench -n -M prepared -U ubuntu postgres -b simple-update\n-c$c -j$c -T5 2>&1|grep '^tps'|awk '{print $3}';done\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 27 Jan 2023 15:05:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 2:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n\nI have gone through this patch, I have some comments (mostly cosmetic\nand comments)\n\n1.\n+ /*\n+ * We have found the WAL buffer page holding the given LSN. Read from a\n+ * pointer to the right offset within the page.\n+ */\n+ memcpy(page, (XLogCtl->pages + idx * (Size) XLOG_BLCKSZ),\n+ (Size) XLOG_BLCKSZ);\n\n\n From the above comments, it appears that we are reading from the exact\npointer we are interested to read, but actually, we are reading\nthe complete page. I think this comment needs to be fixed and we can\nalso explain why we read the complete page here.\n\n2.\n+static char *\n+GetXLogBufferForRead(XLogRecPtr ptr, TimeLineID tli, char *page)\n+{\n+ XLogRecPtr expectedEndPtr;\n+ XLogRecPtr endptr;\n+ int idx;\n+ char *recptr = NULL;\n\nGenerally, we use the name 'recptr' to represent XLogRecPtr type of\nvariable, but in your case, it is actually data at that recptr, so\nbetter use some other name like 'buf' or 'buffer'.\n\n\n3.\n+ if ((recptr + nbytes) <= (page + XLOG_BLCKSZ))\n+ {\n+ /* All the bytes are in one page. */\n+ memcpy(dst, recptr, nbytes);\n+ dst += nbytes;\n+ *read_bytes += nbytes;\n+ ptr += nbytes;\n+ nbytes = 0;\n+ }\n+ else if ((recptr + nbytes) > (page + XLOG_BLCKSZ))\n+ {\n+ /* All the bytes are not in one page. */\n+ Size bytes_remaining;\n\nWhy do you have this 'else if ((recptr + nbytes) > (page +\nXLOG_BLCKSZ))' check in the else part? why it is not directly else\nwithout a condition in 'if'?\n\n4.\n+XLogReadFromBuffers(XLogRecPtr startptr,\n+ TimeLineID tli,\n+ Size count,\n+ char *buf,\n+ Size *read_bytes)\n\nI think we do not need 2 separate variables 'count' and '*read_bytes',\njust one variable for input/output is sufficient. The original value\ncan always be stored in some temp variable\ninstead of the function argument.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Feb 2023 16:12:37 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 4:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 2:20 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I have gone through this patch, I have some comments (mostly cosmetic\n> and comments)\n\nThanks a lot for reviewing.\n\n> From the above comments, it appears that we are reading from the exact\n> pointer we are interested to read, but actually, we are reading\n> the complete page. I think this comment needs to be fixed and we can\n> also explain why we read the complete page here.\n\nI modified it. Please see the attached v3 patch.\n\n> Generally, we use the name 'recptr' to represent XLogRecPtr type of\n> variable, but in your case, it is actually data at that recptr, so\n> better use some other name like 'buf' or 'buffer'.\n\nChanged it to use 'data' as it seemed more appropriate than just a\nbuffer to not confuse with the WAL buffer page.\n\n> 3.\n> + if ((recptr + nbytes) <= (page + XLOG_BLCKSZ))\n> + {\n> + }\n> + else if ((recptr + nbytes) > (page + XLOG_BLCKSZ))\n> + {\n>\n> Why do you have this 'else if ((recptr + nbytes) > (page +\n> XLOG_BLCKSZ))' check in the else part? why it is not directly else\n> without a condition in 'if'?\n\nChanged.\n\n> I think we do not need 2 separate variables 'count' and '*read_bytes',\n> just one variable for input/output is sufficient. The original value\n> can always be stored in some temp variable\n> instead of the function argument.\n\nWe could do that, but for the sake of readability and not cluttering\nthe API, I kept it as-is.\n\nBesides addressing the above review comments, I've made some more\nchanges - 1) I optimized the patch a bit by removing an extra memcpy.\nUp until v2 patch, the entire WAL buffer page is returned and the\ncaller takes what is wanted from it. This adds an extra memcpy, so I\nchanged it to avoid extra memcpy and just copy what is wanted. 
2) I\nimproved the comments.\n\nI can also do a few other things, but before working on them, I'd like\nto hear from others:\n1. A separate wait event (WAIT_EVENT_WAL_READ_FROM_BUFFERS) for\nreading from WAL buffers - right now, WAIT_EVENT_WAL_READ is being\nused both for reading from WAL buffers and WAL files. Given the fact\nthat we won't wait for a lock or do a time-taking task while reading\nfrom buffers, it seems unnecessary.\n2. A separate TAP test for verifying that the WAL is actually read\nfrom WAL buffers - right now, existing tests for recovery,\nsubscription, pg_walinspect already cover the code, see [1]. However,\nif needed, I can add a separate TAP test.\n3. Use the oldest initialized WAL buffer page to quickly tell if the\ngiven LSN is present in WAL buffers without taking any lock - right\nnow, WALBufMappingLock is acquired to do so. While this doesn't seem\nto impact much, it's good to optimize it away. But, the oldest\ninitialized WAL buffer page isn't tracked, so I've put up a patch and\nsent in another thread [2]. Irrespective of [2], we are still good\nwith what we have in this patch.\n\n[1]\nrecovery tests:\nPATCHED: WAL buffers hit - 14759, misses - 3371\n\nsubscription tests:\nPATCHED: WAL buffers hit - 1972, misses - 32616\n\npg_walinspect tests:\nPATCHED: WAL buffers hit - 8, misses - 8\n\n[2] https://www.postgresql.org/message-id/CALj2ACVgi6LirgLDZh%3DFdfdvGvKAD%3D%3DWTOSWcQy%3DAtNgPDVnKw%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
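For readers who want the gist of the hit test performed by GetXLogBufferForRead() without the patch in hand: WAL buffers form a ring keyed by page number, and a slot still holds the wanted page only if its recorded end pointer matches the expected end LSN of the page containing the requested pointer; otherwise the slot has been recycled for a newer page and the read is a miss. A simplified sketch under those assumptions (the names and the buffer count here are illustrative, not PostgreSQL's actual symbols):

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;
#define XLOG_BLCKSZ 8192
#define N_WAL_BUFFERS 512 /* stand-in for the wal_buffers setting */

/* Ring index of the buffer slot that would hold the page for 'ptr'. */
static int
buf_idx_for(XLogRecPtr ptr)
{
    return (int) ((ptr / XLOG_BLCKSZ) % N_WAL_BUFFERS);
}

/* The slot holds the page containing 'ptr' only if its recorded end
 * pointer equals the end LSN of that page; a mismatch means the slot
 * now belongs to a different page (a miss). */
static int
page_still_cached(XLogRecPtr ptr, XLogRecPtr slot_endptr)
{
    XLogRecPtr expected_end = ptr - (ptr % XLOG_BLCKSZ) + XLOG_BLCKSZ;

    return slot_endptr == expected_end;
}
```

The end-pointer comparison is what makes the page-header checks (xlp_magic, xlp_pageaddr, xlp_tli) a second line of defense rather than the primary detection mechanism.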
"msg_date": "Wed, 8 Feb 2023 09:57:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 9:57 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I can also do a few other things, but before working on them, I'd like\n> to hear from others:\n> 1. A separate wait event (WAIT_EVENT_WAL_READ_FROM_BUFFERS) for\n> reading from WAL buffers - right now, WAIT_EVENT_WAL_READ is being\n> used both for reading from WAL buffers and WAL files. Given the fact\n> that we won't wait for a lock or do a time-taking task while reading\n> from buffers, it seems unnecessary.\n\nYes, we do not need this separate wait event, and we also don't need\nthe WAIT_EVENT_WAL_READ wait event while reading from the buffer. Since\nwe are not performing any IO, no specific wait event is needed; and\nbecause reading from the WAL buffer acquires WALBufMappingLock, the\nwait will be tracked under that lwlock's event.\n\n> 2. A separate TAP test for verifying that the WAL is actually read\n> from WAL buffers - right now, existing tests for recovery,\n> subscription, pg_walinspect already cover the code, see [1]. However,\n> if needed, I can add a separate TAP test.\n\nCan we write a test that can actually validate that we have read from\na WAL buffer? If so, it would be good to have such a test to avoid\nany future breakage in that logic. But if it merely exercises the code\nwith no way to validate, as part of the test, whether it has hit the\nWAL buffer or not, then I think the existing cases are sufficient.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 10:33:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 10:33 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Feb 8, 2023 at 9:57 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > I can also do a few other things, but before working on them, I'd like\n> > to hear from others:\n> > 1. A separate wait event (WAIT_EVENT_WAL_READ_FROM_BUFFERS) for\n> > reading from WAL buffers - right now, WAIT_EVENT_WAL_READ is being\n> > used both for reading from WAL buffers and WAL files. Given the fact\n> > that we won't wait for a lock or do a time-taking task while reading\n> > from buffers, it seems unnecessary.\n>\n> Yes, we do not need this separate wait event and we also don't need\n> WAIT_EVENT_WAL_READ wait event while reading from the buffer. Because\n> we are not performing any IO so no specific wait event is needed and\n> for reading from the WAL buffer we are acquiring WALBufMappingLock so\n> that lwlock event will be tracked under that lock.\n\nNope, LWLockConditionalAcquire doesn't wait, so no lock wait event (no\nLWLockReportWaitStart) there. I agree to not have any wait event for\nreading from WAL buffers as no IO is involved there. I removed it in\nthe attached v4 patch.\n\n> > 2. A separate TAP test for verifying that the WAL is actually read\n> > from WAL buffers - right now, existing tests for recovery,\n> > subscription, pg_walinspect already cover the code, see [1]. However,\n> > if needed, I can add a separate TAP test.\n>\n> Can we write a test that can actually validate that we have read from\n> a WAL Buffer? If so then it would be good to have such a test to avoid\n> any future breakage in that logic. 
But if it is just for hitting the\n> code but no guarantee that whether we can validate as part of the test\n> whether it has hit the WAL buffer or not then I think the existing\n> cases are sufficient.\n\nWe could set up a standby, a logical replication subscriber, or the\npg_walinspect extension and verify that the code got hit with the help\nof the server log (DEBUG1) message added by the patch. However, this\ncan make the test volatile.\n\nTherefore, I came up with a simple and small test module/extension\nnamed test_wal_read_from_buffers under src/test/modules. It basically\nexposes a SQL function that, given an LSN, calls XLogReadFromBuffers()\nand returns true if it hits WAL buffers, otherwise false. The simple\nTAP test of this module verifies that the function returns true. I\nattached the test module as v4-0002 here. The test module is\nintentionally narrow and also serves as a demonstration of how one can\nuse the new XLogReadFromBuffers().\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Feb 2023 20:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 08:00:00PM +0530, Bharath Rupireddy wrote:\n> +\t\t\t/*\n> +\t\t\t * We read some of the requested bytes. Continue to read remaining\n> +\t\t\t * bytes.\n> +\t\t\t */\n> +\t\t\tptr += nread;\n> +\t\t\tnbytes -= nread;\n> +\t\t\tdst += nread;\n> +\t\t\t*read_bytes += nread;\n\nWhy do we only read a page at a time in XLogReadFromBuffersGuts()? What is\npreventing us from copying all the data we need in one go?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 16:44:52 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 6:14 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Feb 08, 2023 at 08:00:00PM +0530, Bharath Rupireddy wrote:\n> > + /*\n> > + * We read some of the requested bytes. Continue to read remaining\n> > + * bytes.\n> > + */\n> > + ptr += nread;\n> > + nbytes -= nread;\n> > + dst += nread;\n> > + *read_bytes += nread;\n>\n> Why do we only read a page at a time in XLogReadFromBuffersGuts()? What is\n> preventing us from copying all the data we need in one go?\n\nNote that most of the WALRead() callers request a single page of\nXLOG_BLCKSZ bytes even if the server has less or more available WAL\npages. It's the streaming replication wal sender that can request less\nthan XLOG_BLCKSZ bytes and upto MAX_SEND_SIZE (16 * XLOG_BLCKSZ). And,\nif we read, say, MAX_SEND_SIZE at once while holding\nWALBufMappingLock, that might impact concurrent inserters (at least, I\ncan say it in theory) - one of the main intentions of this patch is\nnot to impact inserters much.\n\nTherefore, I feel reading one WAL buffer page at a time, which works\nfor most of the cases, without impacting concurrent inserters much is\nbetter - https://www.postgresql.org/message-id/CALj2ACWXHP6Ha1BfDB14txm%3DXP272wCbOV00mcPg9c6EXbnp5A%40mail.gmail.com.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 10:38:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 10:38 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n+/*\n+ * Guts of XLogReadFromBuffers().\n+ *\n+ * Read 'count' bytes into 'buf', starting at location 'ptr', from WAL\n+ * fetched WAL buffers on timeline 'tli' and return the read bytes.\n+ */\ns/fetched WAL buffers/fetched from WAL buffers\n\n\n+ else if (nread < nbytes)\n+ {\n+ /*\n+ * We read some of the requested bytes. Continue to read remaining\n+ * bytes.\n+ */\n+ ptr += nread;\n+ nbytes -= nread;\n+ dst += nread;\n+ *read_bytes += nread;\n+ }\n\nThe 'if' condition should always be true. You can replace the same\nwith an assertion instead.\ns/Continue to read remaining/Continue to read the remaining\n\nThe good thing about this patch is that it reduces read IO calls\nwithout impacting the write performance (at least not that\nnoticeable). It also takes us one step forward towards the\nenhancements mentioned in the thread. If performance is a concern, we\ncan introduce a GUC to enable/disable this feature.\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n",
"msg_date": "Wed, 1 Mar 2023 00:06:11 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 10:38:31AM +0530, Bharath Rupireddy wrote:\n> On Tue, Feb 28, 2023 at 6:14 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Why do we only read a page at a time in XLogReadFromBuffersGuts()? What is\n>> preventing us from copying all the data we need in one go?\n> \n> Note that most of the WALRead() callers request a single page of\n> XLOG_BLCKSZ bytes even if the server has less or more available WAL\n> pages. It's the streaming replication wal sender that can request less\n> than XLOG_BLCKSZ bytes and upto MAX_SEND_SIZE (16 * XLOG_BLCKSZ). And,\n> if we read, say, MAX_SEND_SIZE at once while holding\n> WALBufMappingLock, that might impact concurrent inserters (at least, I\n> can say it in theory) - one of the main intentions of this patch is\n> not to impact inserters much.\n\nPerhaps we should test both approaches to see if there is a noticeable\ndifference. It might not be great for concurrent inserts to repeatedly\ntake the lock, either. If there's no real difference, we might be able to\nsimplify the code a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 20:15:23 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:06 AM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Tue, Feb 28, 2023 at 10:38 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> +/*\n> + * Guts of XLogReadFromBuffers().\n> + *\n> + * Read 'count' bytes into 'buf', starting at location 'ptr', from WAL\n> + * fetched WAL buffers on timeline 'tli' and return the read bytes.\n> + */\n> s/fetched WAL buffers/fetched from WAL buffers\n\nModified that comment a bit and moved it to XLogReadFromBuffers().\n\n> + else if (nread < nbytes)\n> + {\n> + /*\n> + * We read some of the requested bytes. Continue to read remaining\n> + * bytes.\n> + */\n> + ptr += nread;\n> + nbytes -= nread;\n> + dst += nread;\n> + *read_bytes += nread;\n> + }\n>\n> The 'if' condition should always be true. You can replace the same\n> with an assertion instead.\n\nYeah, I added an assert and changed that 'else if (nread < nbytes)' to\na plain 'else'.\n\n> s/Continue to read remaining/Continue to read the remaining\n\nDone.\n\n> The good thing about this patch is that it reduces read IO calls\n> without impacting the write performance (at least not that\n> noticeable). It also takes us one step forward towards the\n> enhancements mentioned in the thread.\n\nRight.\n\n> If performance is a concern, we\n> can introduce a GUC to enable/disable this feature.\n\nI didn't see any performance issues in my testing so far with 3\ndifferent pgbench cases\nhttps://www.postgresql.org/message-id/CALj2ACWXHP6Ha1BfDB14txm%3DXP272wCbOV00mcPg9c6EXbnp5A%40mail.gmail.com.\n\nWhile adding a GUC to enable/disable a feature sounds useful, IMHO it\nisn't good for the user: we already have too many GUCs, and we may not\nwant every feature to be defensive and add its own GUC. If any bugs\narise due to some corner case we missed to account for, we can surely\nhelp fix them. 
Having said this, I'm open\nto suggestions here.\n\nPlease find the attached v5 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 14:39:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 2:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please find the attached v5 patch set for further review.\n\nI simplified the code largely by moving the logic of reading the WAL\nbuffer page from a separate function into the main while loop. This\nenabled me to get rid of the XLogReadFromBuffersGuts() function that\nv5 and earlier patches had.\n\nPlease find the attached v6 patch set for further review. Meanwhile,\nI'll continue to work on the review comment raised upthread -\nhttps://www.postgresql.org/message-id/20230301041523.GA1453450%40nathanxps13.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 2 Mar 2023 17:13:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 9:45 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Feb 28, 2023 at 10:38:31AM +0530, Bharath Rupireddy wrote:\n> > On Tue, Feb 28, 2023 at 6:14 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> Why do we only read a page at a time in XLogReadFromBuffersGuts()? What is\n> >> preventing us from copying all the data we need in one go?\n> >\n> > Note that most of the WALRead() callers request a single page of\n> > XLOG_BLCKSZ bytes even if the server has less or more available WAL\n> > pages. It's the streaming replication wal sender that can request less\n> > than XLOG_BLCKSZ bytes and upto MAX_SEND_SIZE (16 * XLOG_BLCKSZ). And,\n> > if we read, say, MAX_SEND_SIZE at once while holding\n> > WALBufMappingLock, that might impact concurrent inserters (at least, I\n> > can say it in theory) - one of the main intentions of this patch is\n> > not to impact inserters much.\n>\n> Perhaps we should test both approaches to see if there is a noticeable\n> difference. It might not be great for concurrent inserts to repeatedly\n> take the lock, either. If there's no real difference, we might be able to\n> simplify the code a bit.\n\nI took a stab at this - acquire WALBufMappingLock separately for each\nrequested WAL buffer page vs acquire WALBufMappingLock once for all\nrequested WAL buffer pages. I chose the pgbench tpcb-like benchmark\nthat has 3 UPDATE statements and 1 INSERT statement. I ran pgbench for\n30min with scale factor 100 and 4096 clients with primary and 1 async\nstandby, see [1]. I captured wait_events to see the contention on\nWALBufMappingLock. 
I haven't noticed any contention on the lock and no\ndifference in TPS too, see [2] for results on HEAD, see [3] for\nresults on v6 patch which has \"acquire WALBufMappingLock separately\nfor each requested WAL buffer page\" strategy and see [4] for results\non v7 patch (attached herewith) which has \"acquire WALBufMappingLock\nonce for all requested WAL buffer pages\" strategy. Another thing to\nnote from the test results is that reduction in WALRead IO wait events\nfrom 136 on HEAD to 1 on v6 or v7 patch. So, the read from WAL buffers\nis really helping here.\n\nWith these observations, I'd like to use the approach that acquires\nWALBufMappingLock once for all requested WAL buffer pages unlike v6\nand the previous patches.\n\nI'm attaching the v7 patch set with this change for further review.\n\n[1]\nshared_buffers = '8GB'\nwal_buffers = '1GB'\nmax_wal_size = '16GB'\nmax_connections = '5000'\narchive_mode = 'on'\narchive_command='cp %p /home/ubuntu/archived_wal/%f'\n./pgbench --initialize --scale=100 postgres\n./pgbench -n -M prepared -U ubuntu postgres -b tpcb-like -c4096 -j4096 -T1800\n\n[2]\nHEAD:\ndone in 20.03 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 15.53 s, vacuum 0.19 s, primary keys 4.30 s).\ntps = 11654.475345 (without initial connection time)\n\n50950253 Lock | transactionid\n16472447 Lock | tuple\n3869523 LWLock | LockManager\n 739283 IPC | ProcArrayGroupUpdate\n 718549 |\n 439877 LWLock | WALWrite\n 130737 Client | ClientRead\n 121113 LWLock | BufferContent\n 70778 LWLock | WALInsert\n 43346 IPC | XactGroupUpdate\n 18547\n 18546 Activity | LogicalLauncherMain\n 18545 Activity | AutoVacuumMain\n 18272 Activity | ArchiverMain\n 17627 Activity | WalSenderMain\n 17207 Activity | WalWriterMain\n 15455 IO | WALSync\n 14963 LWLock | ProcArray\n 14747 LWLock | XactSLRU\n 13943 Timeout | CheckpointWriteDelay\n 10519 Activity | BgWriterHibernate\n 8022 Activity | BgWriterMain\n 4486 Timeout | SpinDelay\n 4443 Activity | 
CheckpointerMain\n 1435 Lock | extend\n 670 LWLock | XidGen\n 373 IO | WALWrite\n 283 Timeout | VacuumDelay\n 268 IPC | ArchiveCommand\n 249 Timeout | VacuumTruncate\n 136 IO | WALRead\n 115 IO | WALInitSync\n 74 IO | DataFileWrite\n 67 IO | WALInitWrite\n 36 IO | DataFileFlush\n 35 IO | DataFileExtend\n 17 IO | DataFileRead\n 4 IO | SLRUWrite\n 3 IO | BufFileWrite\n 2 IO | DataFileImmediateSync\n 1 Tuples only is on.\n 1 LWLock | SInvalWrite\n 1 LWLock | LockFastPath\n 1 IO | ControlFileSyncUpdate\n\n[3]\ndone in 19.99 s (drop tables 0.00 s, create tables 0.01 s, client-side\ngenerate 15.52 s, vacuum 0.18 s, primary keys 4.28 s).\ntps = 11689.584538 (without initial connection time)\n\n50678977 Lock | transactionid\n16252048 Lock | tuple\n4146827 LWLock | LockManager\n 768256 |\n 719923 IPC | ProcArrayGroupUpdate\n 432836 LWLock | WALWrite\n 140354 Client | ClientRead\n 124203 LWLock | BufferContent\n 74355 LWLock | WALInsert\n 39852 IPC | XactGroupUpdate\n 30728\n 30727 Activity | LogicalLauncherMain\n 30726 Activity | AutoVacuumMain\n 30420 Activity | ArchiverMain\n 29881 Activity | WalSenderMain\n 29418 Activity | WalWriterMain\n 23428 Activity | BgWriterHibernate\n 15960 Timeout | CheckpointWriteDelay\n 15840 IO | WALSync\n 15066 LWLock | ProcArray\n 14577 Activity | CheckpointerMain\n 14377 LWLock | XactSLRU\n 7291 Activity | BgWriterMain\n 4336 Timeout | SpinDelay\n 1707 Lock | extend\n 720 LWLock | XidGen\n 362 Timeout | VacuumTruncate\n 360 IO | WALWrite\n 304 Timeout | VacuumDelay\n 301 IPC | ArchiveCommand\n 106 IO | WALInitSync\n 82 IO | DataFileWrite\n 66 IO | WALInitWrite\n 45 IO | DataFileFlush\n 25 IO | DataFileExtend\n 18 IO | DataFileRead\n 5 LWLock | LockFastPath\n 2 IO | DataFileSync\n 2 IO | DataFileImmediateSync\n 1 Tuples only is on.\n 1 LWLock | BufferMapping\n 1 IO | WALRead\n 1 IO | SLRUWrite\n 1 IO | SLRURead\n 1 IO | ReplicationSlotSync\n 1 IO | BufFileRead\n\n[4]\ndone in 19.92 s (drop tables 0.00 s, create tables 0.01 s, 
client-side\ngenerate 15.53 s, vacuum 0.23 s, primary keys 4.16 s).\ntps = 11671.869074 (without initial connection time)\n\n50614021 Lock | transactionid\n16482561 Lock | tuple\n4086451 LWLock | LockManager\n 777507 |\n 714329 IPC | ProcArrayGroupUpdate\n 420593 LWLock | WALWrite\n 138142 Client | ClientRead\n 125381 LWLock | BufferContent\n 75283 LWLock | WALInsert\n 38759 IPC | XactGroupUpdate\n 20283\n 20282 Activity | LogicalLauncherMain\n 20281 Activity | AutoVacuumMain\n 20002 Activity | ArchiverMain\n 19467 Activity | WalSenderMain\n 19036 Activity | WalWriterMain\n 15836 IO | WALSync\n 15708 Timeout | CheckpointWriteDelay\n 15346 LWLock | ProcArray\n 15095 LWLock | XactSLRU\n 11852 Activity | BgWriterHibernate\n 8424 Activity | BgWriterMain\n 4636 Timeout | SpinDelay\n 4415 Activity | CheckpointerMain\n 2048 Lock | extend\n 1457 Timeout | VacuumTruncate\n 646 LWLock | XidGen\n 402 IO | WALWrite\n 306 Timeout | VacuumDelay\n 278 IPC | ArchiveCommand\n 117 IO | WALInitSync\n 74 IO | DataFileWrite\n 66 IO | WALInitWrite\n 35 IO | DataFileFlush\n 29 IO | DataFileExtend\n 24 LWLock | LockFastPath\n 14 IO | DataFileRead\n 2 IO | SLRUWrite\n 2 IO | DataFileImmediateSync\n 2 IO | BufFileWrite\n 1 Tuples only is on.\n 1 LWLock | BufferMapping\n 1 IO | WALRead\n 1 IO | SLRURead\n 1 IO | BufFileRead\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 3 Mar 2023 19:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "+void\n+XLogReadFromBuffers(XLogRecPtr startptr,\n+ TimeLineID tli,\n+ Size count,\n+ char *buf,\n+ Size *read_bytes)\n\nSince this function presently doesn't return anything, can we have it\nreturn the number of bytes read instead of storing it in a pointer\nvariable?\n\n+ ptr = startptr;\n+ nbytes = count;\n+ dst = buf;\n\nThese variables seem superfluous.\n\n+ /*\n+ * Requested WAL isn't available in WAL buffers, so return with\n+ * what we have read so far.\n+ */\n+ break;\n\nnitpick: I'd move this to the top so that you can save a level of\nindentation.\n\n\tif (expectedEndPtr != endptr)\n\t\tbreak;\n\n\t... logic for when the data is found in the WAL buffers ...\n\n+ /*\n+ * All the bytes are not in one page. Read available bytes on\n+ * the current page, copy them over to output buffer and\n+ * continue to read remaining bytes.\n+ */\n\nIs it possible to memcpy more than a page at a time?\n\n+ /*\n+ * The fact that we acquire WALBufMappingLock while reading the WAL\n+ * buffer page itself guarantees that no one else initializes it or\n+ * makes it ready for next use in AdvanceXLInsertBuffer().\n+ *\n+ * However, we perform basic page header checks for ensuring that\n+ * we are not reading a page that just got initialized. Callers\n+ * will anyway perform extensive page-level and record-level\n+ * checks.\n+ */\n\nHm. I wonder if we should make these assertions instead.\n\n+ elog(DEBUG1, \"read %zu bytes out of %zu bytes from WAL buffers for given LSN %X/%X, Timeline ID %u\",\n+ *read_bytes, count, LSN_FORMAT_ARGS(startptr), tli);\n\nI definitely don't think we should put an elog() in this code path.\nPerhaps this should be guarded behind WAL_DEBUG.\n\n+ /*\n+ * Check if we have read fully (hit), partially (partial hit) or\n+ * nothing (miss) from WAL buffers. 
If we have read either partially or\n+ * nothing, then continue to read the remaining bytes the usual way,\n+ * that is, read from WAL file.\n+ */\n+ if (count == read_bytes)\n+ {\n+ /* Buffer hit, so return. */\n+ return true;\n+ }\n+ else if (read_bytes > 0 && count > read_bytes)\n+ {\n+ /*\n+ * Buffer partial hit, so reset the state to count the read bytes\n+ * and continue.\n+ */\n+ buf += read_bytes;\n+ startptr += read_bytes;\n+ count -= read_bytes;\n+ }\n+\n+ /* Buffer miss i.e., read_bytes = 0, so continue */\n\nI think we can simplify this. We effectively take the same action any time\n\"count\" doesn't equal \"read_bytes\", so there's no need for the \"else if\".\n\n\tif (count == read_bytes)\n\t\treturn true;\n\n\tbuf += read_bytes;\n\tstartptr += read_bytes;\n\tcount -= read_bytes;\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 14:00:27 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> +void\n> +XLogReadFromBuffers(XLogRecPtr startptr,\n>\n> Since this function presently doesn't return anything, can we have it\n> return the number of bytes read instead of storing it in a pointer\n> variable?\n\nDone.\n\n> + ptr = startptr;\n> + nbytes = count;\n> + dst = buf;\n>\n> These variables seem superfluous.\n\nNeeded startptr and count for DEBUG1 message and assertion at the end.\nRemoved dst and used buf in the new patch now.\n\n> + /*\n> + * Requested WAL isn't available in WAL buffers, so return with\n> + * what we have read so far.\n> + */\n> + break;\n>\n> nitpick: I'd move this to the top so that you can save a level of\n> indentation.\n\nDone.\n\n> + /*\n> + * All the bytes are not in one page. Read available bytes on\n> + * the current page, copy them over to output buffer and\n> + * continue to read remaining bytes.\n> + */\n>\n> Is it possible to memcpy more than a page at a time?\n\nIt would complicate things a lot there; the logic to figure out the\nlast page bytes that may or may not fit in the whole page gets\ncomplicated. Also, the logic to verify each page's header gets\ncomplicated. We might lose out if we memcpy all the pages at once and\nstart verifying each page's header in another loop.\n\nI would like to keep it simple - read a single page from WAL buffers,\nverify it and continue.\n\n> + /*\n> + * The fact that we acquire WALBufMappingLock while reading the WAL\n> + * buffer page itself guarantees that no one else initializes it or\n> + * makes it ready for next use in AdvanceXLInsertBuffer().\n> + *\n> + * However, we perform basic page header checks for ensuring that\n> + * we are not reading a page that just got initialized. Callers\n> + * will anyway perform extensive page-level and record-level\n> + * checks.\n> + */\n>\n> Hm. I wonder if we should make these assertions instead.\n\nOkay. 
I added XLogReaderValidatePageHeader for assert-only builds\nwhich will help catch any issues there. But we can't perform record\nlevel checks here because this function doesn't know where the record\nstarts from, it knows only pages. This change required us to pass in\nXLogReaderState to XLogReadFromBuffers. I marked it as\nPG_USED_FOR_ASSERTS_ONLY and did page header checks only when it is\npassed as non-null so that someone who doesn't have XLogReaderState\ncan still read from buffers.\n\n> + elog(DEBUG1, \"read %zu bytes out of %zu bytes from WAL buffers for given LSN %X/%X, Timeline ID %u\",\n> + *read_bytes, count, LSN_FORMAT_ARGS(startptr), tli);\n>\n> I definitely don't think we should put an elog() in this code path.\n> Perhaps this should be guarded behind WAL_DEBUG.\n\nPlacing it behind WAL_DEBUG doesn't help users/developers. My\nintention was to let users know that the WAL read hit the buffers,\nit'll help them report if any issue occurs and also help developers to\ndebug that issue.\n\nOn a different note - I was recently looking at the code around\nWAL_DEBUG macro and the wal_debug GUC. It looks so complex that one\nneeds to build source code with the WAL_DEBUG macro and enable the GUC\nto see the extended logs for WAL. IMO, the best way there is either:\n1) unify all the code under WAL_DEBUG macro and get rid of wal_debug GUC, or\n2) unify all the code under wal_debug GUC (it is developer-only and\nsuperuser-only so there shouldn't be a problem even if we ship it out\nof the box).\n\nIf someone is concerned about the GUC being enabled on production\nservers knowingly or unknowingly with option (2), we can go ahead with\noption (1). I will discuss this separately to see what others think.\n\n> I think we can simplify this. 
We effectively take the same action any time\n> \"count\" doesn't equal \"read_bytes\", so there's no need for the \"else if\".\n>\n> if (count == read_bytes)\n> return true;\n>\n> buf += read_bytes;\n> startptr += read_bytes;\n> count -= read_bytes;\n\nI wanted to avoid setting these unnecessarily for buffer misses.\n\nThanks a lot for reviewing. I'm attaching the v8 patch for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Mar 2023 12:39:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 12:39:13PM +0530, Bharath Rupireddy wrote:\n> On Tue, Mar 7, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Is it possible to memcpy more than a page at a time?\n> \n> It would complicate things a lot there; the logic to figure out the\n> last page bytes that may or may not fit in the whole page gets\n> complicated. Also, the logic to verify each page's header gets\n> complicated. We might lose out if we memcpy all the pages at once and\n> start verifying each page's header in another loop.\n\nDoesn't the complicated logic you describe already exist to some extent in\nthe patch? You are copying a page at a time, which involves calculating\nvarious addresses and byte counts.\n\n>> + elog(DEBUG1, \"read %zu bytes out of %zu bytes from WAL buffers for given LSN %X/%X, Timeline ID %u\",\n>> + *read_bytes, count, LSN_FORMAT_ARGS(startptr), tli);\n>>\n>> I definitely don't think we should put an elog() in this code path.\n>> Perhaps this should be guarded behind WAL_DEBUG.\n> \n> Placing it behind WAL_DEBUG doesn't help users/developers. My\n> intention was to let users know that the WAL read hit the buffers,\n> it'll help them report if any issue occurs and also help developers to\n> debug that issue.\n\nI still think an elog() is mighty expensive for this code path, even when\nit doesn't actually produce any messages. And when it does, I think it has\nthe potential to be incredibly noisy.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 09:44:50 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "> [1]\n> subscription tests:\n> PATCHED: WAL buffers hit - 1972, misses - 32616\n\nCan you share more details about the test here?\n\nI went through the v8 patch. Following are my thoughts to improve the\nWAL buffer hit ratio.\n\nCurrently the no-longer-needed WAL data present in WAL buffers gets\ncleared in XLogBackgroundFlush() which is called based on the\nwal_writer_delay config setting. Once the data is flushed to the disk,\nit is treated as no-longer-needed and it will be cleared as soon as\npossible based on some config settings. I have done some testing by\ntweaking the wal_writer_delay config setting to confirm the behaviour.\nWe can see that the WAL buffer hit ratio is good when the\nwal_writer_delay is big enough [2] compared to smaller\nwal_writer_delay [1]. So irrespective of the wal_writer_delay\nsettings, we should keep the WAL data in the WAL buffers as long as\npossible so that all the readers (Mainly WAL senders) can take\nadvantage of this. The WAL page should be evicted from the WAL buffers\nonly when the WAL buffer is full and we need room for the new page.\nThe patch attached takes care of this. We can see the improvements in\nWAL buffer hit ratio even when the wal_writer_delay is set to lower\nvalue [3].\n\nSecond, In WALRead(), we try to read the data from disk whenever we\ndon't find the data from WAL buffers. We don't store this data in the\nWAL buffer. We just read the data, use it and leave it. 
If we store\nthis data to the WAL buffer, then we may avoid a few disk reads.\n\n[1]:\nwal_buffers=1GB\nwal_writer_delay=1ms\n./pgbench --initialize --scale=300 postgres\n\nWAL buffers hit=5046\nWAL buffers miss=56767\n\n[2]:\nwal_buffers=1GB\nwal_writer_delay=10s\n./pgbench --initialize --scale=300 postgres\n\nWAL buffers hit=45454\nWAL buffers miss=14064\n\n[3]:\nwal_buffers=1GB\nwal_writer_delay=1ms\n./pgbench --initialize --scale=300 postgres\n\nWAL buffers hit=37214\nWAL buffers miss=844\n\nPlease share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Mar 7, 2023 at 12:39 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 7, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n> > +void\n> > +XLogReadFromBuffers(XLogRecPtr startptr,\n> >\n> > Since this function presently doesn't return anything, can we have it\n> > return the number of bytes read instead of storing it in a pointer\n> > variable?\n>\n> Done.\n>\n> > + ptr = startptr;\n> > + nbytes = count;\n> > + dst = buf;\n> >\n> > These variables seem superfluous.\n>\n> Needed startptr and count for DEBUG1 message and assertion at the end.\n> Removed dst and used buf in the new patch now.\n>\n> > + /*\n> > + * Requested WAL isn't available in WAL buffers, so return with\n> > + * what we have read so far.\n> > + */\n> > + break;\n> >\n> > nitpick: I'd move this to the top so that you can save a level of\n> > indentation.\n>\n> Done.\n>\n> > + /*\n> > + * All the bytes are not in one page. Read available bytes on\n> > + * the current page, copy them over to output buffer and\n> > + * continue to read remaining bytes.\n> > + */\n> >\n> > Is it possible to memcpy more than a page at a time?\n>\n> It would complicate things a lot there; the logic to figure out the\n> last page bytes that may or may not fit in the whole page gets\n> complicated. Also, the logic to verify each page's header gets\n> complicated. 
We might lose out if we memcpy all the pages at once and\n> start verifying each page's header in another loop.\n>\n> I would like to keep it simple - read a single page from WAL buffers,\n> verify it and continue.\n>\n> > + /*\n> > + * The fact that we acquire WALBufMappingLock while reading the WAL\n> > + * buffer page itself guarantees that no one else initializes it or\n> > + * makes it ready for next use in AdvanceXLInsertBuffer().\n> > + *\n> > + * However, we perform basic page header checks for ensuring that\n> > + * we are not reading a page that just got initialized. Callers\n> > + * will anyway perform extensive page-level and record-level\n> > + * checks.\n> > + */\n> >\n> > Hm. I wonder if we should make these assertions instead.\n>\n> Okay. I added XLogReaderValidatePageHeader for assert-only builds\n> which will help catch any issues there. But we can't perform record\n> level checks here because this function doesn't know where the record\n> starts from, it knows only pages. This change required us to pass in\n> XLogReaderState to XLogReadFromBuffers. I marked it as\n> PG_USED_FOR_ASSERTS_ONLY and did page header checks only when it is\n> passed as non-null so that someone who doesn't have XLogReaderState\n> can still read from buffers.\n>\n> > + elog(DEBUG1, \"read %zu bytes out of %zu bytes from WAL buffers for given LSN %X/%X, Timeline ID %u\",\n> > + *read_bytes, count, LSN_FORMAT_ARGS(startptr), tli);\n> >\n> > I definitely don't think we should put an elog() in this code path.\n> > Perhaps this should be guarded behind WAL_DEBUG.\n>\n> Placing it behind WAL_DEBUG doesn't help users/developers. My\n> intention was to let users know that the WAL read hit the buffers,\n> it'll help them report if any issue occurs and also help developers to\n> debug that issue.\n>\n> On a different note - I was recently looking at the code around\n> WAL_DEBUG macro and the wal_debug GUC. 
It looks so complex that one\n> needs to build source code with the WAL_DEBUG macro and enable the GUC\n> to see the extended logs for WAL. IMO, the best way there is either:\n> 1) unify all the code under WAL_DEBUG macro and get rid of wal_debug GUC, or\n> 2) unify all the code under wal_debug GUC (it is developer-only and\n> superuser-only so there shouldn't be a problem even if we ship it out\n> of the box).\n>\n> If someone is concerned about the GUC being enabled on production\n> servers knowingly or unknowingly with option (2), we can go ahead with\n> option (1). I will discuss this separately to see what others think.\n>\n> > I think we can simplify this. We effectively take the same action any time\n> > \"count\" doesn't equal \"read_bytes\", so there's no need for the \"else if\".\n> >\n> > if (count == read_bytes)\n> > return true;\n> >\n> > buf += read_bytes;\n> > startptr += read_bytes;\n> > count -= read_bytes;\n>\n> I wanted to avoid setting these unnecessarily for buffer misses.\n>\n> Thanks a lot for reviewing. I'm attaching the v8 patch for further review.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 12 Mar 2023 00:52:00 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
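The hit/miss numbers above all come down to whether the page holding a requested LSN is still resident in the fixed-size WAL buffer ring. A minimal single-file sketch of that mapping follows; the names echo PostgreSQL's XLogRecPtrToBufIdx(), but the page and ring sizes are purely illustrative assumptions, not the server's actual settings:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Toy model of how an LSN maps to a slot in the WAL buffer ring.
 * The names echo PostgreSQL's XLogRecPtrToBufIdx(), but the sizes
 * below are illustrative assumptions, not the server's settings.
 */
#define XLOG_BLCKSZ 8192
#define N_WAL_BUFFERS 64        /* assumed wal_buffers value */

typedef uint64_t XLogRecPtr;

/* Slot in the ring that the page holding 'ptr' would occupy. */
static int
wal_buf_idx(XLogRecPtr ptr)
{
    return (int) ((ptr / XLOG_BLCKSZ) % N_WAL_BUFFERS);
}

/*
 * End-of-page LSN we expect in xlblocks[idx] if the page holding
 * 'ptr' is still resident; any other value means the slot has been
 * recycled for a newer page (a buffer "miss").
 */
static XLogRecPtr
expected_end_ptr(XLogRecPtr ptr)
{
    return ptr - (ptr % XLOG_BLCKSZ) + XLOG_BLCKSZ;
}
```

Because newer pages reuse the same slots, a reader can only "hit" as long as the writer hasn't lapped it by a full ring, which is why larger wal_buffers and slower eviction both raise the hit ratio reported in the thread.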
{
    "msg_contents": "On Sun, Mar 12, 2023 at 12:52 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> I went through the v8 patch.\n\nThanks for looking at it. Please post the responses in-line, not above\nthe entire previous message for better readability.\n\n> Following are my thoughts to improve the\n> WAL buffer hit ratio.\n\nNote that the motive of this patch is to read WAL from WAL buffers\n*when possible* without affecting concurrent WAL writers.\n\n> Currently the no-longer-needed WAL data present in WAL buffers gets\n> cleared in XLogBackgroundFlush() which is called based on the\n> wal_writer_delay config setting. Once the data is flushed to the disk,\n> it is treated as no-longer-needed and it will be cleared as soon as\n> possible based on some config settings.\n\nThe opportunistic pre-initialization of as many WAL buffer pages as\npossible is done for a purpose - there's an illuminating comment [1]\nexplaining it - so removing it fully is a no-go IMO. For instance, it'll make WAL buffer pages available for concurrent writers\nso there will be less work for writers in GetXLogBuffer. I'm sure\nremoving the opportunistic pre-initialization of the WAL buffer pages\nwill hurt performance in a highly concurrent-write workload.\n\n /*\n * Great, done. To take some work off the critical path, try to initialize\n * as many of the no-longer-needed WAL buffers for future use as we can.\n */\n AdvanceXLInsertBuffer(InvalidXLogRecPtr, insertTLI, true);\n\n> Second, In WALRead(), we try to read the data from disk whenever we\n> don't find the data from WAL buffers. We don't store this data in the\n> WAL buffer. We just read the data, use it and leave it. If we store\n> this data to the WAL buffer, then we may avoid a few disk reads.\n\nAgain this is going to hurt concurrent writers. Note that wal_buffers\naren't used as a full cache per se; there'll be multiple writers to it,\nand *when possible* readers will try to read from it without hurting\nwriters.\n\n> The patch attached takes care of this.\n\nPlease post the new proposal as a text file (not a .patch file) or as\na plain text in the email itself if the change is small, or attach all\nthe patches if the patch is over-and-above the proposed patches.\nAttaching a single over-and-above patch will make CFBot unhappy and\nwill force authors to repost the original patches. Typically, we\nfollow this. Having said that, I have some review comments to fix on\nv8-0001, so I'll be sending out v9 patch-set soon.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 12 Mar 2023 23:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 11:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> I have some review comments to fix on\n> v8-0001, so, I'll be sending out v9 patch-set soon.\n\nPlease find the attached v9 patch set for further review. I moved the\ncheck for just-initialized WAL buffer pages before reading the page.\nUp until now, it's the other way around, meaning, read the page and\nthen check the header if it is just-initialized, which is wrong. The\nattached v9 patch set corrects it.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Mar 2023 09:02:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
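The "just-initialized page" hazard fixed in v9 above can be modeled with the header fields WAL readers already check. The struct below is a trimmed, hypothetical stand-in for XLogPageHeaderData (only two fields, assumed widths), just to show the shape of the check that must happen before trusting a page read from the buffers:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the "just-initialized page" check described above: a WAL
 * page whose header hasn't been written yet won't carry the expected
 * magic or page address.  Field names follow XLogPageHeaderData, but
 * this struct is a trimmed hypothetical stand-in, not the real one.
 */
typedef struct MiniPageHeader
{
    uint16_t xlp_magic;         /* zero while the page is being set up */
    uint64_t xlp_pageaddr;      /* start LSN this page claims to hold */
} MiniPageHeader;

/* Reject pages that are missing a header or belong to another LSN. */
static int
page_header_plausible(const MiniPageHeader *hdr, uint64_t expected_pagestart)
{
    if (hdr->xlp_magic == 0)
        return 0;               /* page not initialized yet */
    if (hdr->xlp_pageaddr != expected_pagestart)
        return 0;               /* stale or recycled page */
    return 1;
}
```

In the actual patch this role is played by XLogReaderValidatePageHeader(), which checks considerably more than these two fields.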
{
"msg_contents": "On Tue, Mar 7, 2023 at 11:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Mar 07, 2023 at 12:39:13PM +0530, Bharath Rupireddy wrote:\n> > On Tue, Mar 7, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> Is it possible to memcpy more than a page at a time?\n> >\n> > It would complicate things a lot there; the logic to figure out the\n> > last page bytes that may or may not fit in the whole page gets\n> > complicated. Also, the logic to verify each page's header gets\n> > complicated. We might lose out if we memcpy all the pages at once and\n> > start verifying each page's header in another loop.\n>\n> Doesn't the complicated logic you describe already exist to some extent in\n> the patch? You are copying a page at a time, which involves calculating\n> various addresses and byte counts.\n\nOkay here I am with the v10 patch set attached that avoids multiple\nmemcpy calls which must benefit the callers who want to read more than\n1 WAL buffer page (streaming replication WAL sender for instance).\n\n> >> + elog(DEBUG1, \"read %zu bytes out of %zu bytes from WAL buffers for given LSN %X/%X, Timeline ID %u\",\n> >> + *read_bytes, count, LSN_FORMAT_ARGS(startptr), tli);\n> >>\n> >> I definitely don't think we should put an elog() in this code path.\n> >> Perhaps this should be guarded behind WAL_DEBUG.\n> >\n> > Placing it behind WAL_DEBUG doesn't help users/developers. My\n> > intention was to let users know that the WAL read hit the buffers,\n> > it'll help them report if any issue occurs and also help developers to\n> > debug that issue.\n>\n> I still think an elog() is mighty expensive for this code path, even when\n> it doesn't actually produce any messages. And when it does, I think it has\n> the potential to be incredibly noisy.\n\nWell, my motive was to have a way for the user to know WAL buffer hits\nand misses to report any found issues. However, I have a plan later to\nadd WAL buffer stats (hits/misses). 
I understand that even if someone\nenables DEBUG1, this message can bloat server log files and make\nrecovery slower, especially on a standby. Hence, I agree to keep these\nlogs behind the WAL_DEBUG macro like others and did so in the attached\nv10 patch set.\n\nPlease review the attached v10 patch set further.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Mar 2023 13:28:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
    "msg_contents": "On Sat, 2023-01-14 at 12:34 -0800, Andres Freund wrote:\n> One benefit would be that it'd make it more realistic to use direct\n> IO for WAL\n> - for which I have seen significant performance benefits. But when we\n> afterwards have to re-read it from disk to replicate, it's less\n> clearly a win.\n\nDoes this patch still look like a good fit for your (or someone else's)\nplans for direct IO here? If so, would committing this soon make it\neasier to make progress on that, or should we wait until it's actually\nneeded?\n\nIf I recall, this patch does not provide a performance benefit as-is\n(correct me if things have changed) and I don't know if a reduction in\nsyscalls alone is enough to justify it. But if it paves the way for\ndirect IO for WAL, that does seem worth it.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 03 Oct 2023 16:05:32 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
    "msg_contents": "Hi,\n\nOn 2023-10-03 16:05:32 -0700, Jeff Davis wrote:\n> On Sat, 2023-01-14 at 12:34 -0800, Andres Freund wrote:\n> > One benefit would be that it'd make it more realistic to use direct\n> > IO for WAL\n> > - for which I have seen significant performance benefits. But when we\n> > afterwards have to re-read it from disk to replicate, it's less\n> > clearly a win.\n> \n> Does this patch still look like a good fit for your (or someone else's)\n> plans for direct IO here? If so, would committing this soon make it\n> easier to make progress on that, or should we wait until it's actually\n> needed?\n\nI think it'd be quite useful to have. Even with the code as of 16, I see\nbetter performance in some workloads with debug_io_direct=wal,\nwal_sync_method=open_datasync compared to any other configuration. Except of\ncourse that it makes walsenders more problematic, as they suddenly require\nread IO. Thus having support for walsenders to send directly from wal buffers\nwould be beneficial, even without further AIO infrastructure.\n\n\nI also think there are other quite desirable features that are made easier by\nthis patch. One of the primary problems with using synchronous replication is\nthe latency increase, obviously. We can't send out WAL before it has locally\nbeen written out and flushed to disk. For some workloads, we could\nsubstantially lower synchronous commit latency if we were able to send WAL to\nremote nodes *before* WAL has been made durable locally, even if the receiving\nsystems wouldn't be allowed to write that data to disk yet: It takes less time\nto send just \"write LSN: %X/%X, flush LSN: %X/%X\" than also having to send\nall the not-yet-durable WAL.\n\nIn many OLTP workloads there won't be WAL flushes between generating WAL for\nDML and commit, which means that the amount of WAL that needs to be sent out\nat commit can be of nontrivial size.\n\nE.g. for pgbench, normally a transaction is about ~550 bytes (fitting in a\nsingle tcp/ip packet), but a pgbench transaction that needs to emit FPIs for\neverything is a lot larger: ~45kB (not fitting in a single packet). Obviously\nmany real-world OLTP workloads actually do more writes than\npgbench. Making the commit latency of the latter be closer to the commit\nlatency of the former when using syncrep would obviously be great.\n\nOf course this patch is just a relatively small step towards that: We'd also\nneed in-memory buffering on the receiving side, the replication protocol would\nneed to be improved, we'd likely need an option to explicitly opt into\nreceiving unflushed data. But it's still a pretty much required step.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Oct 2023 15:43:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Oct 12, 2023 at 4:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-10-03 16:05:32 -0700, Jeff Davis wrote:\n> >\n> > Does this patch still look like a good fit for your (or someone else's)\n> > plans for direct IO here? If so, would committing this soon make it\n> > easier to make progress on that, or should we wait until it's actually\n> > needed?\n>\n> I think it'd be quite useful to have. Even with the code as of 16, I see\n> better performance in some workloads with debug_io_direct=wal,\n> wal_sync_method=open_datasync compared to any other configuration. Except of\n> course that it makes walsenders more problematic, as they suddenly require\n> read IO. Thus having support for walsenders to send directly from wal buffers\n> would be beneficial, even without further AIO infrastructure.\n\nRight. Tests show the benefit with WAL DIO + this patch -\nhttps://www.postgresql.org/message-id/CALj2ACV6rS%2B7iZx5%2BoAvyXJaN4AG-djAQeM1mrM%3DYSDkVrUs7g%40mail.gmail.com.\n\nAlso, irrespective of WAL DIO, the WAL buffers hit ratio with the\npatch stood at 95% for 1 primary, 1 sync standby, 1 async standby,\npgbench --scale=300 --client=32 --time=900. In other words, the\nwalsenders avoided 95% of the time reading from the file/avoided pread\nsystem calls - https://www.postgresql.org/message-id/CALj2ACXKKK%3DwbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54%2BNa%3DQ%40mail.gmail.com.\n\n> I also think there are other quite desirable features that are made easier by\n> this patch. One of the primary problems with using synchronous replication is\n> the latency increase, obviously. We can't send out WAL before it has locally\n> been wirten out and flushed to disk. 
For some workloads, we could\n> substantially lower synchronous commit latency if we were able to send WAL to\n> remote nodes *before* WAL has been made durable locally, even if the receiving\n> systems wouldn't be allowed to write that data to disk yet: It takes less time\n> to send just \"write LSN: %X/%X, flush LSNL: %X/%X\" than also having to send\n> all the not-yet-durable WAL.\n>\n> In many OLTP workloads there won't be WAL flushes between generating WAL for\n> DML and commit, which means that the amount of WAL that needs to be sent out\n> at commit can be of nontrivial size.\n>\n> E.g. for pgbench, normally a transaction is about ~550 bytes (fitting in a\n> single tcp/ip packet), but a pgbench transaction that needs to emit FPIs for\n> everything is a lot larger: ~45kB (not fitting in a single packet). Obviously\n> many real world workloads OLTP workloads actually do more writes than\n> pgbench. Making the commit latency of the latter be closer to the commit\n> latency of the former when using syncrep would obviously be great.\n>\n> Of course this patch is just a relatively small step towards that: We'd also\n> need in-memory buffering on the receiving side, the replication protocol would\n> need to be improved, we'd likely need an option to explicitly opt into\n> receiving unflushed data. But it's still a pretty much required step.\n\nYes, this patch can pave the way for all of the above features in\nfuture. However, I'm looking forward to getting this in for now.\nLater, I'll come up with more concrete thoughts on the above.\n\nHaving said above, the latest v10 patch after addressing some of the\nreview comments is at\nhttps://www.postgresql.org/message-id/CALj2ACU3ZYzjOv4vZTR%2BLFk5PL4ndUnbLS6E1vG2dhDBjQGy2A%40mail.gmail.com.\nAny further thoughts on the patch is welcome.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 18 Oct 2023 01:32:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Oct 12, 2023 at 4:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-10-03 16:05:32 -0700, Jeff Davis wrote:\n> > On Sat, 2023-01-14 at 12:34 -0800, Andres Freund wrote:\n> > > One benefit would be that it'd make it more realistic to use direct\n> > > IO for WAL\n> > > - for which I have seen significant performance benefits. But when we\n> > > afterwards have to re-read it from disk to replicate, it's less\n> > > clearly a win.\n> >\n> > Does this patch still look like a good fit for your (or someone else's)\n> > plans for direct IO here? If so, would committing this soon make it\n> > easier to make progress on that, or should we wait until it's actually\n> > needed?\n>\n> I think it'd be quite useful to have. Even with the code as of 16, I see\n> better performance in some workloads with debug_io_direct=wal,\n> wal_sync_method=open_datasync compared to any other configuration. Except of\n> course that it makes walsenders more problematic, as they suddenly require\n> read IO. Thus having support for walsenders to send directly from wal buffers\n> would be beneficial, even without further AIO infrastructure.\n\nI'm attaching the v11 patch set with the following changes:\n- Improved input validation in the function that reads WAL from WAL\nbuffers in 0001 patch.\n- Improved test module's code in 0002 patch.\n- Modernized meson build file in 0002 patch.\n- Added commit messages for both the patches.\n- Ran pgindent on both the patches.\n\nAny thoughts are welcome.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 20 Oct 2023 22:19:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Oct 20, 2023 at 10:19 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Oct 12, 2023 at 4:13 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2023-10-03 16:05:32 -0700, Jeff Davis wrote:\n> > > On Sat, 2023-01-14 at 12:34 -0800, Andres Freund wrote:\n> > > > One benefit would be that it'd make it more realistic to use direct\n> > > > IO for WAL\n> > > > - for which I have seen significant performance benefits. But when we\n> > > > afterwards have to re-read it from disk to replicate, it's less\n> > > > clearly a win.\n> > >\n> > > Does this patch still look like a good fit for your (or someone else's)\n> > > plans for direct IO here? If so, would committing this soon make it\n> > > easier to make progress on that, or should we wait until it's actually\n> > > needed?\n> >\n> > I think it'd be quite useful to have. Even with the code as of 16, I see\n> > better performance in some workloads with debug_io_direct=wal,\n> > wal_sync_method=open_datasync compared to any other configuration. Except of\n> > course that it makes walsenders more problematic, as they suddenly require\n> > read IO. Thus having support for walsenders to send directly from wal buffers\n> > would be beneficial, even without further AIO infrastructure.\n>\n> I'm attaching the v11 patch set with the following changes:\n> - Improved input validation in the function that reads WAL from WAL\n> buffers in 0001 patch.\n> - Improved test module's code in 0002 patch.\n> - Modernized meson build file in 0002 patch.\n> - Added commit messages for both the patches.\n> - Ran pgindent on both the patches.\n>\n> Any thoughts are welcome.\n\nI'm attaching v12 patch set with just pgperltidy ran on the new TAP\ntest added in 0002. No other changes from that of v11 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 21 Oct 2023 23:59:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, 2023-10-21 at 23:59 +0530, Bharath Rupireddy wrote:\n> I'm attaching v12 patch set with just pgperltidy ran on the new TAP\n> test added in 0002. No other changes from that of v11 patch set.\n\nThank you.\n\nComments:\n\n* It would be good to document that this is partially an optimization\n(read from memory first) and partially an API difference that allows\nreading unflushed data. For instance, walsender may benefit\nperformance-wise (and perhaps later with the ability to read unflushed\ndata) whereas pg_walinspect benefits primarily from reading unflushed\ndata.\n\n* Shouldn't there be a new method in XLogReaderRoutine (e.g.\nread_unflushed_data), rather than having logic in WALRead()? The\ncallers can define the method if it makes sense (and that would be a\ngood place to document why); or leave it NULL if not.\n\n* I'm not clear on the \"partial hit\" case. Wouldn't that mean you found\nthe earliest byte in the buffers, but not the latest byte requested? Is\nthat possible, and if so under what circumstances? I added an\n\"Assert(nread == 0 || nread == count)\" in WALRead() after calling\nXLogReadFromBuffers(), and it wasn't hit.\n\n* If the partial hit case is important, wouldn't XLogReadFromBuffers()\nfill in the end of the buffer rather than the beginning?\n\n* Other functions that use xlblocks, e.g. GetXLogBuffer(), use more\neffort to avoid acquiring WALBufMappingLock. Perhaps you can avoid it,\ntoo? One idea is to check that XLogCtl->xlblocks[idx] is equal to\nexpectedEndPtr both before and after the memcpy(), with appropriate\nbarriers. That could mitigate concerns expressed by Kyotaro Horiguchi\nand Masahiko Sawada.\n\n* Are you sure that reducing the number of calls to memcpy() is a win?\nI would expect that to be true only if the memcpy()s are tiny, but here\nthey are around XLOG_BLCKSZ. 
I believe this was done based on a comment\nfrom Nathan Bossart, but I didn't really follow why that's important.\nAlso, if we try to use one memcpy for all of the data, it might not\ninteract well with my idea above to avoid taking the lock.\n\n* Style-wise, the use of \"unlikely\" seems excessive, unless there's a\nreason to think it matters.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 24 Oct 2023 17:15:19 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 05:15:19PM -0700, Jeff Davis wrote:\n> * Are you sure that reducing the number of calls to memcpy() is a win?\n> I would expect that to be true only if the memcpy()s are tiny, but here\n> they are around XLOG_BLCKSZ. I believe this was done based on a comment\n> from Nathan Bossart, but I didn't really follow why that's important.\n> Also, if we try to use one memcpy for all of the data, it might not\n> interact well with my idea above to avoid taking the lock.\n\nI don't recall exactly why I suggested this, but if additional memcpy()s\nhelp in some way and don't negatively impact performance, then I retract my\nprevious comment.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 24 Oct 2023 20:54:01 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 5:45 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Comments:\n\nThanks for reviewing.\n\n> * It would be good to document that this is partially an optimization\n> (read from memory first) and partially an API difference that allows\n> reading unflushed data. For instance, walsender may benefit\n> performance-wise (and perhaps later with the ability to read unflushed\n> data) whereas pg_walinspect benefits primarily from reading unflushed\n> data.\n\nCommit message has these things covered in detail. However, I think\nadding some info in the code comments is a good idea and done around\nthe WALRead() function in the attached v13 patch set.\n\n> * Shouldn't there be a new method in XLogReaderRoutine (e.g.\n> read_unflushed_data), rather than having logic in WALRead()? The\n> callers can define the method if it makes sense (and that would be a\n> good place to document why); or leave it NULL if not.\n\nI've designed the new function XLogReadFromBuffers to read from WAL\nbuffers in such a way that one can easily embed it in page_read\ncallbacks if it makes sense. Almost all the available backend\npage_read callbacks read_local_xlog_page_no_wait,\nread_local_xlog_page, logical_read_xlog_page except XLogPageRead\n(which is used for recovery when WAL buffers aren't used at all) have\none thing in common, that is, WALRead(). Therefore, it seemed a\nnatural choice for me to call XLogReadFromBuffers. 
In other words, I'd\nsay it's the responsibility of page_read callback implementers to\ndecide if they want to read from WAL buffers or not and hence I don't\nthink we need a separate XLogReaderRoutine.\n\nIf someone wants to read unflushed WAL, the typical way to implement\nit is to write a new page_read callback\nread_local_unflushed_xlog_page/logical_read_unflushed_xlog_page or\nsimilar without WALRead() but just the new function\nXLogReadFromBuffers to read from WAL buffers and return.\n\n> * I'm not clear on the \"partial hit\" case. Wouldn't that mean you found\n> the earliest byte in the buffers, but not the latest byte requested? Is\n> that possible, and if so under what circumstances? I added an\n> \"Assert(nread == 0 || nread == count)\" in WALRead() after calling\n> XLogReadFromBuffers(), and it wasn't hit.\n>\n> * If the partial hit case is important, wouldn't XLogReadFromBuffers()\n> fill in the end of the buffer rather than the beginning?\n\nPartial hit was possible when the requested WAL pages are read one\npage at a time from WAL buffers with WALBufMappingLock\nacquisition-release for each page as the requested page can be\nreplaced by the time the lock is released and reacquired. This was the\ncase up until the v6 patch -\nhttps://www.postgresql.org/message-id/CALj2ACWTNneq2EjMDyUeWF-BnwpewuhiNEfjo9bxLwFU9iPF0w%40mail.gmail.com.\nNow the approach has been changed to read multiple pages at once\nunder one WALBufMappingLock acquisition-release.\nWe can either keep the partial hit handling (just to not miss\nanything) or turn the following partial hit case into an error or an\nAssert(false);. I prefer to keep the partial hit handling as-is just\nin case:\n+ else if (count > nread)\n+ {\n+ /*\n+ * Buffer partial hit, so reset the state to count the read bytes\n+ * and continue.\n+ */\n+ buf += nread;\n+ startptr += nread;\n+ count -= nread;\n+ }\n\n> * Other functions that use xlblocks, e.g. 
GetXLogBuffer(), use more\n> effort to avoid acquiring WALBufMappingLock. Perhaps you can avoid it,\n> too? One idea is to check that XLogCtl->xlblocks[idx] is equal to\n> expectedEndPtr both before and after the memcpy(), with appropriate\n> barriers. That could mitigate concerns expressed by Kyotaro Horiguchi\n> and Masahiko Sawada.\n\nYes, I proposed that idea in another thread -\nhttps://www.postgresql.org/message-id/CALj2ACVFSirOFiABrNVAA6JtPHvA9iu%2Bwp%3DqkM9pdLZ5mwLaFg%40mail.gmail.com.\nIf that looks okay, I can add it to the next version of this patch\nset.\n\n> * Are you sure that reducing the number of calls to memcpy() is a win?\n> I would expect that to be true only if the memcpy()s are tiny, but here\n> they are around XLOG_BLCKSZ. I believe this was done based on a comment\n> from Nathan Bossart, but I didn't really follow why that's important.\n> Also, if we try to use one memcpy for all of the data, it might not\n> interact well with my idea above to avoid taking the lock.\n\nUp until the v6 patch -\nhttps://www.postgresql.org/message-id/CALj2ACWTNneq2EjMDyUeWF-BnwpewuhiNEfjo9bxLwFU9iPF0w%40mail.gmail.com,\nthe requested WAL was being read one page at a time from WAL buffers\ninto output buffer with one memcpy call for each page. Now that the\napproach has been changed to read multiple pages at once under one\nWALBufMappingLock acquisition-release with comparatively lesser number\nof memcpy calls. 
I honestly haven't seen any difference between the\ntwo approaches -\nhttps://www.postgresql.org/message-id/CALj2ACUpQGiwQTzmoSMOFk5%3DWiJc06FcYpxzBX0SEej4ProRzg%40mail.gmail.com.\n\nThe new approach of reading multiple pages at once under one\nWALBufMappingLock acquisition-release clearly wins over reading one\npage at a time with multiple lock acquisition-release cycles.\n\n> * Style-wise, the use of \"unlikely\" seems excessive, unless there's a\n> reason to think it matters.\n\nGiven the current use of XLogReadFromBuffers, the input parameters are\npassed as expected, IOW, these are unlikely events. The comments [1]\nsay that the unlikely() is to be used in hot code paths; I think\nreading WAL from buffers is a hot code path especially when called\nfrom (logical, physical) walsenders. If there's any stronger reason\nthan the appearance/style-wise, I'm okay to not use them. For now,\nI've retained them.\n\nFWIW, I found heapam.c using unlikely() extensively for safety checks.\n\n[1]\n * Hints to the compiler about the likelihood of a branch. Both likely() and\n * unlikely() return the boolean value of the contained expression.\n *\n * These should only be used sparingly, in very hot code paths. It's very easy\n * to mis-estimate likelihoods.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 27 Oct 2023 03:46:32 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
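The partial-hit bookkeeping discussed in the last message (advance buf/startptr/count past whatever the buffers served, then fall back to disk for the rest) can be sketched against a toy page store. The page size and which pages count as resident are assumptions invented for this sketch, not anything from the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Toy model of the partial-hit handling discussed above: pages are
 * served from the "buffers" until the first missing page, and the
 * caller learns how many bytes were read so it can fall back to disk
 * for the remainder.  Page size and residency are assumptions made
 * up for this sketch.
 */
#define PAGE_SZ 16

/* In this model only the first two pages are still resident. */
static int
page_resident(uint64_t pageno)
{
    return pageno < 2;
}

/* Copy as many requested bytes as are resident; return bytes read. */
static size_t
read_from_buffers(char *buf, uint64_t startptr, size_t count)
{
    size_t nread = 0;

    while (nread < count)
    {
        uint64_t pageno = (startptr + nread) / PAGE_SZ;
        size_t off = (size_t) ((startptr + nread) % PAGE_SZ);
        size_t n = PAGE_SZ - off;

        if (!page_resident(pageno))
            break;              /* partial hit: stop at the first miss */
        if (n > count - nread)
            n = count - nread;
        /* stand-in for the memcpy() from the WAL buffer page */
        memset(buf + nread, 'A' + (int) pageno, n);
        nread += n;
    }
    return nread;
}
```

This also illustrates Jeff's distinction: a loop like this can only produce the "hit a prefix, then miss" kind of partial read, never the "startptr evicted but later pages resident" kind.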
{
    "msg_contents": "On Fri, 2023-10-27 at 03:46 +0530, Bharath Rupireddy wrote:\n> \n> Almost all the available backend\n> page_read callbacks read_local_xlog_page_no_wait,\n> read_local_xlog_page, logical_read_xlog_page except XLogPageRead\n> (which is used for recovery when WAL buffers aren't used at all) have\n> one thing in common, that is, WALRead(). Therefore, it seemed a\n> natural choice for me to call XLogReadFromBuffers. In other words,\n> I'd\n> say it's the responsibility of page_read callback implementers to\n> decide if they want to read from WAL buffers or not and hence I don't\n> think we need a separate XLogReaderRoutine.\n\nI think I see what you are saying: WALRead() is at a lower level than\nthe XLogReaderRoutine callbacks, because it's used by the .page_read\ncallbacks.\n\nThat makes sense, but my first interpretation was that WALRead() is\nabove the XLogReaderRoutine callbacks because it calls .segment_open\nand .segment_close. To me that sounds like a layering violation, but it\nexists today without your patch.\n\nI suppose the question is: should reading from the WAL buffers be an\nintentional thing that specific callers do explicitly? Or is it an\noptimization that should be hidden from the caller?\n\nI tend toward the former, at least for now. I suspect that when we do\nsome more interesting things, like replicating unflushed data, we will\nwant reading from buffers to be a separate step, not combined with\nWALRead(). 
After things in this area settle a bit then we might want to\nrefactor and combine them again.\n\n> If someone wants to read unflushed WAL, the typical way to implement\n> it is to write a new page_read callback\n> read_local_unflushed_xlog_page/logical_read_unflushed_xlog_page or\n> similar without WALRead() but just the new function\n> XLogReadFromBuffers to read from WAL buffers and return.\n\nThen why is it being called from WALRead() at all?\n\n> \n> I prefer to keep the partial hit handling as-is just\n> in case:\n> \n\nSo a \"partial hit\" is essentially a narrow race condition where one\npage is read from buffers, and it's valid; and by the time it gets to\nthe next page, it has already been evicted (along with the previously\nread page)? In other words, I think you are describing a case where\neviction is happening faster than the memcpy()s in a loop, which is\ncertainly possible due to scheduling or whatever, but doesn't seem like\nthe common case.\n\nThe case I'd expect for a partial read is when the startptr points to\nan evicted page, but some later pages in the requested range are still\npresent in the buffers.\n\nI'm not really sure whether either of these cases matters, but if we\nimplement one and not the other, there should be some explanation.\n\n> Yes, I proposed that idea in another thread -\n> https://www.postgresql.org/message-id/CALj2ACVFSirOFiABrNVAA6JtPHvA9iu%2Bwp%3DqkM9pdLZ5mwLaFg%40mail.gmail.com\n> .\n> If that looks okay, I can add it to the next version of this patch\n> set.\n\nThe code in the email above still shows a call to:\n\n /*\n * Requested WAL is available in WAL buffers, so recheck the\nexistence\n * under the WALBufMappingLock and read if the page still exists,\notherwise\n * return.\n */\n LWLockAcquire(WALBufMappingLock, LW_SHARED);\n\nand I don't think that's required. How about something like:\n\n endptr1 = XLogCtl->xlblocks[idx];\n /* Requested WAL isn't available in WAL buffers. 
*/\n if (expectedEndPtr != endptr1)\n break;\n\n pg_read_barrier();\n ...\n memcpy(buf, data, bytes_read_this_loop);\n ...\n pg_read_barrier();\n endptr2 = XLogCtl->xlblocks[idx];\n if (expectedEndPtr != endptr2)\n break;\n\n ntotal += bytes_read_this_loop;\n /* success; move on to next page */\n\nI'm not sure why GetXLogBuffer() doesn't just use pg_atomic_read_u64().\nI suppose because xlblocks are not guaranteed to be 64-bit aligned?\nShould we just align it to 64 bits so we can use atomics? (I don't\nthink it matters in this case, but atomics would avoid the need to\nthink about it.)\n> \n\n> \n> FWIW, I found heapam.c using unlikely() extensively for safety\n> checks.\n\nOK, I won't object to the use of unlikely(), though I typically don't\nuse it without a fairly strong reason to think I should override what\nthe compiler thinks and/or what branch predictors can handle.\n\nIn this case, I think some of those errors are not really necessary\nanyway, though:\n\n * XLogReadFromBuffers shouldn't take a timeline argument just to\ndemand that it's always equal to the wal insertion timeline.\n * Why check that startptr is earlier than the flush pointer, but not\nstartptr+count? Also, given that we intend to support reading unflushed\ndata, it would be good to comment that the function still works past\nthe flush pointer, and that it will be safe to remove later (right?).\n * An \"Assert(!RecoveryInProgress())\" would be more appropriate than\nan error. Perhaps we will remove even that check in the future to\nachieve cascaded replication of unflushed data.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n\n",
"msg_date": "Fri, 27 Oct 2023 13:52:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Oct 28, 2023 at 2:22 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> I think I see what you are saying: WALRead() is at a lower level than\n> the XLogReaderRoutine callbacks, because it's used by the .page_read\n> callbacks.\n>\n> That makes sense, but my first interpretation was that WALRead() is\n> above the XLogReaderRoutine callbacks because it calls .segment_open\n> and .segment_close. To me that sounds like a layering violation, but it\n> exists today without your patch.\n\nRight. WALRead() is a common function used by most if not all\npage_read callbacks. Typically, the page_read callbacks code has 2\nparts - first determine the target/start LSN and second read WAL (via\nWALRead() for instance).\n\n> I suppose the question is: should reading from the WAL buffers an\n> intentional thing that the caller does explicitly by specific callers?\n> Or is it an optimization that should be hidden from the caller?\n>\n> I tend toward the former, at least for now.\n\nYes, it's an optimization that must be hidden from the caller.\n\n> I suspect that when we do\n> some more interesting things, like replicating unflushed data, we will\n> want reading from buffers to be a separate step, not combined with\n> WALRead(). After things in this area settle a bit then we might want to\n> refactor and combine them again.\n\nAs said previously, the new XLogReadFromBuffers() function is generic\nand extensible in the way that anyone with a target/start LSN\n(corresponding to flushed or written-but-not-yet-flushed WAL) and TLI\ncan call it to read from WAL buffers. It's just that the patch\ncurrently uses it where it makes sense i.e. in WALRead(). 
But, it's\nusable in, say, a page_read callback reading unflushed WAL from WAL\nbuffers.\n\n> > If someone wants to read unflushed WAL, the typical way to implement\n> > it is to write a new page_read callback\n> > read_local_unflushed_xlog_page/logical_read_unflushed_xlog_page or\n> > similar without WALRead() but just the new function\n> > XLogReadFromBuffers to read from WAL buffers and return.\n>\n> Then why is it being called from WALRead() at all?\n\nThe patch focuses on reading flushed WAL from WAL buffers if\navailable, not the unflushed WAL at all; that's why it's in WALRead()\nbefore reading from the WAL file using pg_pread().\n\nI'm trying to make a point that the XLogReadFromBuffers() enables one\nto read unflushed WAL from WAL buffers (if really wanted for future\nfeatures like replicate from WAL buffers as a new opt-in feature to\nimprove the replication performance).\n\n> > I prefer to keep the partial hit handling as-is just\n> > in case:\n>\n> So a \"partial hit\" is essentially a narrow race condition where one\n> page is read from buffers, and it's valid; and by the time it gets to\n> the next page, it has already been evicted (along with the previously\n> read page)?\n>\n> In other words, I think you are describing a case where\n> eviction is happening faster than the memcpy()s in a loop, which is\n> certainly possible due to scheduling or whatever, but doesn't seem like\n> the common case.\n>\n> The case I'd expect for a partial read is when the startptr points to\n> an evicted page, but some later pages in the requested range are still\n> present in the buffers.\n>\n> I'm not really sure whether either of these cases matters, but if we\n> implement one and not the other, there should be some explanation.\n\nAt any given point of time, WAL buffer pages are maintained as a\ncircularly sorted array in an ascending order from\nOldestInitializedPage to InitializedUpTo (new pages are inserted at\nthis end). 
Also, the current patch holds WALBufMappingLock while\nreading the buffer pages, meaning, no one can replace the buffer pages\nuntil reading is finished. Therefore, the above two described partial\nhit cases can't happen - when reading multiple pages if the first page\nis found to be existing in the buffer pages, it means the other pages\nmust exist too because of the circular and sortedness of the WAL\nbuffer page array.\n\nHere's an illustration with WAL buffers circular array (idx, LSN) of\nsize 10 elements with contents as {(0, 160), (1, 170), (2, 180), (3,\n90), (4, 100), (5, 110), (6, 120), (7, 130), (8, 140), (9, 150)} and\ncurrent InitializedUpTo pointing to page at LSN 180, idx 2.\n- Read 6 pages starting from LSN 80. Nothing is read from WAL buffers\nas the page at LSN 80 doesn't exist despite other 5 pages starting\nfrom LSN 90 exist.\n- Read 6 pages starting from LSN 90. All the pages exist and are read\nfrom WAL buffers.\n- Read 6 pages starting from LSN 150. Note that WAL is currently\nflushed only up to page at LSN 180 and the callers won't ask for\nunflushed WAL read. If a caller asks for an unflushed WAL read\nintentionally or unintentionally, XLogReadFromBuffers() reads only 4\npages starting from LSN 150 to LSN 180 and will leave the remaining 2\npages for the caller to deal with. This is the partial hit that can\nhappen. 
Therefore, retaining the partial hit code in WALRead() as-is\nin the current patch is needed IMV.\n\n> > Yes, I proposed that idea in another thread -\n> > https://www.postgresql.org/message-id/CALj2ACVFSirOFiABrNVAA6JtPHvA9iu%2Bwp%3DqkM9pdLZ5mwLaFg%40mail.gmail.com\n> > .\n> > If that looks okay, I can add it to the next version of this patch\n> > set.\n>\n> The code in the email above still shows a call to:\n>\n> /*\n> * Requested WAL is available in WAL buffers, so recheck the\n> existence\n> * under the WALBufMappingLock and read if the page still exists,\n> otherwise\n> * return.\n> */\n> LWLockAcquire(WALBufMappingLock, LW_SHARED);\n>\n> and I don't think that's required. How about something like:\n>\n> endptr1 = XLogCtl->xlblocks[idx];\n> /* Requested WAL isn't available in WAL buffers. */\n> if (expectedEndPtr != endptr1)\n> break;\n>\n> pg_read_barrier();\n> ...\n> memcpy(buf, data, bytes_read_this_loop);\n> ...\n> pg_read_barrier();\n> endptr2 = XLogCtl->xlblocks[idx];\n> if (expectedEndPtr != endptr2)\n> break;\n>\n> ntotal += bytes_read_this_loop;\n> /* success; move on to next page */\n>\n> I'm not sure why GetXLogBuffer() doesn't just use pg_atomic_read_u64().\n> I suppose because xlblocks are not guaranteed to be 64-bit aligned?\n> Should we just align it to 64 bits so we can use atomics? (I don't\n> think it matters in this case, but atomics would avoid the need to\n> think about it.)\n\nWALBufMappingLock protects both xlblocks and WAL buffer pages [1][2].\nI'm not sure how using the memory barrier, not WALBufMappingLock,\nprevents writers from replacing WAL buffer pages while readers reading\nthe pages. 
FWIW, GetXLogBuffer() reads the xlblocks value without the\nlock but it confirms the WAL existence under the lock and gets the WAL\nbuffer page under the lock [3].\n\nI'll reiterate the WALBufMappingLock thing for this patch - the idea\nis to know whether or not the WAL at a given LSN exists in WAL buffers\nwithout acquiring WALBufMappingLock; if exists acquire the lock in\nshared mode, read from WAL buffers and then release. WAL buffer pages\nare organized as a circular array with the InitializedUpTo as the\nlatest filled WAL buffer page. If there's a way to track the oldest\nfilled WAL buffer page (OldestInitializedPage), at any given point of\ntime, the elements of the circular array are sorted in an ascending\norder from OldestInitializedPage to InitializedUpTo. With this\napproach, no lock is required to know if the WAL at given LSN exists\nin WAL buffers, we can just do this if lsn >=\nXLogCtl->OldestInitializedPage && lsn < XLogCtl->InitializedUpTo. I\nproposed this idea here\nhttps://www.postgresql.org/message-id/CALj2ACVgi6LirgLDZh%3DFdfdvGvKAD%3D%3DWTOSWcQy%3DAtNgPDVnKw%40mail.gmail.com.\nI've pulled that patch in here as 0001 to showcase its use for this\nfeature.\n\n> * Why check that startptr is earlier than the flush pointer, but not\n> startptr+count? Also, given that we intend to support reading unflushed\n> data, it would be good to comment that the function still works past\n> the flush pointer, and that it will be safe to remove later (right?).\n\nThat won't work, see the comment below. Actual flushed LSN may not\nalways be greater than startptr+count. GetFlushRecPtr() check in\nXLogReadFromBuffers() is similar to what pg_walinspect has in\nGetCurrentLSN().\n\n /*\n * Even though we just determined how much of the page can be validly read\n * as 'count', read the whole page anyway. 
It's guaranteed to be\n * zero-padded up to the page boundary if it's incomplete.\n */\n if (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,\n &errinfo))\n\n> * An \"Assert(!RecoveryInProgress())\" would be more appropriate than\n> an error. Perhaps we will remove even that check in the future to\n> achieve cascaded replication of unflushed data.\n>\n> In this case, I think some of those errors are not really necessary\n> anyway, though:\n>\n> * XLogReadFromBuffers shouldn't take a timeline argument just to\n> demand that it's always equal to the wal insertion timeline.\n\nI've changed XLogReadFromBuffers() to return as-if nothing was read\n(cache miss) when the server is in recovery or the requested TLI is\nnot the current server's insertion TLI. It is better than failing with\nERRORs so that the callers don't have to have any checks for recovery\nor TLI.\n\nPSA v14 patch set.\n\n[1]\n * WALBufMappingLock: must be held to replace a page in the WAL buffer cache.\n\n[2]\n * and xlblocks values certainly do. xlblocks values are protected by\n * WALBufMappingLock.\n */\n char *pages; /* buffers for unwritten XLOG pages */\n XLogRecPtr *xlblocks; /* 1st byte ptr-s + XLOG_BLCKSZ */\n\n[3]\n * However, we don't hold a lock while we read the value. If someone has\n * just initialized the page, it's possible that we get a \"torn read\" of\n * the XLogRecPtr if 64-bit fetches are not atomic on this platform. In\n * that case we will see a bogus value. That's ok, we'll grab the mapping\n * lock (in AdvanceXLInsertBuffer) and retry if we see anything else than\n * the page we're looking for. But it means that when we do this unlocked\n * read, we might see a value that appears to be ahead of the page we're\n * looking for. 
Don't PANIC on that, until we've verified the value while\n * holding the lock.\n */\n expectedEndPtr = ptr;\n expectedEndPtr += XLOG_BLCKSZ - ptr % XLOG_BLCKSZ;\n\n endptr = XLogCtl->xlblocks[idx];\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 2 Nov 2023 22:38:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, 2023-11-02 at 22:38 +0530, Bharath Rupireddy wrote:\n> > I suppose the question is: should reading from the WAL buffers an\n> > intentional thing that the caller does explicitly by specific\n> > callers?\n> > Or is it an optimization that should be hidden from the caller?\n> > \n> > I tend toward the former, at least for now.\n> \n> Yes, it's an optimization that must be hidden from the caller.\n\nAs I said, I tend toward the opposite: that specific callers should\nread from the buffers explicitly in the cases where it makes sense.\n\nI don't think this is the most important point right now though, let's\nsort out the other details.\n\n> > \n> At any given point of time, WAL buffer pages are maintained as a\n> circularly sorted array in an ascending order from\n> OldestInitializedPage to InitializedUpTo (new pages are inserted at\n> this end).\n\nI don't see any reference to OldestInitializedPage or anything like it,\nwith or without your patch. Am I missing something?\n\n> - Read 6 pages starting from LSN 80. Nothing is read from WAL buffers\n> as the page at LSN 80 doesn't exist despite other 5 pages starting\n> from LSN 90 exist.\n\nThis is what I imagined a \"partial hit\" was: read the 5 pages starting\nat 90. The caller would then need to figure out how to read the page at\nLSN 80 from the segment files.\n\nI am not saying we should support this case; perhaps it doesn't matter.\nI'm just describing why that term was confusing for me.\n\n> If a caller asks for an unflushed WAL read\n> intentionally or unintentionally, XLogReadFromBuffers() reads only 4\n> pages starting from LSN 150 to LSN 180 and will leave the remaining 2\n> pages for the caller to deal with. This is the partial hit that can\n> happen.\n\nTo me that's more like an EOF case. \"Partial hit\" sounds to me like the\ndata exists but is not available in the cache (i.e. 
go to the segment\nfiles); whereas if it encountered the end, the data is not available at\nall.\n\n> > \n> WALBufMappingLock protects both xlblocks and WAL buffer pages [1][2].\n> I'm not sure how using the memory barrier, not WALBufMappingLock,\n> prevents writers from replacing WAL buffer pages while readers\n> reading\n> the pages.\n\nIt doesn't *prevent* that case, but it does *detect* that case. We\ndon't want to prevent writers from replacing WAL buffers, because that\nwould mean we are slowing down the critical WAL writing path.\n\nLet me explain the potential problem cases, and how the barriers\nprevent them:\n\nPotential problem 1: the page is not yet resident in the cache at the\ntime the memcpy begins. In this case, the first read barrier would\nensure that the page is also not yet resident at the time xlblocks[idx]\nis read into endptr1, and we'd break out of the loop.\n\nPotential problem 2: the page is evicted before the memcpy finishes. In\nthis case, the second read barrier would ensure that the page was also\nevicted before xlblocks[idx] is read into endptr2, and again we'd\ndetect the problem and break out of the loop.\n\nI assume here that, if xlblocks[idx] holds the endPtr of the desired\npage, all of the bytes for that page are resident at that moment. I\ndon't think that's true right now: AdvanceXLInsertBuffers() zeroes the\nold page before updating xlblocks[nextidx]. I think it needs something\nlike:\n\n pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx], InvalidXLogRecPtr);\n pg_write_barrier();\n\nbefore the MemSet.\n\nI didn't review your latest v14 patch yet.\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 00:05:06 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Nov 3, 2023 at 12:35 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2023-11-02 at 22:38 +0530, Bharath Rupireddy wrote:\n> > > I suppose the question is: should reading from the WAL buffers an\n> > > intentional thing that the caller does explicitly by specific\n> > > callers?\n> > > Or is it an optimization that should be hidden from the caller?\n> > >\n> > > I tend toward the former, at least for now.\n> >\n> > Yes, it's an optimization that must be hidden from the caller.\n>\n> As I said, I tend toward the opposite: that specific callers should\n> read from the buffers explicitly in the cases where it makes sense.\n\nHow about adding a bool flag (read_from_wal_buffers) to\nXLogReaderState so that the callers can set it if they want this\nfacility via XLogReaderAllocate()?\n\n> > At any given point of time, WAL buffer pages are maintained as a\n> > circularly sorted array in an ascending order from\n> > OldestInitializedPage to InitializedUpTo (new pages are inserted at\n> > this end).\n>\n> I don't see any reference to OldestInitializedPage or anything like it,\n> with or without your patch. Am I missing something?\n\nOldestInitializedPage is introduced in v14-0001 patch. Please have a look.\n\n> > - Read 6 pages starting from LSN 80. Nothing is read from WAL buffers\n> > as the page at LSN 80 doesn't exist despite other 5 pages starting\n> > from LSN 90 exist.\n>\n> This is what I imagined a \"partial hit\" was: read the 5 pages starting\n> at 90. The caller would then need to figure out how to read the page at\n> LSN 80 from the segment files.\n>\n> I am not saying we should support this case; perhaps it doesn't matter.\n> I'm just describing why that term was confusing for me.\n\nOkay. 
Current patch doesn't support this case.\n\n> > If a caller asks for an unflushed WAL read\n> > intentionally or unintentionally, XLogReadFromBuffers() reads only 4\n> > pages starting from LSN 150 to LSN 180 and will leave the remaining 2\n> > pages for the caller to deal with. This is the partial hit that can\n> > happen.\n>\n> To me that's more like an EOF case. \"Partial hit\" sounds to me like the\n> data exists but is not available in the cache (i.e. go to the segment\n> files); whereas if it encountered the end, the data is not available at\n> all.\n\nRight. We can tweak the comments around \"partial hit\" if required.\n\n> > WALBufMappingLock protects both xlblocks and WAL buffer pages [1][2].\n> > I'm not sure how using the memory barrier, not WALBufMappingLock,\n> > prevents writers from replacing WAL buffer pages while readers\n> > reading\n> > the pages.\n>\n> It doesn't *prevent* that case, but it does *detect* that case. We\n> don't want to prevent writers from replacing WAL buffers, because that\n> would mean we are slowing down the critical WAL writing path.\n>\n> Let me explain the potential problem cases, and how the barriers\n> prevent them:\n>\n> Potential problem 1: the page is not yet resident in the cache at the\n> time the memcpy begins. In this case, the first read barrier would\n> ensure that the page is also not yet resident at the time xlblocks[idx]\n> is read into endptr1, and we'd break out of the loop.\n>\n> Potential problem 2: the page is evicted before the memcpy finishes. In\n> this case, the second read barrier would ensure that the page was also\n> evicted before xlblocks[idx] is read into endptr2, and again we'd\n> detect the problem and break out of the loop.\n\nUnderstood.\n\n> I assume here that, if xlblocks[idx] holds the endPtr of the desired\n> page, all of the bytes for that page are resident at that moment. 
I\n> don't think that's true right now: AdvanceXLInsertBuffers() zeroes the\n> old page before updating xlblocks[nextidx].\n\nRight.\n\n> I think it needs something like:\n>\n> pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx], InvalidXLogRecPtr);\n> pg_write_barrier();\n>\n> before the MemSet.\n\nI think it works. First, xlblocks needs to be turned to an array of\n64-bit atomics and then the above change. With this, all those who\nreads xlblocks with or without WALBufMappingLock also need to check if\nxlblocks[idx] is ever InvalidXLogRecPtr and take appropriate action.\n\nI'm sure you have seen the following. It looks like I'm leaning\ntowards the claim that it's safe to read xlblocks without\nWALBufMappingLock. I'll put up a patch for these changes separately.\n\n /*\n * Make sure the initialization of the page becomes visible to others\n * before the xlblocks update. GetXLogBuffer() reads xlblocks without\n * holding a lock.\n */\n pg_write_barrier();\n\n *((volatile XLogRecPtr *) &XLogCtl->xlblocks[nextidx]) = NewPageEndPtr;\n\nI think the 3 things that helps read from WAL buffers without\nWALBufMappingLock are: 1) couple of the read barriers in\nXLogReadFromBuffers, 2) atomically initializing xlblocks[idx] to\nInvalidXLogRecPtr plus a write barrier in AdvanceXLInsertBuffer(), 3)\nthe following sanity check to see if the read page is valid in\nXLogReadFromBuffers(). If it sounds sensible, I'll work towards coding\nit up. Thoughts?\n\n+ , we\n+ * need to ensure that we are not reading a page that just got\n+ * initialized. For this, we look at the needed page header.\n+ */\n+ phdr = (XLogPageHeader) page;\n+\n+ /* Return, if WAL buffer page doesn't look valid. */\n+ if (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n+ phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n+ phdr->xlp_tli == tli))\n+ break;\n+\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 3 Nov 2023 20:23:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, 2023-11-03 at 20:23 +0530, Bharath Rupireddy wrote:\n> > \n> OldestInitializedPage is introduced in v14-0001 patch. Please have a\n> look.\n\nI don't see why that's necessary if we move to the algorithm I\nsuggested below that doesn't require a lock.\n\n> > \n> Okay. Current patch doesn't support this [partial hit of newer pages]\n> case.\n\nOK, no need to support it until you see a reason.\n> > \n\n> > \n> > I think it needs something like:\n> > \n> > pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx],\n> > InvalidXLogRecPtr);\n> > pg_write_barrier();\n> > \n> > before the MemSet.\n> \n> I think it works. First, xlblocks needs to be turned to an array of\n> 64-bit atomics and then the above change.\n\nDoes anyone see a reason we shouldn't move to atomics here?\n\n> \n> pg_write_barrier();\n> \n> *((volatile XLogRecPtr *) &XLogCtl->xlblocks[nextidx]) =\n> NewPageEndPtr;\n\nI am confused why the \"volatile\" is required on that line (not from\nyour patch). I sent a separate message about that:\n\nhttps://www.postgresql.org/message-id/784f72ac09061fe5eaa5335cc347340c367c73ac.camel@j-davis.com\n\n> I think the 3 things that helps read from WAL buffers without\n> WALBufMappingLock are: 1) couple of the read barriers in\n> XLogReadFromBuffers, 2) atomically initializing xlblocks[idx] to\n> InvalidXLogRecPtr plus a write barrier in AdvanceXLInsertBuffer(), 3)\n> the following sanity check to see if the read page is valid in\n> XLogReadFromBuffers(). If it sounds sensible, I'll work towards\n> coding\n> it up. Thoughts?\n\nI like it. I think it will ultimately be a fairly simple loop. And by\nmoving to atomics, we won't need the delicate comment in\nGetXLogBuffer().\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 03 Nov 2023 12:47:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-03 20:23:30 +0530, Bharath Rupireddy wrote:\n> On Fri, Nov 3, 2023 at 12:35 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Thu, 2023-11-02 at 22:38 +0530, Bharath Rupireddy wrote:\n> > > > I suppose the question is: should reading from the WAL buffers an\n> > > > intentional thing that the caller does explicitly by specific\n> > > > callers?\n> > > > Or is it an optimization that should be hidden from the caller?\n> > > >\n> > > > I tend toward the former, at least for now.\n> > >\n> > > Yes, it's an optimization that must be hidden from the caller.\n> >\n> > As I said, I tend toward the opposite: that specific callers should\n> > read from the buffers explicitly in the cases where it makes sense.\n> \n> How about adding a bool flag (read_from_wal_buffers) to\n> XLogReaderState so that the callers can set it if they want this\n> facility via XLogReaderAllocate()?\n\nThat seems wrong architecturally - why should xlogreader itself know about any\nof this? What would it mean in frontend code if read_from_wal_buffers were\nset? IMO this is something that should happen purely within the read function.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 16:58:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-02 22:38:38 +0530, Bharath Rupireddy wrote:\n> From 5b5469d7dcd8e98bfcaf14227e67356bbc1f5fe8 Mon Sep 17 00:00:00 2001\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> Date: Thu, 2 Nov 2023 15:10:51 +0000\n> Subject: [PATCH v14] Track oldest initialized WAL buffer page\n>\n> ---\n> src/backend/access/transam/xlog.c | 170 ++++++++++++++++++++++++++++++\n> src/include/access/xlog.h | 1 +\n> 2 files changed, 171 insertions(+)\n>\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index b541be8eec..fdf2ef310b 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -504,6 +504,45 @@ typedef struct XLogCtlData\n> \tXLogRecPtr *xlblocks;\t\t/* 1st byte ptr-s + XLOG_BLCKSZ */\n> \tint\t\t\tXLogCacheBlck;\t/* highest allocated xlog buffer index */\n>\n> +\t/*\n> +\t * Start address of oldest initialized page in XLog buffers.\n> +\t *\n> +\t * We mainly track oldest initialized page explicitly to quickly tell if a\n> +\t * given WAL record is available in XLog buffers. It also can be used for\n> +\t * other purposes, see notes below.\n> +\t *\n> +\t * OldestInitializedPage gives XLog buffers following properties:\n> +\t *\n> +\t * 1) At any given point of time, pages in XLog buffers array are sorted\n> +\t * in an ascending order from OldestInitializedPage till InitializedUpTo.\n> +\t * Note that we verify this property for assert-only builds, see\n> +\t * IsXLogBuffersArraySorted() for more details.\n\nThis is true - but also not, if you look at it a bit too literally. 
The\nbuffers in xlblocks itself obviously aren't ordered when wrapping around\nbetween XLogRecPtrToBufIdx(OldestInitializedPage) and\nXLogRecPtrToBufIdx(InitializedUpTo).\n\n\n> +\t * 2) OldestInitializedPage is monotonically increasing (by virtue of how\n> +\t * postgres generates WAL records), that is, its value never decreases.\n> +\t * This property lets someone read its value without a lock. There's no\n> +\t * problem even if its value is slightly stale i.e. concurrently being\n> +\t * updated. One can still use it for finding if a given WAL record is\n> +\t * available in XLog buffers. At worst, one might get false positives\n> +\t * (i.e. OldestInitializedPage may tell that the WAL record is available\n> +\t * in XLog buffers, but when one actually looks at it, it isn't really\n> +\t * available). This is more efficient and performant than acquiring a lock\n> +\t * for reading. Note that we may not need a lock to read\n> +\t * OldestInitializedPage but we need to update it holding\n> +\t * WALBufMappingLock.\n\nI'd\ns/may not need/do not need/\n\nBut perhaps rephrase it a bit more, to something like:\n\nTo update OldestInitializedPage, WALBufMappingLock needs to be held\nexclusively, for reading no lock is required.\n\n\n> +\t *\n> +\t * 3) One can start traversing XLog buffers from OldestInitializedPage\n> +\t * till InitializedUpTo to list out all valid WAL records and stats, and\n> +\t * expose them via SQL-callable functions to users.\n> +\t *\n> +\t * 4) XLog buffers array is inherently organized as a circular, sorted and\n> +\t * rotated array with OldestInitializedPage as pivot with the property\n> +\t * where LSN of previous buffer page (if valid) is greater than\n> +\t * OldestInitializedPage and LSN of next buffer page (if valid) is greater\n> +\t * than OldestInitializedPage.\n> +\t */\n> +\tXLogRecPtr\tOldestInitializedPage;\n\nIt seems a bit odd to name a LSN containing variable *Page...\n\n\n> \t/*\n> \t * InsertTimeLineID is the timeline into 
which new WAL is being inserted\n> \t * and flushed. It is zero during recovery, and does not change once set.\n> @@ -590,6 +629,10 @@ static ControlFileData *ControlFile = NULL;\n> #define NextBufIdx(idx)\t\t\\\n> \t\t(((idx) == XLogCtl->XLogCacheBlck) ? 0 : ((idx) + 1))\n>\n> +/* Macro to retreat to previous buffer index. */\n> +#define PreviousBufIdx(idx)\t\t\\\n> +\t\t(((idx) == 0) ? XLogCtl->XLogCacheBlck : ((idx) - 1))\n\nI think it might be worth making these inlines and adding assertions that idx\nis not bigger than XLogCtl->XLogCacheBlck?\n\n\n> /*\n> * XLogRecPtrToBufIdx returns the index of the WAL buffer that holds, or\n> * would hold if it was in cache, the page containing 'recptr'.\n> @@ -708,6 +751,10 @@ static void WALInsertLockAcquireExclusive(void);\n> static void WALInsertLockRelease(void);\n> static void WALInsertLockUpdateInsertingAt(XLogRecPtr insertingAt);\n>\n> +#ifdef USE_ASSERT_CHECKING\n> +static bool IsXLogBuffersArraySorted(void);\n> +#endif\n> +\n> /*\n> * Insert an XLOG record represented by an already-constructed chain of data\n> * chunks. This is a low-level routine; to construct the WAL record header\n> @@ -1992,6 +2039,52 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, TimeLineID tli, bool opportunistic)\n> \t\tXLogCtl->InitializedUpTo = NewPageEndPtr;\n>\n> \t\tnpages++;\n> +\n> +\t\t/*\n> +\t\t * Try updating oldest initialized XLog buffer page.\n> +\t\t *\n> +\t\t * Update it if we are initializing an XLog buffer page for the first\n> +\t\t * time or if XLog buffers are full and we are wrapping around.\n> +\t\t */\n> +\t\tif (XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) ||\n> +\t\t\tXLogRecPtrToBufIdx(XLogCtl->OldestInitializedPage) == nextidx)\n> +\t\t{\n> +\t\t\tAssert(XLogCtl->OldestInitializedPage < NewPageBeginPtr);\n> +\n> +\t\t\tXLogCtl->OldestInitializedPage = NewPageBeginPtr;\n> +\t\t}\n\nWait, isn't this too late? At this point the buffer can already be used by\nGetXLogBuffers(). 
I think this needs to happen at the latest just before\n\t\t*((volatile XLogRecPtr *) &XLogCtl->xlblocks[nextidx]) = NewPageEndPtr;\n\n\nWhy is it legal to get here with XLogCtl->OldestInitializedPage being invalid?\n\n\n> +\n> +/*\n> + * Returns whether or not a given WAL record is available in XLog buffers.\n> + *\n> + * Note that we don't read OldestInitializedPage under a lock, see description\n> + * near its definition in xlog.c for more details.\n> + *\n> + * Note that caller needs to pass in an LSN known to the server, not a future\n> + * or unwritten or unflushed LSN.\n> + */\n> +bool\n> +IsWALRecordAvailableInXLogBuffers(XLogRecPtr lsn)\n> +{\n> +\tif (!XLogRecPtrIsInvalid(lsn) &&\n> +\t\t!XLogRecPtrIsInvalid(XLogCtl->OldestInitializedPage) &&\n> +\t\tlsn >= XLogCtl->OldestInitializedPage &&\n> +\t\tlsn < XLogCtl->InitializedUpTo)\n> +\t{\n> +\t\treturn true;\n> +\t}\n> +\n> +\treturn false;\n> +}\n> diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h\n> index a14126d164..35235010e6 100644\n> --- a/src/include/access/xlog.h\n> +++ b/src/include/access/xlog.h\n> @@ -261,6 +261,7 @@ extern void ReachedEndOfBackup(XLogRecPtr EndRecPtr, TimeLineID tli);\n> extern void SetInstallXLogFileSegmentActive(void);\n> extern bool IsInstallXLogFileSegmentActive(void);\n> extern void XLogShutdownWalRcv(void);\n> +extern bool IsWALRecordAvailableInXLogBuffers(XLogRecPtr lsn);\n>\n> /*\n> * Routines to start, stop, and get status of a base backup.\n> --\n> 2.34.1\n>\n\n> From db027d8f1dcb53ebceef0135287f120acf67cc21 Mon Sep 17 00:00:00 2001\n> From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\n> Date: Thu, 2 Nov 2023 15:36:11 +0000\n> Subject: [PATCH v14] Allow WAL reading from WAL buffers\n>\n> This commit adds WALRead() the capability to read WAL from WAL\n> buffers when possible. When requested WAL isn't available in WAL\n> buffers, the WAL is read from the WAL file as usual. 
It relies on\n> WALBufMappingLock so that no one replaces the WAL buffer page that\n> we're reading from. It skips reading from WAL buffers if\n> WALBufMappingLock can't be acquired immediately. In other words,\n> it doesn't wait for WALBufMappingLock to be available. This helps\n> reduce the contention on WALBufMappingLock.\n>\n> This commit benefits the callers of WALRead(), that are walsenders\n> and pg_walinspect. They can now avoid reading WAL from the WAL\n> file (possibly avoiding disk IO). Tests show that the WAL buffers\n> hit ratio stood at 95% for 1 primary, 1 sync standby, 1 async\n> standby, with pgbench --scale=300 --client=32 --time=900. In other\n> words, the walsenders avoided 95% of the time reading from the\n> file/avoided pread system calls:\n> https://www.postgresql.org/message-id/CALj2ACXKKK%3DwbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54%2BNa%3DQ%40mail.gmail.com\n>\n> This commit also benefits when direct IO is enabled for WAL.\n> Reading WAL from WAL buffers puts back the performance close to\n> that of without direct IO for WAL:\n> https://www.postgresql.org/message-id/CALj2ACV6rS%2B7iZx5%2BoAvyXJaN4AG-djAQeM1mrM%3DYSDkVrUs7g%40mail.gmail.com\n>\n> This commit also paves the way for the following features in\n> future:\n> - Improves synchronous replication performance by replicating\n> directly from WAL buffers.\n> - A opt-in way for the walreceivers to receive unflushed WAL.\n> More details here:\n> https://www.postgresql.org/message-id/20231011224353.cl7c2s222dw3de4j%40awork3.anarazel.de\n>\n> Author: Bharath Rupireddy\n> Reviewed-by: Dilip Kumar, Andres Freund\n> Reviewed-by: Nathan Bossart, Kuntal Ghosh\n> Discussion: https://www.postgresql.org/message-id/CALj2ACXKKK%3DwbiG5_t6dGao5GoecMwRkhr7GjVBM_jg54%2BNa%3DQ%40mail.gmail.com\n> ---\n> src/backend/access/transam/xlog.c | 205 ++++++++++++++++++++++++\n> src/backend/access/transam/xlogreader.c | 41 ++++-\n> src/include/access/xlog.h | 6 +\n> 3 files changed, 250 insertions(+), 2 
deletions(-)\n>\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n> index fdf2ef310b..ff5dccaaa7 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -1753,6 +1753,211 @@ GetXLogBuffer(XLogRecPtr ptr, TimeLineID tli)\n> \treturn cachedPos + ptr % XLOG_BLCKSZ;\n> }\n\n>\n\n> +/*\n> + * Read WAL from WAL buffers.\n> + *\n> + * Read 'count' bytes of WAL from WAL buffers into 'buf', starting at location\n> + * 'startptr', on timeline 'tli' and return total read bytes.\n> + *\n> + * This function returns quickly in the following cases:\n> + * - When passed-in timeline is different than server's current insertion\n> + * timeline as WAL is always inserted into WAL buffers on insertion timeline.\n> + * - When server is in recovery as WAL buffers aren't currently used in\n> + * recovery.\n> + *\n> + * Note that this function reads as much as it can from WAL buffers, meaning,\n> + * it may not read all the requested 'count' bytes. 
Caller must be aware of\n> + * this and deal with it.\n> + *\n> + * Note that this function is not available for frontend code as WAL buffers is\n> + * an internal mechanism to the server.\n>\n> + */\n> +Size\n> +XLogReadFromBuffers(XLogReaderState *state,\n> +\t\t\t\t\tXLogRecPtr startptr,\n> +\t\t\t\t\tTimeLineID tli,\n> +\t\t\t\t\tSize count,\n> +\t\t\t\t\tchar *buf)\n> +{\n> +\tXLogRecPtr\tptr;\n> +\tXLogRecPtr\tcur_lsn;\n> +\tSize\t\tnbytes;\n> +\tSize\t\tntotal;\n> +\tSize\t\tnbatch;\n> +\tchar\t *batchstart;\n> +\n> +\tif (RecoveryInProgress())\n> +\t\treturn 0;\n>\n> +\tif (tli != GetWALInsertionTimeLine())\n> +\t\treturn 0;\n> +\n> +\tAssert(!XLogRecPtrIsInvalid(startptr));\n> +\n> +\tcur_lsn = GetFlushRecPtr(NULL);\n> +\tif (unlikely(startptr > cur_lsn))\n> +\t\telog(ERROR, \"WAL start LSN %X/%X specified for reading from WAL buffers must be less than current database system WAL LSN %X/%X\",\n> +\t\t\t LSN_FORMAT_ARGS(startptr), LSN_FORMAT_ARGS(cur_lsn));\n\nHm, why does this check belong here? For some tools it might be legitimate to\nread the WAL before it was fully flushed.\n\n\n\n> +\t/*\n> +\t * Holding WALBufMappingLock ensures inserters don't overwrite this value\n> +\t * while we are reading it. We try to acquire it in shared mode so that\n> +\t * the concurrent WAL readers are also allowed. We try to do as less work\n> +\t * as possible while holding the lock to not impact concurrent WAL writers\n> +\t * much. We quickly exit to not cause any contention, if the lock isn't\n> +\t * immediately available.\n> +\t */\n> +\tif (!LWLockConditionalAcquire(WALBufMappingLock, LW_SHARED))\n> +\t\treturn 0;\n\nThat seems problematic - that lock is often heavily contended. 
We could\ninstead check IsWALRecordAvailableInXLogBuffers() once before reading the\npage, then read the page contents *without* holding a lock, and then check\nIsWALRecordAvailableInXLogBuffers() again - if the page was replaced in the\ninterim we read bogus data, but that's a bit of a wasted effort.\n\n\n> +\tptr = startptr;\n> +\tnbytes = count;\t\t\t\t/* Total bytes requested to be read by caller. */\n> +\tntotal = 0;\t\t\t\t\t/* Total bytes read. */\n> +\tnbatch = 0;\t\t\t\t\t/* Bytes to be read in single batch. */\n> +\tbatchstart = NULL;\t\t\t/* Location to read from for single batch. */\n\nWhat does \"batch\" mean?\n\n\n> +\twhile (nbytes > 0)\n> +\t{\n> +\t\tXLogRecPtr\texpectedEndPtr;\n> +\t\tXLogRecPtr\tendptr;\n> +\t\tint\t\t\tidx;\n> +\t\tchar\t *page;\n> +\t\tchar\t *data;\n> +\t\tXLogPageHeader phdr;\n> +\n> +\t\tidx = XLogRecPtrToBufIdx(ptr);\n> +\t\texpectedEndPtr = ptr;\n> +\t\texpectedEndPtr += XLOG_BLCKSZ - ptr % XLOG_BLCKSZ;\n> +\t\tendptr = XLogCtl->xlblocks[idx];\n> +\n> +\t\t/* Requested WAL isn't available in WAL buffers. */\n> +\t\tif (expectedEndPtr != endptr)\n> +\t\t\tbreak;\n> +\n> +\t\t/*\n> +\t\t * We found WAL buffer page containing given XLogRecPtr. Get starting\n> +\t\t * address of the page and a pointer to the right location of given\n> +\t\t * XLogRecPtr in that page.\n> +\t\t */\n> +\t\tpage = XLogCtl->pages + idx * (Size) XLOG_BLCKSZ;\n> +\t\tdata = page + ptr % XLOG_BLCKSZ;\n> +\n> +\t\t/*\n> +\t\t * The fact that we acquire WALBufMappingLock while reading the WAL\n> +\t\t * buffer page itself guarantees that no one else initializes it or\n> +\t\t * makes it ready for next use in AdvanceXLInsertBuffer(). However, we\n> +\t\t * need to ensure that we are not reading a page that just got\n> +\t\t * initialized. For this, we look at the needed page header.\n> +\t\t */\n> +\t\tphdr = (XLogPageHeader) page;\n> +\n> +\t\t/* Return, if WAL buffer page doesn't look valid. 
*/\n> +\t\tif (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n> +\t\t\t phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n> +\t\t\t phdr->xlp_tli == tli))\n> +\t\t\tbreak;\n\nI don't think this code should ever encounter a page where this is not the\ncase? We particularly shouldn't do so silently, seems that could hide all\nkinds of problems.\n\n\n\n> +\t\t/*\n> +\t\t * Note that we don't perform all page header checks here to avoid\n> +\t\t * extra work in production builds; callers will anyway do those\n> +\t\t * checks extensively. However, in an assert-enabled build, we perform\n> +\t\t * all the checks here and raise an error if failed.\n> +\t\t */\n\nWhy?\n\n\n> +\t\t/* Count what is wanted, not the whole page. */\n> +\t\tif ((data + nbytes) <= (page + XLOG_BLCKSZ))\n> +\t\t{\n> +\t\t\t/* All the bytes are in one page. */\n> +\t\t\tnbatch += nbytes;\n> +\t\t\tntotal += nbytes;\n> +\t\t\tnbytes = 0;\n> +\t\t}\n> +\t\telse\n> +\t\t{\n> +\t\t\tSize\t\tnavailable;\n> +\n> +\t\t\t/*\n> +\t\t\t * All the bytes are not in one page. Deduce available bytes on\n> +\t\t\t * the current page, count them and continue to look for remaining\n> +\t\t\t * bytes.\n> +\t\t\t */\ns/deduce/deduct/? Perhaps better subtract?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Nov 2023 17:43:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 1:17 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > > I think it needs something like:\n> > >\n> > > pg_atomic_write_u64(&XLogCtl->xlblocks[nextidx],\n> > > InvalidXLogRecPtr);\n> > > pg_write_barrier();\n> > >\n> > > before the MemSet.\n> >\n> > I think it works. First, xlblocks needs to be turned to an array of\n> > 64-bit atomics and then the above change.\n>\n> Does anyone see a reason we shouldn't move to atomics here?\n>\n> >\n> > pg_write_barrier();\n> >\n> > *((volatile XLogRecPtr *) &XLogCtl->xlblocks[nextidx]) =\n> > NewPageEndPtr;\n>\n> I am confused why the \"volatile\" is required on that line (not from\n> your patch). I sent a separate message about that:\n>\n> https://www.postgresql.org/message-id/784f72ac09061fe5eaa5335cc347340c367c73ac.camel@j-davis.com\n>\n> > I think the 3 things that helps read from WAL buffers without\n> > WALBufMappingLock are: 1) couple of the read barriers in\n> > XLogReadFromBuffers, 2) atomically initializing xlblocks[idx] to\n> > InvalidXLogRecPtr plus a write barrier in AdvanceXLInsertBuffer(), 3)\n> > the following sanity check to see if the read page is valid in\n> > XLogReadFromBuffers(). If it sounds sensible, I'll work towards\n> > coding\n> > it up. Thoughts?\n>\n> I like it. I think it will ultimately be a fairly simple loop. And by\n> moving to atomics, we won't need the delicate comment in\n> GetXLogBuffer().\n\nI'm attaching the v15 patch set implementing the above ideas. Please\nhave a look.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 4 Nov 2023 20:55:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, 2023-11-04 at 20:55 +0530, Bharath Rupireddy wrote:\n> +\t\tXLogRecPtr\tEndPtr =\n> pg_atomic_read_u64(&XLogCtl->xlblocks[curridx]);\n> +\n> +\t\t/*\n> +\t\t * xlblocks value can be InvalidXLogRecPtr before\n> the new WAL buffer\n> +\t\t * page gets initialized in AdvanceXLInsertBuffer.\n> In such a case\n> +\t\t * re-read the xlblocks value under the lock to\n> ensure the correct\n> +\t\t * value is read.\n> +\t\t */\n> +\t\tif (unlikely(XLogRecPtrIsInvalid(EndPtr)))\n> +\t\t{\n> +\t\t\tLWLockAcquire(WALBufMappingLock,\n> LW_EXCLUSIVE);\n> +\t\t\tEndPtr = pg_atomic_read_u64(&XLogCtl-\n> >xlblocks[curridx]);\n> +\t\t\tLWLockRelease(WALBufMappingLock);\n> +\t\t}\n> +\n> +\t\tAssert(!XLogRecPtrIsInvalid(EndPtr));\n\nCan that really happen? If the EndPtr is invalid, that means the page\nis in the process of being cleared, so the contents of the page are\nundefined at that time, right?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 04 Nov 2023 14:27:39 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sun, Nov 5, 2023 at 2:57 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2023-11-04 at 20:55 +0530, Bharath Rupireddy wrote:\n> > + XLogRecPtr EndPtr =\n> > pg_atomic_read_u64(&XLogCtl->xlblocks[curridx]);\n> > +\n> > + /*\n> > + * xlblocks value can be InvalidXLogRecPtr before\n> > the new WAL buffer\n> > + * page gets initialized in AdvanceXLInsertBuffer.\n> > In such a case\n> > + * re-read the xlblocks value under the lock to\n> > ensure the correct\n> > + * value is read.\n> > + */\n> > + if (unlikely(XLogRecPtrIsInvalid(EndPtr)))\n> > + {\n> > + LWLockAcquire(WALBufMappingLock,\n> > LW_EXCLUSIVE);\n> > + EndPtr = pg_atomic_read_u64(&XLogCtl-\n> > >xlblocks[curridx]);\n> > + LWLockRelease(WALBufMappingLock);\n> > + }\n> > +\n> > + Assert(!XLogRecPtrIsInvalid(EndPtr));\n>\n> Can that really happen? If the EndPtr is invalid, that means the page\n> is in the process of being cleared, so the contents of the page are\n> undefined at that time, right?\n\nMy initial thoughts were this way - xlblocks is being read without\nholding WALBufMappingLock in XLogWrite() and since we write\nInvalidXLogRecPtr to xlblocks array elements temporarily before\nMemSet-ting the page in AdvanceXLInsertBuffer(), the PANIC \"xlog write\nrequest %X/%X is past end of log %X/%X\" might get hit if EndPtr read\nfrom xlblocks is InvalidXLogRecPtr. FWIW, an Assert(false); within the\nif (unlikely(XLogRecPtrIsInvalid(EndPtr))) block didn't hit in make\ncheck-world.\n\nIt looks like my above understanding isn't correct because it can\nnever happen that the page that's being written to the WAL file gets\ninitialized in AdvanceXLInsertBuffer(). I'll remove this piece of code\nin next version of the patch unless there are any other thoughts.\n\n[1]\n /*\n * Within the loop, curridx is the cache block index of the page to\n * consider writing. 
Begin at the buffer containing the next unwritten\n * page, or last partially written page.\n */\n curridx = XLogRecPtrToBufIdx(LogwrtResult.Write);\n\n while (LogwrtResult.Write < WriteRqst.Write)\n {\n /*\n * Make sure we're not ahead of the insert process. This could happen\n * if we're passed a bogus WriteRqst.Write that is past the end of the\n * last page that's been initialized by AdvanceXLInsertBuffer.\n */\n XLogRecPtr EndPtr = pg_atomic_read_u64(&XLogCtl->xlblocks[curridx]);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 5 Nov 2023 04:15:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Nov 4, 2023 at 6:13 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > + cur_lsn = GetFlushRecPtr(NULL);\n> > + if (unlikely(startptr > cur_lsn))\n> > + elog(ERROR, \"WAL start LSN %X/%X specified for reading from WAL buffers must be less than current database system WAL LSN %X/%X\",\n> > + LSN_FORMAT_ARGS(startptr), LSN_FORMAT_ARGS(cur_lsn));\n>\n> Hm, why does this check belong here? For some tools it might be legitimate to\n> read the WAL before it was fully flushed.\n\nAgreed and removed the check.\n\n> > + /*\n> > + * Holding WALBufMappingLock ensures inserters don't overwrite this value\n> > + * while we are reading it. We try to acquire it in shared mode so that\n> > + * the concurrent WAL readers are also allowed. We try to do as less work\n> > + * as possible while holding the lock to not impact concurrent WAL writers\n> > + * much. We quickly exit to not cause any contention, if the lock isn't\n> > + * immediately available.\n> > + */\n> > + if (!LWLockConditionalAcquire(WALBufMappingLock, LW_SHARED))\n> > + return 0;\n>\n> That seems problematic - that lock is often heavily contended. We could\n> instead check IsWALRecordAvailableInXLogBuffers() once before reading the\n> page, then read the page contents *without* holding a lock, and then check\n> IsWALRecordAvailableInXLogBuffers() again - if the page was replaced in the\n> interim we read bogus data, but that's a bit of a wasted effort.\n\nIn the new approach described upthread here\nhttps://www.postgresql.org/message-id/c3455ab9da42e09ca9d059879b5c512b2d1f9681.camel%40j-davis.com,\nthere's no lock required for reading from WAL buffers. PSA patches for\nmore details.\n\n> > + /*\n> > + * The fact that we acquire WALBufMappingLock while reading the WAL\n> > + * buffer page itself guarantees that no one else initializes it or\n> > + * makes it ready for next use in AdvanceXLInsertBuffer(). 
However, we\n> > + * need to ensure that we are not reading a page that just got\n> > + * initialized. For this, we look at the needed page header.\n> > + */\n> > + phdr = (XLogPageHeader) page;\n> > +\n> > + /* Return, if WAL buffer page doesn't look valid. */\n> > + if (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n> > + phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n> > + phdr->xlp_tli == tli))\n> > + break;\n>\n> I don't think this code should ever encounter a page where this is not the\n> case? We particularly shouldn't do so silently, seems that could hide all\n> kinds of problems.\n\nI think it's possible to read a \"just got initialized\" page with the\nnew approach to read WAL buffer pages without WALBufMappingLock if the\npage is read right after it is initialized and xlblocks is filled in\nAdvanceXLInsertBuffer() but before actual WAL is written.\n\n> > + /*\n> > + * Note that we don't perform all page header checks here to avoid\n> > + * extra work in production builds; callers will anyway do those\n> > + * checks extensively. However, in an assert-enabled build, we perform\n> > + * all the checks here and raise an error if failed.\n> > + */\n>\n> Why?\n\nMinimal page header checks are performed to ensure we don't read the\npage that just got initialized unlike what\nXLogReaderValidatePageHeader(). Are you suggesting to remove page\nheader checks with XLogReaderValidatePageHeader() for assert-enabled\nbuilds? Or are you suggesting to do page header checks with\nXLogReaderValidatePageHeader() for production builds too?\n\nPSA v16 patch set. 
Note that 0004 patch adds support for WAL read\nstats (both from WAL file and WAL buffers) to walsenders and may not\nnecessarily the best approach to capture WAL read stats in light of\nhttps://www.postgresql.org/message-id/CALj2ACU_f5_c8F%2BxyNR4HURjG%3DJziiz07wCpQc%3DAqAJUFh7%2B8w%40mail.gmail.com\nwhich adds WAL read/write/fsync stats to pg_stat_io.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Nov 2023 13:10:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-08 13:10:34 +0530, Bharath Rupireddy wrote:\n> > > + /*\n> > > + * The fact that we acquire WALBufMappingLock while reading the WAL\n> > > + * buffer page itself guarantees that no one else initializes it or\n> > > + * makes it ready for next use in AdvanceXLInsertBuffer(). However, we\n> > > + * need to ensure that we are not reading a page that just got\n> > > + * initialized. For this, we look at the needed page header.\n> > > + */\n> > > + phdr = (XLogPageHeader) page;\n> > > +\n> > > + /* Return, if WAL buffer page doesn't look valid. */\n> > > + if (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n> > > + phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n> > > + phdr->xlp_tli == tli))\n> > > + break;\n> >\n> > I don't think this code should ever encounter a page where this is not the\n> > case? We particularly shouldn't do so silently, seems that could hide all\n> > kinds of problems.\n> \n> I think it's possible to read a \"just got initialized\" page with the\n> new approach to read WAL buffer pages without WALBufMappingLock if the\n> page is read right after it is initialized and xlblocks is filled in\n> AdvanceXLInsertBuffer() but before actual WAL is written.\n\nI think the code needs to make sure that *never* happens. That seems unrelated\nto holding or not holding WALBufMappingLock. Even if the page header is\nalready valid, I don't think it's ok to just read/parse WAL data that's\nconcurrently being modified.\n\nWe can never allow WAL being read that's past\n XLogBytePosToRecPtr(XLogCtl->Insert->CurrBytePos)\nas it does not exist.\n\nAnd if the to-be-read LSN is between\nXLogCtl->LogwrtResult->Write and XLogBytePosToRecPtr(Insert->CurrBytePos)\nwe need to call WaitXLogInsertionsToFinish() before copying the data.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Nov 2023 12:58:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Nov 10, 2023 at 2:28 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-11-08 13:10:34 +0530, Bharath Rupireddy wrote:\n> > > > + /*\n> > > > + * The fact that we acquire WALBufMappingLock while reading the WAL\n> > > > + * buffer page itself guarantees that no one else initializes it or\n> > > > + * makes it ready for next use in AdvanceXLInsertBuffer(). However, we\n> > > > + * need to ensure that we are not reading a page that just got\n> > > > + * initialized. For this, we look at the needed page header.\n> > > > + */\n> > > > + phdr = (XLogPageHeader) page;\n> > > > +\n> > > > + /* Return, if WAL buffer page doesn't look valid. */\n> > > > + if (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n> > > > + phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n> > > > + phdr->xlp_tli == tli))\n> > > > + break;\n> > >\n> > > I don't think this code should ever encounter a page where this is not the\n> > > case? We particularly shouldn't do so silently, seems that could hide all\n> > > kinds of problems.\n> >\n> > I think it's possible to read a \"just got initialized\" page with the\n> > new approach to read WAL buffer pages without WALBufMappingLock if the\n> > page is read right after it is initialized and xlblocks is filled in\n> > AdvanceXLInsertBuffer() but before actual WAL is written.\n>\n> I think the code needs to make sure that *never* happens. That seems unrelated\n> to holding or not holding WALBufMappingLock. Even if the page header is\n> already valid, I don't think it's ok to just read/parse WAL data that's\n> concurrently being modified.\n>\n> We can never allow WAL being read that's past\n> XLogBytePosToRecPtr(XLogCtl->Insert->CurrBytePos)\n> as it does not exist.\n\nAgreed. Erroring out in XLogReadFromBuffers() if passed in WAL is past\nthe CurrBytePos is an option. 
Another cleaner way is to just let the\ncaller decide what it needs to do (retry or error out) - fill an error\nmessage in XLogReadFromBuffers() and return as-if nothing was read or\nreturn a special negative error code like XLogDecodeNextRecord so that\nthe caller can deal with it.\n\nAlso, reading CurrBytePos with insertpos_lck spinlock can come in the\nway of concurrent inserters. A possible way is to turn both\nCurrBytePos and PrevBytePos 64-bit atomics so that\nXLogReadFromBuffers() can read CurrBytePos without any lock atomically\nand leave it to the caller to deal with non-existing WAL reads.\n\n> And if the to-be-read LSN is between\n> XLogCtl->LogwrtResult->Write and XLogBytePosToRecPtr(Insert->CurrBytePos)\n> we need to call WaitXLogInsertionsToFinish() before copying the data.\n\nAgree to wait for all in-flight insertions to the pages we're about to\nread to finish. But, reading XLogCtl->LogwrtRqst.Write requires either\nXLogCtl->info_lck spinlock or WALWriteLock. Maybe turn\nXLogCtl->LogwrtRqst.Write a 64-bit atomic and read it without any\nlock, rely on\nWaitXLogInsertionsToFinish()'s return value i.e. if\nWaitXLogInsertionsToFinish() returns a value >= Insert->CurrBytePos,\nthen go read that page from WAL buffers.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Nov 2023 19:02:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 7:02 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Nov 10, 2023 at 2:28 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > I think the code needs to make sure that *never* happens. That seems unrelated\n> > to holding or not holding WALBufMappingLock. Even if the page header is\n> > already valid, I don't think it's ok to just read/parse WAL data that's\n> > concurrently being modified.\n> >\n> > We can never allow WAL being read that's past\n> > XLogBytePosToRecPtr(XLogCtl->Insert->CurrBytePos)\n> > as it does not exist.\n>\n> Agreed. Erroring out in XLogReadFromBuffers() if passed in WAL is past\n> the CurrBytePos is an option. Another cleaner way is to just let the\n> caller decide what it needs to do (retry or error out) - fill an error\n> message in XLogReadFromBuffers() and return as-if nothing was read or\n> return a special negative error code like XLogDecodeNextRecord so that\n> the caller can deal with it.\n\nIn the attached v17 patch, I've ensured that the XLogReadFromBuffers\nreturns when the caller requests a WAL that's past the current insert\nposition at the moment.\n\n> Also, reading CurrBytePos with insertpos_lck spinlock can come in the\n> way of concurrent inserters. A possible way is to turn both\n> CurrBytePos and PrevBytePos 64-bit atomics so that\n> XLogReadFromBuffers() can read CurrBytePos without any lock atomically\n> and leave it to the caller to deal with non-existing WAL reads.\n>\n> > And if the to-be-read LSN is between\n> > XLogCtl->LogwrtResult->Write and XLogBytePosToRecPtr(Insert->CurrBytePos)\n> > we need to call WaitXLogInsertionsToFinish() before copying the data.\n>\n> Agree to wait for all in-flight insertions to the pages we're about to\n> read to finish. But, reading XLogCtl->LogwrtRqst.Write requires either\n> XLogCtl->info_lck spinlock or WALWriteLock. 
Maybe turn\n> XLogCtl->LogwrtRqst.Write a 64-bit atomic and read it without any\n> lock, rely on\n> WaitXLogInsertionsToFinish()'s return value i.e. if\n> WaitXLogInsertionsToFinish() returns a value >= Insert->CurrBytePos,\n> then go read that page from WAL buffers.\n\nIn the attached v17 patch, I've ensured that the XLogReadFromBuffers\nwaits for all in-progress insertions to finish when the caller\nrequests WAL that's past the current write position and before the\ncurrent insert position.\n\nI've also ensured that the XLogReadFromBuffers returns special return\ncodes for various scenarios (when asked to read in recovery, read on a\ndifferent TLI, read a non-existent WAL and so on.) instead of it\nerroring out. This gives flexibility to the caller to decide what to\ndo.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 7 Dec 2023 15:59:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, 2023-12-07 at 15:59 +0530, Bharath Rupireddy wrote:\n> In the attached v17 patch\n\n0001 could impact performance in a few ways:\n\n * There's one additional write barrier inside\n AdvanceXLInsertBuffer()\n * AdvanceXLInsertBuffer() already holds WALBufMappingLock, so\n the atomic access inside of it is somewhat redundant\n * On some platforms, the XLogCtlData structure size will change\n\nThe patch has been out for a while and nobody seems concerned about\nthose things, and they look fine to me, so I assume these are not real\nproblems. I just wanted to highlight them.\n\nAlso, the description and the comments seem off. The patch does two\nthings: (a) make it possible to read a page without a lock, which means\nwe need to mark with InvalidXLogRecPtr while it's being initialized;\nand (b) use 64-bit atomics to make it safer (or at least more\nreadable).\n\n(a) feels like the most important thing, and it's a hard requirement\nfor the rest of the work, right?\n\n(b) seems like an implementation choice, and I agree with it on\nreadability grounds.\n\nAlso:\n\n+ * But it means that when we do this\n+ * unlocked read, we might see a value that appears to be ahead of\nthe\n+ * page we're looking for. Don't PANIC on that, until we've verified\nthe\n+ * value while holding the lock.\n\nIs that still true even without a torn read?\n\nThe code for 0001 itself looks good. These are minor concerns and I am\ninclined to commit something like it fairly soon.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 07 Dec 2023 16:34:57 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 6:04 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2023-12-07 at 15:59 +0530, Bharath Rupireddy wrote:\n> > In the attached v17 patch\n>\n> The code for 0001 itself looks good. These are minor concerns and I am\n> inclined to commit something like it fairly soon.\n\nThanks. Attaching remaining patches as v18 patch-set after commits\nc3a8e2a7cb16 and 766571be1659.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 20 Dec 2023 15:36:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, 2023-12-20 at 15:36 +0530, Bharath Rupireddy wrote:\n> Thanks. Attaching remaining patches as v18 patch-set after commits\n> c3a8e2a7cb16 and 766571be1659.\n\nComments:\n\nI still think the right thing for this patch is to call\nXLogReadFromBuffers() directly from the callers who need it, and not\nchange WALRead(). I am open to changing this later, but for now that\nmakes sense to me so that we can clearly identify which callers benefit\nand why. I have brought this up a few times before[1][2], so there must\nbe some reason that I don't understand -- can you explain it?\n\nThe XLogReadFromBuffersResult is never used. I can see how it might be\nuseful for testing or asserts, but it's not used even in the test\nmodule. I don't think we should clutter the API with that kind of thing\n-- let's just return the nread.\n\nI also do not like the terminology \"partial hit\" to be used in this\nway. Perhaps \"short read\" or something about hitting the end of\nreadable WAL would be better?\n\nI like how the callers of WALRead() are being more precise about the\nbytes they are requesting.\n\nYou've added several spinlock acquisitions to the loop. Two explicitly,\nand one implicitly in WaitXLogInsertionsToFinish(). These may allow you\nto read slightly further, but introduce performance risk. Was this\ndiscussed?\n\nThe callers are not checking for XLREADBUGS_UNINITIALIZED_WAL, so it\nseems like there's a risk of getting partially-written data? And it's\nnot clear to me the check of the wal page headers is the right one\nanyway.\n\nIt seems like all of this would be simpler if you checked first how far\nyou can safely read data, and then just loop and read that far. 
I'm not\nsure that it's worth it to try to mix the validity checks with the\nreading of the data.\n\nRegards,\n\tJeff Davis\n\n[1] https://www.postgresql.org/message-id/4132fe48f831ed6f73a9eb191af5fe475384969c.camel%40j-davis.com\n[2]\nhttps://www.postgresql.org/message-id/2ef04861c0f77e7ae78b703770cc2bbbac3d85e6.camel@j-davis.com\n\n\n",
"msg_date": "Thu, 04 Jan 2024 17:50:05 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
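Jeff's suggestion above — first determine how far it is safe to read, then just loop and read that far, keeping validity checks out of the copy loop — can be sketched as below. This is a hypothetical illustration, not code from the patch: `safe_read_count`, `copy_wal_range`, and `readable_upto` are invented names, with `readable_upto` standing in for whatever limit applies (flush, write, or insert position).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define XLOG_BLCKSZ 8192		/* WAL block size, as in PostgreSQL */

typedef uint64_t XLogRecPtr;

/*
 * Step 1: decide the safe byte count up front.  Returns 'count' for a
 * full read, less than 'count' for a short read, and 0 for no read.
 */
static uint64_t
safe_read_count(XLogRecPtr startptr, uint64_t count, XLogRecPtr readable_upto)
{
	if (startptr >= readable_upto)
		return 0;				/* no read: start is past the readable end */
	if (startptr + count > readable_upto)
		return readable_upto - startptr;	/* short read */
	return count;				/* full read */
}

/*
 * Step 2: a plain copy loop over the pre-computed range, at most one WAL
 * page per iteration, with no validity checks mixed into it.
 */
static uint64_t
copy_wal_range(const char *src, char *dst, XLogRecPtr startptr, uint64_t nbytes)
{
	uint64_t	copied = 0;

	while (copied < nbytes)
	{
		uint64_t	chunk = nbytes - copied;
		uint64_t	to_page_end = XLOG_BLCKSZ - ((startptr + copied) % XLOG_BLCKSZ);

		if (chunk > to_page_end)
			chunk = to_page_end;
		memcpy(dst + copied, src + copied, chunk);
		copied += chunk;
	}
	return copied;
}

/* Request 100 bytes at position 10 when only [0, 60) is readable. */
static uint64_t
demo(void)
{
	char		src[128] = {0};
	char		dst[128] = {0};
	uint64_t	nbytes = safe_read_count(10, 100, 60);	/* short read: 50 */

	return copy_wal_range(src + 10, dst, 10, nbytes);
}
```

Note the structure: all clamping happens before the loop, so the loop body never has to decide whether a byte is valid.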
{
"msg_contents": "On Fri, Jan 5, 2024 at 7:20 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2023-12-20 at 15:36 +0530, Bharath Rupireddy wrote:\n> > Thanks. Attaching remaining patches as v18 patch-set after commits\n> > c3a8e2a7cb16 and 766571be1659.\n>\n> Comments:\n\nThanks for reviewing.\n\n> I still think the right thing for this patch is to call\n> XLogReadFromBuffers() directly from the callers who need it, and not\n> change WALRead(). I am open to changing this later, but for now that\n> makes sense to me so that we can clearly identify which callers benefit\n> and why. I have brought this up a few times before[1][2], so there must\n> be some reason that I don't understand -- can you explain it?\n\nIMO, WALRead() is the best place to have XLogReadFromBuffers() for 2\nreasons: 1) All of the WALRead() callers (except FRONTEND tools) will\nbenefit if WAL is read from WAL buffers. I don't see any reason for a\ncaller to skip reading from WAL buffers. If there's a caller (in\nfuture) wanting to skip reading from WAL buffers, I'm open to adding a\nflag in XLogReaderState to skip. 2) The amount of code is reduced if\nXLogReadFromBuffers() sits in WALRead().\n\n> The XLogReadFromBuffersResult is never used. I can see how it might be\n> useful for testing or asserts, but it's not used even in the test\n> module. I don't think we should clutter the API with that kind of thing\n> -- let's just return the nread.\n\nRemoved.\n\n> I also do not like the terminology \"partial hit\" to be used in this\n> way. Perhaps \"short read\" or something about hitting the end of\n> readable WAL would be better?\n\n\"short read\" seems good. Done that way in the new patch.\n\n> I like how the callers of WALRead() are being more precise about the\n> bytes they are requesting.\n>\n> You've added several spinlock acquisitions to the loop. Two explicitly,\n> and one implicitly in WaitXLogInsertionsToFinish(). 
These may allow you\n> to read slightly further, but introduce performance risk. Was this\n> discussed?\n\nI opted to read slightly further, thinking that the loops wouldn't run\nlong enough for the spinlocks to become costly. Basically, I wasn't sure\nwhich approach was the best. Now that there's an opinion to keep them\noutside, I'd agree with it. Done that way in the new patch.\n\n> The callers are not checking for XLREADBUGS_UNINITIALIZED_WAL, so it\n> seems like there's a risk of getting partially-written data? And it's\n> not clear to me the check of the wal page headers is the right one\n> anyway.\n>\n> It seems like all of this would be simpler if you checked first how far\n> you can safely read data, and then just loop and read that far. I'm not\n> sure that it's worth it to try to mix the validity checks with the\n> reading of the data.\n\nXLogReadFromBuffers needs the page header check after reading the\npage from WAL buffers. Typically, we must not read a WAL buffer page\nthat just got initialized, because we waited enough for the\nin-progress WAL insertions to finish above. However, there can exist a\nslight window after the above wait finishes in which the read buffer\npage can get replaced, especially under high WAL generation rates.\nAfter all, we are reading from WAL buffers without any locks here. So,\nlet's not count such a page in.\n\nI've addressed the above review comments and attached v19 patch-set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 10 Jan 2024 19:59:29 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
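The control flow Bharath describes — try the WAL buffers first; on a short read, advance `buf`, `startptr`, and `count` and fall back to the WAL file for the remainder — can be sketched as follows. The two arrays and the `buffers_upto` boundary are invented stand-ins: in the real patch the sources are the shared WAL buffers and a read of the WAL segment file, and here the buffers cover a prefix of the request purely for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define WAL_SIZE 64

static char wal_file[WAL_SIZE];		/* the whole range exists on "disk" */
static char wal_buffers[WAL_SIZE];	/* buffers only cover [0, buffers_upto) */
static size_t buffers_upto = 20;

/* Serve as much of the request as the buffers cover; may be 0. */
static size_t
read_from_buffers(size_t startptr, size_t count, char *dst)
{
	size_t		upto = startptr + count;

	if (upto > buffers_upto)
		upto = buffers_upto;	/* the caller will see a short read */
	if (startptr >= upto)
		return 0;				/* no read */
	memcpy(dst, wal_buffers + startptr, upto - startptr);
	return upto - startptr;
}

/* WALRead()-style wrapper: buffers first, file for the remainder. */
static bool
wal_read(size_t startptr, size_t count, char *dst)
{
	size_t		nread = read_from_buffers(startptr, count, dst);

	if (nread == count)
		return true;			/* full read from buffers */

	/* short read (or no read): continue with the remaining bytes */
	dst += nread;
	startptr += nread;
	count -= nread;
	memcpy(dst, wal_file + startptr, count);
	return true;
}

/* Read [10, 30) and count how many bytes came from the buffers. */
static int
demo(void)
{
	char		out[20];
	int			from_buffers = 0;

	memset(wal_file, 'F', sizeof(wal_file));
	memset(wal_buffers, 'B', sizeof(wal_buffers));
	wal_read(10, sizeof(out), out);
	for (size_t i = 0; i < sizeof(out); i++)
		if (out[i] == 'B')
			from_buffers++;
	return from_buffers;
}
```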
{
"msg_contents": "On Wed, 2024-01-10 at 19:59 +0530, Bharath Rupireddy wrote:\n> I've addressed the above review comments and attached v19 patch-set.\n\nRegarding:\n\n- if (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,\n- &errinfo))\n+ if (!WALRead(state, cur_page, targetPagePtr, count, tli,\n&errinfo))\n\nI'd like to understand the reason it was using XLOG_BLCKSZ before. Was\nit a performance optimization? Or was it to zero the remainder of the\ncaller's buffer (readBuf)? Or something else?\n\nIf it was to zero the remainder of the caller's buffer, then we should\nexplicitly make that the caller's responsibility.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 19 Jan 2024 17:47:57 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi Bharath,\n\nThanks for working on this. It seems like a nice improvement to have.\n\nHere are some comments on 0001 patch.\n\n1- xlog.c\n+ /*\n+ * Fast paths for the following reasons: 1) WAL buffers aren't in use when\n+ * server is in recovery. 2) WAL is inserted into WAL buffers on current\n+ * server's insertion TLI. 3) Invalid starting WAL location.\n+ */\n\nShouldn't the comment be something like \"2) WAL is *not* inserted into WAL\nbuffers on current server's insertion TLI\" since the condition to break is tli\n!= GetWALInsertionTimeLine()\n\n2-\n+ /*\n+ * WAL being read doesn't yet exist i.e. past the current insert position.\n+ */\n+ if ((startptr + count) > reservedUpto)\n+ return ntotal;\n\nThis question may not even make sense but I wonder whether we can read from\nstartptr only to reservedUpto in case of startptr+count exceeds\nreservedUpto?\n\n3-\n+ /* Wait for any in-progress WAL insertions to WAL buffers to finish. */\n+ if ((startptr + count) > LogwrtResult.Write &&\n+ (startptr + count) <= reservedUpto)\n+ WaitXLogInsertionsToFinish(startptr + count);\n\nDo we need to check if (startptr + count) <= reservedUpto as we already\nverified this condition a few lines above?\n\n4-\n+ Assert(nread > 0);\n+ memcpy(dst, data, nread);\n+\n+ /*\n+ * Make sure we don't read xlblocks down below before the page\n+ * contents up above.\n+ */\n+ pg_read_barrier();\n+\n+ /* Recheck if the read page still exists in WAL buffers. */\n+ endptr = pg_atomic_read_u64(&XLogCtl->xlblocks[idx]);\n+\n+ /* Return if the page got initalized while we were reading it. */\n+ if (expectedEndPtr != endptr)\n+ break;\n+\n+ /*\n+ * Typically, we must not read a WAL buffer page that just got\n+ * initialized. Because we waited enough for the in-progress WAL\n+ * insertions to finish above. However, there can exist a slight\n+ * window after the above wait finishes in which the read buffer page\n+ * can get replaced especially under high WAL generation rates. 
After\n+ * all, we are reading from WAL buffers without any locks here. So,\n+ * let's not count such a page in.\n+ */\n+ phdr = (XLogPageHeader) page;\n+ if (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n+ phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n+ phdr->xlp_tli == tli))\n+ break;\n\nI see that you recheck if the page still exists and so at the end. What\nwould you think about memcpy'ing only after being sure that we will need\nand use the recently read data? If we break the loop during the recheck, we\nsimply discard the data read in the latest attempt. I guess that this may\nnot be a big deal but the data would be unnecessarily copied into the\ndestination in such a case.\n\n5- xlogreader.c\n+ nread = XLogReadFromBuffers(startptr, tli, count, buf);\n+\n+ if (nread > 0)\n+ {\n+ /*\n+ * Check if its a full read, short read or no read from WAL buffers.\n+ * For short read or no read, continue to read the remaining bytes\n+ * from WAL file.\n+ *\n+ * XXX: It might be worth to expose WAL buffer read stats.\n+ */\n+ if (nread == count) /* full read */\n+ return true;\n+ else if (nread < count) /* short read */\n+ {\n+ buf += nread;\n+ startptr += nread;\n+ count -= nread;\n+ }\n\nTypo in the comment. Should be like \"Check if *it's* a full read, short\nread or no read from WAL buffers.\"\n\nAlso I don't think XLogReadFromBuffers() returns anything less than 0 and\nmore than count. Is verifying nread > 0 necessary? I think if nread does\nnot equal to count, we can simply assume that it's a short read. (or no\nread at all in case nread is 0 which we don't need to handle specifically)\n\n6-\n+ /*\n+ * We determined how much of the page can be validly read as 'count', read\n+ * that much only, not the entire page. 
Since WALRead() can read the page\n+ * from WAL buffers, in which case, the page is not guaranteed to be\n+ * zero-padded up to the page boundary because of the concurrent\n+ * insertions.\n+ */\n\nI'm not sure about pasting this into the most places we call WalRead().\nWouldn't it be better if we mention this somewhere around WALRead() only\nonce?\n\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 22 Jan 2024 17:03:28 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
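The lock-free pattern Melih is reviewing — copy the page optimistically, issue a read barrier, then recheck the slot's end pointer and discard the copy if the page was replaced meanwhile — can be sketched like this. It is a hypothetical, single-threaded illustration: the struct is invented, and the GCC/Clang builtin `__atomic_thread_fence` stands in for PostgreSQL's pg_read_barrier().

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* One WAL buffer slot: an end pointer (xlblocks) plus page contents. */
typedef struct
{
	uint64_t	endptr;			/* identifies which page occupies the slot */
	char		data[16];
} WalBufferSlot;

/*
 * Copy the page first, then recheck the slot's identity.  The barrier
 * keeps the endptr load from being reordered before the copy; if the
 * slot was recycled in between, the copy may be torn and the caller
 * must discard it.
 */
static bool
try_read_page(const WalBufferSlot *slot, uint64_t expected_endptr, char *dst)
{
	memcpy(dst, slot->data, sizeof(slot->data));	/* optimistic copy */

	/* GCC/Clang builtin, standing in for pg_read_barrier() */
	__atomic_thread_fence(__ATOMIC_ACQUIRE);

	return slot->endptr == expected_endptr;
}

static bool
demo(bool concurrently_replaced)
{
	WalBufferSlot slot;
	char		copy[16];

	slot.endptr = 100;
	memset(slot.data, 'x', sizeof(slot.data));
	if (concurrently_replaced)
		slot.endptr = 116;		/* another page moved into the slot */
	return try_read_page(&slot, 100, copy);
}
```

This ordering also bears on the memcpy question above: the recheck can only detect a replacement that happened during the copy if it runs after the copy, so copying first and discarding on failure is inherent to the pattern; avoiding the potentially wasted copy would mean copying into scratch memory and copying again on success.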
{
"msg_contents": "Hi,\n\nOn 2024-01-10 19:59:29 +0530, Bharath Rupireddy wrote:\n> +\t\t/*\n> +\t\t * Typically, we must not read a WAL buffer page that just got\n> +\t\t * initialized. Because we waited enough for the in-progress WAL\n> +\t\t * insertions to finish above. However, there can exist a slight\n> +\t\t * window after the above wait finishes in which the read buffer page\n> +\t\t * can get replaced especially under high WAL generation rates. After\n> +\t\t * all, we are reading from WAL buffers without any locks here. So,\n> +\t\t * let's not count such a page in.\n> +\t\t */\n> +\t\tphdr = (XLogPageHeader) page;\n> +\t\tif (!(phdr->xlp_magic == XLOG_PAGE_MAGIC &&\n> +\t\t\t phdr->xlp_pageaddr == (ptr - (ptr % XLOG_BLCKSZ)) &&\n> +\t\t\t phdr->xlp_tli == tli))\n> +\t\t\tbreak;\n\nI still think that anything that requires such checks shouldn't be\nmerged. It's completely bogus to check page contents for validity when we\nshould have metadata telling us which range of the buffers is valid and which\nnot.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:12:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
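For reference, the arithmetic behind the header check Andres quotes is simple: a page's address is its LSN rounded down to a block boundary, and the buffer slot holding a page is the page number modulo the number of WAL buffers (PostgreSQL's own mapping is XLogRecPtrToBufIdx). The sketch below is illustrative, and `n_buffers` is an invented example parameter. Because slots are reused cyclically this way, the slot a reader inspects can legitimately hold a different, newer page — the eviction case that either metadata or a header check has to detect.

```c
#include <assert.h>
#include <stdint.h>

#define XLOG_BLCKSZ 8192		/* WAL block size, as in PostgreSQL */

typedef uint64_t XLogRecPtr;

/* A page's address is its LSN rounded down to a block boundary. */
static XLogRecPtr
page_addr(XLogRecPtr ptr)
{
	return ptr - (ptr % XLOG_BLCKSZ);
}

/*
 * The slot a page lands in: page number modulo the buffer count, so
 * consecutive pages cycle through the slots and old pages are
 * overwritten in place.
 */
static int
buf_idx(XLogRecPtr ptr, int n_buffers)
{
	return (int) ((ptr / XLOG_BLCKSZ) % (uint64_t) n_buffers);
}
```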
{
"msg_contents": "On Mon, 2024-01-22 at 12:12 -0800, Andres Freund wrote:\n> I still think that anything that requires such checks shouldn't be\n> merged. It's completely bogus to check page contents for validity\n> when we\n> should have metadata telling us which range of the buffers is valid\n> and which\n> not.\n\nThe check seems entirely unnecessary, to me. A leftover from v18?\n\nI have attached a new patch (version \"19j\") to illustrate some of my\nprevious suggestions. I didn't spend a lot of time on it so it's not\nready for commit, but I believe my suggestions are easier to understand\nin code form.\n\nNote that, right now, it only works for XLogSendPhysical(). I believe\nit's best to just make it work for 1-3 callers that we understand well,\nand we can generalize later if it makes sense.\n\nI'm still not clear on why some callers are reading XLOG_BLCKSZ\n(expecting zeros at the end), and if it's OK to just change them to use\nthe exact byte count.\n\nAlso, if we've detected that the first requested buffer has been\nevicted, is there any value in continuing the loop to see if more\nrecent buffers are available? For example, if the requested LSNs range\nover buffers 4, 5, and 6, and 4 has already been evicted, should we try\nto return LSN data from 5 and 6 at the proper offset in the dest\nbuffer? If so, we'd need to adjust the API so the caller knows what\nparts of the dest buffer were filled in.\n\nRegards,\n\tJeff Davis",
"msg_date": "Mon, 22 Jan 2024 20:07:03 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Jan 23, 2024 at 9:37 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2024-01-22 at 12:12 -0800, Andres Freund wrote:\n> > I still think that anything that requires such checks shouldn't be\n> > merged. It's completely bogus to check page contents for validity\n> > when we\n> > should have metadata telling us which range of the buffers is valid\n> > and which\n> > not.\n>\n> The check seems entirely unnecessary, to me. A leftover from v18?\n>\n> I have attached a new patch (version \"19j\") to illustrate some of my\n> previous suggestions. I didn't spend a lot of time on it so it's not\n> ready for commit, but I believe my suggestions are easier to understand\n> in code form.\n\n> Note that, right now, it only works for XLogSendPhysical(). I believe\n> it's best to just make it work for 1-3 callers that we understand well,\n> and we can generalize later if it makes sense.\n\n+1 to do it for XLogSendPhysical() first. Enabling it for others can\njust be done as something like the attached v20-0003.\n\n> I'm still not clear on why some callers are reading XLOG_BLCKSZ\n> (expecting zeros at the end), and if it's OK to just change them to use\n> the exact byte count.\n\n\"expecting zeros at the end\" - this can't always be true as the WAL\ncan get flushed after determining the flush ptr before reading it from\nthe WAL file. FWIW, here's what I've tried previously -\nhttps://github.com/BRupireddy2/postgres/tree/ensure_extra_read_WAL_page_is_zero_padded_at_the_end_WIP,\nthe tests hit the Assert(false) that was added. Which means the zero-padding\ncomment around WALRead() call-sites isn't quite right.\n\n /*\n * Even though we just determined how much of the page can be validly read\n * as 'count', read the whole page anyway. It's guaranteed to be\n * zero-padded up to the page boundary if it's incomplete.\n */\n if (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,\n\nI think this needs to be discussed separately. 
If okay, I'll start a new thread.\n\n> Also, if we've detected that the first requested buffer has been\n> evicted, is there any value in continuing the loop to see if more\n> recent buffers are available? For example, if the requested LSNs range\n> over buffers 4, 5, and 6, and 4 has already been evicted, should we try\n> to return LSN data from 5 and 6 at the proper offset in the dest\n> buffer? If so, we'd need to adjust the API so the caller knows what\n> parts of the dest buffer were filled in.\n\nI'd second this capability for now to keep the API simple and clear,\nbut we can consider expanding it as needed.\n\nI reviewed the v19j and attached v20 patch set:\n\n1.\n * The caller must ensure that it's reasonable to read from the WAL buffers,\n * i.e. that the requested data is from the current timeline, that we're not\n * in recovery, etc.\n\nI still think the XLogReadFromBuffers can just return in any of the\nabove cases instead of comments. I feel we must assume the caller is\ngoing to ask the WAL from a different timeline and/or in recovery and\ndesign the API to deal with it. Done that way in v20 patch.\n\n2. Fixed some typos, reworded a few comments (i.e. used \"current\ninsert/write position\" instead of \"Insert/Write pointer\" like\nelsewhere), ran pgindent.\n\n3.\n- * XXX probably this should be improved to suck data directly from the\n- * WAL buffers when possible.\n\nRemoved the above comment before WALRead() since we have that facility\nnow. Perhaps, we can say the callers can suck data directly from the\nWAL buffers using XLogReadFromBuffers. But I have no strong opinion on\nthis.\n\n4.\n+ * Most callers will have already updated LogwrtResult when determining\n+ * how far to read, but it's OK if it's out of date. 
(XXX: is it worth\n+ * taking a spinlock to update LogwrtResult and check again before calling\n+ * WaitXLogInsertionsToFinish()?)\n\nIf the callers use GetFlushRecPtr() to determine how far to read,\nLogwrtResult will be *reasonably* up to date, otherwise not. If\nLogwrtResult is a bit old, XLogReadFromBuffers will call\nWaitXLogInsertionsToFinish, which will just loop over all insertion\nlocks and return.\n\nAs far as the current WAL readers are concerned, we don't need an\nexplicit spinlock to determine LogwrtResult because all of them use\nGetFlushRecPtr() to determine how far to read. If there's any caller\nthat's not updating LogwrtResult at all, we can consider reading\nLogwrtResult ourselves in future.\n\n5. I think the two requirements specified at\nhttps://www.postgresql.org/message-id/20231109205836.zjoawdrn4q77yemv%40awork3.anarazel.de\nstill hold with the v19j.\n\n5.1 Never allow WAL to be read past\nXLogBytePosToRecPtr(XLogCtl->Insert->CurrBytePos) as it does not\nexist.\n5.2 If the to-be-read LSN is between XLogCtl->LogwrtResult->Write and\nXLogBytePosToRecPtr(Insert->CurrBytePos) we need to call\nWaitXLogInsertionsToFinish() before copying the data.\n\n+ if (upto > LogwrtResult.Write)\n+ {\n+ XLogRecPtr writtenUpto = WaitXLogInsertionsToFinish(upto, false);\n+\n+ upto = Min(upto, writtenUpto);\n+ nbytes = upto - startptr;\n+ }\n\nXLogReadFromBuffers ensures the above two by adjusting upto based on\nMin(upto, writtenUpto), as WaitXLogInsertionsToFinish returns the\noldest insertion that is still in-progress.\n\nFor instance, if the current write LSN is 100, the current insert LSN is 150\nand upto is 200, we only read up to 150 if startptr is < 150; we don't\nread anything if startptr is > 150.\n\n6. I've modified the test module in v20-0002 patch as follows:\n6.1 Renamed the module to read_wal_from_buffers, stripping \"test_\",\nwhich otherwise makes the name longer. 
Longer names can cause\nfailures on some Windows BF members if the PATH/FILE name is too long.\n6.2 Tweaked tests to hit WaitXLogInsertionsToFinish() and upto =\nMin(upto, writtenUpto); in XLogReadFromBuffers.\n\nPSA v20 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 25 Jan 2024 14:35:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
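Bharath's worked example (current write LSN 100, current insert LSN 150, a request reaching to 200) can be expressed as the following clamp. This is an invented sketch of requirements 5.1/5.2, not the patch's code, with `written_upto` playing the role of WaitXLogInsertionsToFinish()'s return value.

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;

static uint64_t
min64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

/*
 * How many bytes of [startptr, startptr + count) may be read: never past
 * the oldest still-in-progress insertion (5.1), and only after waiting
 * for in-progress insertions when the range reaches past the write
 * position (5.2).
 */
static uint64_t
readable_bytes(XLogRecPtr startptr, uint64_t count,
			   XLogRecPtr write_pos, XLogRecPtr written_upto)
{
	XLogRecPtr	upto = startptr + count;

	if (upto > write_pos)
		upto = min64(upto, written_upto);	/* WaitXLogInsertionsToFinish() */

	return (startptr < upto) ? (upto - startptr) : 0;
}
```

With write_pos = 100 and written_upto = 150, a request for [0, 200) is clamped to 150 bytes, while a request starting at 160 reads nothing — matching the worked numbers in the message above.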
{
"msg_contents": "On Thu, 2024-01-25 at 14:35 +0530, Bharath Rupireddy wrote:\n> \n> \"expecting zeros at the end\" - this can't always be true as the WAL\n> \n...\n\n> I think this needs to be discussed separately. If okay, I'll start a\n> new thread.\n\nThank you for investigating. When the above issue is handled, I'll be\nmore comfortable expanding the call sites for XLogReadFromBuffers().\n\n> > Also, if we've detected that the first requested buffer has been\n> > evicted, is there any value in continuing the loop to see if more\n> > recent buffers are available? For example, if the requested LSNs\n> > range\n> > over buffers 4, 5, and 6, and 4 has already been evicted, should we\n> > try\n> > to return LSN data from 5 and 6 at the proper offset in the dest\n> > buffer? If so, we'd need to adjust the API so the caller knows what\n> > parts of the dest buffer were filled in.\n> \n> I'd second this capability for now to keep the API simple and clear,\n> but we can consider expanding it as needed.\n\nAgreed. This case doesn't seem important; I just thought I'd ask about\nit.\n\n> If the callers use GetFlushRecPtr() to determine how far to read,\n> LogwrtResult will be *reasonably* latest\n\nIt will be up-to-date enough that we'd never go through\nWaitXLogInsertionsToFinish(), which is all we care about.\n\n> As far as the current WAL readers are concerned, we don't need an\n> explicit spinlock to determine LogwrtResult because all of them use\n> GetFlushRecPtr() to determine how far to read. If there's any caller\n> that's not updating LogwrtResult at all, we can consider reading\n> LogwrtResult it ourselves in future.\n\nSo we don't actually need that path yet, right?\n\n> 5. I think the two requirements specified at\n> https://www.postgresql.org/message-id/20231109205836.zjoawdrn4q77yemv%40awork3.anarazel.de\n> still hold with the v19j.\n\nAgreed.\n\n> PSA v20 patch set.\n\n0001 is very close. I have the following suggestions:\n\n * Don't just return zero. 
If the caller is doing something we don't\nexpect, we want to fix the caller. I understand you'd like this to be\nmore like a transparent optimization, and we may do that later, but I\ndon't think it's a good idea to do that now.\n\n * There's currently no use for reading LSNs between Write and Insert,\nso remove the WaitXLogInsertionsToFinish() code path. That also means\nwe don't need the extra emitLog parameter, so we can remove that. When\nwe have a use case, we can bring it all back.\n\nIf you agree, I can just make those adjustments (and do some final\nchecking) and commit 0001. Otherwise let me know what you think.\n\n0002: How does the test control whether the data requested is before\nthe Flush pointer, the Write pointer, or the Insert pointer? What if\nthe walwriter comes in and moves one of those pointers before the next\nstatement is executed? Also, do you think a test module is required for\nthe basic functionality in 0001, or only when we start doing more\ncomplex things like reading past the Flush pointer?\n\n0003: can you explain why this is useful for wal summarizer to read\nfrom the buffers?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 25 Jan 2024 19:01:52 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 8:31 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > PSA v20 patch set.\n>\n> 0001 is very close. I have the following suggestions:\n>\n> * Don't just return zero. If the caller is doing something we don't\n> expect, we want to fix the caller. I understand you'd like this to be\n> more like a transparent optimization, and we may do that later, but I\n> don't think it's a good idea to do that now.\n\n+ if (RecoveryInProgress() ||\n+ tli != GetWALInsertionTimeLine())\n+ return ntotal;\n+\n+ Assert(!XLogRecPtrIsInvalid(startptr));\n\nAre you suggesting to error out instead of returning 0? If yes, I\ndisagree with it: failure to read due to unmet pre-conditions\ndoesn't necessarily have to be an error. If we error out, the\nimmediate failure we see is in the src/bin/psql TAP test for calling\nXLogReadFromBuffers when the server is in recovery. How about\nreturning a negative value instead of just 0 or returning true/false\njust like WALRead?\n\n> * There's currently no use for reading LSNs between Write and Insert,\n> so remove the WaitXLogInsertionsToFinish() code path. That also means\n> we don't need the extra emitLog parameter, so we can remove that. When\n> we have a use case, we can bring it all back.\n\nI disagree with this. I don't see anything wrong with\nXLogReadFromBuffers having the capability to wait for in-progress\ninsertions to finish. In fact, it makes the function near-complete.\nImagine, implementing an extension (may be for fun or learning or\neducational or production purposes) to read unflushed WAL directly\nfrom WAL buffers using XLogReadFromBuffers as page_read callback with\nxlogreader facility. AFAICT, I don't see a problem with\nWaitXLogInsertionsToFinish logic in XLogReadFromBuffers.\n\nFWIW, one important aspect of XLogReadFromBuffers is its ability to\nread the unflushed WAL from WAL buffers. 
Also, see a note from Andres\nhere https://www.postgresql.org/message-id/20231109205836.zjoawdrn4q77yemv%40awork3.anarazel.de.\n\n> If you agree, I can just make those adjustments (and do some final\n> checking) and commit 0001. Otherwise let me know what you think.\n\nThanks. Please see my responses above.\n\n> 0002: How does the test control whether the data requested is before\n> the Flush pointer, the Write pointer, or the Insert pointer? What if\n> the walwriter comes in and moves one of those pointers before the next\n> statement is executed?\n\nTried to keep wal_writer quiet with wal_writer_delay=10000ms and\nwal_writer_flush_after = 1GB to not flush WAL in the background.\nAlso, disabled autovacuum, and set checkpoint_timeout to a higher\nvalue. All of this is done to generate minimal WAL so that WAL buffers\ndon't get overwritten. Do you see any problems with it?\n\n> Also, do you think a test module is required for\n> the basic functionality in 0001, or only when we start doing more\n> complex things like reading past the Flush pointer?\n\nWith WaitXLogInsertionsToFinish in XLogReadFromBuffers, we already have\nthat capability. Having a separate test module ensures the code\nis tested properly.\n\nAs far as the test is concerned, it verifies 2 cases:\n1. Check if WAL is successfully read from WAL buffers. For this, the\ntest generates minimal WAL and reads from WAL buffers from the start\nLSN = current insert LSN captured before the WAL generation.\n2. Check with a WAL that doesn't yet exist. For this, the test reads\nfrom WAL buffers from the start LSN = current flush LSN+16MB (a\nrandomly chosen higher value).\n\n> 0003: can you explain why this is useful for wal summarizer to read\n> from the buffers?\n\nCan the WAL summarizer ever read the WAL on current TLI? 
I'm not so\nsure about it, I haven't explored it in detail.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 26 Jan 2024 19:31:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, 2024-01-26 at 19:31 +0530, Bharath Rupireddy wrote:\n> Are you suggesting to error out instead of returning 0?\n\nWe'd do neither of those things, because no caller should actually call\nit while RecoveryInProgress() or on a different timeline.\n\n> How about\n> returning a negative value instead of just 0 or returning true/false\n> just like WALRead?\n\nAll of these things are functionally equivalent -- the same thing is\nhappening at the end. This is just a discussion about API style and how\nthat will interact with hypothetical callers that don't exist today.\nAnd it can also be easily changed later, so we aren't stuck with\nwhatever decision happens here.\n\n> \n> Imagine, implementing an extension (may be for fun or learning or\n> educational or production purposes) to read unflushed WAL directly\n> from WAL buffers using XLogReadFromBuffers as page_read callback with\n> xlogreader facility.\n\nThat makes sense, I didn't realize you intended to use this from an\nextension. I'm fine considering that as a separate patch that could\npotentially be committed soon after this one.\n\nI'd like some more details, but can I please just commit the basic\nfunctionality now-ish?\n\n> Tried to keep wal_writer quiet with wal_writer_delay=10000ms and\n> wal_writer_flush_after = 1GB to not flush WAL in the background.\n> Also, disabled autovacuum, and set checkpoint_timeout to a higher\n> value. All of this is done to generate minimal WAL so that WAL\n> buffers\n> don't get overwritten. Do you see any problems with it?\n\nMaybe check it against pg_current_wal_lsn(), and see if the Write\npointer moved ahead? Perhaps even have a (limited) loop that tries\nagain to catch it at the right time?\n\n> Can the WAL summarizer ever read the WAL on current TLI? I'm not so\n> sure about it, I haven't explored it in detail.\n\nLet's just not call XLogReadFromBuffers from there.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 26 Jan 2024 11:34:26 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Jan 27, 2024 at 1:04 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> All of these things are functionally equivalent -- the same thing is\n> happening at the end. This is just a discussion about API style and how\n> that will interact with hypothetical callers that don't exist today.\n> And it can also be easily changed later, so we aren't stuck with\n> whatever decision happens here.\n\nI'll leave that up to you. I'm okay either ways - 1) ensure the caller\ndoesn't use XLogReadFromBuffers, 2) XLogReadFromBuffers returning\nas-if nothing was read when in recovery or on a different timeline.\n\n> > Imagine, implementing an extension (may be for fun or learning or\n> > educational or production purposes) to read unflushed WAL directly\n> > from WAL buffers using XLogReadFromBuffers as page_read callback with\n> > xlogreader facility.\n>\n> That makes sense, I didn't realize you intended to use this fron an\n> extension. I'm fine considering that as a separate patch that could\n> potentially be committed soon after this one.\n\nYes, I've turned that into 0002 patch.\n\n> I'd like some more details, but can I please just commit the basic\n> functionality now-ish?\n\n+1.\n\n> > Tried to keep wal_writer quiet with wal_writer_delay=10000ms and\n> > wal_writer_flush_after = 1GB to not to flush WAL in the background.\n> > Also, disabled autovacuum, and set checkpoint_timeout to a higher\n> > value. All of this is done to generate minimal WAL so that WAL\n> > buffers\n> > don't get overwritten. Do you see any problems with it?\n>\n> Maybe check it against pg_current_wal_lsn(), and see if the Write\n> pointer moved ahead? Perhaps even have a (limited) loop that tries\n> again to catch it at the right time?\n\nAdding a loop seems to be reasonable here and done in v21-0003. 
Also,\nI've added wal_level = minimal per\nsrc/test/recovery/t/039_end_of_wal.pl introduced by commit bae868caf22\nwhich also tries to keep WAL activity to minimum.\n\n> > Can the WAL summarizer ever read the WAL on current TLI? I'm not so\n> > sure about it, I haven't explored it in detail.\n>\n> Let's just not call XLogReadFromBuffers from there.\n\nRemoved.\n\nPSA v21 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 27 Jan 2024 13:30:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hmm, this looks quite nice and simple. My only comment is that a\nsequence like this\n\n /* Read from WAL buffers, if available. */\n rbytes = XLogReadFromBuffers(&output_message.data[output_message.len],\n startptr, nbytes, xlogreader->seg.ws_tli);\n output_message.len += rbytes;\n startptr += rbytes;\n nbytes -= rbytes;\n\n if (!WALRead(xlogreader,\n &output_message.data[output_message.len],\n startptr,\n\nleaves you wondering if WALRead() should be called at all or not, in the\ncase when all bytes were read by XLogReadFromBuffers. I think in many\ncases what's going to happen is that nbytes is going to be zero, and\nthen WALRead is going to return having done nothing in its inner loop.\nI think this warrants a comment somewhere. Alternatively, we could\nshort-circuit the 'if' expression so that WALRead() is not called in\nthat case (but I'm not sure it's worth the loss of code clarity).\n\nAlso, but this is really quite minor, it seems sad to add more functions\nwith the prefix XLog, when we have renamed things to use the prefix WAL,\nand we have kept the old names only to avoid backpatchability issues.\nI mean, if we have WALRead() already, wouldn't it make perfect sense to\nname the new routine WALReadFromBuffers?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n\n",
"msg_date": "Tue, 30 Jan 2024 18:31:41 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Jan 30, 2024 at 11:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Hmm, this looks quite nice and simple.\n\nThanks for looking at it.\n\n> My only comment is that a\n> sequence like this\n>\n> /* Read from WAL buffers, if available. */\n> rbytes = XLogReadFromBuffers(&output_message.data[output_message.len],\n> startptr, nbytes, xlogreader->seg.ws_tli);\n> output_message.len += rbytes;\n> startptr += rbytes;\n> nbytes -= rbytes;\n>\n> if (!WALRead(xlogreader,\n> &output_message.data[output_message.len],\n> startptr,\n>\n> leaves you wondering if WALRead() should be called at all or not, in the\n> case when all bytes were read by XLogReadFromBuffers. I think in many\n> cases what's going to happen is that nbytes is going to be zero, and\n> then WALRead is going to return having done nothing in its inner loop.\n> I think this warrants a comment somewhere. Alternatively, we could\n> short-circuit the 'if' expression so that WALRead() is not called in\n> that case (but I'm not sure it's worth the loss of code clarity).\n\nIt might help avoid a function call in case reading from WAL buffers\nsatisfies the read fully. And, it's not that clumsy with the change,\nsee following. I've changed it in the attached v22 patch set.\n\nif (nbytes > 0 &&\n !WALRead(xlogreader,\n\n> Also, but this is really quite minor, it seems sad to add more functions\n> with the prefix XLog, when we have renamed things to use the prefix WAL,\n> and we have kept the old names only to avoid backpatchability issues.\n> I mean, if we have WALRead() already, wouldn't it make perfect sense to\n> name the new routine WALReadFromBuffers?\n\nWALReadFromBuffers looks better. Used that in v22 patch.\n\nPlease see the attached v22 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 31 Jan 2024 14:30:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Looking at 0003, where an XXX comment is added about taking a spinlock\nto read LogwrtResult, I suspect the answer is probably not, because it\nis likely to slow down the other uses of LogwrtResult. But I wonder if\na better path forward would be to base further work on my older\nuncommitted patch to make LogwrtResult use atomics. With that, you\nwouldn't have to block others in order to read the value. I last posted\nthat patch in [1] in case you're curious.\n\n[1] https://postgr.es/m/20220728065920.oleu2jzsatchakfj@alvherre.pgsql\n\nThe reason I abandoned that patch is that the performance problem that I\nwas fixing no longer existed -- it was fixed in a different way.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Wed, 31 Jan 2024 10:31:29 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Jan 31, 2024 at 3:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Looking at 0003, where an XXX comment is added about taking a spinlock\n> to read LogwrtResult, I suspect the answer is probably not, because it\n> is likely to slow down the other uses of LogwrtResult.\n\nWe avoided keeping LogwrtResult latest as the current callers for\nWALReadFromBuffers() all determine the flush LSN using\nGetFlushRecPtr(), see comment #4 from\nhttps://www.postgresql.org/message-id/CALj2ACV%3DC1GZT9XQRm4iN1NV1T%3DhLA_hsGWNx2Y5-G%2BmSwdhNg%40mail.gmail.com.\n\n> But I wonder if\n> a better path forward would be to base further work on my older\n> uncommitted patch to make LogwrtResult use atomics. With that, you\n> wouldn't have to block others in order to read the value. I last posted\n> that patch in [1] in case you're curious.\n>\n> [1] https://postgr.es/m/20220728065920.oleu2jzsatchakfj@alvherre.pgsql\n>\n> The reason I abandoned that patch is that the performance problem that I\n> was fixing no longer existed -- it was fixed in a different way.\n\nNice. I'll respond in that thread. FWIW, there's been a recent\nattempt at turning unloggedLSN to 64-bit atomic -\nhttps://commitfest.postgresql.org/46/4330/ and that might need\npg_atomic_monotonic_advance_u64. I guess we would have to bring your\npatch and the unloggedLSN into a single thread to have a better\ndiscussion.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 31 Jan 2024 17:06:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, 2024-01-31 at 14:30 +0530, Bharath Rupireddy wrote:\n> Please see the attached v22 patch set.\n\nCommitted 0001.\n\nFor 0002 & 0003, I'd like more clarity on how they will actually be\nused by an extension.\n\nFor 0004, we need to resolve why callers are using XLOG_BLCKSZ and we\ncan fix that independently, as discussed here:\n\nhttps://www.postgresql.org/message-id/CALj2ACV=C1GZT9XQRm4iN1NV1T=hLA_hsGWNx2Y5-G+mSwdhNg@mail.gmail.com\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 12 Feb 2024 11:33:24 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-12 11:33:24 -0800, Jeff Davis wrote:\n> On Wed, 2024-01-31 at 14:30 +0530, Bharath Rupireddy wrote:\n> > Please see the attached v22 patch set.\n>\n> Committed 0001.\n\nYay, I think this is very cool. There are plenty other improvements than can\nbe based on this...\n\n\nOne thing I'm a bit confused in the code is the following:\n\n+ /*\n+ * Don't read past the available WAL data.\n+ *\n+ * Check using local copy of LogwrtResult. Ordinarily it's been updated by\n+ * the caller when determining how far to read; but if not, it just means\n+ * we'll read less data.\n+ *\n+ * XXX: the available WAL could be extended to the WAL insert pointer by\n+ * calling WaitXLogInsertionsToFinish().\n+ */\n+ upto = Min(startptr + count, LogwrtResult.Write);\n+ nbytes = upto - startptr;\n\nShouldn't it pretty much be a bug to ever encounter this? There aren't\nequivalent checks in WALRead(), so any user of WALReadFromBuffers() that then\nfalls back to WALRead() is just going to send unwritten data.\n\nISTM that this should be an assertion or error.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Feb 2024 12:18:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, 2024-02-12 at 12:18 -0800, Andres Freund wrote:\n> + upto = Min(startptr + count, LogwrtResult.Write);\n> + nbytes = upto - startptr;\n> \n> Shouldn't it pretty much be a bug to ever encounter this?\n\nIn the current code it's impossible, though Bharath hinted at an\nextension which could reach that path.\n\nWhat I committed was a bit of a compromise -- earlier versions of the\npatch supported reading right up to the Insert pointer (which requires\na call to WaitXLogInsertionsToFinish()). I wasn't ready to commit that\ncode without seeing a more about how that would be used, but I thought\nit was reasonable to have some simple code in there to allow reading up\nto the Write pointer.\n\nIt seems closer to the structure that we will ultimately need to\nreplicate unflushed data, right?\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/CALj2ACW65mqn6Ukv57SqDTMzAJgd1N_AdQtDgy+gMDqu6v618Q@mail.gmail.com\n\n\n",
"msg_date": "Mon, 12 Feb 2024 12:46:00 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-12 12:46:00 -0800, Jeff Davis wrote:\n> On Mon, 2024-02-12 at 12:18 -0800, Andres Freund wrote:\n> > +��� upto = Min(startptr + count, LogwrtResult.Write);\n> > +��� nbytes = upto - startptr;\n> >\n> > Shouldn't it pretty much be a bug to ever encounter this?\n>\n> In the current code it's impossible, though Bharath hinted at an\n> extension which could reach that path.\n>\n> What I committed was a bit of a compromise -- earlier versions of the\n> patch supported reading right up to the Insert pointer (which requires\n> a call to WaitXLogInsertionsToFinish()). I wasn't ready to commit that\n> code without seeing a more about how that would be used, but I thought\n> it was reasonable to have some simple code in there to allow reading up\n> to the Write pointer.\n\nI doubt there's a sane way to use WALRead() without *first* ensuring that the\nrange of data is valid. I think we're better of moving that responsibility\nexplicitly to the caller and adding an assertion verifying that.\n\n\n> It seems closer to the structure that we will ultimately need to\n> replicate unflushed data, right?\n\nIt doesn't really seem like a necessary, or even particularly useful,\npart. You couldn't just call WALRead() for that, since the caller would need\nto know the range up to which WAL is valid but not yet flushed as well. Thus\nthe caller would need to first use WaitXLogInsertionsToFinish() or something\nlike it anyway - and then there's no point in doing the WALRead() anymore.\n\nNote that for replicating unflushed data, we *still* might need to fall back\nto reading WAL data from disk. In which case not asserting in WALRead() would\njust make it hard to find bugs, because not using WaitXLogInsertionsToFinish()\nwould appear to work as long as data is in wal buffers, but as soon as we'd\nfall back to on-disk (but unflushed) data, we'd send bogus WAL.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Feb 2024 15:36:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, 2024-02-12 at 11:33 -0800, Jeff Davis wrote:\n> For 0002 & 0003, I'd like more clarity on how they will actually be\n> used by an extension.\n\nIn patch 0002, I'm concerned about calling\nWaitXLogInsertionsToFinish(). It loops through all the locks, but\ndoesn't have any early return path or advance any state.\n\nSo if it's repeatedly called with the same or similar values it seems\nlike it would be doing a lot of extra work.\n\nI'm not sure of the best fix. We could add something to LogwrtResult to\ntrack a new LSN that represents the highest known point where all\ninserters are finished (in other words, the latest return value of\nWaitXLogInsertionsToFinish()). That seems invasive, though.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 12 Feb 2024 15:56:19 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-12 15:56:19 -0800, Jeff Davis wrote:\n> On Mon, 2024-02-12 at 11:33 -0800, Jeff Davis wrote:\n> > For 0002 & 0003, I'd like more clarity on how they will actually be\n> > used by an extension.\n>\n> In patch 0002, I'm concerned about calling\n> WaitXLogInsertionsToFinish(). It loops through all the locks, but\n> doesn't have any early return path or advance any state.\n\nI doubt it'd be too bad - we call that at much much higher frequency during\nwrite heavy OLTP workloads (c.f. XLogFlush()). It can be a performance issue\nthere, but only after increasing NUM_XLOGINSERT_LOCKS - before that the\nlimited number of writers is the limit. Compared to that walsender shouldn't\nbe a significant factor.\n\nHowever, I think it's a very bad idea to call WALReadFromBuffers() from\nWALReadFromBuffers(). This needs to be at the caller, not down in\nWALReadFromBuffers().\n\nI don't see why we would want to weaken the error condition in\nWaitXLogInsertionsToFinish() - I suspect it'd not work correctly to wait for\ninsertions that aren't yet in progress and it just seems like an API misuse.\n\n\n> So if it's repeatedly called with the same or similar values it seems like\n> it would be doing a lot of extra work.\n>\n> I'm not sure of the best fix. We could add something to LogwrtResult to\n> track a new LSN that represents the highest known point where all\n> inserters are finished (in other words, the latest return value of\n> WaitXLogInsertionsToFinish()). That seems invasive, though.\n\nFWIW, I think LogwrtResult is an anti-pattern, perhaps introduced due to\nmisunderstanding how cache coherency works. It's not fundamentally faster to\naccess non-shared memory. It'd make far more sense to allow lock-free access\nto the shared LogwrtResult and\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Feb 2024 16:11:50 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Mon, 2024-02-12 at 15:36 -0800, Andres Freund wrote:\n> \n> It doesn't really seem like a necessary, or even particularly useful,\n> part. You couldn't just call WALRead() for that, since the caller\n> would need\n> to know the range up to which WAL is valid but not yet flushed as\n> well. Thus\n> the caller would need to first use WaitXLogInsertionsToFinish() or\n> something\n> like it anyway - and then there's no point in doing the WALRead()\n> anymore.\n\nI follow until the last part. Did you mean \"and then there's no point\nin doing the WaitXLogInsertionsToFinish() in WALReadFromBuffers()\nanymore\"?\n\nFor now, should I assert that the requested WAL data is before the\nFlush pointer or assert that it's before the Write pointer?\n\n> Note that for replicating unflushed data, we *still* might need to\n> fall back\n> to reading WAL data from disk. In which case not asserting in\n> WALRead() would\n> just make it hard to find bugs, because not using\n> WaitXLogInsertionsToFinish()\n> would appear to work as long as data is in wal buffers, but as soon\n> as we'd\n> fall back to on-disk (but unflushed) data, we'd send bogus WAL.\n\nThat makes me wonder whether my previous idea[1] might matter: when\nsome buffers have been evicted, should WALReadFromBuffers() keep going\nthrough the loop and return the end portion of the requested data\nrather than the beginning?\n\nWe can sort that out when we get closer to replicating unflushed WAL.\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/2b36bf99e762e65db0dafbf8d338756cf5fa6ece.camel@j-davis.com\n\n\n",
"msg_date": "Mon, 12 Feb 2024 17:33:24 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 1:03 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> For 0004, we need to resolve why callers are using XLOG_BLCKSZ and we\n> can fix that independently, as discussed here:\n>\n> https://www.postgresql.org/message-id/CALj2ACV=C1GZT9XQRm4iN1NV1T=hLA_hsGWNx2Y5-G+mSwdhNg@mail.gmail.com\n\nThanks. I started a new thread for this -\nhttps://www.postgresql.org/message-id/CALj2ACWBRFac2TingD3PE3w2EBHXUHY3%3DAEEZPJmqhpEOBGExg%40mail.gmail.com.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Feb 2024 15:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 13, 2024 at 5:06 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> I doubt there's a sane way to use WALRead() without *first* ensuring that the\n> range of data is valid. I think we're better of moving that responsibility\n> explicitly to the caller and adding an assertion verifying that.\n>\n> It doesn't really seem like a necessary, or even particularly useful,\n> part. You couldn't just call WALRead() for that, since the caller would need\n> to know the range up to which WAL is valid but not yet flushed as well. Thus\n> the caller would need to first use WaitXLogInsertionsToFinish() or something\n> like it anyway - and then there's no point in doing the WALRead() anymore.\n>\n> Note that for replicating unflushed data, we *still* might need to fall back\n> to reading WAL data from disk. In which case not asserting in WALRead() would\n> just make it hard to find bugs, because not using WaitXLogInsertionsToFinish()\n> would appear to work as long as data is in wal buffers, but as soon as we'd\n> fall back to on-disk (but unflushed) data, we'd send bogus WAL.\n\nCallers of WALRead() do a good amount of work to figure out what's\nbeen flushed out but they read the un-flushed and/or invalid data see\nthe comment [1] around WALRead() call sites as well as a recent thread\n[2] for more details.\n\nIIUC, here's the summary of the discussion that has happened so far:\na) If only replicating flushed data, then ensure all the WALRead()\ncallers read how much ever is valid out of startptr+count. Fix\nprovided in [2] can help do that.\nb) If only replicating flushed data, then ensure all the\nWALReadFromBuffers() callers read how much ever is valid out of\nstartptr+count. 
Current and expected WALReadFromBuffers() callers will\nanyway determine how much of it is flushed and can validly be read.\nc) If planning to replicate unflushed data, then ensure all the\nWALRead() callers wait until startptr+count is past the current insert\nposition with WaitXLogInsertionsToFinish().\nd) If planning to replicate unflushed data, then ensure all the\nWALReadFromBuffers() callers wait until startptr+count is past the\ncurrent insert position with WaitXLogInsertionsToFinish().\n\nAdding an assertion or error in WALReadFromBuffers() for ensuring the\ncallers do follow the above set of rules is easy. We can just do\nAssert(startptr+count <= LogwrtResult.Flush).\n\nHowever, adding a similar assertion or error in WALRead() gets\ntrickier as it's being called from many places - walsenders, backends,\nexternal tools etc. even when the server is in recovery. Therefore,\ndetermining the actual valid LSN is a bit of a challenge.\n\nWhat I think is the best way:\n- Try and get the fix provided for (a) at [2].\n- Implement both (c) and (d).\n- Have the assertion in WALReadFromBuffers() ensuring the callers wait\nuntil startptr+count is past the current insert position with\nWaitXLogInsertionsToFinish().\n- Have a comment around WALRead() to ensure the callers are requesting\nthe WAL that's written to the disk because it's hard to determine\nwhat's written to disk as this gets called in many scenarios - when\nserver is in recovery, for walsummarizer etc.\n- In the new test module, demonstrate how one can implement reading\nunflushed data with WALReadFromBuffers() and/or WALRead() +\nWaitXLogInsertionsToFinish().\n\nThoughts?\n\n[1]\n/*\n * Even though we just determined how much of the page can be validly read\n * as 'count', read the whole page anyway. 
It's guaranteed to be\n * zero-padded up to the page boundary if it's incomplete.\n */\nif (!WALRead(state, cur_page, targetPagePtr, XLOG_BLCKSZ, tli,\n &errinfo))\n\n[2] https://www.postgresql.org/message-id/CALj2ACWBRFac2TingD3PE3w2EBHXUHY3%3DAEEZPJmqhpEOBGExg%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Feb 2024 22:47:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Attached 2 patches.\n\nPer Andres's suggestion, 0001 adds an:\n Assert(startptr + count <= LogwrtResult.Write)\n\nThough if we want to allow the caller (e.g. in an extension) to\ndetermine the valid range, perhaps using WaitXLogInsertionsToFinish(),\nthen the check is wrong. Maybe we should just get rid of that code\nentirely and trust the caller to request a reasonable range?\n\nOn Mon, 2024-02-12 at 17:33 -0800, Jeff Davis wrote:\n> That makes me wonder whether my previous idea[1] might matter: when\n> some buffers have been evicted, should WALReadFromBuffers() keep\n> going\n> through the loop and return the end portion of the requested data\n> rather than the beginning?\n> [1] \n> https://www.postgresql.org/message-id/2b36bf99e762e65db0dafbf8d338756cf5fa6ece.camel@j-davis.com\n\n0002 is to illustrate the above idea. It's a strange API so I don't\nintend to commit it in this form, but I think we will ultimately need\nto do something like it when we want to replicate unflushed data.\n\nThe idea is that data past the Write pointer is always (and only)\navailable in the WAL buffers, so WALReadFromBuffers() should always\nreturn it. That way we can always safely fall through to ordinary\nWALRead(), which can only see before the Write pointer. There's also\ndata before the Write pointer that could be in the WAL buffers, and we\nmight as well copy that, too, if it's not evicted.\n\nIf some buffers are evicted, it will fill in the *end* of the buffer,\nleaving a gap at the beginning. The nice thing is that if there is any\ngap, it will be before the Write pointer, so we can always fall back to\nWALRead() to fill the gap and it should always succeed.\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 13 Feb 2024 17:29:47 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, 2024-02-13 at 22:47 +0530, Bharath Rupireddy wrote:\n> c) If planning to replicate unflushed data, then ensure all the\n> WALRead() callers wait until startptr+count is past the current\n> insert\n> position with WaitXLogInsertionsToFinish().\n\nWALRead() can't read past the Write pointer, so there's no point in\ncalling WaitXLogInsertionsToFinish(), right?\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 13 Feb 2024 17:32:30 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "Hi,\n\nOn 2024-02-12 17:33:24 -0800, Jeff Davis wrote:\n> On Mon, 2024-02-12 at 15:36 -0800, Andres Freund wrote:\n> >\n> > It doesn't really seem like a necessary, or even particularly useful,\n> > part. You couldn't just call WALRead() for that, since the caller\n> > would need\n> > to know the range up to which WAL is valid but not yet flushed as\n> > well. Thus\n> > the caller would need to first use WaitXLogInsertionsToFinish() or\n> > something\n> > like it anyway - and then there's no point in doing the WALRead()\n> > anymore.\n>\n> I follow until the last part. Did you mean \"and then there's no point\n> in doing the WaitXLogInsertionsToFinish() in WALReadFromBuffers()\n> anymore\"?\n\nYes, not sure what happened in my brain there.\n\n\n> For now, should I assert that the requested WAL data is before the\n> Flush pointer or assert that it's before the Write pointer?\n\nYes, I think that'd be good.\n\n\n> > Note that for replicating unflushed data, we *still* might need to\n> > fall back\n> > to reading WAL data from disk. In which case not asserting in\n> > WALRead() would\n> > just make it hard to find bugs, because not using\n> > WaitXLogInsertionsToFinish()\n> > would appear to work as long as data is in wal buffers, but as soon\n> > as we'd\n> > fall back to on-disk (but unflushed) data, we'd send bogus WAL.\n>\n> That makes me wonder whether my previous idea[1] might matter: when\n> some buffers have been evicted, should WALReadFromBuffers() keep going\n> through the loop and return the end portion of the requested data\n> rather than the beginning?\n\nI still doubt that that will help very often, but it'll take some\nexperimentation to figure it out, I guess.\n\n\n> We can sort that out when we get closer to replicating unflushed WAL.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 13 Feb 2024 18:55:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 6:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Attached 2 patches.\n>\n> Per Andres's suggestion, 0001 adds an:\n> Assert(startptr + count <= LogwrtResult.Write)\n>\n> Though if we want to allow the caller (e.g. in an extension) to\n> determine the valid range, perhaps using WaitXLogInsertionsToFinish(),\n> then the check is wrong.\n\nRight.\n\n> Maybe we should just get rid of that code\n> entirely and trust the caller to request a reasonable range?\n\nI'd suggest we strike a balance here - error out in assert builds if\nstartptr+count is past the current insert position and trust the\ncallers for production builds. It has a couple of advantages over\ndoing just Assert(startptr + count <= LogwrtResult.Write):\n1) It allows the caller to read unflushed WAL directly from WAL\nbuffers, see the attached 0005 for an example.\n2) All the existing callers where WALReadFromBuffers() is thought to\nbe used are ensuring WAL availability by reading upto the flush\nposition so no problem with it.\n\nAlso, a note before WALRead() stating the caller must request the WAL\nat least that's written out (upto LogwrtResult.Write). I'm not so sure\nabout this, perhaps, we don't need this comment at all.\n\nHere, I'm with v23 patch set:\n\n0001 - Adds assertion in WALReadFromBuffers() to ensure the requested\nWAL isn't beyond the current insert position.\n0002 - Adds a new test module to demonstrate how one can use\nWALReadFromBuffers() ensuring WaitXLogInsertionsToFinish() if need be.\n0003 - Uses WALReadFromBuffers in more places like logical walsenders\nand backends.\n0004 - Removes zero-padding related stuff as discussed in\nhttps://www.postgresql.org/message-id/CALj2ACWBRFac2TingD3PE3w2EBHXUHY3=AEEZPJmqhpEOBGExg@mail.gmail.com.\nThis is needed in this patch set otherwise the assertion added in 0001\nfails after 0003.\n0005 - Adds a page_read callback for reading from WAL buffers in the\nnew test module added in 0002. 
Also, adds tests.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Feb 2024 13:08:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, 2024-02-16 at 13:08 +0530, Bharath Rupireddy wrote:\n> I'd suggest we strike a balance here - error out in assert builds if\n> startptr+count is past the current insert position and trust the\n> callers for production builds.\n\nIt's not reasonable to have divergent behavior between assert-enabled\nbuilds and production. I think for now I will just commit the Assert as\nAndres suggested until we work out a few more details.\n\nOne idea is to use Álvaro's work to eliminate the spinlock, and then\nadd a variable to represent the last known point returned by\nWaitXLogInsertionsToFinish(). Then we can cheaply Assert that the\ncaller requested something before that point.\n\n> Here, I'm with v23 patch set:\n\nThank you, I'll look at these.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 16 Feb 2024 09:31:18 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Fri, Feb 16, 2024 at 11:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > Here, I'm with v23 patch set:\n>\n> Thank you, I'll look at these.\n\nThanks. Here's the v24 patch set after rebasing.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 17 Feb 2024 10:27:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Sat, Feb 17, 2024 at 10:27 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 16, 2024 at 11:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > > Here, I'm with v23 patch set:\n> >\n> > Thank you, I'll look at these.\n>\n> Thanks. Here's the v24 patch set after rebasing.\n\nRan pgperltidy on the new TAP test file added. Please see the attached\nv25 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 20 Feb 2024 11:40:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 11:40 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Ran pgperltidy on the new TAP test file added. Please see the attached\n> v25 patch set.\n\nPlease find the v26 patches after rebasing.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 21 Mar 2024 23:33:12 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Mar 21, 2024 at 11:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Please find the v26 patches after rebasing.\n\nCommit f3ff7bf83b added a check in WALReadFromBuffers to ensure the\nrequested WAL is not past the WAL that's copied to WAL buffers. So,\nI've dropped v26-0001 patch.\n\nI've attached v27 patches for further review.\n\n0001 adds a test module to demonstrate reading from WAL buffers\npatterns like the caller ensuring the requested WAL is fully copied to\nWAL buffers using WaitXLogInsertionsToFinish and an implementation of\nxlogreader page_read\ncallback to read unflushed/not-yet-flushed WAL directly from WAL buffers.\n\n0002 Use WALReadFromBuffers in more places like for logical\nwalsenders, logical decoding functions, backends reading WAL.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 8 Apr 2024 10:47:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "\n\n> On 8 Apr 2024, at 08:17, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\nHi Bharath!\n\nAs far as I understand CF entry [0] is committed? I understand that there are some open followups, but I just want to determine correct CF item status...\n\nThanks!\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/47/4060/\n\n",
"msg_date": "Tue, 9 Apr 2024 09:33:49 +0300",
"msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Tue, Apr 09, 2024 at 09:33:49AM +0300, Andrey M. Borodin wrote:\n> As far as I understand CF entry [0] is committed? I understand that\n> there are some open followups, but I just want to determine correct\n> CF item status... \n\nSo much work has happened on this thread with things that has been\ncommitted, so switching the entry to committed makes sense to me. I\nhave just done that.\n\nBharath, could you create a new thread with the new things you are\nproposing? All that should be v18 work, particularly v27-0002:\nhttps://www.postgresql.org/message-id/CALj2ACWCibnX2jcnRreBHFesFeQ6vbKiFstML=w-JVTvUKD_EA@mail.gmail.com\n--\nMichael",
"msg_date": "Thu, 11 Apr 2024 10:01:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Thu, Apr 11, 2024 at 6:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Apr 09, 2024 at 09:33:49AM +0300, Andrey M. Borodin wrote:\n> > As far as I understand CF entry [0] is committed? I understand that\n> > there are some open followups, but I just want to determine correct\n> > CF item status...\n>\n> So much work has happened on this thread with things that has been\n> committed, so switching the entry to committed makes sense to me. I\n> have just done that.\n>\n> Bharath, could you create a new thread with the new things you are\n> proposing? All that should be v18 work, particularly v27-0002:\n> https://www.postgresql.org/message-id/CALj2ACWCibnX2jcnRreBHFesFeQ6vbKiFstML=w-JVTvUKD_EA@mail.gmail.com\n\nThanks. I started a new thread\nhttps://www.postgresql.org/message-id/CALj2ACVfF2Uj9NoFy-5m98HNtjHpuD17EDE9twVeJng-jTAe7A%40mail.gmail.com.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 24 Apr 2024 21:46:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
},
{
"msg_contents": "On Wed, Apr 24, 2024 at 09:46:20PM +0530, Bharath Rupireddy wrote:\n> Thanks. I started a new thread\n> https://www.postgresql.org/message-id/CALj2ACVfF2Uj9NoFy-5m98HNtjHpuD17EDE9twVeJng-jTAe7A%40mail.gmail.com.\n\nCool, thanks.\n--\nMichael",
"msg_date": "Thu, 25 Apr 2024 08:36:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improve WALRead() to suck data directly from WAL buffers when\n possible"
}
] |
[
{
"msg_contents": "If we intend to generate a memoize node atop a path, we need some kind\nof cache key. Currently we search the path's parameterized clauses and\nits parent's lateral_vars for that. ISTM this is not sufficient because\ntheir might be lateral references derived from PlaceHolderVars, which\ncan also act as cache key but we neglect to take into consideration. As\nan example, consider\n\ncreate table t(a int);\ninsert into t values (1), (1), (1), (1);\nanalyze t;\n\nexplain (costs off) select * from t t1 left join lateral (select t1.a as\nt1a, t2.a as t2a from t t2) s on true where s.t1a = s.t2a;\n QUERY PLAN\n----------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Seq Scan on t t2\n Filter: (t1.a = a)\n(4 rows)\n\nWe cannot find available cache keys for memoize node because the inner\nside has neither parameterized path clauses nor lateral_vars. However\nif we are able to look in the PHV for lateral references, we will find\nthe cache key 't1.a'.\n\nActually we do have checked PHVs for lateral references, earlier in\ncreate_lateral_join_info. But that time we only marked lateral_relids\nand direct_lateral_relids, without remembering the lateral expressions.\nSo I'm wondering whether we can fix that by fetching Vars (or PHVs) of\nlateral references within PlaceHolderVars and remembering them in the\nbaserel's lateral_vars.\n\nAttach a draft patch to show my thoughts.\n\nThanks\nRichard",
"msg_date": "Fri, 9 Dec 2022 17:16:47 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 5:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Actually we do have checked PHVs for lateral references, earlier in\n> create_lateral_join_info. But that time we only marked lateral_relids\n> and direct_lateral_relids, without remembering the lateral expressions.\n> So I'm wondering whether we can fix that by fetching Vars (or PHVs) of\n> lateral references within PlaceHolderVars and remembering them in the\n> baserel's lateral_vars.\n>\n> Attach a draft patch to show my thoughts.\n>\n\nUpdate the patch to fix test failures.\n\nThanks\nRichard",
"msg_date": "Fri, 30 Dec 2022 11:00:25 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Fri, 30 Dec 2022 at 16:00, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Fri, Dec 9, 2022 at 5:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>>\n>> Actually we do have checked PHVs for lateral references, earlier in\n>> create_lateral_join_info. But that time we only marked lateral_relids\n>> and direct_lateral_relids, without remembering the lateral expressions.\n>> So I'm wondering whether we can fix that by fetching Vars (or PHVs) of\n>> lateral references within PlaceHolderVars and remembering them in the\n>> baserel's lateral_vars.\n>>\n>> Attach a draft patch to show my thoughts.\n\nI'm surprised to see that it's only Memoize that ever makes use of\nlateral_vars. I'd need a bit more time to process your patch, but one\nadditional thought I had was that I wonder if the following code is\nstill needed in nodeMemoize.c\n\nif (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n cache_purge_all(node);\n\nIdeally, that would be an Assert failure, but possibly we should\nprobably still call cache_purge_all(node) after Assert(false) so that\nat least we'd not start returning wrong results if we've happened to\nmiss other cache keys. I thought maybe something like:\n\nif (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n{\n /*\n * Really the planner should have added all the possible parameters to\n * the cache keys, so let's Assert fail here so we get the memo to fix\n * that can fix that. On production builds, we'd better purge the\n * cache to account for the changed parameter value.\n */\n Assert(false);\n\n cache_purge_all(node);\n}\n\nI've not run the tests to ensure we don't get an Assert failure with\nthat, however.\n\nAll that cache_purge_all code added in 411137a42 likely was an\nincorrect fix for what you've raised here, but it's maybe a good\nfailsafe to keep around even if we think we've now found all possible\nparameters that can invalidate the memorized results.\n\nDavid\n\n\n",
"msg_date": "Tue, 24 Jan 2023 15:07:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 10:07 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I'm surprised to see that it's only Memoize that ever makes use of\n> lateral_vars. I'd need a bit more time to process your patch, but one\n> additional thought I had was that I wonder if the following code is\n> still needed in nodeMemoize.c\n>\n> if (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n> cache_purge_all(node);\n>\n> Ideally, that would be an Assert failure, but possibly we should\n> probably still call cache_purge_all(node) after Assert(false) so that\n> at least we'd not start returning wrong results if we've happened to\n> miss other cache keys. I thought maybe something like:\n\n\nHmm, I think this code is still needed because the parameter contained\nin the subplan below a Memoize node may come from parent plan, as in the\ntest query added in 411137a42.\n\nEXPLAIN (COSTS OFF)\nSELECT unique1 FROM tenk1 t0\nWHERE unique1 < 3\n AND EXISTS (\n SELECT 1 FROM tenk1 t1\n INNER JOIN tenk1 t2 ON t1.unique1 = t2.hundred\n WHERE t0.ten = t1.twenty AND t0.two <> t2.four OFFSET 0);\n QUERY PLAN\n----------------------------------------------------------------\n Index Scan using tenk1_unique1 on tenk1 t0\n Index Cond: (unique1 < 3)\n Filter: (SubPlan 1)\n SubPlan 1\n -> Nested Loop\n -> Index Scan using tenk1_hundred on tenk1 t2\n Filter: (t0.two <> four)\n -> Memoize\n Cache Key: t2.hundred\n Cache Mode: logical\n -> Index Scan using tenk1_unique1 on tenk1 t1\n Index Cond: (unique1 = t2.hundred)\n Filter: (t0.ten = twenty)\n(13 rows)\n\nCurrently we don't have a way to add Params of uplevel vars to Memoize\ncache keys. So I think we still need to call cache_purge_all() each\ntime uplevel Params change.\n\nThanks\nRichard\n\nOn Tue, Jan 24, 2023 at 10:07 AM David Rowley <dgrowleyml@gmail.com> wrote:\nI'm surprised to see that it's only Memoize that ever makes use of\nlateral_vars. 
I'd need a bit more time to process your patch, but one\nadditional thought I had was that I wonder if the following code is\nstill needed in nodeMemoize.c\n\nif (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n cache_purge_all(node);\n\nIdeally, that would be an Assert failure, but possibly we should\nprobably still call cache_purge_all(node) after Assert(false) so that\nat least we'd not start returning wrong results if we've happened to\nmiss other cache keys. I thought maybe something like: Hmm, I think this code is still needed because the parameter containedin the subplan below a Memoize node may come from parent plan, as in thetest query added in 411137a42.EXPLAIN (COSTS OFF)SELECT unique1 FROM tenk1 t0WHERE unique1 < 3 AND EXISTS ( SELECT 1 FROM tenk1 t1 INNER JOIN tenk1 t2 ON t1.unique1 = t2.hundred WHERE t0.ten = t1.twenty AND t0.two <> t2.four OFFSET 0); QUERY PLAN---------------------------------------------------------------- Index Scan using tenk1_unique1 on tenk1 t0 Index Cond: (unique1 < 3) Filter: (SubPlan 1) SubPlan 1 -> Nested Loop -> Index Scan using tenk1_hundred on tenk1 t2 Filter: (t0.two <> four) -> Memoize Cache Key: t2.hundred Cache Mode: logical -> Index Scan using tenk1_unique1 on tenk1 t1 Index Cond: (unique1 = t2.hundred) Filter: (t0.ten = twenty)(13 rows)Currently we don't have a way to add Params of uplevel vars to Memoizecache keys. So I think we still need to call cache_purge_all() eachtime uplevel Params change.ThanksRichard",
"msg_date": "Mon, 30 Jan 2023 16:55:31 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 11:00 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Fri, Dec 9, 2022 at 5:16 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>> Actually we do have checked PHVs for lateral references, earlier in\n>> create_lateral_join_info. But that time we only marked lateral_relids\n>> and direct_lateral_relids, without remembering the lateral expressions.\n>> So I'm wondering whether we can fix that by fetching Vars (or PHVs) of\n>> lateral references within PlaceHolderVars and remembering them in the\n>> baserel's lateral_vars.\n>>\n>> Attach a draft patch to show my thoughts.\n>>\n>\n> Update the patch to fix test failures.\n>\n\nRebase the patch on HEAD as cfbot reminds.\n\nThanks\nRichard",
"msg_date": "Tue, 4 Jul 2023 15:33:22 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 12:33 AM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> Rebase the patch on HEAD as cfbot reminds.\n\nAll of this seems good to me. I can reproduce the problem, tests pass,\nand the change is sensible as far as I can tell.\n\nOne adjacent thing I noticed is that when we renamed \"Result Cache\" to\n\"Memoize\" this bit of the docs in config.sgml got skipped (probably\nbecause of the line break):\n\n Hash tables are used in hash joins, hash-based aggregation, result\n cache nodes and hash-based processing of <literal>IN</literal>\n subqueries.\n\nI believe that should say \"memoize nodes\" instead. Is it worth\ncorrecting that as part of this patch? Or perhaps another one?\n\nRegards,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com\n\n\n",
"msg_date": "Fri, 7 Jul 2023 22:24:27 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Rebase the patch on HEAD as cfbot reminds.\n\nThis fix seems like a mess. The function that is in charge of filling\nRelOptInfo.lateral_vars is extract_lateral_references; or at least\nthat was how it was done up to now. Can't we compute these additional\nreferences there? If not, maybe we ought to just merge\nextract_lateral_references into create_lateral_join_info, rather than\nhaving the responsibility split. I also wonder whether this change\nisn't creating hidden dependencies on RTE order (which would likely be\nbugs), since create_lateral_join_info itself examines the lateral_vars\nlists as it walks the rtable.\n\nMore generally, it's not clear to me why we should need to look inside\nlateral PHVs in the first place. Wouldn't the lateral PHV itself\nserve fine as a cache key?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jul 2023 13:28:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Sun, 9 Jul 2023 at 05:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> More generally, it's not clear to me why we should need to look inside\n> lateral PHVs in the first place. Wouldn't the lateral PHV itself\n> serve fine as a cache key?\n\nFor Memoize specifically, I purposefully made it so the expression was\nused as a cache key rather than extracting the Vars from it and using\nthose. The reason for that was that the expression may result in\nfewer distinct values to cache tuples for. For example:\n\ncreate table t1 (a int primary key);\ncreate table t2 (a int primary key);\n\ncreate statistics on (a % 10) from t2;\ninsert into t2 select x from generate_Series(1,1000000)x;\ninsert into t1 select x from generate_Series(1,1000000)x;\n\nanalyze t1,t2;\nexplain (analyze, costs off) select * from t1 inner join t2 on t1.a=t2.a%10;\n QUERY PLAN\n--------------------------------------------------------------------------------------------\n Nested Loop (actual time=0.015..212.798 rows=900000 loops=1)\n -> Seq Scan on t2 (actual time=0.006..33.479 rows=1000000 loops=1)\n -> Memoize (actual time=0.000..0.000 rows=1 loops=1000000)\n Cache Key: (t2.a % 10)\n Cache Mode: logical\n Hits: 999990 Misses: 10 Evictions: 0 Overflows: 0 Memory Usage: 1kB\n -> Index Only Scan using t1_pkey on t1 (actual\ntime=0.001..0.001 rows=1 loops=10)\n Index Cond: (a = (t2.a % 10))\n Heap Fetches: 0\n Planning Time: 0.928 ms\n Execution Time: 229.621 ms\n(11 rows)\n\n\n",
"msg_date": "Sun, 9 Jul 2023 13:11:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Sat, 8 Jul 2023 at 17:24, Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n> One adjacent thing I noticed is that when we renamed \"Result Cache\" to\n> \"Memoize\" this bit of the docs in config.sgml got skipped (probably\n> because of the line break):\n>\n> Hash tables are used in hash joins, hash-based aggregation, result\n> cache nodes and hash-based processing of <literal>IN</literal>\n> subqueries.\n>\n> I believe that should say \"memoize nodes\" instead. Is it worth\n> correcting that as part of this patch? Or perhaps another one?\n\nI just pushed a fix for this. Thanks for reporting it.\n\nDavid\n\n\n",
"msg_date": "Sun, 9 Jul 2023 16:17:03 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > Rebase the patch on HEAD as cfbot reminds.\n>\n> This fix seems like a mess. The function that is in charge of filling\n> RelOptInfo.lateral_vars is extract_lateral_references; or at least\n> that was how it was done up to now. Can't we compute these additional\n> references there? If not, maybe we ought to just merge\n> extract_lateral_references into create_lateral_join_info, rather than\n> having the responsibility split. I also wonder whether this change\n> isn't creating hidden dependencies on RTE order (which would likely be\n> bugs), since create_lateral_join_info itself examines the lateral_vars\n> lists as it walks the rtable.\n\n\nYeah, you're right. Currently all RelOptInfo.lateral_vars are filled in\nextract_lateral_references. And then in create_lateral_join_info these\nlateral_vars, together with all PlaceHolderInfos, are examined to\ncompute the per-base-relation lateral refs relids. However, in the\nprocess of extract_lateral_references, it's possible that we create new\nPlaceHolderInfos. So I think it may not be a good idea to extract the\nlateral references within PHVs there. But I agree with you that it's\nalso not a good idea to compute these additional lateral Vars within\nPHVs in create_lateral_join_info as the patch does. Actually with the\npatch I find that with PHVs that are due to be evaluated at a join we\nmay get a problematic plan. 
For instance\n\nexplain (costs off)\nselect * from t t1 left join lateral\n(select t1.a as t1a, t2.a as t2a from t t2 join t t3 on true) s on true\nwhere s.t1a = s.t2a;\n             QUERY PLAN\n------------------------------------\n Nested Loop\n   ->  Seq Scan on t t1\n   ->  Nested Loop\n         Join Filter: (t1.a = t2.a)\n         ->  Seq Scan on t t2\n         ->  Memoize\n               Cache Key: t1.a\n               Cache Mode: binary\n               ->  Seq Scan on t t3\n(9 rows)\n\nThere are no lateral refs in the subtree of the Memoize node, so it\nshould be a Materialize node rather than a Memoize node.  This is caused\nby the fact that, for a PHV that is due to be evaluated at a join, we fill\nits lateral refs in each baserel in the join, which is wrong.\n\nSo I'm wondering if it'd be better that we move all this logic of\ncomputing additional lateral references within PHVs to get_memoize_path,\nwhere we can examine only PHVs that are evaluated at innerrel.  And\nconsidering that these lateral refs are only used by Memoize, it seems\nmore sensible to compute them there.  But I'm a little worried that\ndoing this would make get_memoize_path too expensive.\n\nPlease see v4 patch for this change.\n\nThanks\nRichard",
"msg_date": "Thu, 13 Jul 2023 15:12:29 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> More generally, it's not clear to me why we should need to look inside\n> lateral PHVs in the first place. Wouldn't the lateral PHV itself\n> serve fine as a cache key?\n\n\nDo you mean we use the lateral PHV directly as a cache key? Hmm, it\nseems to me that we'd have problem if the PHV references rels that are\ninside the PHV's syntactic scope. For instance\n\nselect * from t t1 left join\n lateral (select t1.a+t2.a as t1a, t2.a as t2a from t t2) s on true\nwhere s.t1a = s.t2a;\n\nThe PHV references t1.a so it's lateral. But it also references t2.a,\nso if we use the PHV itself as cache key, the plan would look like\n\n QUERY PLAN\n----------------------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Memoize\n Cache Key: (t1.a + t2.a)\n Cache Mode: binary\n -> Seq Scan on t t2\n Filter: ((t1.a + a) = a)\n(7 rows)\n\nwhich is an invalid plan as the cache key contains t2.a.\n\nThanks\nRichard\n\nOn Sun, Jul 9, 2023 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nMore generally, it's not clear to me why we should need to look inside\nlateral PHVs in the first place. Wouldn't the lateral PHV itself\nserve fine as a cache key?Do you mean we use the lateral PHV directly as a cache key? Hmm, itseems to me that we'd have problem if the PHV references rels that areinside the PHV's syntactic scope. For instanceselect * from t t1 left join lateral (select t1.a+t2.a as t1a, t2.a as t2a from t t2) s on truewhere s.t1a = s.t2a;The PHV references t1.a so it's lateral. But it also references t2.a,so if we use the PHV itself as cache key, the plan would look like QUERY PLAN---------------------------------------- Nested Loop -> Seq Scan on t t1 -> Memoize Cache Key: (t1.a + t2.a) Cache Mode: binary -> Seq Scan on t t2 Filter: ((t1.a + a) = a)(7 rows)which is an invalid plan as the cache key contains t2.a.ThanksRichard",
"msg_date": "Thu, 13 Jul 2023 17:21:03 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 12:17 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I just pushed a fix for this. Thanks for reporting it.\n\n\nBTW, I noticed a typo in the comment inside paraminfo_get_equal_hashops.\n\n foreach(lc, innerrel->lateral_vars)\n {\n Node *expr = (Node *) lfirst(lc);\n TypeCacheEntry *typentry;\n\n /* Reject if there are any volatile functions in PHVs */\n if (contain_volatile_functions(expr))\n {\n list_free(*operators);\n list_free(*param_exprs);\n return false;\n }\n\nThe expressions in RelOptInfo.lateral_vars are not necessarily from\nPHVs. For instance\n\nexplain (costs off)\nselect * from t t1 join\n lateral (select * from t t2 where t1.a = t2.a offset 0) on true;\n QUERY PLAN\n----------------------------------\n Nested Loop\n -> Seq Scan on t t1\n -> Memoize\n Cache Key: t1.a\n Cache Mode: binary\n -> Seq Scan on t t2\n Filter: (t1.a = a)\n(7 rows)\n\nThe lateral Var 't1.a' comes from the lateral subquery, not PHV.\n\nThis seems a typo from 63e4f13d. How about we change it to the below?\n\n- /* Reject if there are any volatile functions in PHVs */\n+ /* Reject if there are any volatile functions in lateral vars */\n\nThanks\nRichard\n\nOn Sun, Jul 9, 2023 at 12:17 PM David Rowley <dgrowleyml@gmail.com> wrote:\nI just pushed a fix for this. Thanks for reporting it.BTW, I noticed a typo in the comment inside paraminfo_get_equal_hashops. foreach(lc, innerrel->lateral_vars) { Node *expr = (Node *) lfirst(lc); TypeCacheEntry *typentry; /* Reject if there are any volatile functions in PHVs */ if (contain_volatile_functions(expr)) { list_free(*operators); list_free(*param_exprs); return false; }The expressions in RelOptInfo.lateral_vars are not necessarily fromPHVs. 
For instanceexplain (costs off)select * from t t1 join lateral (select * from t t2 where t1.a = t2.a offset 0) on true; QUERY PLAN---------------------------------- Nested Loop -> Seq Scan on t t1 -> Memoize Cache Key: t1.a Cache Mode: binary -> Seq Scan on t t2 Filter: (t1.a = a)(7 rows)The lateral Var 't1.a' comes from the lateral subquery, not PHV.This seems a typo from 63e4f13d. How about we change it to the below?- /* Reject if there are any volatile functions in PHVs */+ /* Reject if there are any volatile functions in lateral vars */ThanksRichard",
"msg_date": "Thu, 13 Jul 2023 17:29:42 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 3:12 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> So I'm wondering if it'd be better that we move all this logic of\n> computing additional lateral references within PHVs to get_memoize_path,\n> where we can examine only PHVs that are evaluated at innerrel. And\n> considering that these lateral refs are only used by Memoize, it seems\n> more sensible to compute them there. But I'm a little worried that\n> doing this would make get_memoize_path too expensive.\n>\n> Please see v4 patch for this change.\n>\n\nI'd like to add that not checking PHVs for lateral references can lead\nto performance regressions with Memoize node. For instance,\n\n-- by default, enable_memoize is on\nregression=# explain (analyze, costs off) select * from tenk1 t1 left join\nlateral (select *, t1.four as x from tenk1 t2) s on t1.two = s.two;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.028..105245.547 rows=50000000 loops=1)\n -> Seq Scan on tenk1 t1 (actual time=0.011..3.760 rows=10000 loops=1)\n -> Memoize (actual time=0.010..8.051 rows=5000 loops=10000)\n Cache Key: t1.two\n Cache Mode: logical\n Hits: 0 Misses: 10000 Evictions: 9999 Overflows: 0 Memory\nUsage: 1368kB\n -> Seq Scan on tenk1 t2 (actual time=0.004..3.594 rows=5000\nloops=10000)\n Filter: (t1.two = two)\n Rows Removed by Filter: 5000\n Planning Time: 1.943 ms\n Execution Time: 106806.043 ms\n(11 rows)\n\n-- turn enable_memoize off\nregression=# set enable_memoize to off;\nSET\nregression=# explain (analyze, costs off) select * from tenk1 t1 left join\nlateral (select *, t1.four as x from tenk1 t2) s on t1.two = s.two;\n QUERY PLAN\n-----------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.048..44831.707 rows=50000000 loops=1)\n -> Seq Scan on tenk1 t1 (actual time=0.026..2.340 rows=10000 loops=1)\n -> Seq Scan on tenk1 t2 
(actual time=0.002..3.282 rows=5000 loops=10000)\n         Filter: (t1.two = two)\n         Rows Removed by Filter: 5000\n Planning Time: 0.641 ms\n Execution Time: 46472.609 ms\n(7 rows)\n\nAs we can see, when Memoize enabled (which is the default setting), the\nexecution time increases by around 129.83%, indicating a significant\nperformance regression.\n\nThis is caused by that we fail to realize that 't1.four', which is from\nthe PHV, should be included in the cache keys.  And that makes us have\nto purge the entire cache every time we get a new outer tuple.  This is\nalso implied by the abnormal Memoize runtime stats:\n\n         Hits: 0  Misses: 10000  Evictions: 9999  Overflows: 0\n\nThis regression can be fixed by the patch here.  After applying the v4\npatch, 't1.four' is added into the cache keys, and the same query runs\nmuch faster.\n\nregression=# explain (analyze, costs off) select * from tenk1 t1 left join\nlateral (select *, t1.four as x from tenk1 t2) s on t1.two = s.two;\n                                   QUERY PLAN\n---------------------------------------------------------------------------------\n Nested Loop Left Join (actual time=0.060..20446.004 rows=50000000 loops=1)\n   ->  Seq Scan on tenk1 t1 (actual time=0.027..5.845 rows=10000 loops=1)\n   ->  Memoize (actual time=0.001..0.209 rows=5000 loops=10000)\n         Cache Key: t1.two, t1.four\n         Cache Mode: binary\n         Hits: 9996  Misses: 4  Evictions: 0  Overflows: 0  Memory Usage:\n5470kB\n         ->  Seq Scan on tenk1 t2 (actual time=0.005..3.659 rows=5000\nloops=4)\n               Filter: (t1.two = two)\n               Rows Removed by Filter: 5000\n Planning Time: 0.579 ms\n Execution Time: 21756.598 ms\n(11 rows)\n\nComparing the first plan and the third plan, this query runs ~5 times\nfaster.\n\nThanks\nRichard",
"msg_date": "Mon, 25 Dec 2023 15:01:51 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
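[Editorial note] Richard's numbers above have a simple mechanical explanation that can be reproduced outside the planner. The following self-contained Python sketch (illustrative only — this is not PostgreSQL code, and all names are made up) models a Memoize-style cache over the same parameter shape (`two = i % 2`, `four = i % 4`, 10000 outer rows): leaving out a parameter the inner side depends on forces a full purge on every outer row, while the complete key reproduces the 9996-hits/4-misses pattern of the fixed plan.

```python
# Illustrative model of Memoize cache-key completeness (not PostgreSQL code).
# The inner scan really depends on both 'two' and 'four'; caching only on
# 'two' means cached results go stale whenever 'four' changes.

def run(outer_rows, cache_keys):
    cache = {}
    hits = misses = 0
    last_uncached = object()  # sentinel, differs from any real tuple
    for row in outer_rows:
        # Parameters the inner side actually depends on:
        params = {"two": row["two"], "four": row["four"]}
        key = tuple(params[k] for k in cache_keys)
        uncached = tuple(v for k, v in params.items() if k not in cache_keys)
        if uncached != last_uncached:
            # Analogous to cache_purge_all(): a parameter the cache key
            # cannot see has changed, so no cached entry can be trusted.
            cache.clear()
            last_uncached = uncached
        if key in cache:
            hits += 1
        else:
            misses += 1
            cache[key] = ("inner result", params["two"], params["four"])
    return hits, misses

outer = [{"two": i % 2, "four": i % 4} for i in range(10000)]
print(run(outer, cache_keys=("two",)))          # → (0, 10000): purge every row
print(run(outer, cache_keys=("two", "four")))   # → (9996, 4)
```

With the incomplete key the model never scores a hit, matching the "Hits: 0, Misses: 10000, Evictions: 9999" stats in the first plan; with the complete key it matches the "Hits: 9996, Misses: 4" stats after the patch.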
{
"msg_contents": "On Mon, Dec 25, 2023 at 3:01 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Thu, Jul 13, 2023 at 3:12 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>> So I'm wondering if it'd be better that we move all this logic of\n>> computing additional lateral references within PHVs to get_memoize_path,\n>> where we can examine only PHVs that are evaluated at innerrel. And\n>> considering that these lateral refs are only used by Memoize, it seems\n>> more sensible to compute them there. But I'm a little worried that\n>> doing this would make get_memoize_path too expensive.\n>>\n>> Please see v4 patch for this change.\n>>\n>\n> I'd like to add that not checking PHVs for lateral references can lead\n> to performance regressions with Memoize node.\n>\n\nThe v4 patch does not apply any more. I've rebased it on master.\nNothing else has changed.\n\nThanks\nRichard",
"msg_date": "Fri, 2 Feb 2024 17:18:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Fri, Feb 2, 2024 at 5:18 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> The v4 patch does not apply any more. I've rebased it on master.\n> Nothing else has changed.\n>\n\nHere is another rebase over master so it applies again. I also added a\ncommit message to help review. Nothing else has changed.\n\nThanks\nRichard",
"msg_date": "Mon, 18 Mar 2024 16:36:01 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 4:36 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> Here is another rebase over master so it applies again. I also added a\n> commit message to help review. Nothing else has changed.\n\nAFAIU currently we do not add Memoize nodes on top of join relation\npaths. This is because the ParamPathInfos for join relation paths do\nnot maintain ppi_clauses, as the set of relevant clauses varies\ndepending on how the join is formed. In addition, joinrels do not\nmaintain lateral_vars. So we do not have a way to extract cache keys\nfrom joinrels.\n\n(Besides, there are places where the code doesn't cope with Memoize path\non top of a joinrel path, such as in get_param_path_clause_serials.)\n\nTherefore, when extracting lateral references within PlaceHolderVars,\nthere is no need to consider those that are due to be evaluated at\njoinrels.\n\nHence, here is v7 patch for that. In passing, this patch also includes\na comment explaining that Memoize nodes are currently not added on top\nof join relation paths (maybe we should have a separate patch for this?).\n\nThanks\nRichard",
"msg_date": "Tue, 18 Jun 2024 09:47:27 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On 6/18/24 08:47, Richard Guo wrote:\n> On Mon, Mar 18, 2024 at 4:36 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> Here is another rebase over master so it applies again.  I also added a\n>> commit message to help review.  Nothing else has changed.\n> \n> AFAIU currently we do not add Memoize nodes on top of join relation\n> paths.  This is because the ParamPathInfos for join relation paths do\n> not maintain ppi_clauses, as the set of relevant clauses varies\n> depending on how the join is formed.  In addition, joinrels do not\n> maintain lateral_vars.  So we do not have a way to extract cache keys\n> from joinrels.\n> \n> (Besides, there are places where the code doesn't cope with Memoize path\n> on top of a joinrel path, such as in get_param_path_clause_serials.)\n> \n> Therefore, when extracting lateral references within PlaceHolderVars,\n> there is no need to consider those that are due to be evaluated at\n> joinrels.\n> \n> Hence, here is v7 patch for that.  In passing, this patch also includes\n> a comment explaining that Memoize nodes are currently not added on top\n> of join relation paths (maybe we should have a separate patch for this?).\nHi,\nI have reviewed v7 of the patch. This improvement is good enough to be \napplied, though. 
Here are some notes:\n\nComment may be rewritten for clarity:\n\"Determine if the clauses in param_info and innerrel's lateral_vars\" -\nI'd replace lateral_vars with 'lateral references' to combine in one \nphrase PHV from rel and root->placeholder_list sources.\n\nI wonder if we can add whole PHV expression instead of the Var (as \ndiscussed above) just under some condition:\nif (!bms_intersect(pull_varnos(root, (Node *) phinfo->ph_var->phexpr), \ninnerrelids))\n{\n  // Add whole PHV\n}\nelse\n{\n  // Add only pulled vars\n}\n\nI got the point about Memoize over join, but as a join still calls \nreplace_nestloop_params to replace parameters in its clauses, why not to \ninvent something similar to find Memoize keys inside specific JoinPath \nnode? It is not the issue of this patch, though - but is it doable?\n\nIMO, the code:\nif (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n  cache_purge_all(node);\n\nis a good place to check an assertion: is it really the parent query \nparameters that make a difference between memoize keys and node list of \nparameters?\n\nGenerally, this patch looks good for me to be committed.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Fri, 28 Jun 2024 21:14:53 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
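[Editorial note] The purge condition Andrei quotes is pure set logic. A minimal Python sketch of just that rule, with sets standing in for the planner's Bitmapsets (illustrative only, not PostgreSQL code):

```python
def must_purge_all(chg_param, key_param_ids):
    # Mirrors bms_nonempty_difference(outerPlan->chgParam, node->keyparamids):
    # if any parameter that changed is NOT one of the cache keys, cached
    # results may depend on it invisibly, so the whole cache must go.
    return bool(set(chg_param) - set(key_param_ids))

print(must_purge_all({1, 2}, {1, 2, 3}))  # → False: all changes are cache keys
print(must_purge_all({1, 4}, {1, 2, 3}))  # → True: param 4 is unseen by the key
```

When the difference is empty, only the entries matching the changed key values need to be invalidated; the full purge is reserved for changes the key cannot account for.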
{
"msg_contents": "On Fri, Jun 28, 2024 at 10:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n> I have reviewed v7 of the patch. This improvement is good enough to be\n> applied, thought.\n\nThank you for reviewing this patch!\n\n> Comment may be rewritten for clarity:\n> \"Determine if the clauses in param_info and innerrel's lateral_vars\" -\n> I'd replace lateral_vars with 'lateral references' to combine in one\n> phrase PHV from rel and root->placeholder_list sources.\n\nMakes sense. I ended up using 'innerrel's lateral vars' to include\nboth the lateral Vars/PHVs found in innerrel->lateral_vars and those\nextracted from within PlaceHolderVars that are due to be evaluated at\ninnerrel.\n\n> I wonder if we can add whole PHV expression instead of the Var (as\n> discussed above) just under some condition:\n> if (!bms_intersect(pull_varnos(root, (Node *) phinfo->ph_var->phexpr),\n> innerrelids))\n> {\n> // Add whole PHV\n> }\n> else\n> {\n> // Add only pulled vars\n> }\n\nGood point. After considering it further, I think we should do this.\nAs David explained, this can be beneficial in cases where the whole\nexpression results in fewer distinct values to cache tuples for.\n\n> I got the point about Memoize over join, but as a join still calls\n> replace_nestloop_params to replace parameters in its clauses, why not to\n> invent something similar to find Memoize keys inside specific JoinPath\n> node? 
It is not the issue of this patch, though - but is it doable?\n\nI don't think it's impossible to do, but I'm skeptical that there's an\neasy way to identify all the cache keys for joinrels, without having\navailable ppi_clauses and lateral_vars.\n\n> IMO, the code:\n> if (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n> cache_purge_all(node);\n>\n> is a good place to check an assertion: is it really the parent query\n> parameters that make a difference between memoize keys and node list of\n> parameters?\n\nI don't think we have enough info available here to identify which\nparams within outerPlan->chgParam are from outer levels. Maybe we can\nstore root->outer_params in the MemoizeState node to help with this\nassertion, but I'm not sure if it's worth the trouble.\n\nAttached is an updated version of this patch.\n\nThanks\nRichard",
"msg_date": "Thu, 11 Jul 2024 17:18:39 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On 7/11/24 16:18, Richard Guo wrote:\n> On Fri, Jun 28, 2024 at 10:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:\n>> I got the point about Memoize over join, but as a join still calls\n>> replace_nestloop_params to replace parameters in its clauses, why not to\n>> invent something similar to find Memoize keys inside specific JoinPath\n>> node? It is not the issue of this patch, though - but is it doable?\n> \n> I don't think it's impossible to do, but I'm skeptical that there's an\n> easy way to identify all the cache keys for joinrels, without having\n> available ppi_clauses and lateral_vars.\nOk\n\n> \n>> IMO, the code:\n>> if (bms_nonempty_difference(outerPlan->chgParam, node->keyparamids))\n>> cache_purge_all(node);\n>>\n>> is a good place to check an assertion: is it really the parent query\n>> parameters that make a difference between memoize keys and node list of\n>> parameters?\n> \n> I don't think we have enough info available here to identify which\n> params within outerPlan->chgParam are from outer levels. Maybe we can\n> store root->outer_params in the MemoizeState node to help with this\n> assertion, but I'm not sure if it's worth the trouble.\nGot it\n> \n> Attached is an updated version of this patch.\nI'm not sure about stability of output format of AVG aggregate across \ndifferent platforms. Maybe better to return the result of comparison \nbetween the AVG() and expected value?\n\n-- \nregards, Andrei Lepikhov\n\n\n\n",
"msg_date": "Fri, 12 Jul 2024 10:18:10 +0700",
"msg_from": "Andrei Lepikhov <lepihov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Fri, Jul 12, 2024 at 11:18 AM Andrei Lepikhov <lepihov@gmail.com> wrote:\n> I'm not sure about stability of output format of AVG aggregate across\n> different platforms. Maybe better to return the result of comparison\n> between the AVG() and expected value?\n\nI don't think this is a problem. AFAIK we use AVG() a lot in the\nexisting test cases.\n\nThanks\nRichard\n\n\n",
"msg_date": "Fri, 12 Jul 2024 15:11:19 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
},
{
"msg_contents": "On Thu, Jul 11, 2024 at 5:18 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> Attached is an updated version of this patch.\n\nI've pushed this patch. Thanks for all the reviews.\n\nThanks\nRichard\n\n\n",
"msg_date": "Mon, 15 Jul 2024 09:36:37 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Check lateral references within PHVs for memoize cache keys"
}
]
[
{
"msg_contents": "In the thread about user space SCRAM functions [0] I mentioned that it might be\nwise to consider raising our SCRAM iteration count. The iteration count is an\nimportant defence against brute-force attacks.\n\nOur current hardcoded value for iteration count is 4096, which is based on a\nrecommendation from RFC 7677. This is however the lower end of the scale, and\nis related to computing power in 2015 generation handheld devices. The\nrelevant paragraph in section 4 of RFC 7677 [1] reads:\n\n \"As a rule of thumb, the hash iteration-count should be such that a modern\n machine will take 0.1 seconds to perform the complete algorithm; however,\n this is unlikely to be practical on mobile devices and other relatively low-\n performance systems. At the time this was written, the rule of thumb gives\n around 15,000 iterations required; however, a hash iteration- count of 4096\n takes around 0.5 seconds on current mobile handsets.\"\n\nIt goes on to say:\n\n \"..the recommendation of this specification is that the hash iteration- count\n SHOULD be at least 4096, but careful consideration ought to be given to\n using a significantly higher value, particularly where mobile use is less\n important.\"\n\nSelecting 4096 was thus a conservative take already in 2015, and is now very\nmuch so. On my 2020-vintage Macbook I need ~200k iterations to consume 0.1\nseconds (in a build with assertions). Calculating tens of thousands of hashes\nper second on a consumer laptop at a 4096 iteration count is no stretch. 
A\nbrief look shows that MongoDB has a minimum of 5000 with a default of 15000\n[2]; Kafka has a minimum of 4096 [3].\n\nMaking the iteration count a configurable setting would allow installations to\nraise the iteration count to strengthen against brute force attacks, while\nstill supporting those with lower end clients who prefer the trade-off of\nshorter authentication times.\n\nThe attached introduces a scram_iteration_count GUC with a default of 15000\n(still conservative, from RFC7677) and a minimum of 4096. Since the iterations\nare stored per secret it can be altered with backwards compatibility.\n\nClientside the count is still at 4096 to limit the scope of this patch a bit.\nFor psql it would mean adding options to \\password which should be a thread of\nits own. For libpq one can imagine specifying this in the algorithm parameter\npassed to PQencryptPasswordConn like \"scram-sha-256:100000\" or something\nsimilar. It's premature to pursue those without agreement that we should make\nthe count configurable though. If this patch is accepted, I'll work on that\nnext.\n\nThoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] fce7228e-d0d6-64a1-3dcb-bba85c2fac85@postgresql.org\n[1] https://www.rfc-editor.org/rfc/rfc7677#section-4\n[2] https://www.mongodb.com/docs/manual/reference/parameters/#mongodb-parameter-param.scramSHA256IterationCount\n[3] https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html#security-considerations-for-sasl-scram",
"msg_date": "Fri, 9 Dec 2022 11:55:07 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Raising the SCRAM iteration count"
},
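[Editorial note] The RFC's 0.1-second rule of thumb is easy to check on any machine, since SCRAM-SHA-256 secrets are derived with PBKDF2-HMAC-SHA-256. A rough calibration sketch using only the Python standard library (illustrative only — absolute numbers differ from the server's C implementation and across hardware; the password and salt are made up):

```python
import hashlib
import time

def time_pbkdf2(iterations, password=b"secret", salt=b"0123456789abcdef"):
    # Derive a 32-byte key the way SCRAM-SHA-256 does, and time it.
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return time.perf_counter() - start, key

# Cost grows linearly with the iteration count, so the count matching a
# 0.1 s budget can be estimated from a single measurement:
elapsed, _ = time_pbkdf2(15000)
print("approx. iterations for a 0.1 s budget:", int(15000 * 0.1 / elapsed))
```

The derived key depends on the iteration count, which is why the count is stored alongside each SCRAM secret and can be changed per password without breaking existing ones.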
{
"msg_contents": "On 09/12/2022 12:55, Daniel Gustafsson wrote:\n> In the thread about user space SCRAM functions [0] I mentioned that it might be\n> wise to consider raising our SCRAM iteration count. The iteration count is an\n> important defence against brute-force attacks.\n> \n> Our current hardcoded value for iteration count is 4096, which is based on a\n> recommendation from RFC 7677. This is however the lower end of the scale, and\n> is related to computing power in 2015 generation handheld devices. The\n> relevant paragraph in section 4 of RFC 7677 [1] reads:\n> \n> \"As a rule of thumb, the hash iteration-count should be such that a modern\n> machine will take 0.1 seconds to perform the complete algorithm; however,\n> this is unlikely to be practical on mobile devices and other relatively low-\n> performance systems. At the time this was written, the rule of thumb gives\n> around 15,000 iterations required; however, a hash iteration- count of 4096\n> takes around 0.5 seconds on current mobile handsets.\"\n> \n> It goes on to say:\n> \n> \"..the recommendation of this specification is that the hash iteration- count\n> SHOULD be at least 4096, but careful consideration ought to be given to\n> using a significantly higher value, particularly where mobile use is less\n> important.\"\n> \n> Selecting 4096 was thus a conservative take already in 2015, and is now very\n> much so. On my 2020-vintage Macbook I need ~200k iterations to consume 0.1\n> seconds (in a build with assertions). Calculating tens of thousands of hashes\n> per second on a consumer laptop at a 4096 iteration count is no stretch. 
A\n> brief look shows that MongoDB has a minimum of 5000 with a default of 15000\n> [2]; Kafka has a minimum of 4096 [3].\n> \n> Making the iteration count a configurable setting would allow installations to\n> raise the iteration count to strengthen against brute force attacks, while\n> still supporting those with lower end clients who prefer the trade-off of\n> shorter authentication times.\n> \n> The attached introduces a scram_iteration_count GUC with a default of 15000\n> (still conservative, from RFC7677) and a minimum of 4096. Since the iterations\n> are stored per secret it can be altered with backwards compatibility.\n\nWe just had a discussion with a colleague about using a *smaller* \niteration count. Why? To make the connection startup faster. We're \nexperimenting with a client that runs in a Cloudflare worker, which is a \nwasm runtime with very small limits on how much CPU time you're allowed \nto use (without paying extra). And we know that the password is randomly \ngenerated and long enough. If I understand correctly, the point of \niterations is to slow down brute-force or dictionary attacks, but if the \npassword is strong enough to begin with, those attacks are not possible \nregardless of iteration count. So I would actually like to set the \nminimum iteration count all the way down to 1.\n\n- Heikki\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 17:50:00 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Fri, Dec 09, 2022 at 05:50:00PM +0200, Heikki Linnakangas wrote:\n>> The attached introduces a scram_iteration_count GUC with a default of 15000\n>> (still conservative, from RFC7677) and a minimum of 4096. Since the iterations\n>> are stored per secret it can be altered with backwards compatibility.\n> \n> We just had a discussion with a colleague about using a *smaller* iteration\n> count. Why? To make the connection startup faster. We're experimenting with\n> a client that runs in a Cloudflare worker, which is a wasm runtime with very\n> small limits on how much CPU time you're allowed to use (without paying\n> extra). And we know that the password is randomly generated and long enough.\n> If I understand correctly, the point of iterations is to slow down\n> brute-force or dictionary attacks, but if the password is strong enough to\n> begin with, those attacks are not possible regardless of iteration count. So\n> I would actually like to set the minimum iteration count all the way down to\n> 1.\n\nThis is the kind of thing that should be easily measurable with\npgbench -C and an empty script. How much difference are you seeing\nwith 1, 4096 and more than that?\n\nAll that comes down to provide more capability for the existing\nroutines in my opinion. So what if we finally extended with a new\nflavor PQencryptPasswordConn() able to get a list of options, say\nPQencryptPasswordConn() extended that has a string with all the\noptions? psql could use for \\password a grammar consistent with \\g,\nas of: \\password (iteration=4096, salt_length=123) PASS_STR\n\nNote that scram_build_secret() is already able to handle any iteration\ncount, even at 1, so IMO it is not a good idea to lower the default to\nbe so. I'd agree with Daniel to make it higher by default and follow\nthe RFCs, though like you I have wanted also in core much more control\nover that.\n--\nMichael",
"msg_date": "Sat, 10 Dec 2022 08:21:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
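[Editorial note] Whatever surface is chosen, an options string like the `\\password (iteration=4096, salt_length=123)` grammar Michael sketches is cheap to parse and validate. A hypothetical parser sketch (Python for brevity; the real implementation would be C in psql/libpq, and the option names here are only the thread's examples, not an actual API):

```python
def parse_password_options(value):
    """Parse 'iteration=4096, salt_length=16' into a dict of ints.

    Hypothetical grammar from the thread; not an actual libpq/psql feature.
    """
    options = {}
    for item in value.split(","):
        name, sep, raw = item.strip().partition("=")
        if not sep or not name:
            raise ValueError("expected name=value, got %r" % item)
        options[name] = int(raw)  # ValueError on non-numeric values
    return options

print(parse_password_options("iteration=4096, salt_length=16"))
# → {'iteration': 4096, 'salt_length': 16}
```

A C check hook would additionally enforce per-option minimums (e.g. the 4096 floor discussed above) before accepting the value.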
{
"msg_contents": "Hi,\n\nOn 2022-12-09 11:55:07 +0100, Daniel Gustafsson wrote:\n> Our current hardcoded value for iteration count is 4096, which is based on a\n> recommendation from RFC 7677. This is however the lower end of the scale, and\n> is related to computing power in 2015 generation handheld devices. The\n> relevant paragraph in section 4 of RFC 7677 [1] reads:\n> \n> \"As a rule of thumb, the hash iteration-count should be such that a modern\n> machine will take 0.1 seconds to perform the complete algorithm; however,\n> this is unlikely to be practical on mobile devices and other relatively low-\n> performance systems. At the time this was written, the rule of thumb gives\n> around 15,000 iterations required; however, a hash iteration- count of 4096\n> takes around 0.5 seconds on current mobile handsets.\"\n> \n> It goes on to say:\n> \n> \"..the recommendation of this specification is that the hash iteration- count\n> SHOULD be at least 4096, but careful consideration ought to be given to\n> using a significantly higher value, particularly where mobile use is less\n> important.\"\n> \n> Selecting 4096 was thus a conservative take already in 2015, and is now very\n> much so. On my 2020-vintage Macbook I need ~200k iterations to consume 0.1\n> seconds (in a build with assertions). Calculating tens of thousands of hashes\n> per second on a consumer laptop at a 4096 iteration count is no stretch. A\n> brief look shows that MongoDB has a minimum of 5000 with a default of 15000\n> [2]; Kafka has a minimum of 4096 [3].\n> \n> Making the iteration count a configurable setting would allow installations to\n> raise the iteration count to strengthen against brute force attacks, while\n> still supporting those with lower end clients who prefer the trade-off of\n> shorter authentication times.\n>\n> The attached introduces a scram_iteration_count GUC with a default of 15000\n> (still conservative, from RFC7677) and a minimum of 4096. 
Since the iterations\n> are stored per secret it can be altered with backwards compatibility.\n\nI am extremely doubtful it's a good idea to increase the default (if anything\nthe opposite). 0.1 seconds is many times the connection establishment\noverhead, even over network. I've seen users complain about postgres\nconnection establishment overhead being high, and it just turned out to be due\nto scram - yes, they ended up switching to md5, because that was the only\nviable alternative.\n\nPGPASSWORD=passme pgbench -n -C -f ~/tmp/select.sql -h 127.0.0.1 -T10 -U passme\n\nmd5: tps = 158.577609\nscram: tps = 38.196362\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Dec 2022 16:15:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 10 Dec 2022, at 01:15, Andres Freund <andres@anarazel.de> wrote:\n> On 2022-12-09 11:55:07 +0100, Daniel Gustafsson wrote:\n\n>> The attached introduces a scram_iteration_count GUC with a default of 15000\n>> (still conservative, from RFC7677) and a minimum of 4096. Since the iterations\n>> are stored per secret it can be altered with backwards compatibility.\n> \n> I am extremely doubtful it's a good idea to increase the default (if anything\n> the opposite). 0.1 seconds is many times the connection establishment\n> overhead, even over network. I've seen users complain about postgres\n> connection establishment overhead being high, and it just turned out to be due\n> to scram - yes, they ended up switching to md5, because that was the only\n> viable alternative.\n\nThat's a fair point. For the record I don't think we should raise the default\nto match 0.1 seconds, but we should make the option available to those who want\nit. If we provide a GUC for the iteration count which has a lower limit than\ntodays hardcoded value, then maybe we can help workloads with long-lived\nconnections who want increased on-disk safety as well as workloads where low\nconnection establishment is critical (or where the env is constrained like in\nHeikki's example).\n\n> PGPASSWORD=passme pgbench -n -C -f ~/tmp/select.sql -h 127.0.0.1 -T10 -U passme\n> \n> md5: tps = 158.577609\n> scram: tps = 38.196362\n\nLowering the minimum for scram_iteration_count I tried out the patch on a set\nof iteration counts of interest. Values are averaged over three runs, using\nthe same pgbench setup you had above with basically a noop select.sql. 
The\nrelative difference between the values is way off from your results, but I\nhaven't done much digging to figure that out yet (different OpenSSL versions\nmight be one factor).\n\nmd5:         tps = 154.052690\nscram 1:     tps = 150.060285\nscram 1024:  tps = 138.191224\nscram 4096:  tps = 115.197533\nscram 15000: tps = 75.156399\n\nFor the fun of it, 100000 iterations yields tps = 20.822393.\n\nSCRAM with an iteration count of 1 still provides a lot of benefits over md5,\nso if we can make those comparable in performance then that could be a way\nforward (with the tradeoffs properly documented).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 11 Dec 2022 00:46:23 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 12:46:23AM +0100, Daniel Gustafsson wrote:\n> SCRAM with an iteration count of 1 still provides a lot of benefits over md5,\n> so if we can make those comparable in performance then that could be a way\n> forward (with the tradeoffs properly documented).\n\nOkay, it looks like there is a wish to make that configurable anyway,\nand I have a few comments about that.\n\n     {\"scram_iteration_count\", PGC_SUSET, CONN_AUTH_AUTH,\n+         gettext_noop(\"Sets the iteration count for SCRAM secret generation.\"),\n+         NULL,\n+         GUC_NOT_IN_SAMPLE | GUC_SUPERUSER_ONLY\n+     },\n\nShouldn't this be user-settable as a PGC_USERSET rather than\nPGC_SUSET which would limit its updates to superusers?\n\nAs shaped, the GUC would not benefit \\\\password, and we should not\nencourage users to give a raw password over the wire if they\nwish to compute a verifier with a given iteration number.\nHence, wouldn't it be better to mark it as GUC_REPORT, and store its \nstatus in pg_conn@libpq-int.h in the same fashion as\ndefault_transaction_read_only and hot_standby?  This way,\nPQencryptPasswordConn() would be able to feed on it automatically\nrather than always assume the default implied by\npg_fe_scram_build_secret().\n--\nMichael",
"msg_date": "Sun, 11 Dec 2022 12:32:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On 12/9/22 7:15 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-12-09 11:55:07 +0100, Daniel Gustafsson wrote:\r\n>> Our current hardcoded value for iteration count is 4096, which is based on a\r\n>> recommendation from RFC 7677. This is however the lower end of the scale, and\r\n>> is related to computing power in 2015 generation handheld devices. The\r\n>> relevant paragraph in section 4 of RFC 7677 [1] reads:\r\n>>\r\n>> \"As a rule of thumb, the hash iteration-count should be such that a modern\r\n>> machine will take 0.1 seconds to perform the complete algorithm; however,\r\n>> this is unlikely to be practical on mobile devices and other relatively low-\r\n>> performance systems. At the time this was written, the rule of thumb gives\r\n>> around 15,000 iterations required; however, a hash iteration- count of 4096\r\n>> takes around 0.5 seconds on current mobile handsets.\"\r\n>>\r\n>> It goes on to say:\r\n>>\r\n>> \"..the recommendation of this specification is that the hash iteration- count\r\n>> SHOULD be at least 4096, but careful consideration ought to be given to\r\n>> using a significantly higher value, particularly where mobile use is less\r\n>> important.\"\r\n>>\r\n>> Selecting 4096 was thus a conservative take already in 2015, and is now very\r\n>> much so. On my 2020-vintage Macbook I need ~200k iterations to consume 0.1\r\n>> seconds (in a build with assertions). Calculating tens of thousands of hashes\r\n>> per second on a consumer laptop at a 4096 iteration count is no stretch. 
A\r\n>> brief look shows that MongoDB has a minimum of 5000 with a default of 15000\r\n>> [2]; Kafka has a minimum of 4096 [3].\r\n>>\r\n>> Making the iteration count a configurable setting would allow installations to\r\n>> raise the iteration count to strengthen against brute force attacks, while\r\n>> still supporting those with lower end clients who prefer the trade-off of\r\n>> shorter authentication times.\r\n>>\r\n>> The attached introduces a scram_iteration_count GUC with a default of 15000\r\n>> (still conservative, from RFC7677) and a minimum of 4096.  Since the iterations\r\n>> are stored per secret it can be altered with backwards compatibility.\r\n\r\nTo throw on a bit of paint, if we do change it, we should likely follow \r\nwhat would come out in a RFC.\r\n\r\nWhile the SCRAM-SHA-512 RFC is still in draft[1], the latest draft it \r\ncontains a \"SHOULD\" recommendation of 10000, which was bumped up from \r\n4096 in an earlier version of the draft:\r\n\r\n==snip==\r\nTherefore, the recommendation of this specification is that the hash \r\niteration- count SHOULD be at least 10000, but careful consideration \r\nought to be given to using a significantly higher value, particularly \r\nwhere mobile use is less important.¶\r\n==snip==\r\n\r\nI'm currently ambivalent (+0) on changing the default. I think giving \r\nthe user more control over iterations ([2], and follow up work to make \r\nit easier to set iteration count via client) can help with this.\r\n\r\nHowever, I do like the idea of a GUC.\r\n\r\n> I am extremely doubtful it's a good idea to increase the default (if anything\r\n> the opposite). 0.1 seconds is many times the connection establishment\r\n> overhead, even over network. 
I've seen users complain about postgres\r\n> connection establishment overhead being high, and it just turned out to be due\r\n> to scram - yes, they ended up switching to md5, because that was the only\r\n> viable alternative.\r\n\r\nUgh, I'd be curious to know how often that is the case. That said, I \r\nthink some of the above work could help with that.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://datatracker.ietf.org/doc/html/draft-melnikov-scram-sha-512\r\n[2] https://postgr.es/m/fce7228e-d0d6-64a1-3dcb-bba85c2fac85@postgresql.org/",
"msg_date": "Mon, 12 Dec 2022 09:47:56 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
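The 0.1-second rule of thumb discussed above is easy to reproduce on any machine. A rough benchmark sketch in Python — hashlib's PBKDF2-HMAC-SHA-256 is the same Hi() construction SCRAM-SHA-256 uses; the iteration counts are the ones mentioned in the thread, and actual timings will of course vary by hardware:

```python
import hashlib
import os
import time

def time_scram_hash(iterations, password=b"secret", salt=None):
    """Time one PBKDF2-HMAC-SHA-256 run, i.e. the Hi() step of SCRAM-SHA-256."""
    salt = salt if salt is not None else os.urandom(16)
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return key, time.perf_counter() - start

# RFC 7677 minimum, the proposed default, and roughly 0.1s on a 2020 laptop
for n in (4096, 15000, 200000):
    key, elapsed = time_scram_hash(n)
    print(f"{n:>7} iterations: {elapsed * 1000:6.1f} ms")
```

The per-authentication cost scales linearly with the iteration count, which is the whole trade-off being debated: each extra factor of N buys the same factor against offline brute force but also multiplies connection-establishment time.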
{
"msg_contents": "> On 12 Dec 2022, at 15:47, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n\n> To throw on a bit of paint, if we do change it, we should likely follow what would come out in a RFC.\n> \n> While the SCRAM-SHA-512 RFC is still in draft[1], the latest draft it contains a \"SHOULD\" recommendation of 10000, which was bumped up from 4096 in an earlier version of the draft:\n\nThis is however the draft for a different algorithm: SCRAM-SHA-512. We are\nsupporting SCRAM-SHA-256 which is defined in RFC7677. The slightly lower\nrecommendation there makes sense as SHA-512 is more computationally expensive\nthan SHA-256.\n\nIt does raise an interesting point though, if we in the future add suppprt for\nSCRAM-SHA-512 (which seems reasonable to do) it's not good enough to have a\nsingle GUC for SCRAM iterations; we'd need to be able to set the iteration\ncount per algorithm. I'll account for that when updating the patch downthread.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 12:17:58 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 12:17:58PM +0100, Daniel Gustafsson wrote:\n> It does raise an interesting point though, if we in the future add suppprt for\n> SCRAM-SHA-512 (which seems reasonable to do) it's not good enough to have a\n> single GUC for SCRAM iterations; we'd need to be able to set the iteration\n> count per algorithm. I'll account for that when updating the patch downthread.\n\nSo, you mean that the GUC should be named like password_iterations,\ntaking a grammar with a list like 'scram-sha-256=4096,algo2=5000'?\n--\nMichael",
"msg_date": "Wed, 14 Dec 2022 10:00:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
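Michael's suggested list grammar would be straightforward to parse. A minimal sketch in Python — the 'algo=count' list format and the `parse_iteration_guc` name are hypothetical, purely to illustrate the shape such a GUC parser would take:

```python
def parse_iteration_guc(value):
    """Parse a hypothetical 'algo=count,algo=count' GUC value into a dict."""
    counts = {}
    for item in filter(None, (part.strip() for part in value.split(","))):
        algo, sep, count = item.partition("=")
        if not sep or not count.strip().isdigit():
            raise ValueError(f"invalid list entry: {item!r}")
        counts[algo.strip()] = int(count)
    return counts

print(parse_iteration_guc("scram-sha-256=4096,algo2=5000"))
# {'scram-sha-256': 4096, 'algo2': 5000}
```

As the thread notes, PostgreSQL already has machinery for list-style GUC values, so the real cost of this design is not parsing but deciding whether the flexibility is worth the less obvious syntax.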
{
"msg_contents": "> On 14 Dec 2022, at 02:00, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Dec 13, 2022 at 12:17:58PM +0100, Daniel Gustafsson wrote:\n>> It does raise an interesting point though, if we in the future add suppprt for\n>> SCRAM-SHA-512 (which seems reasonable to do) it's not good enough to have a\n>> single GUC for SCRAM iterations; we'd need to be able to set the iteration\n>> count per algorithm. I'll account for that when updating the patch downthread.\n> \n> So, you mean that the GUC should be named like password_iterations,\n> taking a grammar with a list like 'scram-sha-256=4096,algo2=5000'?\n\nI was thinking about it but opted for the simpler approach of a GUC name with\nthe algorithm baked into it: scram_sha256_iterations. It doesn't seem all that\nlikely that we'll have more than two versions of SCRAM (sha256/sha512) so\nthe additional complexity doesn't seem worth it.\n\nThe attached v2 has the GUC rename and a change to GUC_REPORT such that the\nfrontend can use the real value rather than the default. I kept it for super\nusers so far, do you think it should be a user setting being somewhat sensitive? \n\nThe default in this version is rolled back to 4096 as there was pushback\nagainst raising it, and the lower limit is one in order to potentially assist\nsituations like the one Andres mentioned where md5 is used.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 14 Dec 2022 12:25:19 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On 12/14/22 6:25 AM, Daniel Gustafsson wrote:\r\n>> On 14 Dec 2022, at 02:00, Michael Paquier <michael@paquier.xyz> wrote:\r\n>>\r\n>> On Tue, Dec 13, 2022 at 12:17:58PM +0100, Daniel Gustafsson wrote:\r\n>>> It does raise an interesting point though, if we in the future add suppprt for\r\n>>> SCRAM-SHA-512 (which seems reasonable to do) it's not good enough to have a\r\n>>> single GUC for SCRAM iterations; we'd need to be able to set the iteration\r\n>>> count per algorithm. I'll account for that when updating the patch downthread.\r\n>>\r\n>> So, you mean that the GUC should be named like password_iterations,\r\n>> taking a grammar with a list like 'scram-sha-256=4096,algo2=5000'?\r\n> \r\n> I was thinking about it but opted for the simpler approach of a GUC name with\r\n> the algorithm baked into it: scram_sha256_iterations. It doesn't seem all that\r\n> likely that we'll have more than two versions of SCRAM (sha256/sha512) so\r\n> the additional complexity doesn't seem worth it.\r\n\r\nI would not rule this out. There is a RFC draft for SCRAM-SHA3-512[1].\r\n\r\nI do have mixed feelings on the 'x1=y1,x2=y2' style GUC, but we do have \r\nmachinery to handle it and it gives a bit more flexibility over how many \r\nSCRAM hash methods get added. I'd like to hear more feedback.\r\n\r\n(I don't know if there will be a world if we ever let users BYO-hash, \r\nbut that case may force separate GUCs anyway).\r\n\r\n[1] https://datatracker.ietf.org/doc/draft-melnikov-scram-sha3-512/\r\n\r\n> The attached v2 has the GUC rename and a change to GUC_REPORT such that the\r\n> frontend can use the real value rather than the default. I kept it for super\r\n> users so far, do you think it should be a user setting being somewhat sensitive?\r\n\r\nNo, because a user can set the number of iterations today if they build \r\ntheir own SCRAM secret. 
I think it's OK if they change it in a session.\r\n\r\nIf a superuser wants to enforce a minimum iteration count, they can \r\nwrite a password_check_hook. (Or we could add another GUC to enforce that).\r\n\r\n> The default in this version is rolled back to 4096 as there was pushback\r\n> against raising it, and the lower limit is one in order to potentially assist\r\n> situations like the one Andres mentioned where md5 is used.\r\n\r\nReviewing patch as is.\r\n\r\nSuggestion on text:\r\n\r\n==snip==\r\nThe number of computational iterations to perform when generating\r\na SCRAM-SHA-256 secret. The default is <literal>4096</literal>. A\r\nhigher number of iterations provides additional protection against\r\nbrute-force attacks on stored passwords, but makes authentication\r\nslower. Changing the value has no effect on previously created\r\nSCRAM-SHA-256 secrets as the iteration count at the time of creation\r\nis fixed. A password must be re-hashed to use an updated iteration\r\nvalue.\r\n==snip==\r\n\r\n /*\r\n- * Default number of iterations when generating secret. Should be at least\r\n- * 4096 per RFC 7677.\r\n+ * Default number of iterations when generating secret.\r\n */\r\n\r\nI don't think we should remove the RFC 7677 reference entirely. Perhaps:\r\n\r\n/*\r\n * Default number of iterations when generating secret. RFC 7677\r\n * recommend 4096 for SCRAM-SHA-256, which we set as the default,\r\n * but we allow users to select their own values.\r\n */\r\n\r\n\r\n-pg_fe_scram_build_secret(const char *password, const char **errstr)\r\n+pg_fe_scram_build_secret(const char *password, int iterations, const \r\nchar **errstr)\r\n\r\nI have mild worry about changing this function definition for downstream \r\nusage, esp. for drivers. 
Perhaps it's not that big of a deal, and \r\nperhaps this will end up being needed for the work we've discussed \r\naround \"\\password\" but I do want to note that this could be a breaking \r\nchange.\r\n\r\n\r\n+\telse if (strcmp(name, \"scram_sha256_iterations\") == 0)\r\n+\t{\r\n+\t\tconn->scram_iterations = atoi(value);\r\n+\t}\r\n\r\nMaybe out of scope for this patch based on what else is in the patch, \r\nbut I was wondering why we don't use a \"strncmp\" here?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 14 Dec 2022 13:59:04 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 01:59:04PM -0500, Jonathan S. Katz wrote:\n> On 12/14/22 6:25 AM, Daniel Gustafsson wrote:\n>> I was thinking about it but opted for the simpler approach of a GUC name with\n>> the algorithm baked into it: scram_sha256_iterations. It doesn't seem all that\n>> likely that we'll have more than two versions of SCRAM (sha256/sha512) so\n>> the additional complexity doesn't seem worth it.\n> \n> I would not rule this out. There is a RFC draft for SCRAM-SHA3-512[1].\n> \n> I do have mixed feelings on the 'x1=y1,x2=y2' style GUC, but we do have\n> machinery to handle it and it gives a bit more flexibility over how many\n> SCRAM hash methods get added. I'd like to hear more feedback.\n\nTechnically, I would put the logic to parse the GUC to scram-common.c\nand let libpq and the backend use it. Saying that, we are just\ntalking about what looks like one new hashing method, so a separate\nGUC is fine by me.\n\n> (I don't know if there will be a world if we ever let users BYO-hash, but\n> that case may force separate GUCs anyway).\n> \n> [1] https://datatracker.ietf.org/doc/draft-melnikov-scram-sha3-512/\n\nStill, the odds is that we are going to see one update to\nSCRAM-SHA-256 that we will just need to pick up?\n\n>> The attached v2 has the GUC rename and a change to GUC_REPORT such that the\n>> frontend can use the real value rather than the default. I kept it for super\n>> users so far, do you think it should be a user setting being somewhat sensitive?\n> \n> No, because a user can set the number of iterations today if they build\n> their own SCRAM secret. I think it's OK if they change it in a session.\n> \n> If a superuser wants to enforce a minimum iteration count, they can write a\n> password_check_hook. (Or we could add another GUC to enforce that).\n\nHm? 
check_password_hook does not allow one to recompile the password\ngiven by the user, except if I am missing your point?\n\n> -pg_fe_scram_build_secret(const char *password, const char **errstr)\n> +pg_fe_scram_build_secret(const char *password, int iterations, const char\n> **errstr)\n> \n> I have mild worry about changing this function definition for downstream\n> usage, esp. for drivers. Perhaps it's not that big of a deal, and perhaps\n> this will end up being needed for the work we've discussed around\n> \"\\password\" but I do want to note that this could be a breaking change.\n\nFWIW, an extension would be required to enforce the type of hash\nused, which is an extra parameter on top of the iteration number when\nbuilding the SCRAM verifier.\n\n> +\telse if (strcmp(name, \"scram_sha256_iterations\") == 0)\n> +\t{\n> +\t\tconn->scram_iterations = atoi(value);\n> +\t}\n> \n> Maybe out of scope for this patch based on what else is in the patch, but I\n> was wondering why we don't use a \"strncmp\" here?\n\nWhat would that change? This needs an equal match.\n\n conn->in_hot_standby = PG_BOOL_UNKNOWN;\n+ conn->scram_iterations = SCRAM_DEFAULT_ITERATIONS;\n\ns/SCRAM_DEFAULT_ITERATIONS/SCRAM_SHA_256_DEFAULT_ITERATIONS/ and\ns/scram_iterations/scram_sha_256_iterations/ perhaps? It does not\nlook like we'd have the same default across the various SHA variations\nif we stick with the RFC definitions..\n\n+#ifndef FRONTEND\n+/*\n+ * Number of iterations when generating new secrets.\n+ */\n+extern PGDLLIMPORT int scram_sha256_iterations;\n+#endif\n\nIt looks like libpq/scram.h, which is backend-only, would be a better\nlocation.\n\n@@ -692,7 +697,7 @@ mock_scram_secret(const char *username, int *iterations, char **salt,\n encoded_salt[encoded_len] = '\\0';\n \n *salt = encoded_salt;\n- *iterations = SCRAM_DEFAULT_ITERATIONS;\n+ *iterations = scram_sha256_iterations;\n\nThis looks incorrect to me?
The mock authentication is here to\nproduce a realistic verifier, still it will fail. It seems to me that\nwe'd better stick to the default in all the cases.\n\n(FWIW, extending \\password with custom options would have the\nadvantage to allow older server versions to use a custom iteration\nnumber. Perhaps that's not worth bothering about, just saying as a\nseparate thing to consider.)\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 08:52:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
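For reference, what pg_fe_scram_build_secret and mock_scram_secret produce is the RFC 5802 verifier with the iteration count baked into it. A simplified Python sketch of that construction — note it skips the SASLprep normalization the real code applies to the password, so it is a teaching aid rather than a wire-compatible implementation:

```python
import base64
import hashlib
import hmac
import os

def build_scram_secret(password, iterations=4096, salt=None):
    """Build a SCRAM-SHA-256 verifier in PostgreSQL's rolpassword layout:
    SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey> (RFC 5802)."""
    salt = salt if salt is not None else os.urandom(16)
    # SaltedPassword := Hi(password, salt, i), i.e. PBKDF2-HMAC-SHA-256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    b64 = lambda raw: base64.b64encode(raw).decode()
    return (f"SCRAM-SHA-256${iterations}:{b64(salt)}"
            f"${b64(stored_key)}:{b64(server_key)}")

print(build_scram_secret("secret", iterations=15000))
```

This also makes the backwards-compatibility point from the first mail concrete: since the iteration count is stored inside each verifier, raising the GUC only affects newly built secrets.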
{
"msg_contents": "On 12/14/22 6:52 PM, Michael Paquier wrote:\r\n> On Wed, Dec 14, 2022 at 01:59:04PM -0500, Jonathan S. Katz wrote:\r\nHA-256 that we will just need to pick up?\r\n> \r\n>>> The attached v2 has the GUC rename and a change to GUC_REPORT such that the\r\n>>> frontend can use the real value rather than the default. I kept it for super\r\n>>> users so far, do you think it should be a user setting being somewhat sensitive?\r\n>>\r\n>> No, because a user can set the number of iterations today if they build\r\n>> their own SCRAM secret. I think it's OK if they change it in a session.\r\n>>\r\n>> If a superuser wants to enforce a minimum iteration count, they can write a\r\n>> password_check_hook. (Or we could add another GUC to enforce that).\r\n> \r\n> Hm? check_password_hook does not allow one to recompile the password\r\n> given by the user, except if I am missing your point?\r\nMy point is you can write a hook to reject the password if the iteration \r\ncount is \"too low\". Not to re-hash the password.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Wed, 14 Dec 2022 19:00:54 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 14 Dec 2022, at 19:59, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 12/14/22 6:25 AM, Daniel Gustafsson wrote:\n>>> On 14 Dec 2022, at 02:00, Michael Paquier <michael@paquier.xyz> wrote:\n\n>>> So, you mean that the GUC should be named like password_iterations,\n>>> taking a grammar with a list like 'scram-sha-256=4096,algo2=5000'?\n>> I was thinking about it but opted for the simpler approach of a GUC name with\n>> the algorithm baked into it: scram_sha256_iterations. It doesn't seem all that\n>> likely that we'll have more than two versions of SCRAM (sha256/sha512) so\n>> the additional complexity doesn't seem worth it.\n> \n> I would not rule this out. There is a RFC draft for SCRAM-SHA3-512[1].\n\nNote that this draft is very far from RFC status, it has alredy expired twice\nand hasn't been updated for a year. The SCRAM-SHA-512 draft has an almost\nidentical history and neither are assigned a work group. The author is also\ndrafting scram-bis which is setting up more context around these proposals,\nthis has yet to expire but is also very early. The work on SCRAM-2FA seems the\nmost promising right now.\n\nThere might be additional versions of SCRAM published but it's looking pretty\ndistant now.\n\n> Reviewing patch as is.\n\nThanks for review! Fixes coming downthread in an updated version.\n\n> ==snip==\n> The number of computational iterations to perform when generating\n> a SCRAM-SHA-256 secret. The default is <literal>4096</literal>. A\n> higher number of iterations provides additional protection against\n> brute-force attacks on stored passwords, but makes authentication\n> slower. Changing the value has no effect on previously created\n> SCRAM-SHA-256 secrets as the iteration count at the time of creation\n> is fixed. A password must be re-hashed to use an updated iteration\n> value.\n> ==snip==\n\nI've rewritten to a version of this. 
We don't use the terminology \"SCRAM\nsecret\" anywhere else so I used password instead.\n\n> /*\n> - * Default number of iterations when generating secret. Should be at least\n> - * 4096 per RFC 7677.\n> + * Default number of iterations when generating secret.\n> */\n> \n> I don't think we should remove the RFC 7677 reference entirely.\n\nFixed.\n\n> -pg_fe_scram_build_secret(const char *password, const char **errstr)\n> +pg_fe_scram_build_secret(const char *password, int iterations, const char **errstr)\n> \n> I have mild worry about changing this function definition for downstream usage, esp. for drivers. Perhaps it's not that big of a deal, and perhaps this will end up being needed for the work we've discussed around \"\\password\" but I do want to note that this could be a breaking change.\n\nNot sure driver authors should be relying on this function.. Code scans\ndon't turn up any public consumers of it right now at least. If we want to\nsupport multiple SCRAM versions we'd still need to change it though as noted\ndownthread.\n\n> +\telse if (strcmp(name, \"scram_sha256_iterations\") == 0)\n> +\t{\n> +\t\tconn->scram_iterations = atoi(value);\n> +\t}\n> \n> Maybe out of scope for this patch based on what else is in the patch, but I was wondering why we don't use a \"strncmp\" here?\n\nstrncmp() would allow scram_sha256_iterations_foo to match, which we don't\nwant, we want an exact match.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:09:01 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
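The strcmp-versus-strncmp point is worth spelling out: a length-limited prefix comparison would treat any parameter name that merely begins with the GUC name as a match. A tiny Python illustration of the difference (the helper names here are made up for the example; `str.startswith` plays the role of C's length-limited `strncmp`):

```python
GUC_NAME = "scram_sha256_iterations"

def matches_prefix(name):
    # Equivalent to C's strncmp(name, GUC_NAME, strlen(GUC_NAME)) == 0
    return name.startswith(GUC_NAME)

def matches_exact(name):
    # Equivalent to C's strcmp(name, GUC_NAME) == 0
    return name == GUC_NAME

print(matches_prefix("scram_sha256_iterations_foo"))  # True  -- false positive
print(matches_exact("scram_sha256_iterations_foo"))   # False -- what we want
```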
{
"msg_contents": "> On 15 Dec 2022, at 00:52, Michael Paquier <michael@paquier.xyz> wrote:\n\n> conn->in_hot_standby = PG_BOOL_UNKNOWN;\n> + conn->scram_iterations = SCRAM_DEFAULT_ITERATIONS;\n> \n> s/SCRAM_DEFAULT_ITERATIONS/SCRAM_SHA_256_DEFAULT_ITERATIONS/ and\n> s/scram_iterations/scram_sha_256_interations/ perhaps? \n\nDistinct members in the conn object is only of interest if there is a way for\nthe user to select a different password method in \\password right? I can\nrename it now but I think doing too much here is premature, awaiting work on\n\\password (should that materialize) seems reasonable no?\n\n> +#ifndef FRONTEND\n> +/*\n> + * Number of iterations when generating new secrets.\n> + */\n> +extern PGDLLIMPORT int scram_sha256_iterations;\n> +#endif\n> \n> It looks like libpq/scram.h, which is backend-only, would be a better\n> location.\n\nFixed.\n\n> @@ -692,7 +697,7 @@ mock_scram_secret(const char *username, int *iterations, char **salt,\n> encoded_salt[encoded_len] = '\\0';\n> \n> *salt = encoded_salt;\n> - *iterations = SCRAM_DEFAULT_ITERATIONS;\n> + *iterations = scram_sha256_iterations;\n> \n> This looks incorrect to me? The mock authentication is here to\n> produce a realistic verifier, still it will fail. It seems to me that\n> we'd better stick to the default in all the cases.\n\nFor avoiding revealing anything, I think a case can be argued for both. I've\nreverted back to the default though.\n\nI also renamed the GUC sha_256 to match terminology we use.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 15 Dec 2022 12:09:15 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 12:09:15PM +0100, Daniel Gustafsson wrote:\n>> On 15 Dec 2022, at 00:52, Michael Paquier <michael@paquier.xyz> wrote:\n>> conn->in_hot_standby = PG_BOOL_UNKNOWN;\n>> + conn->scram_iterations = SCRAM_DEFAULT_ITERATIONS;\n>> \n>> s/SCRAM_DEFAULT_ITERATIONS/SCRAM_SHA_256_DEFAULT_ITERATIONS/ and\n>> s/scram_iterations/scram_sha_256_interations/ perhaps? \n> \n> Distinct members in the conn object is only of interest if there is a way for\n> the user to select a different password method in \\password right? I can\n> rename it now but I think doing too much here is premature, awaiting work on\n> \\password (should that materialize) seems reasonable no?\n\nYou could do that already, somewhat indirectly, with\npassword_encryption, assuming that it supports more than one mode\nwhose password build is influenced by it. If you wish to keep it\nnamed this way, this is no big deal for me either way, so feel free to\nuse what you think is best based on the state of HEAD. I think that\nI'd value more the consistency with the backend in terms of naming,\nthough.\n\n>> @@ -692,7 +697,7 @@ mock_scram_secret(const char *username, int *iterations, char **salt,\n>> encoded_salt[encoded_len] = '\\0';\n>> \n>> *salt = encoded_salt;\n>> - *iterations = SCRAM_DEFAULT_ITERATIONS;\n>> + *iterations = scram_sha256_iterations;\n>> \n>> This looks incorrect to me? The mock authentication is here to\n>> produce a realistic verifier, still it will fail. It seems to me that\n>> we'd better stick to the default in all the cases.\n> \n> For avoiding revealing anything, I think a case can be argued for both. I've\n> reverted back to the default though.\n> \n> I also renamed the GUC sha_256 to match terminology we use.\n\n+ \"SET password_encryption='scram-sha-256';\n+ SET scram_sha_256_iterations=100000;\nMaybe use a lower value to keep the test cheap?\n\n+ time of encryption. 
In order to make use of a changed value, new\n+ password must be set.\n\"A new password must be set\".\n\nSuperuser-only GUCs should be documented as such, or do you intend to\nmake it user-settable like I suggested upthread :) ?\n--\nMichael",
"msg_date": "Sat, 17 Dec 2022 12:27:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 17 Dec 2022, at 04:27, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Dec 15, 2022 at 12:09:15PM +0100, Daniel Gustafsson wrote:\n>>> On 15 Dec 2022, at 00:52, Michael Paquier <michael@paquier.xyz> wrote:\n>>> conn->in_hot_standby = PG_BOOL_UNKNOWN;\n>>> + conn->scram_iterations = SCRAM_DEFAULT_ITERATIONS;\n>>> \n>>> s/SCRAM_DEFAULT_ITERATIONS/SCRAM_SHA_256_DEFAULT_ITERATIONS/ and\n>>> s/scram_iterations/scram_sha_256_interations/ perhaps? \n>> \n>> Distinct members in the conn object is only of interest if there is a way for\n>> the user to select a different password method in \\password right? I can\n>> rename it now but I think doing too much here is premature, awaiting work on\n>> \\password (should that materialize) seems reasonable no?\n> \n> You could do that already, somewhat indirectly, with\n> password_encryption, assuming that it supports more than one mode\n> whose password build is influenced by it. If you wish to keep it\n> named this way, this is no big deal for me either way, so feel free to\n> use what you think is best based on the state of HEAD. I think that\n> I'd value more the consistency with the backend in terms of naming,\n> though.\n\nok, renamed.\n\n>>> @@ -692,7 +697,7 @@ mock_scram_secret(const char *username, int *iterations, char **salt,\n>>> encoded_salt[encoded_len] = '\\0';\n>>> \n>>> *salt = encoded_salt;\n>>> - *iterations = SCRAM_DEFAULT_ITERATIONS;\n>>> + *iterations = scram_sha256_iterations;\n>>> \n>>> This looks incorrect to me? The mock authentication is here to\n>>> produce a realistic verifier, still it will fail. It seems to me that\n>>> we'd better stick to the default in all the cases.\n>> \n>> For avoiding revealing anything, I think a case can be argued for both. 
I've\n>> reverted back to the default though.\n>> \n>> I also renamed the GUC sha_256 to match terminology we use.\n> \n> + \"SET password_encryption='scram-sha-256';\n> + SET scram_sha_256_iterations=100000;\n> Maybe use a lower value to keep the test cheap?\n\nFixed.\n\n> + time of encryption. In order to make use of a changed value, new\n> + password must be set.\n> \"A new password must be set\".\n\nFixed.\n\n> Superuser-only GUCs should be documented as such, or do you intend to\n> make it user-settable like I suggested upthread :) ?\n\nI don't really have strong feelings, so I reverted to being user-settable since\nI can't really present a strong argument for superuser-only.\n\nThe attached is a rebase on top of master with no other additional hacking done\non top of the above review comments.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 22 Feb 2023 14:39:23 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On 2/22/23 8:39 AM, Daniel Gustafsson wrote:\r\n>> On 17 Dec 2022, at 04:27, Michael Paquier <michael@paquier.xyz> wrote:\r\n>>\r\n\r\n>> Superuser-only GUCs should be documented as such, or do you intend to\r\n>> make it user-settable like I suggested upthread :) ?\r\n> \r\n> I don't really have strong feelings, so I reverted to being user-settable since\r\n> I can't really present a strong argument for superuser-only.\r\n\r\nI was going to present some weak arguments, but not worth it. Anything \r\naround using up CPU cycles would be true of just writing plain old queries.\r\n\r\n> The attached is a rebase on top of master with no other additional hacking done\r\n> on top of the above review comments.\r\n\r\nGenerally LGTM. I read through earlier comments (sorry I missed \r\nreplying) and have nothing to add or object to.\r\n\r\nJonathan",
"msg_date": "Wed, 22 Feb 2023 12:21:03 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 22 Feb 2023, at 18:21, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> On 2/22/23 8:39 AM, Daniel Gustafsson wrote:\n\n>> The attached is a rebase on top of master with no other additional hacking done\n>> on top of the above review comments.\n> \n> Generally LGTM. I read through earlier comments (sorry I missed replying) and have nothing to add or object to.\n\nThanks for reviewing!\n\nIn fixing the CFBot test error in the previous version I realized through\noff-list discussion that the GUC name was badly chosen. Incorporating the\nvalue of another GUC in the name is a bad idea, so the attached version reverts\nto \"scram_iterations=<int>\". Should there ever be another SCRAM method\nstandardized (which seems a slim chance to happen before the v17 freeze) we can\nmake a backwards compatible change to \"<method>:<iterations> | <iterations>\"\nwhere the latter is a default for all. Internally the variable contains\nsha_256 though, that part I think is fine for readability.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 23 Feb 2023 15:10:05 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 03:10:05PM +0100, Daniel Gustafsson wrote:\n> In fixing the CFBot test error in the previous version I realized through\n> off-list discussion that the GUC name was badly chosen. Incorporating the\n> value of another GUC in the name is a bad idea, so the attached version reverts\n> to \"scram_iterations=<int>\". Should there ever be another SCRAM method\n> standardized (which seems a slim chance to happen before the v17 freeze) we can\n> make a backwards compatible change to \"<method>:<iterations> | <iterations>\"\n> where the latter is a default for all. Internally the variable contains\n> sha_256 though, that part I think is fine for readability.\n\nOkay by me if you want to go this way. We could always have the\ncompatibility argument later on if it proves necessary.\n\nAnyway, the patch does that in libpq:\n@@ -1181,6 +1181,10 @@ pqSaveParameterStatus(PGconn *conn, const char *name, const char *value)\n conn->in_hot_standby =\n (strcmp(value, \"on\") == 0) ? PG_BOOL_YES : PG_BOOL_NO;\n }\n+ else if (strcmp(name, \"scram_sha_256_iterations\") == 0)\n+ {\n+ conn->scram_sha_256_iterations = atoi(value);\n+ }\nThis should match on \"scram_iterations\", which is the name of the\nGUC. Would the long-term plan be to use multiple variables in conn if\nwe ever get to <method>:<iterations> that would require more parsing?\nThis is fine by me, just asking. \n\nPerhaps there should be a test with \\password to make sure that libpq\ngets the call when the GUC is updated by a SET command?\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 16:06:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 27 Feb 2023, at 08:06, Michael Paquier <michael@paquier.xyz> wrote:\n\n> + conn->scram_sha_256_iterations = atoi(value);\n> + }\n> This should match on \"scram_iterations\", which is the name of the\n> GUC.\n\nFixed.\n\n> Would the long-term plan be to use multiple variables in conn if\n> we ever get to <method>:<iterations> that would require more parsing?\n\nI personally don't think we'll see more than 2 or at most 3 values so parsing\nthat format shouldn't be a problem, but it can always be revisited if/when we\nget there.\n\n> Perhaps there should be a test with \\password to make sure that libpq\n> gets the call when the GUC is updated by a SET command?\n\nThat would indeed be nice, but is there a way to do this without a complicated\npump TAP expression? I was unable to think of a way but I might be missing\nsomething?\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 3 Mar 2023 23:13:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 11:13:36PM +0100, Daniel Gustafsson wrote:\n> That would indeed be nice, but is there a way to do this without a complicated\n> pump TAP expression? I was unable to think of a way but I might be missing\n> something?\n\nA SET command refreshes immediately the cache information of the\nconnection in pqSaveParameterStatus()@libpq, so a test in password.sql\nwith \\password would be enough to check the computation happens in\npg_fe_scram_build_secret() with the correct iteration number. Say\nlike:\n=# SET scram_iterations = 234;\nSET\n=# \\password\nEnter new password for user \"postgres\": TYPEME \nEnter it again: TYPEME\n=# select substr(rolpassword, 1, 18) from pg_authid\n where oid::regrole::name = current_role;\n substr \n--------------------\n SCRAM-SHA-256$234:\n(1 row)\n\nOr perhaps I am missing something?\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 13:53:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
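The substr() check above works because the iteration count is the first field after the mechanism name in the stored verifier. A small Python sketch that extracts it the same way (the example verifier string is fabricated for illustration):

```python
import re

def stored_iterations(rolpassword):
    """Pull the iteration count out of a SCRAM-SHA-256 rolpassword value,
    whose layout is SCRAM-SHA-256$<iterations>:<salt>$<StoredKey>:<ServerKey>."""
    match = re.match(r"^SCRAM-SHA-256\$(\d+):", rolpassword)
    if match is None:
        raise ValueError("not a SCRAM-SHA-256 verifier")
    return int(match.group(1))

print(stored_iterations("SCRAM-SHA-256$234:c2FsdA==$c3RvcmVk:c2VydmVy"))  # 234
```

This is essentially what a regression test has to assert: after `SET scram_iterations = 234` and a password change, the stored verifier's leading count field reads 234.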
{
"msg_contents": "> On 7 Mar 2023, at 05:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Mar 03, 2023 at 11:13:36PM +0100, Daniel Gustafsson wrote:\n>> That would indeed be nice, but is there a way to do this without a complicated\n>> pump TAP expression? I was unable to think of a way but I might be missing\n>> something?\n> \n> A SET command refreshes immediately the cache information of the\n> connection in pqSaveParameterStatus()@libpq, so a test in password.sql\n> with \\password would be enough to check the computation happens in\n> pg_fe_scram_build_secret() with the correct iteration number. Say\n> like:\n> =# SET scram_iterations = 234;\n> SET\n> =# \\password\n> Enter new password for user \"postgres\": TYPEME \n> Enter it again: TYPEME\n> =# select substr(rolpassword, 1, 18) from pg_authid\n> where oid::regrole::name = current_role;\n> substr \n> --------------------\n> SCRAM-SHA-256$234:\n> (1 row)\n> \n> Or perhaps I am missing something?\n\nRight, what I meant was: can a pg_regress sql/expected test drive a psql\ninteractive prompt? Your comments suggested using password.sql so I was\ncurious if I was missing a neat trick for doing this.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 7 Mar 2023 09:26:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 7 Mar 2023, at 09:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Right, what I meant was: can a pg_regress sql/expected test drive a psql\n> interactive prompt? Your comments suggested using password.sql so I was\n> curious if I was missing a neat trick for doing this.\n\nThe attached v7 adds a TAP test for verifying that \\password use the changed\nSCRAM iteration count setting, and dials back the other added test to use fewer\niterations than the default setting in order to shave (barely noticeable\namounts of) cpu cycles.\n\nRunning interactive tests against psql adds a fair bit of complexity and isn't\nall that pleasing on the eye, but it can be cleaned up and refactored when\nhttps://commitfest.postgresql.org/42/4228/ is committed.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 7 Mar 2023 14:03:05 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 02:03:05PM +0100, Daniel Gustafsson wrote:\n> On 7 Mar 2023, at 09:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> Right, what I meant was: can a pg_regress sql/expected test drive a psql\n>> interactive prompt? Your comments suggested using password.sql so I was\n>> curious if I was missing a neat trick for doing this.\n\nYes, I meant to rely just on password.sql to do that. I think that I\nsee your point now.. You are worried that the SET command changing a\nGUC to-be-reported would not affect the client before \\password is\ndone. That could be possible, I guess. ReportChangedGUCOptions() is\ncalled before ReadyForQuery() that would tell psql that the backend is\nready to receive the next query. A trick would be to stick an extra\ndummy query between the SET and \\password in password.sql?\n\n> Running interactive tests against psql adds a fair bit of complexity and isn't\n> all that pleasing on the eye, but it can be cleaned up and refactored when\n> https://commitfest.postgresql.org/42/4228/ is committed.\n\nI have not looked at that, so no idea.\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 16:48:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 8 Mar 2023, at 08:48, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 07, 2023 at 02:03:05PM +0100, Daniel Gustafsson wrote:\n>> On 7 Mar 2023, at 09:26, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> Right, what I meant was: can a pg_regress sql/expected test drive a psql\n>>> interactive prompt? Your comments suggested using password.sql so I was\n>>> curious if I was missing a neat trick for doing this.\n> \n> Yes, I meant to rely just on password.sql to do that. I think that I\n> see your point now.. You are worried that the SET command changing a\n> GUC to-be-reported would not affect the client before \\password is\n> done.\n\nNo, I just did not think it was possible to feed input to the interactive\n\\password prompt with a normal pg_regress SQL file test. If you are able to do\nthat I'd love to see an example.\n\nAFAIK a TAP test with psql_interactive is the only way to do this so that's\nwhat I've implemented.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 09:07:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 09:07:36AM +0100, Daniel Gustafsson wrote:\n> No, I just did not think it was possible to feed input to the interactive\n> \\password prompt with a normal pg_regress SQL file test. If you are able to do\n> that I'd love to see an example.\n> \n> AFAIK a TAP test with psql_interactive is the only way to do this so that's\n> what I've implemented.\n\nBah, of course. I was really not following your point here, sorry for\nthe noise. Better to call it a day..\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 17:21:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 05:21:20PM +0900, Michael Paquier wrote:\n> On Wed, Mar 08, 2023 at 09:07:36AM +0100, Daniel Gustafsson wrote:\n>> AFAIK a TAP test with psql_interactive is the only way to do this so that's\n>> what I've implemented.\n\nI cannot think of a better idea than what you have here, so I am\nmarking this patch as ready for committer. I am wondering how stable\na logic based on a timer of 5s would be..\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 16:09:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 9 Mar 2023, at 08:09, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Mar 08, 2023 at 05:21:20PM +0900, Michael Paquier wrote:\n>> On Wed, Mar 08, 2023 at 09:07:36AM +0100, Daniel Gustafsson wrote:\n>>> AFAIK a TAP test with psql_interactive is the only way to do this so that's\n>>> what I've implemented.\n> \n> I cannot think of a better idea than what you have here, so I am\n> marking this patch as ready for committer. \n\nThanks for review!\n\n> I am wondering how stable a logic based on a timer of 5s would be..\n\nActually that was a bug, it should be using the default timeout and restarting\nfor each operation to ensure that even overloaded hosts wont time out unless\nsomething is actually broken/incorrect. I've fixed that in the attached rev\nand also renamed the password in the regress test from \"raisediterationcount\"\nas it's now lowering the count in the test.\n\nUnless there objections to this version I plan to commit that during this CF.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 9 Mar 2023 11:01:57 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "CFBot is failing with this test failure... I'm not sure if this just\nrepresents a timing dependency or a bad test or what?\n\n[09:44:49.937] --- stdout ---\n[09:44:49.937] # executing test in\n/tmp/cirrus-ci-build/build-32/testrun/authentication/001_password\ngroup authentication test 001_password\n[09:44:49.937] ok 1 - scram_iterations in server side ROLE\n[09:44:49.937] # test failed\n[09:44:49.937] --- stderr ---\n[09:44:49.937] # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[09:44:49.937] # Looks like your test exited with 2 just after 1.\n[09:44:49.937]\n[09:44:49.937] (test program exited with status code 2)\n\nIt looks like perhaps a Perl issue?\n\n# Running: /tmp/cirrus-ci-build/build-32/src/test/regress/pg_regress\n--config-auth /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n### Starting node \"primary\"\n# Running: pg_ctl -w -D\n/tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n-l /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/log/001_password_primary.log\n-o --cluster-name=primary start\nwaiting for server to start.... 
done\nserver started\n# Postmaster PID for node \"primary\" is 66930\n[09:44:07.411](1.875s) ok 1 - scram_iterations in server side ROLE\nCan't locate IO/Pty.pm in @INC (you may need to install the IO::Pty\nmodule) (@INC contains: /tmp/cirrus-ci-build/src/test/perl\n/tmp/cirrus-ci-build/src/test/authentication /etc/perl\n/usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1\n/usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5\n/usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32\n/usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1828.\nUnexpected SCALAR(0x5814b508) in harness() parameter 3 at\n/tmp/cirrus-ci-build/src/test/perl/PostgreSQL/Test/Cluster.pm line\n2112.\nCan't locate IO/Pty.pm in @INC (you may need to install the IO::Pty\nmodule) (@INC contains: /tmp/cirrus-ci-build/src/test/perl\n/tmp/cirrus-ci-build/src/test/authentication /etc/perl\n/usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1\n/usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5\n/usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32\n/usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1939.\n# Postmaster PID for node \"primary\" is 66930\n### Stopping node \"primary\" using mode immediate\n# Running: pg_ctl -D\n/tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n-m immediate stop\nwaiting for server to shut down.... done\nserver stopped\n# No postmaster PID for node \"primary\"\n[09:44:07.521](0.110s) # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[09:44:07.521](0.000s) # Looks like your test exited with 2 just after 1.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:54:55 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 14:54, Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n>\n> CFBot is failing with this test failure... I'm not sure if this just\n> represents a timing dependency or a bad test or what?\n\nCFBot is now consistently showing these test failures. I think there\nmight actually be a problem here?\n\n\n> [09:44:49.937] --- stdout ---\n> [09:44:49.937] # executing test in\n> /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password\n> group authentication test 001_password\n> [09:44:49.937] ok 1 - scram_iterations in server side ROLE\n> [09:44:49.937] # test failed\n> [09:44:49.937] --- stderr ---\n> [09:44:49.937] # Tests were run but no plan was declared and\n> done_testing() was not seen.\n> [09:44:49.937] # Looks like your test exited with 2 just after 1.\n> [09:44:49.937]\n> [09:44:49.937] (test program exited with status code 2)\n>\n> It looks like perhaps a Perl issue?\n>\n> # Running: /tmp/cirrus-ci-build/build-32/src/test/regress/pg_regress\n> --config-auth /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n> ### Starting node \"primary\"\n> # Running: pg_ctl -w -D\n> /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n> -l /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/log/001_password_primary.log\n> -o --cluster-name=primary start\n> waiting for server to start.... 
done\n> server started\n> # Postmaster PID for node \"primary\" is 66930\n> [09:44:07.411](1.875s) ok 1 - scram_iterations in server side ROLE\n> Can't locate IO/Pty.pm in @INC (you may need to install the IO::Pty\n> module) (@INC contains: /tmp/cirrus-ci-build/src/test/perl\n> /tmp/cirrus-ci-build/src/test/authentication /etc/perl\n> /usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1\n> /usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5\n> /usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32\n> /usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1828.\n> Unexpected SCALAR(0x5814b508) in harness() parameter 3 at\n> /tmp/cirrus-ci-build/src/test/perl/PostgreSQL/Test/Cluster.pm line\n> 2112.\n> Can't locate IO/Pty.pm in @INC (you may need to install the IO::Pty\n> module) (@INC contains: /tmp/cirrus-ci-build/src/test/perl\n> /tmp/cirrus-ci-build/src/test/authentication /etc/perl\n> /usr/local/lib/i386-linux-gnu/perl/5.32.1 /usr/local/share/perl/5.32.1\n> /usr/lib/i386-linux-gnu/perl5/5.32 /usr/share/perl5\n> /usr/lib/i386-linux-gnu/perl/5.32 /usr/share/perl/5.32\n> /usr/local/lib/site_perl) at /usr/share/perl5/IPC/Run.pm line 1939.\n> # Postmaster PID for node \"primary\" is 66930\n> ### Stopping node \"primary\" using mode immediate\n> # Running: pg_ctl -D\n> /tmp/cirrus-ci-build/build-32/testrun/authentication/001_password/data/t_001_password_primary_data/pgdata\n> -m immediate stop\n> waiting for server to shut down.... done\n> server stopped\n> # No postmaster PID for node \"primary\"\n> [09:44:07.521](0.110s) # Tests were run but no plan was declared and\n> done_testing() was not seen.\n> [09:44:07.521](0.000s) # Looks like your test exited with 2 just after 1.\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 21 Mar 2023 23:14:07 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 22 Mar 2023, at 04:14, Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> On Tue, 14 Mar 2023 at 14:54, Gregory Stark (as CFM)\n> <stark.cfm@gmail.com> wrote:\n>> \n>> CFBot is failing with this test failure... I'm not sure if this just\n>> represents a timing dependency or a bad test or what?\n> \n> CFBot is now consistently showing these test failures. I think there\n> might actually be a problem here?\n\nI'm fairly convinced it's a timeout in the interactive psql session. Given how\nugly the use of that is I'm sort of waiting for Andres' refactoring patch [0] to\ncommit this such that I can rewrite the test in a saner and more robust way.\n\n--\nDaniel Gustafsson\n\n[0] https://commitfest.postgresql.org/42/4228/\n\n",
"msg_date": "Thu, 23 Mar 2023 22:46:56 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 10:46:56PM +0100, Daniel Gustafsson wrote:\n> I'm fairly convinced it's a timeout in the interactive psql session. Given how\n> ugly the use of that is I'm sort of waiting for Andres' refactoring patch [0] to\n> commit this such that I can rewrite the test in a saner and more robust way.\n\nFWIW, I'd be OK here even if you don't have a test for libpq in the\nfirst change as what you have sent is already testing for the core\nmachinery in scram-common.c. You could always add one later.\n--\nMichael",
"msg_date": "Fri, 24 Mar 2023 08:33:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 24 Mar 2023, at 00:33, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Mar 23, 2023 at 10:46:56PM +0100, Daniel Gustafsson wrote:\n>> I'm fairly convinced it's a timeout in the interactive psql session. Given how\n>> ugly the use of that is I'm sort of waiting for Andres' refactoring patch [0] to\n>> commit this such that I can rewrite the test in a saner and more robust way.\n> \n> FWIW, I'd be OK here even if you don't have a test for libpq in the\n> first change as what you have sent is already testing for the core\n> machinery in scram-common.c. You could always add one later.\n\nYeah, that's my fallback in case we are unable to get the TAP refactoring done\nin time for the end of the CF/feature freeze.\n\nI've actually ripped out the test in question in the attached v9 to have it\nready and building green in CFbot.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 24 Mar 2023 09:56:29 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 09:56:29AM +0100, Daniel Gustafsson wrote:\n> I've actually ripped out the test in question in the attached v9 to have it\n> ready and building green in CFbot.\n\nWhile reading through v9, I have noticed a few things.\n\n+-- Changing the SCRAM iteration count\n+SET scram_iterations = 1024;\n+CREATE ROLE regress_passwd9 PASSWORD 'alterediterationcount';\n\nPerhaps scram_iterations should be reset once this CREATE ROLE is run\nto not impact any tests after that?\n\n+/*\n+ * The number of iterations to use when generating new secrets.\n+ */\n+int scram_sha_256_iterations;\n\nThis variable in auth-scram.c should be initialized to\nSCRAM_SHA_256_DEFAULT_ITERATIONS.\n\n+use IPC::Run qw(pump finish timer);\n\nThis can be removed.\n--\nMichael",
"msg_date": "Sat, 25 Mar 2023 09:56:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "> On 25 Mar 2023, at 01:56, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Mar 24, 2023 at 09:56:29AM +0100, Daniel Gustafsson wrote:\n>> I've actually ripped out the test in question in the attached v9 to have it\n>> ready and building green in CFbot.\n> \n> While reading through v9, I have noticed a few things.\n\nThe attached rebase fixes all of these comments, and features a slightly\nreworded commit message. I plan to go ahead with this tomorrow to close the CF\npatch item.\n\n--\nDaniel Gustafsson",
"msg_date": "Sun, 26 Mar 2023 23:14:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Raising the SCRAM iteration count"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 11:14:37PM +0200, Daniel Gustafsson wrote:\n> > On 25 Mar 2023, at 01:56, Michael Paquier <michael@paquier.xyz> wrote:\n> > \n> > On Fri, Mar 24, 2023 at 09:56:29AM +0100, Daniel Gustafsson wrote:\n> >> I've actually ripped out the test in question in the attached v9 to have it\n> >> ready and building green in CFbot.\n> > \n> > While reading through v9, I have noticed a few things.\n> \n> The attached rebase fixes all of these comments, and features a slightly\n> reworded commit message. I plan to go ahead with this tomorrow to close the CF\n> patch item.\n\nLooks OK by me.\n\n+ \"SELECT substr(rolpassword,1,19)\nI would have perhaps used a regexp_replace() for that. What you have\nhere is of course fine, so feel free to ignore :p\n--\nMichael",
"msg_date": "Mon, 27 Mar 2023 10:38:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Raising the SCRAM iteration count"
}
] |
[
{
"msg_contents": "Hi Chris Travers\r\n Robertmhaas said that the project Zheap is dead(https://twitter.com/andy_pavlo/status/1590703943176589312), which means that we cannot use Zheap to deal with the issue of xid wraparound and dead tuples in tables. The dead tuple issue is not a big deal because I can still use pg_repack to handle, although pg_repack will cause wal log to increase dramatically and may take one or two days to handle a large table. During this time the database can be accessed by external users, but the xid wraparound will cause PostgreSQL to be down, which is a disaster for DBAs. Maybe you are not a DBA, or your are from a small country, Database system tps is very low, so xid32 is enough for your database system , Oracle's scn was also 32bits, however, Oracle realized the issue and changed scn to 64 bits. The transaction id in mysql is 48 bits. MySQL didn't fix the transaction id wraparound problem because they think that 48 bits is enough for the transaction id. This project has been running for almost 1 year and now it is coming to an end. I strongly disagree with your idea of stopping this patch, and I suspect you are a saboteur. I strongly disagree with your viewpoint, as it is not a fundamental way to solve the xid wraparound problem. 
The PostgreSQL community urgently needs developers who solve problems like this, not bury one' head in the sand\r\n\r\n\r\nBest whish\r\n\r\n________________________________\r\n发件人: Peter Geoghegan <pg@bowt.ie>\r\n发送时间: 2022年12月1日 0:35\r\n收件人: Robert Haas <robertmhaas@gmail.com>\r\n抄送: Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>; Aleksander Alekseev <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Chris Travers <chris.travers@gmail.com>; Fedor Sigaev <teodor@sigaev.ru>; Alexander Korotkov <aekorotkov@gmail.com>; Konstantin Knizhnik <knizhnik@garret.ru>; Nikita Glukhov <n.gluhov@postgrespro.ru>; Yura Sokolov <y.sokolov@postgrespro.ru>; Maxim Orlov <orlovmg@gmail.com>; Pavel Borisov <pashkin.elfe@gmail.com>; Simon Riggs <simon.riggs@enterprisedb.com>\r\n主题: Re: Add 64-bit XIDs into PostgreSQL 15\r\n\r\nOn Wed, Nov 30, 2022 at 8:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\r\n> I haven't checked the patches to see whether they look correct, and\r\n> I'm concerned in particular about upgrade scenarios. But if there's a\r\n> way we can get that part committed, I think it would be a clear win.\r\n\r\n+1\r\n\r\n--\r\nPeter Geoghegan\r\n\r\n\r\n\n\n\n\n\n\n\n\nHi Chris Travers \n\n\n Robertmhaas said that the project Zheap is dead(https://twitter.com/andy_pavlo/status/1590703943176589312), which means that\n we cannot use Zheap to deal with the issue of xid wraparound and dead tuples in tables. The dead tuple issue is not a big deal because I can still use pg_repack to handle, although pg_repack will cause wal log to increase dramatically and may take one or two\n days to handle a large table. During this time the database can be accessed by external users, but the xid wraparound will cause PostgreSQL to be down, which is a disaster for DBAs. 
Maybe you are not a DBA, or your are from a small country, Database system\n tps is very low, so xid32 is enough for your database system , Oracle's scn was also 32bits, however, Oracle realized the issue and changed scn to 64 bits. The transaction id in mysql is 48 bits. MySQL didn't fix the transaction id wraparound problem because\n they think that 48 bits is enough for the transaction id. This project has been running for almost 1 year and now it is coming to an end. I strongly disagree with your idea of stopping this patch, and I suspect you are a saboteur. I strongly disagree with\n your viewpoint, as it is not a fundamental way to solve the xid wraparound problem. The PostgreSQL community urgently needs developers who solve problems like this, not bury one' head in the sand\n\n\nBest whish \n\n\n\n\n发件人: Peter Geoghegan <pg@bowt.ie>\n发送时间: 2022年12月1日 0:35\n收件人: Robert Haas <robertmhaas@gmail.com>\n抄送: Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>; Aleksander Alekseev <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Chris Travers <chris.travers@gmail.com>; Fedor Sigaev\n <teodor@sigaev.ru>; Alexander Korotkov <aekorotkov@gmail.com>; Konstantin Knizhnik <knizhnik@garret.ru>; Nikita Glukhov <n.gluhov@postgrespro.ru>; Yura Sokolov <y.sokolov@postgrespro.ru>; Maxim Orlov <orlovmg@gmail.com>; Pavel Borisov <pashkin.elfe@gmail.com>;\n Simon Riggs <simon.riggs@enterprisedb.com>\n主题: Re: Add 64-bit XIDs into PostgreSQL 15\n \n\n\nOn Wed, Nov 30, 2022 at 8:13 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I haven't checked the patches to see whether they look correct, and\n> I'm concerned in particular about upgrade scenarios. But if there's a\n> way we can get that part committed, I think it would be a clear win.\n\n+1\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 9 Dec 2022 12:34:38 +0000",
"msg_from": "adherent postgres <adherent_postgres@hotmail.com>",
"msg_from_op": true,
"msg_subject": "=?gb2312?B?u9i4tDogQWRkIDY0LWJpdCBYSURzIGludG8gUG9zdGdyZVNRTCAxNQ==?="
},
{
"msg_contents": "Hi adherent,\n\n> Robertmhaas said that the project Zheap is dead(https://twitter.com/andy_pavlo/status/1590703943176589312), which means that we cannot use Zheap to deal with the issue of xid wraparound and dead tuples in tables. The dead tuple issue is not a big deal because I can still use pg_repack to handle, although pg_repack will cause wal log to increase dramatically and may take one or two days to handle a large table. During this time the database can be accessed by external users, but the xid wraparound will cause PostgreSQL to be down, which is a disaster for DBAs. Maybe you are not a DBA, or your are from a small country, Database system tps is very low, so xid32 is enough for your database system , Oracle's scn was also 32bits, however, Oracle realized the issue and changed scn to 64 bits. The transaction id in mysql is 48 bits. MySQL didn't fix the transaction id wraparound problem because they think that 48 bits is enough for the transaction id. This project has been running for almost 1 year and now it is coming to an end. I strongly disagree with your idea of stopping this patch, and I suspect you are a saboteur. I strongly disagree with your viewpoint, as it is not a fundamental way to solve the xid wraparound problem. The PostgreSQL community urgently needs developers who solve problems like this, not bury one' head in the sand\n\nThis is not uncommon for people on the mailing list to have\ndisagreements. This is part of the process, we all are looking for\nconsensus. It's true that different people have different use cases in\nmind and different backgrounds as well. It doesn't mean these use\ncases are wrong and/or the experience is irrelevant and/or the\nreceived feedback should be just discarded.\n\nAlthough I also expressed my disagreement with Chris before, let's not\nassume any bad intent and especially sabotage as you put it. (Unless\nyou have a strong proof of this of course which I doubt you have.) 
We\nwant all kinds of feedback to be welcomed here. I'm sure our goal here\nis mutual, to make PostgreSQL even better than it is now. The only\nproblem is that the definition of \"better\" varies sometimes.\n\nI see you believe that 64-bit XIDs are going to be useful. That's\ngreat! Tell us more about your case and how the patch is going to help\nwith it. Also, maybe you could test your load with the applied\npatchset and tell us whether it makes things better or worse?\nPersonally I would love hearing this from you.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 9 Dec 2022 15:49:18 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15"
},
{
"msg_contents": "Hi Aleksander Alekseev\r\n I think the xids 32bit transformation project has been dragged on for too long. Huawei's openGauss referenced this patch to implement xids 64bit, and Postgrespro also implemented xids 64bit, which is enough to prove that their worries are redundant.I think postgresql has no reason not to implement xid 64 bit. What about your opinion?\r\n\r\nBest whish\r\n________________________________\r\n发件人: Aleksander Alekseev <aleksander@timescale.com>\r\n发送时间: 2022年12月9日 20:49\r\n收件人: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\r\n抄送: adherent postgres <adherent_postgres@hotmail.com>; Chris Travers <chris.travers@gmail.com>; Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>\r\n主题: Re: Add 64-bit XIDs into PostgreSQL 15\r\n\r\nHi adherent,\r\n\r\n> Robertmhaas said that the project Zheap is dead(https://twitter.com/andy_pavlo/status/1590703943176589312), which means that we cannot use Zheap to deal with the issue of xid wraparound and dead tuples in tables. The dead tuple issue is not a big deal because I can still use pg_repack to handle, although pg_repack will cause wal log to increase dramatically and may take one or two days to handle a large table. During this time the database can be accessed by external users, but the xid wraparound will cause PostgreSQL to be down, which is a disaster for DBAs. Maybe you are not a DBA, or your are from a small country, Database system tps is very low, so xid32 is enough for your database system , Oracle's scn was also 32bits, however, Oracle realized the issue and changed scn to 64 bits. The transaction id in mysql is 48 bits. MySQL didn't fix the transaction id wraparound problem because they think that 48 bits is enough for the transaction id. This project has been running for almost 1 year and now it is coming to an end. I strongly disagree with your idea of stopping this patch, and I suspect you are a saboteur. 
I strongly disagree with your viewpoint, as it is not a fundamental way to solve the xid wraparound problem. The PostgreSQL community urgently needs developers who solve problems like this, not bury one' head in the sand\r\n\r\nThis is not uncommon for people on the mailing list to have\r\ndisagreements. This is part of the process, we all are looking for\r\nconsensus. It's true that different people have different use cases in\r\nmind and different backgrounds as well. It doesn't mean these use\r\ncases are wrong and/or the experience is irrelevant and/or the\r\nreceived feedback should be just discarded.\r\n\r\nAlthough I also expressed my disagreement with Chris before, let's not\r\nassume any bad intent and especially sabotage as you put it. (Unless\r\nyou have a strong proof of this of course which I doubt you have.) We\r\nwant all kinds of feedback to be welcomed here. I'm sure our goal here\r\nis mutual, to make PostgreSQL even better than it is now. The only\r\nproblem is that the definition of \"better\" varies sometimes.\r\n\r\nI see you believe that 64-bit XIDs are going to be useful. That's\r\ngreat! Tell us more about your case and how the patch is going to help\r\nwith it. Also, maybe you could test your load with the applied\r\npatchset and tell us whether it makes things better or worse?\r\nPersonally I would love hearing this from you.\r\n\r\n--\r\nBest regards,\r\nAleksander Alekseev\r\n\n\n\n\n\n\n\n\nHi Aleksander Alekseev \n\n\n I think the xids 32bit transformation project has been dragged on for too long. Huawei's openGauss referenced this patch to implement xids 64bit, and Postgrespro also implemented xids 64bit, which is enough to prove that their worries are redundant.I think\n postgresql has no reason not to implement xid 64 bit. 
What about your opinion?\n\n\n\n\n\n\nBest whish\n\n发件人: Aleksander Alekseev <aleksander@timescale.com>\n发送时间: 2022年12月9日 20:49\n收件人: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\n抄送: adherent postgres <adherent_postgres@hotmail.com>; Chris Travers <chris.travers@gmail.com>; Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>\n主题: Re: Add 64-bit XIDs into PostgreSQL 15\n \n\n\nHi adherent,\n\n> Robertmhaas said that the project Zheap is dead(https://twitter.com/andy_pavlo/status/1590703943176589312), which means that we cannot use Zheap to deal with the issue of xid wraparound and dead tuples in tables. The dead tuple issue is not a big deal\n because I can still use pg_repack to handle, although pg_repack will cause wal log to increase dramatically and may take one or two days to handle a large table. During this time the database can be accessed by external users, but the xid wraparound will cause\n PostgreSQL to be down, which is a disaster for DBAs. Maybe you are not a DBA, or your are from a small country, Database system tps is very low, so xid32 is enough for your database system , Oracle's scn was also 32bits, however, Oracle realized the issue\n and changed scn to 64 bits. The transaction id in mysql is 48 bits. MySQL didn't fix the transaction id wraparound problem because they think that 48 bits is enough for the transaction id. This project has been running for almost 1 year and now it is coming\n to an end. I strongly disagree with your idea of stopping this patch, and I suspect you are a saboteur. I strongly disagree with your viewpoint, as it is not a fundamental way to solve the xid wraparound problem. The PostgreSQL community urgently needs developers\n who solve problems like this, not bury one' head in the sand\n\nThis is not uncommon for people on the mailing list to have\ndisagreements. This is part of the process, we all are looking for\nconsensus. 
It's true that different people have different use cases in\nmind and different backgrounds as well. It doesn't mean these use\ncases are wrong and/or the experience is irrelevant and/or the\nreceived feedback should be just discarded.\n\nAlthough I also expressed my disagreement with Chris before, let's not\nassume any bad intent and especially sabotage as you put it. (Unless\nyou have a strong proof of this of course which I doubt you have.) We\nwant all kinds of feedback to be welcomed here. I'm sure our goal here\nis mutual, to make PostgreSQL even better than it is now. The only\nproblem is that the definition of \"better\" varies sometimes.\n\nI see you believe that 64-bit XIDs are going to be useful. That's\ngreat! Tell us more about your case and how the patch is going to help\nwith it. Also, maybe you could test your load with the applied\npatchset and tell us whether it makes things better or worse?\nPersonally I would love hearing this from you.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 9 Dec 2022 13:54:23 +0000",
"msg_from": "adherent postgres <adherent_postgres@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15"
},
{
"msg_contents": "Hi, Adherent!\n\nOn Fri, 9 Dec 2022 at 17:54, adherent postgres\n<adherent_postgres@hotmail.com> wrote:\n>\n> Hi Aleksander Alekseev\n> I think the xids 32bit transformation project has been dragged on for too long. Huawei's openGauss referenced this patch to implement xids 64bit, and Postgrespro also implemented xids 64bit, which is enough to prove that their worries are redundant.I think postgresql has no reason not to implement xid 64 bit. What about your opinion?\n\nI agree it's high time to commit 64xids into PostgreSQL.\n\nIf you can do your review of the whole proposed patchset or only the\npart that is likely to be committed earlier [1] it would help a lot!\nI'd recommend beginning with the last version of the patch in thread\n[1]. First, it is easier. Also, this review is going to be useful\nsooner and will help a committer on January commitfest a lot.\n[1]: https://www.postgresql.org/message-id/CAFiTN-uudj2PY8GsUzFtLYFpBoq_rKegW3On_8ZHdxB1mVv3-A%40mail.gmail.com\n\nRegards,\nPavel Borisov,\nSupabase\n\n\n",
"msg_date": "Fri, 9 Dec 2022 18:13:17 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15"
},
{
"msg_contents": "Hi Pavel Borisov\r\n Now the disk performance has been improved many times, and the capacity has also been increased many times. The WAL log already supports lz4 and zstd compression. I think each XLOG record size will increase by at least 4 bytes, which is not a big problem. What about your opinion?\r\n\r\nBest wish\r\n\r\n\r\n________________________________\r\nFrom: Pavel Borisov <pashkin.elfe@gmail.com>\r\nSent: December 9, 2022 22:13\r\nTo: adherent postgres <adherent_postgres@hotmail.com>\r\nCc: Aleksander Alekseev <aleksander@timescale.com>; pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>; Chris Travers <chris.travers@gmail.com>; Chris Travers <chris@orioledata.com>; Bruce Momjian <bruce@momjian.us>\r\nSubject: Re: Add 64-bit XIDs into PostgreSQL 15\r\n\r\nHi, Adherent!\r\n\r\nOn Fri, 9 Dec 2022 at 17:54, adherent postgres\r\n<adherent_postgres@hotmail.com> wrote:\r\n>\r\n> Hi Aleksander Alekseev\r\n> I think the xids 32bit transformation project has been dragged on for too long. Huawei's openGauss referenced this patch to implement xids 64bit, and Postgrespro also implemented xids 64bit, which is enough to prove that their worries are redundant.I think postgresql has no reason not to implement xid 64 bit. What about your opinion?\r\n\r\nI agree it's high time to commit 64xids into PostgreSQL.\r\n\r\nIf you can do your review of the whole proposed patchset or only the\r\npart that is likely to be committed earlier [1] it would help a lot!\r\nI'd recommend beginning with the last version of the patch in thread\r\n[1]. First, it is easier. Also, this review is going to be useful\r\nsooner and will help a committer on January commitfest a lot.\r\n[1]: https://www.postgresql.org/message-id/CAFiTN-uudj2PY8GsUzFtLYFpBoq_rKegW3On_8ZHdxB1mVv3-A%40mail.gmail.com\r\n\r\nRegards,\r\nPavel Borisov,\r\nSupabase",
"msg_date": "Fri, 9 Dec 2022 14:27:54 +0000",
"msg_from": "adherent postgres <adherent_postgres@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15"
},
{
"msg_contents": "On Fri, 9 Dec 2022 at 16:54, adherent postgres <\nadherent_postgres@hotmail.com> wrote:\n\n> Hi Aleksander Alekseev\n> I think the xids 32bit transformation project has been dragged on for too\n> long. Huawei's openGauss referenced this patch to implement xids 64bit, and\n> Postgrespro also implemented xids 64bit, which is enough to prove that\n> their worries are redundant.I think postgresql has no reason not to\n> implement xid 64 bit. What about your opinion?\n>\n\nYeah, I totally agree, the time has come. With a high transaction load,\nPostgres becomes more and more difficult to maintain.\nThe problem is in the overall complexity of the patch set. We need more\nreviewers.\n\nSince committing such a big patch all at once is not viable, from the\nstart of the work we did split it into several logical parts.\nThe evolutionary approach is preferable in this case. As far as I can see,\nthere is overall consensus to commit the SLRU-related\nchanges first.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Fri, 9 Dec 2022 17:29:58 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add 64-bit XIDs into PostgreSQL 15"
}
] |
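The wraparound debate above rests on PostgreSQL's modulo-2^32 XID arithmetic: with 32-bit XIDs, "older than" is decided by a signed 32-bit difference, so an old XID eventually appears to be in the future again. A minimal sketch of that comparison rule (an illustration in the spirit of the server's TransactionIdPrecedes, not the actual server code; the function name is made up):

```python
def xid_precedes(a: int, b: int) -> bool:
    """Return True if 32-bit xid `a` logically precedes `b`.

    The difference (a - b) is reduced modulo 2^32 and then interpreted
    as a signed 32-bit integer; a negative result means `a` is older.
    """
    diff = (a - b) & 0xFFFFFFFF
    if diff >= 0x80000000:          # reinterpret as signed 32-bit
        diff -= 0x100000000
    return diff < 0

# Nearby XIDs compare as expected:
assert xid_precedes(100, 200)
# Across the 2^32 wrap, a tiny XID counts as "newer" than a huge one:
assert xid_precedes(2**32 - 16, 10)
```

This is exactly why the distance between the oldest unfrozen XID and the next XID must be kept under about 2 billion, which is the maintenance burden the 64-bit proposal aims to remove.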
[
{
"msg_contents": "Dear PostgreSQL Hackers,\n\nSome time ago we faced a small issue in libpq regarding connections \nconfigured in the pg_hba.conf as type *hostssl* and using *md5* as \nauthentication method.\n\nOne of our users placed the client certificates in ~/.postgresql/ \n(*postgresql.crt*, *postgresql.key*), so that libpq sends them to the \nserver without having to manually set *sslcert* and *sslkey* - which is \nquite convenient. However, there are other servers where the same user \nauthenticates with password (md5), but libpq still sends the client \ncertificates for authentication by default. This causes the \nauthentication to fail even before the user has the chance to enter his \npassword, since he has no certificate registered in the server.\n\nTo make it clearer:\n\nAlthough the connection is configured as ...\n\n*host all dummyuser 192.168.178.42/32 md5*\n\n... and the client uses the following connection string ...\n\n*psql \"host=myserver dbname=db user=dummyuser\"*\n\n... the server tries to authenticate the user using the client \ncertificates in *~/.postgresql/* and, as expected, the authentication fails:\n\n*psql: error: connection to server at \"myserver\" (xx.xx.xx.xx), port \n5432 failed: SSL error: tlsv1 alert unknown ca*\n\nServer log:\n\n*2022-12-09 10:50:59.376 UTC [13896] LOG: could not accept SSL \nconnection: certificate verify failed*\n\nAm I missing something?\n\nObviously it would suffice to just remove or rename \n*~/.postgresql/postgresql.{crt,key}*, but the user needs them to \nauthenticate in other servers. So we came up with the workaround to \ncreate a new sslmode (no-clientcert) to make libpq explicitly ignore the \nclient certificates, so that we can avoid ssl authentication errors. \nThese small changes can be seen in the patch file attached.\n\n*psql \"host=myserver dbname=db user=dummyuser \nsslrootcert=server.crt sslmode=no-clientcert\"*\n\nAny better ideas to make libpq ignore \n*~/.postgresql/postgresql.{crt,key}*? Preferably without having to \nchange the source code :) Thanks in advance!\n\nBest,\n\nJim",
"msg_date": "Fri, 9 Dec 2022 13:55:25 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "The easiest way to achieve the same (without patching libpq) is by setting\nsslcert to something non-existent. While maybe not the most obvious way, I\nwould consider this the recommended approach.\n\n(sorry for the resend Jim, my original message got blocked to the wider\nmailing list)\n\nOn Fri, 6 Jan 2023 at 09:15, Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> Dear PostgreSQL Hackers,\n>\n> Some time ago we faced a small issue in libpq regarding connections\n> configured in the pg_hba.conf as type *hostssl* and using *md5* as\n> authentication method.\n>\n> One of our users placed the client certificates in ~/.postgresql/ (\n> *postgresql.crt,**postgresql.key*), so that libpq sends them to the\n> server without having to manually set *sslcert* and *sslkey* - which is\n> quite convenient. However, there are other servers where the same user\n> authenticates with password (md5), but libpq still sends the client\n> certificates for authentication by default. This causes the authentication\n> to fail even before the user has the chance to enter his password, since he\n> has no certificate registered in the server.\n>\n> To make it clearer:\n>\n> Although the connection is configured as ...\n>\n>\n> *host all dummyuser 192.168.178.42/32 <http://192.168.178.42/32> md5 *\n>\n> ... and the client uses the following connection string ...\n>\n> *psql \"host=myserver dbname=db user=**dummyuser\" *\n>\n> ... 
the server tries to authenticate the user using the client\n> certificates in *~/.postgresql/* and, as expected, the authentication\n> fails:\n>\n> *psql: error: connection to server at \"myserver\" (xx.xx.xx.xx), port 5432\n> failed: SSL error: tlsv1 alert unknown ca*\n>\n> Server log:\n>\n>\n> *2022-12-09 10:50:59.376 UTC [13896] LOG: could not accept SSL\n> connection: certificate verify failed *\n>\n> Am I missing something?\n>\n> Obviously it would suffice to just remove or rename *~/.postgresql/*\n> *postgresql.{crt,key}*, but the user needs them to authenticate in other\n> servers. So we came up with the workaround to create a new sslmode\n> (no-clientcert) to make libpq explicitly ignore the client certificates, so\n> that we can avoid ssl authentication errors. These small changes can be\n> seen in the patch file attached.\n>\n> *psql \"host=myserver dbname=db user=**dummyuser sslrootcert=server.crt\n> sslmode=no-clientcert\"*\n>\n> Any better ideas to make libpq ignore *~/.postgresql/*\n> *postgresql.{crt,key}*? Preferably without having to change the source\n> code :) Thanks in advance!\n>\n> Best,\n>\n> Jim\n>\n",
"msg_date": "Fri, 6 Jan 2023 09:37:05 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
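A concrete form of the workaround Jelte describes above (host, database, and user names are placeholders): pointing sslcert at a file that does not exist keeps libpq from picking up ~/.postgresql/postgresql.crt, while TLS itself stays enabled. A sketch, not a tested recipe:

```
psql "host=myserver dbname=db user=dummyuser sslmode=require sslcert=/path/does/not/exist.crt"
```

Because the certificate file cannot be loaded, no client certificate is sent and the server falls through to the md5 password prompt.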
{
"msg_contents": "Hi Jelte, thanks for the message. You're right, an invalid cert path \ndoes solve the issue - I even use it for tests. Although it solves the \nauthentication issue it still looks in my eyes like a non intuitive \nworkaround/hack. Perhaps a new sslmode isn't the right place for this \n\"feature\"? Thanks again for the suggestion!\n\nJim\n\nOn 06.01.23 09:32, Jelte Fennema wrote:\n> The easiest way to achieve the same (without patching libpq) is by \n> setting sslcert to something non-existent. While maybe not the most \n> obvious way, I would consider this the recommended approach.\n>\n> On Fri, 6 Jan 2023 at 09:15, Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> Dear PostgreSQL Hackers,\n>\n> Some time ago we faced a small issue in libpq regarding\n> connections configured in the pg_hba.conf as type *hostssl* and\n> using *md5* as authentication method.\n>\n> One of our users placed the client certificates in ~/.postgresql/\n> (*postgresql.crt,**postgresql.key*), so that libpq sends them to\n> the server without having to manually set *sslcert* and *sslkey* -\n> which is quite convenient. However, there are other servers where\n> the same user authenticates with password (md5), but libpq still\n> sends the client certificates for authentication by default. This\n> causes the authentication to fail even before the user has the\n> chance to enter his password, since he has no certificate\n> registered in the server.\n>\n> To make it clearer:\n>\n> Although the connection is configured as ...\n>\n> *host all dummyuser 192.168.178.42/32\n> <http://192.168.178.42/32> md5\n> *\n>\n> ... and the client uses the following connection string ...\n>\n> *psql \"host=myserver dbname=db user=***dummyuser*\" *\n>\n> ... 
the server tries to authenticate the user using the client\n> certificates in *~/.postgresql/* and, as expected, the\n> authentication fails:\n>\n> *psql: error: connection to server at \"myserver\" (xx.xx.xx.xx),\n> port 5432 failed: SSL error: tlsv1 alert unknown ca*\n>\n> Server log:\n>\n> *2022-12-09 10:50:59.376 UTC [13896] LOG: could not accept SSL\n> connection: certificate verify failed*\n>\n> Am I missing something?\n>\n> Obviously it would suffice to just remove or rename\n> *~/.postgresql/postgresql.{crt,key}*, but the user needs them to\n> authenticate in other servers. So we came up with the workaround\n> to create a new sslmode (no-clientcert) to make libpq explicitly\n> ignore the client certificates, so that we can avoid ssl\n> authentication errors. These small changes can be seen in the\n> patch file attached.\n>\n> *psql \"host=myserver dbname=db user=dummyuser\n> sslrootcert=server.crt sslmode=no-clientcert\"*\n>\n> Any better ideas to make libpq ignore\n> *~/.postgresql/postgresql.{crt,key}*? Preferably without having\n> to change the source code :) Thanks in advance!\n>\n> Best,\n>\n> Jim\n>\n",
"msg_date": "Fri, 6 Jan 2023 20:04:10 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "Hello Jim,\n\n> Hi Jelte, thanks for the message. You're right, an invalid cert path\n> does solve the issue - I even use it for tests. Although it solves the\n> authentication issue it still looks in my eyes like a non intuitive\n> workaround/hack. Perhaps a new sslmode isn't the right place for this\n> \"feature\"? Thanks again for the suggestion!\n\nI do not think it is worth it to change the current behavior of PostgreSQL\nin that sense.\n\nPostgreSQL looks for the cert and key under `~/.postgresql` as a facility.\nThese files do not exist by default, so if PostgreSQL finds something in\nthere it assumes you want to use it.\n\nI also think it is correct in the sense of choosing the certificate over\na password based authentication when it finds a certificate as the cert\nbased would provide you with stronger checks.\n\nI believe that using libpq services would be a better approach if you\nwant to connect to several PostgreSQL clusters from the very same\nsource machine. That way you would specify whatever is specific to each\ntarget cluster in a centralized configuration file and just reference each\ntarget cluster by its service name in the connection string. It would\nrequire that you move the SSL cert and key from `~/.postgresql` to somewhere\nelse and specify `sslcert` and `sslkey` in the expected service in the\n`~/.pg_service.conf` file.\n\nMore info about that can be found at:\n\nhttps://www.postgresql.org/docs/current/libpq-pgservice.html\n\nBest regards,\nIsrael.",
"msg_date": "Thu, 19 Jan 2023 18:12:44 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
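A sketch of the service-file layout Israel suggests (all service names, hosts, and paths here are hypothetical): clusters that use certificate authentication get explicit sslcert/sslkey entries pointing outside ~/.postgresql, while the md5-only server gets none, so libpq sends no client certificate for it:

```
# ~/.pg_service.conf
[cert_cluster]
host=server1.example.com
dbname=db
user=dummyuser
sslmode=verify-full
sslcert=/home/dummyuser/certs/server1.crt
sslkey=/home/dummyuser/certs/server1.key

[md5_cluster]
host=myserver
dbname=db
user=dummyuser
sslmode=require
```

A client would then connect with, e.g., psql "service=md5_cluster".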
{
"msg_contents": "Hello Israel,\n\nThanks a lot for the suggestion!\n\n > I do not think it is worth it to change the current behavior of \nPostgreSQL\n > in that sense.\n\nWell, I am not suggesting to change the current behavior of PostgreSQL in\nthat matter. Quite the contrary, I find this feature very convenient,\nespecially when you need to deal with many different clusters. What I am\nproposing is rather the possibility to disable it on demand :) I mean,\nin case I do not want libpq to try to authenticate using the certificates\nin `~/.postgresql`.\n\n > PostgreSQL looks for the cert and key under `~/.postgresql` as a \nfacility.\n > These files do not exist by default, so if PostgreSQL finds something in\n > there it assumes you want to use it.\n\nYes. I'm just trying to find an elegant way to disable this assumption \non demand.\n\n > I also think it is correct in the sense of choosing the certificate over\n > a password based authentication when it finds a certificate as the cert\n > based would provide you with stronger checks.\n\nI couldn't agree more.\n\n > It would require that you move the SSL cert and key from \n`~/.postgresql` to\n > somewhere else and specify `sslcert` and `sslkey` in the expected \nservice in the\n > `~/.pg_service.conf` file.\n\nThat's exactly what I am trying to avoid. IOW, I want to avoid having to \nmove\nthe cert files to another path and consequently having to configure 30\ndifferent entries in the pg_service.conf because of a single server that\ndoes not support ssl authentication.\n\nI do realize that this patch is a big ask, since probably nobody except \nme \"needs it\" :D\n\nThanks again for the message. Much appreciated!\n\nBest,\n\nJim",
"msg_date": "Fri, 20 Jan 2023 20:09:42 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 11:09 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> Well, I am not suggesting to change the current behavior of PostgreSQL in\n> that matter. Quite the contrary, I find this feature very convenient,\n> specially when you need to deal with many different clusters. What I am\n> proposing is rather the possibility to disable it on demand :) I mean,\n> in case I do not want libpq to try to authenticate using the certificates\n> in `~/.postgresql`.\n\nI think the sslcertmode=disable option that I introduced in [1] solves\nthis issue too; would it work for your case? That whole patchset is\nmeant to tackle the general case of the problem you've described.\n\n(Eventually I'd like to teach the server not to ask for a client\ncertificate if it's not going to use it.)\n\n> I do realize that this patch is a big ask, since probably nobody except\n> me \"needs it\" :D\n\nI'd imagine other people have run into it too; it's just a matter of\nhow palatable the workarounds were to them. :)\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/flat/CAAWbhmi4V9zEAvfUSCDFx1pOr3ZWrV9fuxkv_2maRqvyc-m9PQ%40mail.gmail.com#199c1f49fbefa6be401db35f5cfa7742\n\n\n",
"msg_date": "Fri, 20 Jan 2023 11:24:49 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "Hi Jacob,\n\n > I think the sslcertmode=disable option that I introduced in [1] \nsolves this issue too;\n\nWell, I see there is indeed a significant overlap between our patches -\nbut yours has a much more comprehensive approach! If I got it right,\nthe new sslcertmode=disable would indeed cancel the existing certs in\n'~/.postgresql/' in case they exist. Right?\n\n+ if (conn->sslcertmode[0] == 'd') /* disable */\n+ {\n+ /* don't send a client cert even if we have one */\n+ have_cert = false;\n+ }\n+ else if (fnbuf[0] == '\\0')\n\nMy idea was rather to use the existing sslmode with a new option\n\"no-clientcert\" that actually does the same:\n\n /* sslmode no-clientcert */\n if (conn->sslmode[0] == 'n')\n {\n fnbuf[0] = '\\0';\n }\n\n ...\n\n if (fnbuf[0] == '\\0')\n {\n /* no home directory, proceed without a client cert */\n have_cert = false;\n }\n\nI wish I had found your patchset some months ago. Now I hate myself\nfor the duplication of efforts :D\n\nWhat is the status of your patchset?\n\nCheers,\nJim",
"msg_date": "Sat, 21 Jan 2023 13:35:49 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 4:35 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> Well, I see there is indeed a significant overlap between our patches -\n> but yours has a much more comprehensive approach! If I got it right,\n> the new sslcertmode=disable would indeed cancel the existing certs in\n> '~/.postgresql/' in case they exist. Right?\n\nRight!\n\n> I wish I had found your patchset some months ago. Now I hate myself\n> for the duplication of efforts :D\n\nIt's a big list... I missed your thread back when you first posted it.\n\n> What is the status of your patchset?\n\nCurrently waiting for a committer to sign on. But it's now being\ndiscussed in another feature thread [1], coincidentally, so I think\nthe odds are fairly good.\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoZbFiYJWqxakw0fcNrPSPCqc_QnF8iCdXZqyM%3Dd5jA-KA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 23 Jan 2023 10:36:00 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "Hello Jim/Jacob,\n\n> > I do not think it is worth it to change the current behavior of\n> PostgreSQL\n> > in that sense.\n>\n> Well, I am not suggesting to change the current behavior of PostgreSQL in\n> that matter. Quite the contrary, I find this feature very convenient,\n> specially when you need to deal with many different clusters. What I am\n> proposing is rather the possibility to disable it on demand :) I mean,\n> in case I do not want libpq to try to authenticate using the certificates\n> in `~/.postgresql`.\n>\n> > PostgreSQL looks for the cert and key under `~/.postgresql` as a\n> facility.\n> > These files do not exist by default, so if PostgreSQL finds something in\n> > there it assumes you want to use it.\n>\n> Yes. I'm just trying to find an elegant way to disable this assumption\n> on demand.\n\nRight, I do understand your proposal. I was just thinking out loud and\nwondering about the broad audience of such a mode in the sslmode\nargument.\n\nSomething else that came to my mind is that sslmode itself seems more\nlike an argument covering the client expectations regarding the connection\nto the server, I mean, if it expects channel encryption and/or validation\nof the server identity.\n\nI wonder, if we are willing to add some functionality around the expectations\nregarding the client certificate, whether it wouldn't make more sense for it to\nbe controlled through something like the clientcert option of pg_hba? If so,\nthe downside of that is the fact that the client would still send the certificate\neven if it would not be used at all by the server. Again, just thinking out loud\nabout what your goal is and possible ways of accomplishing that :)\n\n> > I do realize that this patch is a big ask, since probably nobody except\n> > me \"needs it\" :D\n>\n> I'd imagine other people have run into it too; it's just a matter of\n> how palatable the workarounds were to them. :)\n\nI imagine more people might have already hit a similar situation too. While\nthe workaround can seem a bit weird, in my very humble opinion the\nuser/client is somehow still the one to blame in this case as it is providing\nthe \"wrong\" file in a path that is checked by libpq. With that in mind I would\nbe inclined to say it is an acceptable workaround.\n\n> > I think the sslcertmode=disable option that I introduced in [1]\n> solves this issue too;\n>\n> Well, I see there is indeed a significant overlap between our patches -\n> but yours has a much more comprehensive approach! If I got it right,\n> the new sslcertmode=disable would indeed cancel the existing certs in\n> '~/.postgresql/' in case they exist. Right?\n>\n> + if (conn->sslcertmode[0] == 'd') /* disable */\n> + {\n> + /* don't send a client cert even if we have one */\n> + have_cert = false;\n> + }\n> + else if (fnbuf[0] == '\\0')\n>\n> My idea was rather to use the existing sslmode with a new option\n> \"no-clientcert\" that does actually the same:\n>\n> /* sslmode no-clientcert */\n> if (conn->sslmode[0] == 'n')\n> {\n> fnbuf[0] = '\\0';\n> }\n>\n> ...\n>\n> if (fnbuf[0] == '\\0')\n> {\n> /* no home directory, proceed without a client cert */\n> have_cert = false;\n> }\n>\n> I wish I had found your patchset some months ago. Now I hate myself\n> for the duplication of efforts :D\n\nAlthough both patches achieve a similar goal regarding not sending the\nclient certificate, there is still a slight but in my opinion important\ndifference between them: sslmode=disable will also disable channel\nencryption. It may or may not be acceptable depending on how the\nconnection is between your client and the server.\n\nKind regards,\nIsrael.
With that in mind I would be inclined to say it isan acceptable workaround.> > I think the sslcertmode=disable option that I introduced in [1] solves this issue too;> > Well, I see there is indeed a significant overlap between our patches -> but yours has a much more comprehensive approach! If I got it right,> the new slcertmode=disable would indeed cancel the existing certs in> '~/.postgresql/ in case they exist. Right?> > + if (conn->sslcertmode[0] == 'd') /* disable */> + {> + /* don't send a client cert even if we have one */> + have_cert = false;> + }> + else if (fnbuf[0] == '\\0')> > My idea was rather to use the existing sslmode with a new option> \"no-clientcert\" that does actually the same:> > /* sslmode no-clientcert */> if (conn->sslmode[0] == 'n')> {> fnbuf[0] = '\\0';> }> > ...> > if (fnbuf[0] == '\\0')> {> /* no home directory, proceed without a client cert */> have_cert = false;> }> > I wish I had found your patchset some months ago. Now I hate myself> for the duplication of efforts :DAlthough both patches achieve a similar goal regarding not sending theclient certificate there is still a slight but in my opinion important differencebetween them: sslmode=disable will also disable channel encryption. Itmay or may not be acceptable depending on how the connection is betweenyour client and the server.Kind regards,Israel.",
"msg_date": "Wed, 25 Jan 2023 12:46:55 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 7:47 AM Israel Barth Rubio\n<barthisrael@gmail.com> wrote:\n> I imagine more people might have already hit a similar situation too. While the\n> workaround can seem a bit weird, in my very humble opinion the user/client is\n> somehow still the one to blame in this case as it is providing the \"wrong\" file in\n> a path that is checked by libpq. With that in mind I would be inclined to say it is\n> an acceptable workaround.\n\nI'm not sure how helpful it is to assign \"blame\" here. I think the\nrequested improvement is reasonable -- it should be possible to\noverride the default for a particular connection, without having to\npick a junk value that you hope doesn't match up with an actual file\non the disk.\n\n> Although both patches achieve a similar goal regarding not sending the\n> client certificate there is still a slight but in my opinion important difference\n> between them: sslmode=disable will also disable channel encryption. It\n> may or may not be acceptable depending on how the connection is between\n> your client and the server.\n\nsslmode=disable isn't used in either of our proposals, though. Unless\nI'm missing what you mean?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 25 Jan 2023 09:09:47 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "Hello Jacob,\n\n> I'm not sure how helpful it is to assign \"blame\" here. I think the\n> requested improvement is reasonable -- it should be possible to\n> override the default for a particular connection, without having to\n> pick a junk value that you hope doesn't match up with an actual file\n> on the disk.\n\nRight, I agree we can look for improvements. \"blame\" was likely\nnot the best word to express myself in that message.\n\n> sslmode=disable isn't used in either of our proposals, though. Unless\n> I'm missing what you mean?\n\nSorry about the noise, I misread the code snippet shared earlier\n(sslmode x sslcertmode). I just took a closer read at the previously\nmentioned patch about sslcertmode and it seems a bit\nmore elegant way of achieving something similar to what has\nbeen proposed here.\n\nBest regards,\nIsrael.\n\nEm qua., 25 de jan. de 2023 às 14:09, Jacob Champion <\njchampion@timescale.com> escreveu:\n\n> On Wed, Jan 25, 2023 at 7:47 AM Israel Barth Rubio\n> <barthisrael@gmail.com> wrote:\n> > I imagine more people might have already hit a similar situation too.\n> While the\n> > workaround can seem a bit weird, in my very humble opinion the\n> user/client is\n> > somehow still the one to blame in this case as it is providing the\n> \"wrong\" file in\n> > a path that is checked by libpq. With that in mind I would be inclined\n> to say it is\n> > an acceptable workaround.\n>\n> I'm not sure how helpful it is to assign \"blame\" here. I think the\n> requested improvement is reasonable -- it should be possible to\n> override the default for a particular connection, without having to\n> pick a junk value that you hope doesn't match up with an actual file\n> on the disk.\n>\n> > Although both patches achieve a similar goal regarding not sending the\n> > client certificate there is still a slight but in my opinion important\n> difference\n> > between them: sslmode=disable will also disable channel encryption. 
It\n> > may or may not be acceptable depending on how the connection is between\n> > your client and the server.\n>\n> sslmode=disable isn't used in either of our proposals, though. Unless\n> I'm missing what you mean?\n>\n> --Jacob\n>\n\nHello Jacob,> I'm not sure how helpful it is to assign \"blame\" here. I think the> requested improvement is reasonable -- it should be possible to> override the default for a particular connection, without having to> pick a junk value that you hope doesn't match up with an actual file> on the disk.Right, I agree we can look for improvements. \"blame\" was likelynot the best word to express myself in that message.> sslmode=disable isn't used in either of our proposals, though. Unless> I'm missing what you mean?Sorry about the noise, I misread the code snippet shared earlier(sslmode x sslcertmode). I just took a closer read at the previouslymentioned patch about sslcertmode and it seems a bitmore elegant way of achieving something similar to what hasbeen proposed here.Best regards,Israel.Em qua., 25 de jan. de 2023 às 14:09, Jacob Champion <jchampion@timescale.com> escreveu:On Wed, Jan 25, 2023 at 7:47 AM Israel Barth Rubio\n<barthisrael@gmail.com> wrote:\n> I imagine more people might have already hit a similar situation too. While the\n> workaround can seem a bit weird, in my very humble opinion the user/client is\n> somehow still the one to blame in this case as it is providing the \"wrong\" file in\n> a path that is checked by libpq. With that in mind I would be inclined to say it is\n> an acceptable workaround.\n\nI'm not sure how helpful it is to assign \"blame\" here. 
I think the\nrequested improvement is reasonable -- it should be possible to\noverride the default for a particular connection, without having to\npick a junk value that you hope doesn't match up with an actual file\non the disk.\n\n> Although both patches achieve a similar goal regarding not sending the\n> client certificate there is still a slight but in my opinion important difference\n> between them: sslmode=disable will also disable channel encryption. It\n> may or may not be acceptable depending on how the connection is between\n> your client and the server.\n\nsslmode=disable isn't used in either of our proposals, though. Unless\nI'm missing what you mean?\n\n--Jacob",
"msg_date": "Wed, 25 Jan 2023 15:27:04 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": " > I think the sslcertmode=disable option that I introduced in [1] solves\n > this issue too; would it work for your case? That whole patchset is\n > meant to tackle the general case of the problem you've described.\n >\n > (Eventually I'd like to teach the server not to ask for a client\n > certificate if it's not going to use it.)\n\nthere is an option in pg_hba.conf on the server side called \"clientcert\" \nthat can be specified besides the auth method that controls if certain \nclient connections are required to send client certificate for \nadditional verification. The value of \"clientcert\" can be \"verify-ca\" or \n\"verify-full\". For example:\n\nhostssl all all 127.0.0.1/32 md5 \nclientcert=verify-full\n\nIf clientcert is not requested by the server, but yet the client still \nsends the certificate, the server will still verify it. This is the case \nin this discussion.\n\nI agree that it is a more elegant approach to add \"sslcertmode=disable\" \non the client side to prevent sending default certificate.\n\nBut, if the server does request clientcert but client uses \n\"sslcertmode=disable\" to connect and not give a certificate, it would \nalso result in authentication failure. In this case, we actually would \nwant to ignore \"sslcertmode=disable\" and send default certificates if found.\n\nIt would perhaps to better change the parameter to \n\"defaultclientcert=on-demand\" on the client side that will:\n\n1. not send the existing default certificate if server does not request \na certificate\n2. send the existing default certificate if server does request a \ncertificate while the client does not use \"sslcert\" parameter to specify \nanother non-default certificate\n\nI put \"default\" in the parameter name to indicate that it only applies \nto default certificate. 
If user specifies a non-default certificate \nusing \"sslcert\" parameter, \"defaultclientcert\" should not be used and \nclient should give error if both exists.\n\n\nCary Huang\n--------------------------------\nHighGo Software Canada\nwww.highgo.ca\n\n\n\n\n",
"msg_date": "Fri, 27 Jan 2023 12:13:32 -0800",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "On 27.01.23 21:13, Cary Huang wrote:\n\n> I agree that it is a more elegant approach to add \n> \"sslcertmode=disable\" on the client side to prevent sending default \n> certificate.\n>\n> But, if the server does request clientcert but client uses \n> \"sslcertmode=disable\" to connect and not give a certificate, it would \n> also result in authentication failure. In this case, we actually would \n> want to ignore \"sslcertmode=disable\" and send default certificates if \n> found.\n\nThose are all very good points.\n\n > But, if the server does request clientcert but client uses \n\"sslcertmode=disable\" to connect and not give a certificate, it would \nalso result in authentication failure. In this case, we actually would \nwant to ignore \"sslcertmode=disable\" and send default certificates if \nfound.\n\nI'm just wondering if this is really necessary. If the server asks for a \ncertificate and the user explicitly says \"I don't want to send it\", \nshouldn't it be ok for the server return an authentication failure? I \nmean, wouldn't it defeat the purpose of \"sslcertmode=disable\"? Although \nit might be indeed quite handy I'm not sure how I feel about explicitly \ntelling the client to not send a certificate and having it being sent \nanyway :)\n\nBest, Jim",
"msg_date": "Sun, 29 Jan 2023 14:02:18 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 12:13 PM Cary Huang <cary.huang@highgo.ca> wrote:\n> > (Eventually I'd like to teach the server not to ask for a client\n> > certificate if it's not going to use it.)\n>\n> If clientcert is not requested by the server, but yet the client still\n> sends the certificate, the server will still verify it. This is the case\n> in this discussion.\n\nI think this is maybe conflating the application-level behavior with\nthe protocol-level behavior. A client certificate is requested by the\nserver if ssl_ca_file is set, whether clientcert is set in the HBA or\nnot. It's this disconnect between the intuitive behavior and the\nactual behavior that I'd like to eventually improve.\n\n--Jacob\n\n\n",
"msg_date": "Mon, 30 Jan 2023 13:01:29 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "On Sun, Jan 29, 2023 at 5:02 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> On 27.01.23 21:13, Cary Huang wrote:\n> > But, if the server does request clientcert but client uses\n> \"sslcertmode=disable\" to connect and not give a certificate, it would\n> also result in authentication failure. In this case, we actually would\n> want to ignore \"sslcertmode=disable\" and send default certificates if\n> found.\n>\n> I'm just wondering if this is really necessary. If the server asks for a\n> certificate and the user explicitly says \"I don't want to send it\",\n> shouldn't it be ok for the server return an authentication failure? I\n> mean, wouldn't it defeat the purpose of \"sslcertmode=disable\"?\n\n+1. In my opinion, if I tell libpq not to share my certificate with\nthe server, and it then fails to authenticate, that's intended and\nuseful behavior. (I don't really want libpq to try to find more ways\nto authenticate me; that causes other security issues [1, 2].)\n\n--Jacob\n\n[1] https://www.postgresql.org/message-id/0adf992619e7bf138eb4119622d37e3efb6515d5.camel%40j-davis.com\n[2] https://www.postgresql.org/message-id/46562.1637695110%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 30 Jan 2023 13:02:04 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt\n and key} exist"
},
{
"msg_contents": "I'm withdrawing this patch, as the same feature was already implemented \nin a different patch written by Jacob[1]\n\nThanks everyone!\n\nBest, Jim\n\n1- \nhttps://www.postgresql.org/message-id/flat/CAAWbhmi4V9zEAvfUSCDFx1pOr3ZWrV9fuxkv_2maRqvyc-m9PQ@mail.gmail.com#199c1f49fbefa6be401db35f5cfa7742",
"msg_date": "Tue, 21 Feb 2023 08:23:26 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: Authentication fails for md5 connections if\n ~/.postgresql/postgresql.{crt and key} exist"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nIs there a way to use exported snapshots in autocommit mode ?\n\nEither something similar to defaults\nin default_transaction_deferrable, default_transaction_isolation,\ndefault_transaction_read_only\nOr something that could be set in the function's SET part.\n\nContext: I am working on speeding up large table copy via parallelism using\npl/proxy SPLIT functionality and would like to have all the parallel\nsub-copies run with the same snapshot.\n\nBest Regards\nHannu\n\nHello hackers,Is there a way to use exported snapshots in autocommit mode ?Either something similar to defaults in default_transaction_deferrable, default_transaction_isolation, default_transaction_read_onlyOr something that could be set in the function's SET part.Context: I am working on speeding up large table copy via parallelism using pl/proxy SPLIT functionality and would like to have all the parallel sub-copies run with the same snapshot.Best RegardsHannu",
"msg_date": "Fri, 9 Dec 2022 17:07:40 +0100",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": true,
"msg_subject": "Is there a way to use exported snapshots in autocommit mode ?"
}
] |
[
{
"msg_contents": "Hi,\n\nTImescale makes use of inheritance in its partitioning implementation,\nso we can't make use of the publish_via_partition_root publication\noption during logical replication. We don't guarantee that the exact\nsame partitions exist on both sides, so that's a major roadblock for\nour implementing logical subscription support, and by the same token\nit's not possible to replicate out to a \"standard\" table.\n\nIf we were to work on a corresponding publish_via_inheritance_root\noption, is there a chance that it'd be accepted, or is there some\nother technical reason preventing it? In addition to Timescale, it\nseems like other installations using extensions like pg_partman could\npotentially make use of this, during online migrations from the old\nstyle of partitioning to the new.\n\nSome inheritance hierarchies won't be \"partitioned\" hierarchies, of\ncourse, but the user can simply not set that replication option for\nthose publications. (Alternatively, I can imagine a system where an\nextension explicitly marks a table as having a different \"publication\nroot\", and then handling that marker with the existing replication\noption. But that may be overengineering things.)\n\nWDYT?\n\n--Jacob\n\n\n",
"msg_date": "Fri, 9 Dec 2022 10:21:21 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 10:21 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Some inheritance hierarchies won't be \"partitioned\" hierarchies, of\n> course, but the user can simply not set that replication option for\n> those publications.\n\nThe more I noodle around with this approach, the less I like it: it\nfeels overly brittle, we have to deal with multiple inheritance\nsomehow, and there seem to be many code paths that need to be\npartially duplicated. And my suggestion that the user could just opt\nout of problematic cases would be a bad user experience, since any\nnon-partition inheritance hierarchies would just silently break.\n\nInstead...\n\n> (Alternatively, I can imagine a system where an\n> extension explicitly marks a table as having a different \"publication\n> root\", and then handling that marker with the existing replication\n> option. But that may be overengineering things.)\n\n...I'm going to try this approach next, since it's opt-in and may be\nable to better use the existing code paths.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 6 Jan 2023 13:55:47 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "Hi Jacob,\n\n> we have to deal with multiple inheritance somehow\n\nI would like to point out that we shouldn't necessarily support\nmultiple inheritance in all the possible cases, at least not in the\nfirst implementation. Supporting simple cases of inheritance would be\nalready a valuable feature even if it will have certain limitations.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 9 Jan 2023 11:41:19 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 12:41 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I would like to point out that we shouldn't necessarily support\n> multiple inheritance in all the possible cases, at least not in the\n> first implementation. Supporting simple cases of inheritance would be\n> already a valuable feature even if it will have certain limitations.\n\nI agree. What I'm trying to avoid is the case where replication works\nnicely for a table until someone performs an ALTER TABLE ... [NO]\nINHERIT, and then Something Bad happens because we can't support the\nnew edge case. If every inheritance tree is automatically opted into\nthis new publication behavior, I think it'll be easier to hit that by\naccident, making the whole thing feel brittle.\n\nBy contrast, if we have to opt specific tables into this feature by\nmarking them in the catalog, then not only will it be harder to hit by\naccident (because we can document the requirements for the marker\nfunction, and then it's up to the callers/extension authors/DBAs to\nmaintain those requirements), but we even have the chance to bail out\nduring an inheritance change if we see that the table is marked in\nthis way.\n\nTwo general pieces of progress to report:\n\n1) I'm playing around with a marker in pg_inherits, where the inhseqno\nis set to a sentinel value (0) for an inheritance relationship that\nhas been marked for logical publication. The intent is that the\npg_inherits helpers will prevent further inheritance relationships\nwhen they see that marker, and reusing inhseqno means we can make use\nof the existing index to do the lookups. An example:\n\n =# CREATE TABLE root (a int);\n =# CREATE TABLE root_p1 () INHERITS (root);\n =# SELECT pg_set_logical_root('root_p1', 'root');\n\nand then any data written to root_p1 gets replicated via root instead,\nif publish_via_partition_root = true. 
If root_p1 is set up with extra\ncolumns, they'll be omitted from replication.\n\n2) While this strategy works well for ongoing replication, it's not\nenough to get the initial synchronization correct. The subscriber\nstill does a COPY of the root table directly, missing out on all the\nlogical descendant data. The publisher will have to tell the\nsubscriber about the relationship somehow, and older subscriber\nversions won't understand how to use that (similar to how old\nsubscribers can't correctly handle row filters).\n\n--Jacob\n\n\n",
"msg_date": "Tue, 10 Jan 2023 11:36:12 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On 1/10/23 11:36, Jacob Champion wrote:\n> 1) I'm playing around with a marker in pg_inherits, where the inhseqno\n> is set to a sentinel value (0) for an inheritance relationship that\n> has been marked for logical publication. The intent is that the\n> pg_inherits helpers will prevent further inheritance relationships\n> when they see that marker, and reusing inhseqno means we can make use\n> of the existing index to do the lookups. An example:\n> \n> =# CREATE TABLE root (a int);\n> =# CREATE TABLE root_p1 () INHERITS (root);\n> =# SELECT pg_set_logical_root('root_p1', 'root');\n> \n> and then any data written to root_p1 gets replicated via root instead,\n> if publish_via_partition_root = true. If root_p1 is set up with extra\n> columns, they'll be omitted from replication.\n\nFirst draft attached. (Due to some indentation changes, it's easiest to\nread with --ignore-all-space.)\n\nThe overall strategy is\n- introduce pg_set_logical_root, which sets the sentinel in pg_inherits,\n- swap out any checks for partition parents with checks for logical\nparents in the publishing code, and\n- introduce the ability for a subscriber to perform an initial table\nsync from multiple tables on the publisher.\n\n> 2) While this strategy works well for ongoing replication, it's not\n> enough to get the initial synchronization correct. The subscriber\n> still does a COPY of the root table directly, missing out on all the\n> logical descendant data. The publisher will have to tell the\n> subscriber about the relationship somehow, and older subscriber\n> versions won't understand how to use that (similar to how old\n> subscribers can't correctly handle row filters).\n\nI partially solved this by having the subscriber pull the logical\nhierarchy from the publisher to figure out which tables to COPY. This\nworks when publish_via_partition_root=true, but it doesn't correctly\nreturn to the previous behavior when the setting is false. 
I need to\ncheck the publication setting from the subscriber, too, but that opens\nup the question of what to do if two different publications conflict.\n\nAnd while I go down that rabbit hole, I wanted to see if anyone thinks\nthis whole thing is unacceptable. :D\n\nThanks,\n--Jacob",
"msg_date": "Fri, 20 Jan 2023 09:53:28 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "Hi,\n\nI'm going to register this in CF for feedback.\n\nSummary for potential reviewers: we don't use declarative partitions in\nthe Timescale partitioning scheme, but it'd be really nice to be able to\nreplicate between our tables and standard tables, or between two\nTimescale-partitioned tables with different layouts. This patch lets\nextensions (or savvy users) upgrade an existing inheritance relationship\nbetween two tables into a \"logical partition\" relationship, so that they\ncan be handled with the publish_via_partition_root machinery.\n\nI hope this might also help pg_partman users migrate between old- and\nnew-style partition schemes, but that's speculation.\n\nOn 1/20/23 09:53, Jacob Champion wrote:\n>> 2) While this strategy works well for ongoing replication, it's not\n>> enough to get the initial synchronization correct. The subscriber\n>> still does a COPY of the root table directly, missing out on all the\n>> logical descendant data. The publisher will have to tell the\n>> subscriber about the relationship somehow, and older subscriber\n>> versions won't understand how to use that (similar to how old\n>> subscribers can't correctly handle row filters).\n> \n> I partially solved this by having the subscriber pull the logical\n> hierarchy from the publisher to figure out which tables to COPY. This\n> works when publish_via_partition_root=true, but it doesn't correctly\n> return to the previous behavior when the setting is false. I need to\n> check the publication setting from the subscriber, too, but that opens\n> up the question of what to do if two different publications conflict.\n\nSecond draft attached, which fixes that bug. I kept thinking to myself\nthat this would be much easier if the publisher told the subscriber what\ndata to copy rather than having the subscriber hardcode the initial sync\nprocess... 
and then I realized that I could, sort of, move in that\ndirection.\n\nThis version adds a SQL function to determine the list of source tables\nto COPY into a subscriber's target table. Now the publisher can make use\nof whatever catalogs it needs to make that list and the subscriber\ndoesn't need to couple to them. (This could also provide a way for\npublishers to provide more generic \"table indirection\" in the future,\nbut I'm wary of selling genericism as a feature here.)\n\nI haven't solved the problem where two publications of the same table\nhave different settings for publish_via_partition_root. I was curious to\nsee how the existing partition code prevented problems, but I'm not\nreally sure that it does... Here are some situations where the existing\nimplementation duplicates data on the initial sync:\n\n1) A single subscription to two publications, one with\npublish_via_partition_root on and the other off, which publish the same\npartitioned table\n\n2) A single subscription to two publications with\npublish_via_partition_root on, one of which publishes a root partition\nand the other of which publishes a descendant/leaf\n\n3) A single subscription to two publications with\npublish_via_partition_root on, one of which publishes FOR ALL TABLES and\nthe other of which publishes a descendant/leaf\n\nIs it expected that DBAs should avoid these cases, or are they worth\npursuing with a bug fix?\n\nThanks,\n--Jacob",
"msg_date": "Tue, 28 Feb 2023 14:47:09 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "Hi Jacob,\n\n> I'm going to register this in CF for feedback.\n\nMany thanks for the updated patch.\n\nDespite the fact that the patch is still work in progress all in all\nit looks very good to me.\n\nSo far I only have a couple of nitpicks, mostly regarding the code coverage [1]:\n\n```\n+ tablename = get_rel_name(tableoid);\n+ if (tablename == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_TABLE),\n+ errmsg(\"OID %u does not refer to a table\", tableoid)));\n+ rootname = get_rel_name(rootoid);\n+ if (rootname == NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_UNDEFINED_TABLE),\n+ errmsg(\"OID %u does not refer to a table\", rootoid)));\n```\n\n```\n+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data,\n+ lengthof(descRow), descRow);\n+\n+ if (res->status != WALRCV_OK_TUPLES)\n+ ereport(ERROR,\n+ (errmsg(\"could not fetch logical descendants for\ntable \\\"%s.%s\\\" from publisher: %s\",\n+ nspname, relname, res->err)));\n```\n\n```\n+ res = walrcv_exec(LogRepWorkerWalRcvConn, cmd.data, 0, NULL);\n+ pfree(cmd.data);\n+ if (res->status != WALRCV_OK_COPY_OUT)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_CONNECTION_FAILURE),\n+ errmsg(\"could not start initial contents copy\nfor table \\\"%s.%s\\\" from remote %s: %s\",\n+ lrel.nspname, lrel.relname, quoted_name,\nres->err)));\n```\n\nThese new ereport() paths are never executed when we run the tests.\nI'm not 100% sure if they are \"should never happen in practice\" cases\nor not. If they are, I suggest adding corresponding comments.\nOtherwise we have to test these paths.\n\n```\n+ else\n+ {\n+ /* For older servers, we only COPY the table itself. */\n+ char *quoted = quote_qualified_identifier(lrel->nspname,\n+ lrel->relname);\n+ *to_copy = lappend(*to_copy, quoted);\n+ }\n```\n\nAlso we have to be extra careful with this code path because it is not\ntest-covered too.\n\n```\n+Datum\n+pg_get_publication_rels_to_sync(PG_FUNCTION_ARGS)\n+{\n+#define NUM_SYNC_TABLES_ELEM 1\n```\n\nWhat is this macro for?\n\n```\n+{ oid => '8137', descr => 'get list of tables to copy during initial sync',\n+ proname => 'pg_get_publication_rels_to_sync', prorows => '10',\nproretset => 't',\n+ provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass text',\n+ proargnames => '{rootid,pubname}',\n+ prosrc => 'pg_get_publication_rels_to_sync' },\n```\n\nSomething seems odd here. Is there a chance that it can return\ndifferent results even within one statement, especially considering\nthe fact that pg_set_logical_root() is VOLATILE? Maybe\npg_get_publication_rels_to_sync() should be VOLATILE too [2].\n\n```\n+{ oid => '8136', descr => 'mark a table root for logical replication',\n+ proname => 'pg_set_logical_root', provolatile => 'v', proparallel => 'u',\n+ prorettype => 'void', proargtypes => 'regclass regclass',\n+ prosrc => 'pg_set_logical_root' },\n```\n\nShouldn't we also have pg_unset(reset?)_logical_root?\n\n[1]: https://github.com/afiskon/pgscripts/blob/master/code-coverage.sh\n[2]: https://www.postgresql.org/docs/current/xfunc-volatility.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 7 Mar 2023 13:40:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Tue, Mar 7, 2023 at 2:40 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> So far I only have a couple of nitpicks, mostly regarding the code coverage [1]:\n\nYeah, I need to work on error cases and their coverage in general.\nThere are more cases that I need to reject as well (marked TODO).\n\n> +Datum\n> +pg_get_publication_rels_to_sync(PG_FUNCTION_ARGS)\n> +{\n> +#define NUM_SYNC_TABLES_ELEM 1\n> ```\n>\n> What is this macro for?\n\nWhoops, that's cruft from an intermediate implementation. Will fix in\nthe next draft.\n\n> +{ oid => '8137', descr => 'get list of tables to copy during initial sync',\n> + proname => 'pg_get_publication_rels_to_sync', prorows => '10',\n> proretset => 't',\n> + provolatile => 's', prorettype => 'regclass', proargtypes => 'regclass text',\n> + proargnames => '{rootid,pubname}',\n> + prosrc => 'pg_get_publication_rels_to_sync' },\n> ```\n>\n> Something seems odd here. Is there a chance that it can return\n> different results even within one statement, especially considering\n> the fact that pg_set_logical_root() is VOLATILE? Maybe\n> pg_get_publication_rels_to_sync() should be VOLATILE too [2].\n\nHm. I'm not sure how this all should behave in the face of concurrent\nstructural changes, or how the existing publication queries handle\nthat same situation (e.g. partition attachment), so that's definitely\nsomething for me to look into. At a glance, I'm not sure that\nreturning different results for the same table is more correct. And I\nfeel like a VOLATILE implementation might significantly impact the\nJOIN/LATERAL performance in the pg_dump query? But I don't really know\nhow that's planned.\n\n> +{ oid => '8136', descr => 'mark a table root for logical replication',\n> + proname => 'pg_set_logical_root', provolatile => 'v', proparallel => 'u',\n> + prorettype => 'void', proargtypes => 'regclass regclass',\n> + prosrc => 'pg_set_logical_root' },\n> ```\n>\n> Shouldn't we also have pg_unset(reset?)_logical_root?\n\nMy initial thought was that a one-way \"upgrade\" makes things easier to\nreason about. But a one-way function is not good UX, so maybe we\nshould provide that. We'd need to verify and test what happens if you\nundo/\"detach\" the logical tree during replication.\n\nIf it's okay to blindly replace any existing inhseqno with, say, 1 (on\na table with single inheritance), then we can reverse the process\nsafely. If not, we can't -- at least not with the current\nimplementation -- because we don't save the previous value anywhere.\n\nThanks for the review!\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 14:07:22 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Wed, Mar 1, 2023 at 9:47 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> Hi,\n>\n> I'm going to register this in CF for feedback.\n>\n> Summary for potential reviewers: we don't use declarative partitions in\n> the Timescale partitioning scheme, but it'd be really nice to be able to\n> replicate between our tables and standard tables, or between two\n> Timescale-partitioned tables with different layouts. This patch lets\n> extensions (or savvy users) upgrade an existing inheritance relationship\n> between two tables into a \"logical partition\" relationship, so that they\n> can be handled with the publish_via_partition_root machinery.\n>\n> I hope this might also help pg_partman users migrate between old- and\n> new-style partition schemes, but that's speculation.\n>\n\nOK, my understanding is that TimescaleDB uses some kind of\nquasi-partitioned/inherited tables (aka hypertables? [1]) internally,\nand your proposed WIP patch provides a set_logical_root() function\nwhich combines with the logical replication (LR) PUBLICATION option\n\"publish_via_partition_root\" to help to replicate those.\n\nYou also mentioned pg_partman. IIUC pg_partman is a partitioning\nextension [2] that pre-dated the native PostgreSQL partitioning\nintroduced in PG10 (i.e. quite a while ago). I guess it would be a\nvery niche group of users that are still using pg_partman old-style\n(pre-PG10) partitions and want to migrate them but have not already\ndone so. Also, the pg_partman README [3] says since v4.0.0 there is\nextensive support for native PostgreSQL partitions, so perhaps\nexisting LR already works for those.\n\nOutside the scope of special TimescaleDB tables and the speculated\npg_partman old-style table migration, will this proposed new feature\nhave any other application? In other words, do you know if this\nproposal will be of any benefit to the *normal* users who just have\nnative PostgreSQL inherited tables they want to replicate? I haven’t\nyet looked at the WIP patch TAP tests – so apologies for my question\nif the benefits to normal users are self-evident from your test cases.\n\n------\n[1] https://docs.timescale.com/use-timescale/latest/hypertables/about-hypertables/\n[2] https://www.crunchydata.com/blog/native-partitioning-with-postgres\n[3] https://github.com/pgpartman/pg_partman\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 1 Apr 2023 09:17:14 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Fri, Mar 31, 2023 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> OK, my understanding is that TimescaleDB uses some kind of\n> quasi-partitioned/inherited tables (aka hypertables? [1]) internally,\n> and your proposed WIP patch provides a set_logical_root() function\n> which combines with the logical replication (LR) PUBLICATION option\n> \"publish_via_partition_root\" to help to replicate those.\n\nCorrect!\n\n> You also mentioned pg_partman. IIUC pg_partman is a partitioning\n> extension [2] that pre-dated the native PostgreSQL partitioning\n> introduced in PG10 (i.e. quite a while ago). I guess it would be a\n> very niche group of users that are still using pg_partman old-style\n> (pre-PG10) partitions and want to migrate them but have not already\n> done so.\n\nYeah. I've got no evidence either way, unfortunately -- on the one\nhand, surely people have been able to upgrade by now? And on the\nother, implementation inertia seems to override most other engineering\ngoals...\n\nProbably best to ask the partman users, and not me. :D Or assume it's\na non-benefit unless someone says otherwise (even then, the partman\nmaintainers would need to agree it's useful and add support for this).\n\n> Outside the scope of special TimescaleDB tables and the speculated\n> pg_partman old-style table migration, will this proposed new feature\n> have any other application? In other words, do you know if this\n> proposal will be of any benefit to the *normal* users who just have\n> native PostgreSQL inherited tables they want to replicate?\n\nI think it comes down to why an inheritance scheme was used. If it's\nbecause you want to group rows into named, queryable subsets (e.g. the\n\"cities/capitals\" example in the docs [1]), I don't think this has any\nutility, because I assume you'd want to replicate your subsets as-is.\n\nBut if it's because you've implemented a partitioning scheme of your\nown (the docs still list reasons you might want to [2], even today),\nand all you ever really do is interact with the root table, I think\nthis feature will give you some of the same benefits that\npublish_via_partition_root gives native partition users. We're very\nmuch in that boat, but I don't know how many others are.\n\nThanks!\n--Jacob\n\n[1] https://www.postgresql.org/docs/15/tutorial-inheritance.html\n[2] https://www.postgresql.org/docs/15/ddl-partitioning.html#DDL-PARTITIONING-USING-INHERITANCE\n\n\n",
"msg_date": "Fri, 31 Mar 2023 16:35:50 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "Hi,\n\n> > Outside the scope of special TimescaleDB tables and the speculated\n> > pg_partman old-style table migration, will this proposed new feature\n> > have any other application? In other words, do you know if this\n> > proposal will be of any benefit to the *normal* users who just have\n> > native PostgreSQL inherited tables they want to replicate?\n>\n> I think it comes down to why an inheritance scheme was used. If it's\n> because you want to group rows into named, queryable subsets (e.g. the\n> \"cities/capitals\" example in the docs [1]), I don't think this has any\n> utility, because I assume you'd want to replicate your subsets as-is.\n>\n> But if it's because you've implemented a partitioning scheme of your\n> own (the docs still list reasons you might want to [2], even today),\n> and all you ever really do is interact with the root table, I think\n> this feature will give you some of the same benefits that\n> publish_via_partition_root gives native partition users. We're very\n> much in that boat, but I don't know how many others are.\n\nI would like to point out that inheritance is merely a tool for\nmodeling data. Its use cases are not limited to only partitioning,\nalthough many people ended up using it for this purpose back when we\ndidn't have a proper built-in partitioning. So unless we are going to\nremove inheritance in nearest releases (*) I believe it should work\nwith logical replication in a sane and convenient way.\n\nCorrect me if I'm wrong, but I got an impression that the patch tries\nto accomplish just that.\n\n(*) Which personally I believe would be a good change. Unlikely to\nhappen, though.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 3 Apr 2023 16:13:29 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "FYI, the WIP patch does not seem to apply cleanly anymore using the latest HEAD.\n\nSee the cfbot rebase logs [1].\n\n------\n[1] http://cfbot.cputube.org/patch_42_4225.log\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 4 Apr 2023 13:53:00 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 8:53 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> FYI, the WIP patch does not seem to apply cleanly anymore using the latest HEAD.\n\nYes, sorry -- after 062a84442, the architecture needs to change in a\nway that I'm still working through. I've moved the patch to Waiting on\nAuthor while I figure out the rebase.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 4 Apr 2023 08:14:10 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On 4/4/23 08:14, Jacob Champion wrote:\n> Yes, sorry -- after 062a84442, the architecture needs to change in a\n> way that I'm still working through. I've moved the patch to Waiting on\n> Author while I figure out the rebase.\n\nOkay -- that took longer than I wanted, but here's a rebased patchset\nthat I'll call v2.\n\nCommit 062a84442 necessitated some rework of the new\npg_get_publication_rels_to_sync() helper. It now takes a list of\npublications so that we can handle conflicts in the pubviaroot settings.\nThis is more complicated than before -- unlike partitions, standard\ninheritance trees can selectively publish tables that aren't leaves. But\nI think I've finally settled on some semantics for it which are\nunsurprising.\n\nAs part of that, I've pulled out a patch in 0001 which I hope is\nindependently useful. Today, there appears to be no way to check which\nrelid a table will be published through, short of creating a\nsubscription just to see what happens. 0001 introduces\npg_get_relation_publishing_info() to surface this information, which\nmakes testing it easier and also makes it possible to inspect what's\nhappening with more complicated publication setups.\n\n0001 also moves the determination of publish_as_relid out of the\npgoutput plugin and into a pg_publication helper function, because\nunless I've missed something crucial, it doesn't seem like an output\nplugin is really free to make that decision independently of the\npublication settings. The subscriber is not going to ask a plugin for\nthe right tables to COPY during initial sync, so the plugin had better\nbe using the same logic as the core.\n\nMany TODOs and upthread points of feedback are still pending, and I\nthink that several of them are actually symptoms of one architectural\nproblem with my patch:\n\n- The volatility classifications of pg_set_logical_root() and\npg_get_publication_rels_to_sync() appear to conflict\n- A dump/restore cycle loses the new marker\n- Inheritance can be tampered with after the logical root has been set\n- There's currently no way to clear a logical root after setting it\n\nI wonder if pg_set_logical_root() might be better implemented as part of\nALTER TABLE. Maybe with a relation option? If it all went through ALTER\nTABLE ONLY ... SET, then we wouldn't have to worry about a user\nmodifying roots while reading pg_get_publication_rels_to_sync() in the\nsame query. The permissions checks should be more consistent with less\neffort, and there's an existing way to set/clear the option that already\nplays well with pg_dump and pg_upgrade. The downsides I can see are the\nneed to handle simultaneous changes to INHERIT and SET (since we'd be\nmanipulating pg_inherits in both), as well as the fact that ALTER TABLE\n... SET defaults to altering the entire table hierarchy, which may be\nbad UX for this case.\n\n--Jacob",
"msg_date": "Tue, 6 Jun 2023 08:50:36 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Sat, Apr 1, 2023 at 5:06 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Mar 31, 2023 at 3:17 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> > Outside the scope of special TimescaleDB tables and the speculated\n> > pg_partman old-style table migration, will this proposed new feature\n> > have any other application? In other words, do you know if this\n> > proposal will be of any benefit to the *normal* users who just have\n> > native PostgreSQL inherited tables they want to replicate?\n>\n> I think it comes down to why an inheritance scheme was used. If it's\n> because you want to group rows into named, queryable subsets (e.g. the\n> \"cities/capitals\" example in the docs [1]), I don't think this has any\n> utility, because I assume you'd want to replicate your subsets as-is.\n>\n\nI also think so and your idea to have a function like\npg_set_logical_root() seems to make the inheritance hierarchy behaves\nas a declarative partitioning scheme for the purpose of logical\nreplication.\n\n> But if it's because you've implemented a partitioning scheme of your\n> own (the docs still list reasons you might want to [2], even today),\n> and all you ever really do is interact with the root table, I think\n> this feature will give you some of the same benefits that\n> publish_via_partition_root gives native partition users. We're very\n> much in that boat, but I don't know how many others are.\n>\n\nI agree that there may still be cases as pointed out by you where\npeople want to use inheritance as a mechanism for partitioning but I\nfeel those would still be in the minority. Personally, I am not very\nexcited about this idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Jun 2023 18:55:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Fri, Jun 16, 2023 at 6:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Sat, Apr 1, 2023 at 5:06 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > I think it comes down to why an inheritance scheme was used. If it's\n> > because you want to group rows into named, queryable subsets (e.g. the\n> > \"cities/capitals\" example in the docs [1]), I don't think this has any\n> > utility, because I assume you'd want to replicate your subsets as-is.\n>\n> I also think so and your idea to have a function like\n> pg_set_logical_root() seems to make the inheritance hierarchy behaves\n> as a declarative partitioning scheme for the purpose of logical\n> replication.\n\nRight.\n\n> > But if it's because you've implemented a partitioning scheme of your\n> > own (the docs still list reasons you might want to [2], even today),\n> > and all you ever really do is interact with the root table, I think\n> > this feature will give you some of the same benefits that\n> > publish_via_partition_root gives native partition users. We're very\n> > much in that boat, but I don't know how many others are.\n> >\n>\n> I agree that there may still be cases as pointed out by you where\n> people want to use inheritance as a mechanism for partitioning but I\n> feel those would still be in the minority.\n\n(Just to clarify -- timescaledb is one of those cases. They definitely\nstill exist.)\n\n> Personally, I am not very\n> excited about this idea.\n\nYeah, \"exciting\" isn't how I'd describe this feature either :D But I\nthink we're probably locked out of logical replication without the\nability to override publish_as_relid for our internal tables, somehow.\nAnd I don't think DDL replication will help, just like it wouldn't\nnecessarily help existing publish_via_partition_root use cases,\nbecause we don't want to force the source table's hierarchy on the\ntarget table. (A later version of timescaledb may not even use the\nsame internal layout.)\n\nIs there an alternative implementation I'm missing, maybe, or a way to\nmake this feature more generally applicable? \"We have table Y and want\nit to be migrated as part of table X\" seems to fall squarely under the\nlogical replication umbrella.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 16 Jun 2023 13:21:43 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Sat, Jun 17, 2023 at 1:51 AM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Jun 16, 2023 at 6:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > But if it's because you've implemented a partitioning scheme of your\n> > > own (the docs still list reasons you might want to [2], even today),\n> > > and all you ever really do is interact with the root table, I think\n> > > this feature will give you some of the same benefits that\n> > > publish_via_partition_root gives native partition users. We're very\n> > > much in that boat, but I don't know how many others are.\n> > >\n> >\n> > I agree that there may still be cases as pointed out by you where\n> > people want to use inheritance as a mechanism for partitioning but I\n> > feel those would still be in the minority.\n>\n> (Just to clarify -- timescaledb is one of those cases. They definitely\n> still exist.)\n>\n\nNoted, but I think that can't be the reason to accept this feature in core.\n\n> > Personally, I am not very\n> > excited about this idea.\n>\n> Yeah, \"exciting\" isn't how I'd describe this feature either :D But I\n> think we're probably locked out of logical replication without the\n> ability to override publish_as_relid for our internal tables, somehow.\n> And I don't think DDL replication will help, just like it wouldn't\n> necessarily help existing publish_via_partition_root use cases,\n> because we don't want to force the source table's hierarchy on the\n> target table. (A later version of timescaledb may not even use the\n> same internal layout.)\n>\n> Is there an alternative implementation I'm missing, maybe, or a way to\n> make this feature more generally applicable? \"We have table Y and want\n> it to be migrated as part of table X\" seems to fall squarely under the\n> logical replication umbrella.\n>\n\nAre you talking about this w.r.t inheritance/partition hierarchy? I\ndon't see any other way except \"publish_via_partition_root\" because we\nexpect the same schema and relation name on the subscriber to\nreplicate. You haven't explained why exactly you have such a\nrequirement of replicating via inheritance root aka why you want\ninheritance hierarchy to be different on target db.\n\nThe other idea that came across my mind was to provide some schema\nmapping kind of feature on subscribers where we could route the tuples\nfrom table X to table Y provided they have the same or compatible\nschema. I don't know if this is feasible or how generally it will be\nuseful and whether any other DB (replication solution) provides such a\nfeature but I guess something like that would have helped your use\ncase.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 17 Jun 2023 09:54:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On Fri, Jun 16, 2023 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Is there an alternative implementation I'm missing, maybe, or a way to\n> > make this feature more generally applicable? \"We have table Y and want\n> > it to be migrated as part of table X\" seems to fall squarely under the\n> > logical replication umbrella.\n>\n> Are you talking about this w.r.t inheritance/partition hierarchy? I\n> don't see any other way except \"publish_via_partition_root\" because we\n> expect the same schema and relation name on the subscriber to\n> replicate. You haven't explained why exactly you have such a\n> requirement of replicating via inheritance root aka why you want\n> inheritance hierarchy to be different on target db.\n\nI think all the \"standard\" use cases for publish_via_partition_root\nstill apply to our hypertables, and then add on the fact that our\npartitions are dynamically created as needed. The subscriber may have\ndifferent ideas on how to divide and size those partitions based on\nthe extension version. (I'm still trying to figure out how to make\nsure those new partitions are automatically included in the\npublication, for what it's worth.)\n\n> The other idea that came across my mind was to provide some schema\n> mapping kind of feature on subscribers where we could route the tuples\n> from table X to table Y provided they have the same or compatible\n> schema. I don't know if this is feasible or how generally it will be\n> useful and whether any other DB (replication solution) provides such a\n> feature but I guess something like that would have helped your use\n> case.\n\nYes, that may have also worked. Making it a subscriber-side feature\nrequires tight coupling between the two peers, though. (For the\ntimescaledb case, how does the subscriber know which new partitions\nbelong to which root? The publisher knows already.) And if it's\npublisher-side instead, it would still need something like the\npg_get_publication_rels_to_sync() proposed here.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Jun 2023 10:09:19 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Tue, Jun 20, 2023 at 10:39 PM Jacob Champion <jchampion@timescale.com> wrote:\n>\n> On Fri, Jun 16, 2023 at 9:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > The other idea that came across my mind was to provide some schema\n> > mapping kind of feature on subscribers where we could route the tuples\n> > from table X to table Y provided they have the same or compatible\n> > schema. I don't know if this is feasible or how generally it will be\n> > useful and whether any other DB (replication solution) provides such a\n> > feature but I guess something like that would have helped your use\n> > case.\n>\n> Yes, that may have also worked. Making it a subscriber-side feature\n> requires tight coupling between the two peers, though. (For the\n> timescaledb case, how does the subscriber know which new partitions\n> belong to which root?\n>\n\nYeah, the subscriber can't figure that out automatically. Users need\nto provide the mapping manually.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Jun 2023 15:58:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On Wed, Jun 21, 2023 at 3:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jun 20, 2023 at 10:39 PM Jacob Champion <jchampion@timescale.com> wrote:\n> > Making it a subscriber-side feature\n> > requires tight coupling between the two peers, though. (For the\n> > timescaledb case, how does the subscriber know which new partitions\n> > belong to which root?\n>\n> Yeah, the subscriber can't figure that out automatically. Users need\n> to provide the mapping manually.\n\nRight. For that reason, I think subscriber-side mappings probably\nwon't help this particular use case. This patchset is pushing more in\nthe direction of publisher-side mappings.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 21 Jun 2023 10:13:44 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
    "msg_contents": "On 6/6/23 08:50, Jacob Champion wrote:\n> Commit 062a84442 necessitated some rework of the new\n> pg_get_publication_rels_to_sync() helper. It now takes a list of\n> publications so that we can handle conflicts in the pubviaroot settings.\n> This is more complicated than before -- unlike partitions, standard\n> inheritance trees can selectively publish tables that aren't leaves. But\n> I think I've finally settled on some semantics for it which are\n> unsurprising.\nThe semantics I've picked are wrong :( I've accidentally reintroduced a\nbug that's similar to the one discussed upthread, where if you have two\nleaf tables published through different roots:\n\n tree pub1 pub2\n ----- ----- -----\n A - A\n B C B - - -\n D D D\n\nthen a subscription on both publications will duplicate the data in the\nleaf (D in the above example). I need to choose semantics that are\ncloser to the current behavior of partitions.\n\n> I wonder if pg_set_logical_root() might be better implemented as part of\n> ALTER TABLE. Maybe with a relation option? If it all went through ALTER\n> TABLE ONLY ... SET, then we wouldn't have to worry about a user\n> modifying roots while reading pg_get_publication_rels_to_sync() in the\n> same query. The permissions checks should be more consistent with less\n> effort, and there's an existing way to set/clear the option that already\n> plays well with pg_dump and pg_upgrade.\n\nI've implemented ALTER TABLE in v3; I like it a lot more. The new\nreloption is named publish_via_parent. So now we can unset the flag and\ndump/restore.\n\nv3 also fixes a nasty uninitialized stack variable, along with a bad\ncollation assumption I made.\n\n> The downsides I can see are the\n> need to handle simultaneous changes to INHERIT and SET (since we'd be\n> manipulating pg_inherits in both),\n\n(This didn't turn out to be as bad as I feared.)\n\n> as well as the fact that ALTER TABLE\n> ... SET defaults to altering the entire table hierarchy, which may be\n> bad UX for this case.\n\nThis was just wrong -- ALTER TABLE ... SET does not recurse. I think the\ndocs are misleading here, and I'm not sure why we allow an explicit '*'\nafter a table name if we're going to ignore it. But I also don't really\nwant to poke at that in this patchset, if I can avoid it.\n\n--Jacob",
"msg_date": "Thu, 29 Jun 2023 16:46:44 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
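For readers skimming the thread: the reloption interface adopted in v3 would presumably be exercised along these lines (hypothetical syntax from the uncommitted patch, not a feature of core PostgreSQL):

```sql
-- Proposed publish_via_parent reloption from the v3 patch: mark a child
-- table so it is published through its inheritance parent.
CREATE TABLE parent (a int);
CREATE TABLE child (b int) INHERITS (parent);

ALTER TABLE child SET (publish_via_parent = true);

-- Combined with publish_via_partition_root, changes to child would then
-- be published as if they were changes to parent.
CREATE PUBLICATION pub FOR TABLE parent
    WITH (publish_via_partition_root = true);
```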
{
"msg_contents": "Hi,\n\n> v3 also fixes a nasty uninitialized stack variable, along with a bad\n> collation assumption I made.\n\nI decided to take a closer look at 0001.\n\nSince pg_get_relation_publishing_info() is exposed to the users I\nthink it should be described in a bit more detail than:\n\n```\n+ descr => 'get information on how a relation will be published via a\nlist of publications',\n```\n\nThis description in \\df+ output doesn't seem to be particularly\nuseful. Also the function should be documented. In order to accomplish\nall this it could make sense to reconsider the signature of the\nfunction and/or split it into several separate functions.\n\nThe volatility is declared as STABLE. This is probably correct. At\nleast at first glance I don't see any calls of VOLATILE functions and\noff the top of my head can't give an example when it will not behave\nas STABLE. This being said, a second opinion would be appreciated.\n\nprocess_relation_publications() misses a brief comment before the\ndeclaration. What are the arguments, what is the return value, are\nthere any pre/postconditions (locks, memory), etc.\n\nOtherwise 0001 is in a decent shape, it passes make\ninstallcheck-world, etc. I would suggest focusing on delivering this\npart, assuming there will be no push-back to the refactorings and\nslight test improvements. If 0002 could be further decomposed into\nseparate iterative improvements this could be helpful.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 19 Jul 2023 14:54:53 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
},
{
"msg_contents": "On 7/19/23 04:54, Aleksander Alekseev wrote:\n> I decided to take a closer look at 0001.\n\nHi Aleks! I saw you put this back into Needs Review; thanks.\n\nThis thread has been pretty quiet from me, because we've run into\ndifficulties on the subscriber end. Our original optimistic assumption\nwas that we just needed to funnel all the leaf tables through the root\non the publisher, and then a sufficiently complex replica trigger on\nthe subscriber would be able to correctly route the data through the\nroot into new leaves.\n\nThat has not panned out for a number of reasons. Probably the easiest\none to describe is that replica identity handling breaks: if we move\nthe incoming tuples to different tables, and an UPDATE or DELETE comes\nin later for those rows, the replication logic checks the root table\n(bypassing our existing routing logic) and sees that they don't exist.\nWe never get the chance to handle routing the way that partitions do\n[1]. Given that, I think I need to pivot and focus on the subscriber\nside first. That might(?) be a smaller effort anyway, and if we can't\nmake headway there then publisher-side support probably doesn't make\nsense at all.\n\nSo I'll pause this CF entry for now. This would also be a good time to\nask the crowd: are there alternative approaches to solve the OP that I\nmay be missing?\n\nThanks!\n--Jacob\n\n[1] https://git.postgresql.org/cgit/postgresql.git/tree/src/backend/replication/logical/worker.c?h=8bf7db02#n2631\n\nP.S. I've attached a v4, which fixes the semantics problem I mentioned\nupthread, so it doesn't get lost in the shuffle.",
"msg_date": "Thu, 31 Aug 2023 11:16:49 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: logical publication via inheritance root?"
}
] |
[
{
"msg_contents": "Recently, buildfarm member elver has started spewing literally\nthousands of $SUBJECT:\n\n elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile float' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile double' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile long double' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n[ etc etc, about 9200 times per build ]\n\nI have no idea what that means, and consulting the clang documentation\ndidn't leave me much wiser. I do see that a lot of these seem to be\nassociated with isnan() calls, which makes me guess that clang does\nnot play nice with however isnan() is declared on that box.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Dec 2022 21:50:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "-Wunreachable-code-generic-assoc warnings on elver"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 3:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Recently, buildfarm member elver has started spewing literally\n> thousands of $SUBJECT:\n>\n> elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile float' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n> elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile double' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n> elver | 2022-12-10 01:17:29 | ../../src/include/utils/float.h:223:33: warning: due to lvalue conversion of the controlling expression, association of type 'volatile long double' will never be selected because it is qualified [-Wunreachable-code-generic-assoc]\n> [ etc etc, about 9200 times per build ]\n>\n> I have no idea what that means, and consulting the clang documentation\n> didn't leave me much wiser. I do see that a lot of these seem to be\n> associated with isnan() calls, which makes me guess that clang does\n> not play nice with however isnan() is declared on that box.\n\nIt was using LLVM and clang 15 for the JIT support (the base compiler\ncc is clang 13 on this system, but CLANG is set to 15 for the .bc\nfiles, to match the LLVM version). Apparently clang 15 started\nissuing a new warning for math.h. That header has since been\nadjusted[1] to fix that, but that's not going to show up in the\nrelease that elver's using for a while. I've told it to use\nLLVM/clang 14 instead for now; let's see if that helps.\n\n[1] https://github.com/freebsd/freebsd-src/commit/8432a5a4fa3c4f34acf6136a9077b9ab7bbd723e\n\n\n",
"msg_date": "Sun, 11 Dec 2022 09:28:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: -Wunreachable-code-generic-assoc warnings on elver"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Dec 10, 2022 at 3:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Recently, buildfarm member elver has started spewing literally\n>> thousands of $SUBJECT:\n\n> It was using LLVM and clang 15 for the JIT support (the base compiler\n> cc is clang 13 on this system, but CLANG is set to 15 for the .bc\n> files, to match the LLVM version). Apparently clang 15 started\n> issuing a new warning for math.h. That header has since been\n> adjusted[1] to fix that, but that's not going to show up in the\n> release that elver's using for a while. I've told it to use\n> LLVM/clang 14 instead for now; let's see if that helps.\n\nLooks like that did the trick, thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Dec 2022 19:22:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: -Wunreachable-code-generic-assoc warnings on elver"
}
] |
[
{
"msg_contents": "One of the things I notice which comes up in a profile of a pgbench -S\ntest is the generation of the command completion tag.\n\nAt the moment, this is done by a snprintf() call, namely:\n\nsnprintf(completionTag, COMPLETION_TAG_BUFSIZE,\ntag == CMDTAG_INSERT ?\n\"%s 0 \" UINT64_FORMAT : \"%s \" UINT64_FORMAT,\ntagname, qc->nprocessed);\n\nEver since aa2387e2f, and the relevant discussion in [1], it's been\nclear that the sprintf() functions are not the fastest. I think\nthere's just some unavoidable overhead to __VA_ARGS__.\n\nThe generation of the completion tag is not hugely dominant in the\nprofiles, but it does appear:\n\n 0.36% postgres [.] dopr.constprop.0\n\nIn the attached, there are a few things done to make the generation of\ncompletion tags faster:\n\nNamely:\n1. Store the tag length in struct CommandTagBehavior so that we can\nmemcpy() a fixed length rather than having to copy byte-by-byte\nlooking for the \\0.\n2. Use pg_ulltoa_n to write the number of rows affected by the command tag.\n3. Have the function that builds the tag return its length so save\nfrom having to do a strlen before writing the tag in pq_putmessage().\n\nIt's difficult to measure the performance of something that takes\n0.36% of execution. I have previously seen the tag generation take\nover 1% of execution time.\n\nOne thing that's changed in the patch vs master is that if the\nsnprintf's buffer, for some reason had not been long enough to store\nthe entire completion tag, it would have truncated it and perhaps sent\na truncated version of the row count to the client. For this to\nhappen, we'd have to have some excessively long command name. Since\nthese cannot be added by users, I've opted to just add an Assert that\nwill trigger if we're ever asked to write a command tag that won't\nfit. 
Assuming we test our new excessively-long-named-command, we'll\ntrigger that Assert and realise long before it's a problem.\n\nBoth Andres and I seemed to have independently written almost exactly\nthe same patch for this. The attached is mine but with the\nGetCommandTagNameAndLen function from his. I'd written\nGetCommandTagLen() for mine, which required a couple of function calls\ninstead of Andres' 1 function call to get the name and length in one\ngo.\n\nDoes anyone object to this?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f8oeW8ZEKqD4X3e+TFwZt+MWV6O-TF8MBpdO4XNNarQvA@mail.gmail.com",
"msg_date": "Sat, 10 Dec 2022 20:32:06 +1300",
"msg_from": "David Rowley <dgrowley@gmail.com>",
"msg_from_op": true,
"msg_subject": "Speedup generation of command completion tags"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-10 20:32:06 +1300, David Rowley wrote:\n> @@ -20,13 +20,14 @@\n> typedef struct CommandTagBehavior\n> {\n> \tconst char *name;\n> +\tconst uint8 namelen;\n\nPerhaps worth adding a comment noting that namelen is the length without the\nnull byte?\n\n\n> +static inline Size\n> +make_completion_tag(char *buff, const QueryCompletion *qc,\n> +\t\t\t\t\tbool force_undecorated_output)\n> +{\n> +\tCommandTag\ttag = qc->commandTag;\n> +\tSize\t\ttaglen;\n> +\tconst char *tagname = GetCommandTagNameAndLen(tag, &taglen);\n> +\tchar\t *bufp;\n> +\n> +\t/*\n> +\t * We assume the tagname is plain ASCII and therefore requires no encoding\n> +\t * conversion.\n> +\t */\n> +\tmemcpy(buff, tagname, taglen + 1);\n> +\tbufp = buff + taglen;\n> +\n> +\t/* ensure that the tagname isn't long enough to overrun the buffer */\n> +\tAssert(taglen <= COMPLETION_TAG_BUFSIZE - MAXINT8LEN - 4);\n> +\n> +\t/*\n> +\t * In PostgreSQL versions 11 and earlier, it was possible to create a\n> +\t * table WITH OIDS. When inserting into such a table, INSERT used to\n> +\t * include the Oid of the inserted record in the completion tag. To\n> +\t * maintain compatibility in the wire protocol, we now write a \"0\" (for\n> +\t * InvalidOid) in the location where we once wrote the new record's Oid.\n> +\t */\n> +\tif (command_tag_display_rowcount(tag) && !force_undecorated_output)\n\nThis does another external function call to cmdtag.c...\n\nWhat about moving make_completion_tag() to cmdtag.c? Then we could just get\nthe entire CommandTagBehaviour struct at once. It's not super pretty to pass\nQueryCompletion to a routine in cmdtag.c, but it's not awful. And if we deem\nit problematic, we could just pass qc->commandTag, qc->nprocessed as a\nseparate arguments.\n\nI wonder if any of the other GetCommandTagName() would benefit noticably from\nnot having to compute the length. 
I guess the calls\nset_ps_display(GetCommandTagName()) calls in exec_simple_query() and\nexec_execute_message() might, although set_ps_display() isn't exactly zero\noverhead. But I do see it show up as a few percent in profiles, with the\nbiggest contributor being the call to strlen.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 10 Dec 2022 14:05:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Speedup generation of command completion tags"
},
{
"msg_contents": "Thanks for having a look at this.\n\nOn Sun, 11 Dec 2022 at 11:05, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-12-10 20:32:06 +1300, David Rowley wrote:\n> > @@ -20,13 +20,14 @@\n> > typedef struct CommandTagBehavior\n> > {\n> > const char *name;\n> > + const uint8 namelen;\n>\n> Perhaps worth adding a comment noting that namelen is the length without the\n> null byte?\n\nI've added that now plus a few other fields that could be easily\ndocumented. I left out commenting all of the remaining fields as\ndocumenting table_rewrite_ok seemed slightly more than this patch\nshould be doing. There's commands that rewrite tables, e.g. VACUUM\nFULL that have that as false. Looks like this is only used for\ncommands that *might* rewrite the table. I didn't want to tackle that\nin this patch.\n\n> What about moving make_completion_tag() to cmdtag.c? Then we could just get\n> the entire CommandTagBehaviour struct at once. It's not super pretty to pass\n> QueryCompletion to a routine in cmdtag.c, but it's not awful. And if we deem\n> it problematic, we could just pass qc->commandTag, qc->nprocessed as a\n> separate arguments.\n\nThat seems like a good idea. I've renamed and moved the function in\nthe attached. I also adjusted how the trailing NUL char is appended to\navoid having to calculate len + 1 and append the NUL char twice for\nthe most commonly taken path.\n\n> I wonder if any of the other GetCommandTagName() would benefit noticably from\n> not having to compute the length. I guess the calls\n> set_ps_display(GetCommandTagName()) calls in exec_simple_query() and\n> exec_execute_message() might, although set_ps_display() isn't exactly zero\n> overhead. But I do see it show up as a few percent in profiles, with the\n> biggest contributor being the call to strlen.\n\nI think that could be improved for sure. It does seem like we'd need\nto add set_ps_display_with_len() to make what you said work. 
There's\nprobably lower hanging fruit in that function that could be fixed\nwithout having to do that, however. For example:\n\nstrlcpy(ps_buffer + ps_buffer_fixed_size, activity,\n ps_buffer_size - ps_buffer_fixed_size);\nps_buffer_cur_len = strlen(ps_buffer);\n\ncould be written as:\n\nstrlcpy(ps_buffer + ps_buffer_fixed_size, activity,\n ps_buffer_size - ps_buffer_fixed_size);\nps_buffer_cur_len = ps_buffer_fixed_size + Min(strlen(activity),\nps_buffer_size - ps_buffer_fixed_size - 1);\n\nThat's pretty horrible to read though.\n\nThis sort of thing also makes me think that our investment in having\nmore usages of strlcpy() and fewer usages of strncpy was partially a\nmistake. There are exactly 2 usages of the return value of strlcpy in\nour entire source tree. That's about 1% of all calls. Likely what\nwould be better is a function that returns the number of bytes\n*actually* copied instead of one that returns the number of bytes that\nit would have copied if it hadn't run out of space. Such a function\ncould be defined as:\n\nsize_t\nstrdcpy(char * const dst, const char *src, ptrdiff_t len)\n{\n char *dstp = dst;\n\n while (len-- > 0)\n {\n if ((*dstp = *src++) == '\\0')\n {\n *dstp = '\\0';\n break;\n }\n dstp++;\n }\n return (dstp - dst);\n}\n\nThen we could append to strings like:\n\nchar buffer[STRING_SIZE];\nchar *bufp = buffer;\n\nbufp += strdcpy(bufp, \"01\", STRING_SIZE - (bufp - buffer));\nbufp += strdcpy(bufp, \"23\", STRING_SIZE - (bufp - buffer));\nbufp += strdcpy(bufp, \"45\", STRING_SIZE - (bufp - buffer));\n\nwhich allows transformation of the set_ps_display() code to:\n\npg_buffer_cur_len = ps_buffer_fixed_size;\npg_buffer_cur_len += strdcpy(ps_buffer + ps_buffer_fixed_size,\nactivity, ps_buffer_size - ps_buffer_fixed_size);\n\n(Assume the name strdcpy as a placeholder name for an actual name that\ndoes not conflict with something that it's not.)\n\nI'd rather not go into too much detail about that here though. 
I don't\nsee any places that can make use of the known tag length without going\nto the trouble of inventing new functions or changing the signature of\nexisting ones.\n\nDavid",
"msg_date": "Mon, 12 Dec 2022 14:48:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup generation of command completion tags"
},
{
"msg_contents": "On Mon, 12 Dec 2022 at 14:48, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sun, 11 Dec 2022 at 11:05, Andres Freund <andres@anarazel.de> wrote:\n> > What about moving make_completion_tag() to cmdtag.c? Then we could just get\n> > the entire CommandTagBehaviour struct at once. It's not super pretty to pass\n> > QueryCompletion to a routine in cmdtag.c, but it's not awful. And if we deem\n> > it problematic, we could just pass qc->commandTag, qc->nprocessed as a\n> > separate arguments.\n>\n> That seems like a good idea. I've renamed and moved the function in\n> the attached. I also adjusted how the trailing NUL char is appended to\n> avoid having to calculate len + 1 and append the NUL char twice for\n> the most commonly taken path.\n\nI've pushed the updated patch. Thanks for having a look.\n\nDavid\n\n\n",
"msg_date": "Fri, 16 Dec 2022 10:33:13 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speedup generation of command completion tags"
}
] |
[
{
"msg_contents": "Hi,\n\nI have noticed that progress reporting for CREATE INDEX of partitioned\ntables seems to be working poorly for nested partitioned tables. In\nparticular, it overwrites total and done partitions count when it\nrecurses down to child partitioned tables and it only reports top-level\npartitions. So it's not hard to see something like this during CREATE\nINDEX now:\n\npostgres=# select partitions_total, partitions_done from\npg_stat_progress_create_index ;\n partitions_total | partitions_done \n------------------+-----------------\n 1 | 2\n(1 row)\n\n\nI changed current behaviour to report the total number of partitions in\nthe inheritance tree and fixed recursion in the attached patch. I used\na static variable to keep the counter to avoid ABI breakage of\nDefineIndex, so that we could backpatch this to previous versions.\n\nThanks,\nIlya Gladyshev",
"msg_date": "Sat, 10 Dec 2022 12:18:32 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 12:18:32PM +0400, Ilya Gladyshev wrote:\n> Hi,\n> \n> I have noticed that progress reporting for CREATE INDEX of partitioned\n> tables seems to be working poorly for nested partitioned tables. In\n> particular, it overwrites total and done partitions count when it\n> recurses down to child partitioned tables and it only reports top-level\n> partitions.�So it's not hard to see something like this during CREATE\n> INDEX now:\n> \n> postgres=# select partitions_total, partitions_done from\n> pg_stat_progress_create_index ;\n> partitions_total | partitions_done \n> ------------------+-----------------\n> 1 | 2\n> (1 row)\n\nYeah. I didn't verify, but it looks like this is a bug going to back to\nv12. As you said, when called recursively, DefineIndex() clobbers the\nnumber of completed partitions.\n\nMaybe DefineIndex() could flatten the list of partitions. But I don't\nthink that can work easily with iteration rather than recursion.\n\nCould you check what I've written as a counter-proposal ?\n\nAs long as we're changing partitions_done to include nested\nsub-partitions, it seems to me like we should exclude intermediate\n\"catalog-only\" partitioned indexes, and count only physical leaf\npartitions. Should it alo exclude any children with matching indexes,\nwhich will also be catalog-only changes? Probably not.\n\nThe docs say:\n|When creating an index on a partitioned table, this column is set to the\n|total number of partitions on which the index is to be created. This\n|field is 0 during a REINDEX.\n\n> I changed current behaviour to report the total number of partitions in\n> the inheritance tree and fixed recursion in the attached patch. I used\n> a static variable to keep the counter to avoid ABI breakage of\n> DefineIndex, so that we could backpatch this to previous versions.\n\nI wrote a bunch of assertions for this, which seems to have uncovered an\nsimilar issue with COPY progress reporting, dating to 8a4f618e7. 
I'm\nnot sure the assertions are okay. I imagine they may break other\nextensions, as with file_fdw.\n\n-- \nJustin",
"msg_date": "Sun, 11 Dec 2022 00:33:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "> Could you check what I've written as a counter-proposal ?\n\nI think that this might be a good solution to start with, it gives us the opportunity to improve the granularity later without any surprising changes for the end user. We could use this patch for previous versions and make more granular output in the latest. What do you think?\n\n> As long as we're changing partitions_done to include nested\n> sub-partitions, it seems to me like we should exclude intermediate\n> \"catalog-only\" partitioned indexes, and count only physical leaf\n> partitions. Should it alo exclude any children with matching indexes,\n> which will also be catalog-only changes? Probably not.\n> \n> The docs say:\n> |When creating an index on a partitioned table, this column is set to the\n> |total number of partitions on which the index is to be created. This\n> |field is 0 during a REINDEX.\n\nI agree with you on catalog-only partitioned indexes, but I think that in the perfect world we should exclude all the relations where the index isn’t actually created, so that means excluding attached indexes as well. However, IMO doing it this way will require too much of a code rewrite for quite a minor feature (but we could do it, ofc). I actually think that the progress view would be better off without the total number of partitions, but I’m not sure we have this option now. With this in mind, I think your proposal to exclude catalog-only indexes sounds reasonable to me, but I feel like the docs are off in this case, because the attached indexes are not created, but we pretend like they are in this metric, so we should fix one or the other.\n\n> \n>> I changed current behaviour to report the total number of partitions in\n>> the inheritance tree and fixed recursion in the attached patch. 
I used\n>> a static variable to keep the counter to avoid ABI breakage of\n>> DefineIndex, so that we could backpatch this to previous versions.\n> \n> I wrote a bunch of assertions for this, which seems to have uncovered an\n> similar issue with COPY progress reporting, dating to 8a4f618e7. I'm\n> not sure the assertions are okay. I imagine they may break other\n> extensions, as with file_fdw.\n> \n> -- \n> Justin\n> <0001-fix-progress-reporting-of-nested-partitioned-indexes.patch>",
"msg_date": "Mon, 12 Dec 2022 23:39:23 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 11:39:23PM +0400, Ilya Gladyshev wrote:\n> \n> > Could you check what I've written as a counter-proposal ?\n> \n> I think that this might be a good solution to start with, it gives us the opportunity to improve the granularity later without any surprising changes for the end user. We could use this patch for previous versions and make more granular output in the latest. What do you think?\n\nSomehow, it hadn't occured to me that my patch \"lost granularity\" by\nincrementing the progress bar by more than one... Shoot.\n\n> I actually think that the progress view would be better off without the total number of partitions, \n\nJust curious - why ?\n\n> With this in mind, I think your proposal to exclude catalog-only indexes sounds reasonable to me, but I feel like the docs are off in this case, because the attached indexes are not created, but we pretend like they are in this metric, so we should fix one or the other.\n\nI agree that the docs should indicate whether we're counting \"all\npartitions\", \"direct partitions\", and whether or not that includes\npartitioned partitions, or just leaf partitions.\n\nI have another proposal: since the original patch 3.5 years ago didn't\nconsider or account for sub-partitions, let's not start counting them\nnow. It was never defined whether they were included or not (and I\nguess that they're not common) so we can take this opportunity to\nclarify the definition.\n\nAlternately, if it's okay to add nparts_done to the IndexStmt, then\nthat's easy.\n\n-- \nJustin",
"msg_date": "Mon, 12 Dec 2022 22:43:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Mon, 2022-12-12 at 22:43 -0600, Justin Pryzby wrote:\n> On Mon, Dec 12, 2022 at 11:39:23PM +0400, Ilya Gladyshev wrote:\n> > \n> > > Could you check what I've written as a counter-proposal ?\n> > \n> > I think that this might be a good solution to start with, it gives\n> > us the opportunity to improve the granularity later without any\n> > surprising changes for the end user. We could use this patch for\n> > previous versions and make more granular output in the latest. What\n> > do you think?\n> \n> Somehow, it hadn't occured to me that my patch \"lost granularity\" by\n> incrementing the progress bar by more than one... Shoot.\n> \n> > I actually think that the progress view would be better off without\n> > the total number of partitions, \n> \n> Just curious - why ?\n\nWe don't really know how many indexes we are going to create, unless we\nhave some kind of preliminary \"planning\" stage where we acumulate all\nthe relations that will need to have indexes created (rather than\nattached). And if someone wants the total, it can be calculated\nmanually without this view, it's less user-friendly, but if we can't do\nit well, I would leave it up to the user.\n\n> \n> > With this in mind, I think your proposal to exclude catalog-only\n> > indexes sounds reasonable to me, but I feel like the docs are off\n> > in this case, because the attached indexes are not created, but we\n> > pretend like they are in this metric, so we should fix one or the\n> > other.\n> \n> I agree that the docs should indicate whether we're counting \"all\n> partitions\", \"direct partitions\", and whether or not that includes\n> partitioned partitions, or just leaf partitions.\n\nAgree. I think that docs should also be explicit about the attached\nindexes, if we decide to count them in as \"created\".\n\n> I have another proposal: since the original patch 3.5 years ago\n> didn't\n> consider or account for sub-partitions, let's not start counting them\n> now. 
It was never defined whether they were included or not (and I\n> guess that they're not common) so we can take this opportunity to\n> clarify the definition.\n\nI have had this thought initially, but then I thought that it's not\nwhat I would want, if I was to track progress of multi-level\npartitioned tables (but yeah, I guess it's pretty uncommon). In this\nrespect, I like your initial counter-proposal more, because it leaves\nus room to improve this in the future. Otherwise, if we commit to\nreporting only top-level partitions now, I'm not sure we will have the\nopportunity to change this.\n\n\n> Alternately, if it's okay to add nparts_done to the IndexStmt, then\n> that's easy.\n\nYeah, or we could add another argument to DefineIndex. I don't know if\nit's ok, or which option is better here in terms of compatibility and\ninterface-wise, so I have tried both of them, see the attached patches.",
"msg_date": "Tue, 13 Dec 2022 23:07:06 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 10:18:58PM +0400, Ilya Gladyshev wrote:\n> > > I actually think that the progress view would be better off without\n> > > the total number of partitions, \n> > \n> > Just curious - why ?\n> \n> We don't really know how many indexes we are going to create, unless we\n> have some kind of preliminary \"planning\" stage where we acumulate all\n> the relations that will need to have indexes created (rather than\n> attached). And if someone wants the total, it can be calculated\n> manually without this view, it's less user-friendly, but if we can't do\n> it well, I would leave it up to the user.\n\nThanks. One other reason is that the partitions (and sub-partitions)\nmay not be equally sized. Also, I've said before that it's weird to\nreport macroscopic progress about the number of partitions finihed in\nthe same place as reporting microscopic details like the number of\nblocks done of the relation currently being processed.\n\n> > I have another proposal: since the original patch 3.5 years ago\n> > didn't\n> > consider or account for sub-partitions, let's not start counting them\n> > now.� It was never defined whether they were included or not (and I\n> > guess that they're not common) so we can take this opportunity to\n> > clarify the definition.\n> \n> I have had this thought initially, but then I thought that it's not\n> what I would want, if I was to track progress of multi-level\n> partitioned tables (but yeah, I guess it's pretty uncommon). In this\n> respect, I like your initial counter-proposal more, because it leaves\n> us room to improve this in the future. 
Otherwise, if we commit to\n> reporting only top-level partitions now, I'm not sure we will have the\n> opportunity to change this.\n\nWe have the common problem of too many patches.\n\nhttps://www.postgresql.org/message-id/a15f904a70924ffa4ca25c3c744cff31e0e6e143.camel%40gmail.com\nThis changes the progress reporting to show indirect children as\n\"total\", and adds a global variable to track recursion into\nDefineIndex(), allowing it to be incremented without the value being\nlost to the caller.\n\nhttps://www.postgresql.org/message-id/20221211063334.GB27893%40telsasoft.com\nThis also counts indirect children, but only increments the progress\nreporting in the parent. This has the disadvantage that when\nintermediate partitions are in use, the done_partitions counter will\n\"jump\" from (say) 20 to 30 without ever hitting 21-29.\n\nhttps://www.postgresql.org/message-id/20221213044331.GJ27893%40telsasoft.com\nThis has two alternate patches:\n- One patch changes to only update progress reporting of *direct*\n children. This is minimal, but discourages any future plan to track\n progress involving intermediate partitions with finer granularity.\n- An alternative patch adds IndexStmt.nparts_done, and allows reporting\n fine-grained progress involving intermediate partitions.\n\nhttps://www.postgresql.org/message-id/flat/039564d234fc3d014c555a7ee98be69a9e724836.camel@gmail.com\nThis also reports progress of intermediate children. The first patch\ndoes it by adding an argument to DefineIndex() (which isn't okay to\nbackpatch). And an alternate patch does it by adding to IndexStmt.\n\n@committers: Is it okay to add nparts_done to IndexStmt ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 17 Dec 2022 08:30:02 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 08:30:02AM -0600, Justin Pryzby wrote:\n> We have the common problem of too many patches.\n> \n> https://www.postgresql.org/message-id/a15f904a70924ffa4ca25c3c744cff31e0e6e143.camel%40gmail.com\n> This changes the progress reporting to show indirect children as\n> \"total\", and adds a global variable to track recursion into\n> DefineIndex(), allowing it to be incremented without the value being\n> lost to the caller.\n> \n> https://www.postgresql.org/message-id/20221211063334.GB27893%40telsasoft.com\n> This also counts indirect children, but only increments the progress\n> reporting in the parent. This has the disadvantage that when\n> intermediate partitions are in use, the done_partitions counter will\n> \"jump\" from (say) 20 to 30 without ever hitting 21-29.\n> \n> https://www.postgresql.org/message-id/20221213044331.GJ27893%40telsasoft.com\n> This has two alternate patches:\n> - One patch changes to only update progress reporting of *direct*\n> children. This is minimal, but discourages any future plan to track\n> progress involving intermediate partitions with finer granularity.\n> - A alternative patch adds IndexStmt.nparts_done, and allows reporting\n> fine-grained progress involving intermediate partitions.\n> \n> https://www.postgresql.org/message-id/flat/039564d234fc3d014c555a7ee98be69a9e724836.camel@gmail.com\n> This also reports progress of intermediate children. The first patch\n> does it by adding an argument to DefineIndex() (which isn't okay to\n> backpatch). And an alternate patch does it by adding to IndexStmt.\n> \n> @committers: Is it okay to add nparts_done to IndexStmt ?\n\nAny hint about this ?\n\nThis should be resolved before the \"CIC on partitioned tables\" patch,\nwhich I think is otherwise done.\n\n\n",
"msg_date": "Sun, 8 Jan 2023 10:48:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sun, 2023-01-08 at 10:48 -0600, Justin Pryzby wrote:\n> On Sat, Dec 17, 2022 at 08:30:02AM -0600, Justin Pryzby wrote:\n> > We have the common problem of too many patches.\n> > \n> > https://www.postgresql.org/message-id/a15f904a70924ffa4ca25c3c744cff31e0e6e143.camel%40gmail.com\n> > This changes the progress reporting to show indirect children as\n> > \"total\", and adds a global variable to track recursion into\n> > DefineIndex(), allowing it to be incremented without the value\n> > being\n> > lost to the caller.\n> > \n> > https://www.postgresql.org/message-id/20221211063334.GB27893%40telsasoft.com\n> > This also counts indirect children, but only increments the\n> > progress\n> > reporting in the parent. This has the disadvantage that when\n> > intermediate partitions are in use, the done_partitions counter\n> > will\n> > \"jump\" from (say) 20 to 30 without ever hitting 21-29.\n> > \n> > https://www.postgresql.org/message-id/20221213044331.GJ27893%40telsasoft.com\n> > This has two alternate patches:\n> > - One patch changes to only update progress reporting of *direct*\n> > children. This is minimal, but discourages any future plan to\n> > track\n> > progress involving intermediate partitions with finer\n> > granularity.\n> > - A alternative patch adds IndexStmt.nparts_done, and allows\n> > reporting\n> > fine-grained progress involving intermediate partitions.\n> > \n> > https://www.postgresql.org/message-id/flat/039564d234fc3d014c555a7ee98be69a9e724836.camel@gmail.com\n> > This also reports progress of intermediate children. The first\n> > patch\n> > does it by adding an argument to DefineIndex() (which isn't okay to\n> > backpatch). 
And an alternate patch does it by adding to IndexStmt.\n> > \n> > @committers: Is it okay to add nparts_done to IndexStmt ?\n> \n> Any hint about this ?\n> \n> This should be resolved before the \"CIC on partitioned tables\" patch,\n> which I think is otherwise done.\n\nI suggest that we move on with the IndexStmt patch and see what the\ncommitters have to say about it. I have brushed the patch up a bit,\nfixing TODOs and adding docs as per our discussion above.",
"msg_date": "Mon, 09 Jan 2023 12:44:22 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On 1/9/23 09:44, Ilya Gladyshev wrote:\n> On Sun, 2023-01-08 at 10:48 -0600, Justin Pryzby wrote:\n>> On Sat, Dec 17, 2022 at 08:30:02AM -0600, Justin Pryzby wrote:\n>>> ...\n>>>\n>>> @committers: Is it okay to add nparts_done to IndexStmt ?\n>>\n>> Any hint about this ?\n>>\n\nAFAIK fields added at the end of a struct is seen as acceptable from the\nABI point of view. It's not risk-free, but we did that multiple times\nwhen fixing bugs, IIRC.\n\nThe primary risk is old extensions (built on older minor version)\nrunning on new server, getting confused by new fields (and implied\nshifts in the structs). But fields at the end should be safe - the\nextension simply ignores the stuff at the end. The one problem would be\narrays of structs, because even a field at the end changes the array\nstride. But I don't think we do that with IndexStmt ...\n\nOf course, if the \"old\" extension itself allocates the struct and passes\nit to core code, that might still be an issue, because it'll allocate a\nsmaller struct, and core might see bogus data at the end.\n\nOn the other hand, new extensions on old server may get confused too,\nbecause it may try setting a field that does not exist.\n\nSo ultimately it's about weighing risks vs. benefits - evaluating\nwhether fixing the issue is actually worth it.\n\nThe question is if/how many such extensions messing with IndexStmt in\nthis way actually exist. That is, allocate IndexStmt (or array of it). I\nhaven't found any, but maybe some extensions for index or partition\nmanagement do it? Not sure.\n\nBut ...\n\nDo we actually need the new parts_done field? I mean, we already do\ntrack the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\nst_progress_param array. Can't we simply read it from there? 
Then we\nwould not have ABI issues with the new field added to IndexStmt.\n\n>> This should be resolved before the \"CIC on partitioned tables\" patch,\n>> which I think is otherwise done.\n> \n> I suggest that we move on with the IndexStmt patch and see what the\n> committers have to say about it. I have brushed the patch up a bit,\n> fixing TODOs and adding docs as per our discussion above.\n> \n\nI did take a look at the patch, so here are my 2c:\n\n1) num_leaf_partitions says it's \"excluding foreign tables\" but then it\nuses RELKIND_HAS_STORAGE() which excludes various others relkinds, e.g.\npartitioned tables etc. Minor, but perhaps a bit confusing.\n\n2) I'd probably say count_leaf_partitions() instead.\n\n3) The new part in DefineIndex counting leaf partitions should have a\ncomment before\n\n if (!OidIsValid(parentIndexId))\n { ... }\n\n4) It's a bit confusing that one of the branches in DefineIndex just\nsets stmt->parts_done without calling pgstat_progress_update_param\n(while the other one does both). AFAICS the call is not needed because\nwe already updated it during the recursive DefineIndex call, but maybe\nthe comment should mention that?\n\n\nAs for the earlier discussion about the \"correct\" behavior for leaf vs.\nnon-leaf partitions and whether to calculate partitions in advance:\n\n* I agree it's desirable to count partitions in advance, instead of\nadding incrementally. The view is meant to provide \"overview\" of the\nCREATE INDEX progress, and imagine you get\n\n partitions_total partitions_done\n 10 9\n\nso that you believe you're ~90% done. But then it jumps to the next\nchild and now you get\n\n partitions_total partitions_done\n 20 10\n\nwhich makes the view a bit useless for it's primary purpose, IMHO.\n\n\n* I don't care very much about leaf vs. non-leaf partitions. If we\nexclude non-leaf ones, fine with me. 
But the number of non-leaf ones\nshould be much smaller than leaf ones, and if the partition already has\na matching index that distorts the tracking too. Furthermore the\npartitions may have different size etc. so the progress is only\napproximate anyway.\n\nI wonder if we could improve this to track the size of partitions for\ntotal/done? That'd make leaf/non-leaf distinction unnecessary, because\nnon-leaf partitions have size 0.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Jan 2023 20:44:36 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 08:44:36PM +0100, Tomas Vondra wrote:\n> On 1/9/23 09:44, Ilya Gladyshev wrote:\n> > On Sun, 2023-01-08 at 10:48 -0600, Justin Pryzby wrote:\n> >> On Sat, Dec 17, 2022 at 08:30:02AM -0600, Justin Pryzby wrote:\n> >>> ...\n> >>>\n> >>> @committers: Is it okay to add nparts_done to IndexStmt ?\n> >>\n> >> Any hint about this ?\n> >>\n> \n> AFAIK fields added at the end of a struct is seen as acceptable from the\n> ABI point of view. It's not risk-free, but we did that multiple times\n> when fixing bugs, IIRC.\n\nMy question isn't whether it's okay to add a field at the end of a\nstruct in general, but rather whether it's acceptable to add this field\nat the end of this struct, and not because it's unsafe to do in a minor\nrelease, but whether someone is going to say that it's an abuse of the\ndata structure.\n\n> Do we actually need the new parts_done field? I mean, we already do\n> track the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\n> st_progress_param array. Can't we simply read it from there? Then we\n> would not have ABI issues with the new field added to IndexStmt.\n\nGood idea to try.\n\n> As for the earlier discussion about the \"correct\" behavior for leaf vs.\n> non-leaf partitions and whether to calculate partitions in advance:\n> \n> * I agree it's desirable to count partitions in advance, instead of\n> adding incrementally. The view is meant to provide \"overview\" of the\n> CREATE INDEX progress, and imagine you get\n> \n> partitions_total partitions_done\n> 10 9\n> \n> so that you believe you're ~90% done. But then it jumps to the next\n> child and now you get\n> \n> partitions_total partitions_done\n> 20 10\n> \n> which makes the view a bit useless for it's primary purpose, IMHO.\n\nTo be clear, that's the current, buggy behavior, and this thread is\nabout fixing it. 
The proposed patches all ought to avoid that.\n\nBut the bug isn't caused by not \"calculating partitions in advance\".\nRather, the issue is that currently, the \"total\" is overwritten while\nrecursing.\n\nThat's a separate question from whether indirect partitions are counted\nor not.\n\n> I wonder if we could improve this to track the size of partitions for\n> total/done? That'd make leaf/non-leaf distinction unnecessary, because\n> non-leaf partitions have size 0.\n\nMaybe, but it's out of scope for this patch.\n\nThanks for looking.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:59:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "\n\nOn 1/17/23 21:59, Justin Pryzby wrote:\n> On Tue, Jan 17, 2023 at 08:44:36PM +0100, Tomas Vondra wrote:\n>> On 1/9/23 09:44, Ilya Gladyshev wrote:\n>>> On Sun, 2023-01-08 at 10:48 -0600, Justin Pryzby wrote:\n>>>> On Sat, Dec 17, 2022 at 08:30:02AM -0600, Justin Pryzby wrote:\n>>>>> ...\n>>>>>\n>>>>> @committers: Is it okay to add nparts_done to IndexStmt ?\n>>>>\n>>>> Any hint about this ?\n>>>>\n>>\n>> AFAIK fields added at the end of a struct is seen as acceptable from the\n>> ABI point of view. It's not risk-free, but we did that multiple times\n>> when fixing bugs, IIRC.\n> \n> My question isn't whether it's okay to add a field at the end of a\n> struct in general, but rather whether it's acceptable to add this field\n> at the end of this struct, and not because it's unsafe to do in a minor\n> release, but whether someone is going to say that it's an abuse of the\n> data structure.\n> \n\nAh, you mean whether it's the right place for the parameter?\n\nI don't think it is, really. IndexStmt is meant to be a description of\nthe CREATE INDEX statement, not something that includes info about how\nit's processed. But it's the only struct we pass to the DefineIndex for\nchild indexes, so the only alternatives I can think of is a global\nvariable and the (existing) param array field.\n\nNevertheless, ABI compatibility is still relevant for backbranches.\n\n\n>> Do we actually need the new parts_done field? I mean, we already do\n>> track the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\n>> st_progress_param array. Can't we simply read it from there? Then we\n>> would not have ABI issues with the new field added to IndexStmt.\n> \n> Good idea to try.\n> \n\nOK\n\n>> As for the earlier discussion about the \"correct\" behavior for leaf vs.\n>> non-leaf partitions and whether to calculate partitions in advance:\n>>\n>> * I agree it's desirable to count partitions in advance, instead of\n>> adding incrementally. 
The view is meant to provide \"overview\" of the\n>> CREATE INDEX progress, and imagine you get\n>>\n>> partitions_total partitions_done\n>> 10 9\n>>\n>> so that you believe you're ~90% done. But then it jumps to the next\n>> child and now you get\n>>\n>> partitions_total partitions_done\n>> 20 10\n>>\n>> which makes the view a bit useless for it's primary purpose, IMHO.\n> \n> To be clear, that's the current, buggy behavior, and this thread is\n> about fixing it. The proposed patches all ought to avoid that.\n> \n> But the bug isn't caused by not \"calculating partitions in advance\".\n> Rather, the issue is that currently, the \"total\" is overwritten while\n> recursing.\n> \n\nYou're right the issue us about overwriting the total - not sure what I\nwas thinking about when writing this. I guess I got distracted by the\ndiscussion about \"preliminary planning\" etc. Sorry for the confusion.\n\n> That's a separate question from whether indirect partitions are counted\n> or not.\n> \n>> I wonder if we could improve this to track the size of partitions for\n>> total/done? That'd make leaf/non-leaf distinction unnecessary, because\n>> non-leaf partitions have size 0.\n> \n> Maybe, but it's out of scope for this patch.\n> \n\n+1, it was just an idea for future.\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Jan 2023 01:04:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "TBH, I think the best approach is what I did in:\n0001-report-top-parent-progress-for-CREATE-INDEX.txt\n\nThat's a minimal patch, ideal for backpatching.\n\n..which defines/clarifies that the progress reporting is only for\n*direct* children. That avoids the need to change any data structures,\nand it's what was probably intended by the original patch, which doesn't\nseem to have considered intermediate partitioned tables.\n\nI think it'd be fine to re-define that in some future release, to allow\nshowing indirect children (probably only \"leaves\", and not intermediate\npartitioned tables). Or \"total_bytes\" or other global progress.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 18 Jan 2023 09:25:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, 18 Jan 2023 at 15:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> TBH, I think the best approach is what I did in:\n> 0001-report-top-parent-progress-for-CREATE-INDEX.txt\n>\n> That's a minimal patch, ideal for backpatching.\n>\n> ..which defines/clarifies that the progress reporting is only for\n> *direct* children. That avoids the need to change any data structures,\n> and it's what was probably intended by the original patch, which doesn't\n> seem to have considered intermediate partitioned tables.\n>\n> I think it'd be fine to re-define that in some future release, to allow\n> showing indirect children (probably only \"leaves\", and not intermediate\n> partitioned tables). Or \"total_bytes\" or other global progress.\n>\n\nHmm. My expectation as a user is that partitions_total includes both\ndirect and indirect (leaf) child partitions, that it is set just once\nat the start of the process, and that partitions_done increases from\nzero to partitions_total as the index-build proceeds. I think that\nshould be achievable with a minimally invasive patch that doesn't\nchange any data structures.\n\nI agree with all the review comments Tomas posted. In particular, this\nshouldn't need any changes to IndexStmt. I think the best approach\nwould be to just add a new function to backend_progress.c that offsets\na specified progress parameter by a specified amount, so that you can\njust increment partitions_done by one or more, at the appropriate\npoints.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 25 Jan 2023 09:51:03 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "> 17 янв. 2023 г., в 23:44, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n> Do we actually need the new parts_done field? I mean, we already do\n> track the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\n> st_progress_param array. Can't we simply read it from there? Then we\n> would not have ABI issues with the new field added to IndexStmt.\n\nI think it’s a good approach and it could be useful outside of scope of this patch too. So I have attached a patch, that introduces pgstat_progress_incr_param function for this purpose. There’s one thing I am not sure about, IIUC, we can assume that the only process that can write into MyBEEntry of the current backend is the current backend itself, therefore looping to get consistent reads from this array is not required. Please correct me, if I am wrong here.",
"msg_date": "Tue, 31 Jan 2023 19:32:20 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 07:32:20PM +0400, Ilya Gladyshev wrote:\n> > 17 янв. 2023 г., в 23:44, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n> > Do we actually need the new parts_done field? I mean, we already do\n> > track the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\n> > st_progress_param array. Can't we simply read it from there? Then we\n> > would not have ABI issues with the new field added to IndexStmt.\n> \n> I think it’s a good approach and it could be useful outside of scope of this patch too. So I have attached a patch, that introduces pgstat_progress_incr_param function for this purpose. There’s one thing I am not sure about, IIUC, we can assume that the only process that can write into MyBEEntry of the current backend is the current backend itself, therefore looping to get consistent reads from this array is not required. Please correct me, if I am wrong here.\n\nThanks for the updated patch.\n\nI think you're right - pgstat_begin_read_activity() is used for cases\ninvolving other backends. It ought to be safe for a process to read its\nown status bits, since we know they're not also being written.\n\nYou changed DefineIndex() to update progress for the leaf indexes' when\ncalled recursively. The caller updates the progress for \"attached\"\nindexes, but not created ones. That allows providing fine-granularity\nprogress updates when using intermediate partitions, right ? (Rather\nthan updating the progress by more than one at a time in the case of\nintermediate partitioning).\n\nIf my understanding is right, that's subtle, and adds a bit of\ncomplexity to the current code, so could use careful commentary. I\nsuggest:\n\n* If the index was attached, update progress for all its direct and\n* indirect leaf indexes all at once. 
If the index was built by calling\n* DefineIndex() recursively, the called function is responsible for\n* updating the progress report for built indexes.\n\n...\n\n* If this is the top-level index, we're done. When called recursively\n* for child tables, the done partition counter is incremented now,\n* rather than in the caller.\n\nI guess you know that there were compiler warnings (related to your\nquestion).\nhttps://cirrus-ci.com/task/6571212386598912\n\npgstat_progress_incr_param() could call pgstat_progress_update_param()\nrather than using its own Assert() and WRITE_ACTIVITY calls. I'm not\nsure which I prefer, though.\n\nAlso, there are whitespace/tab/style issues in\npgstat_progress_incr_param().\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 31 Jan 2023 22:29:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "> 1 февр. 2023 г., в 08:29, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> On Tue, Jan 31, 2023 at 07:32:20PM +0400, Ilya Gladyshev wrote:\n>>> 17 янв. 2023 г., в 23:44, Tomas Vondra <tomas.vondra@enterprisedb.com> написал(а):\n>>> Do we actually need the new parts_done field? I mean, we already do\n>>> track the value - at PROGRESS_CREATEIDX_PARTITIONS_DONE index in the\n>>> st_progress_param array. Can't we simply read it from there? Then we\n>>> would not have ABI issues with the new field added to IndexStmt.\n>> \n>> I think it’s a good approach and it could be useful outside of scope of this patch too. So I have attached a patch, that introduces pgstat_progress_incr_param function for this purpose. There’s one thing I am not sure about, IIUC, we can assume that the only process that can write into MyBEEntry of the current backend is the current backend itself, therefore looping to get consistent reads from this array is not required. Please correct me, if I am wrong here.\n> \n> Thanks for the updated patch.\n> \n> I think you're right - pgstat_begin_read_activity() is used for cases\n> involving other backends. It ought to be safe for a process to read its\n> own status bits, since we know they're not also being written.\n> \n> You changed DefineIndex() to update progress for the leaf indexes' when\n> called recursively. The caller updates the progress for \"attached\"\n> indexes, but not created ones. That allows providing fine-granularity\n> progress updates when using intermediate partitions, right ? (Rather\n> than updating the progress by more than one at a time in the case of\n> intermediate partitioning).\n> \n> If my understanding is right, that's subtle, and adds a bit of\n> complexity to the current code, so could use careful commentary. I\n> suggest:\n> \n> * If the index was attached, update progress for all its direct and\n> * indirect leaf indexes all at once. 
If the index was built by calling\n> * DefineIndex() recursively, the called function is responsible for\n> * updating the progress report for built indexes.\n> \n> ...\n> \n> * If this is the top-level index, we're done. When called recursively\n> * for child tables, the done partition counter is incremented now,\n> * rather than in the caller.\n\nYes, you are correct about the intended behavior, I added your comments to the patch.\n\n> I guess you know that there were compiler warnings (related to your\n> question).\n> https://cirrus-ci.com/task/6571212386598912\n> \n> pgstat_progress_incr_param() could call pgstat_progress_update_param()\n> rather than using its own Assert() and WRITE_ACTIVITY calls. I'm not\n> sure which I prefer, though.\n> \n> Also, there are whitespace/tab/style issues in\n> pgstat_progress_incr_param().\n> \n> -- \n> Justin\n\nThank you for the review, I fixed the aforementioned issues in the v2.",
"msg_date": "Wed, 1 Feb 2023 11:21:35 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\nlookup for every single element therein ... this sounds slow. \n\nIn one of the callsites, we already have the partition descriptor\navailable. We could just scan partdesc->is_leaf[] and add one for each\n'true' value we see there.\n\nIn the other callsite, we had the table open just a few lines before the\nplace you call count_leaf_partitions. Maybe we can rejigger things by\nexamining its state before closing it: if relkind is not partitioned we\nknow leaf_partitions=0, and only if partitioned we count leaf partitions.\nI think that would save some work. I also wonder if it's worth writing\na bespoke function for counting leaf partitions rather than relying on\nfind_all_inheritors.\n\nI think there's probably not much point optimizing it further than that.\nIf there was, then we could think about creating a data representation\nthat we can build for the entire partitioning hierarchy in a single pass\nwith the count of leaf partitions that sit below each specific non-leaf;\nbut I think that's just over-engineering.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n",
"msg_date": "Wed, 1 Feb 2023 13:01:26 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "> 1 февр. 2023 г., в 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> \n> Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\n> lookup for every single element therein ... this sounds slow. \n> \n> In one of the callsites, we already have the partition descriptor\n> available. We could just scan partdesc->is_leaf[] and add one for each\n> 'true' value we see there.\n\nThe problem is that partdesc contains only direct children of the table and we need all the children down the inheritance tree to count the total number of leaf partitions in the first callsite.\n\n> In the other callsite, we had the table open just a few lines before the\n> place you call count_leaf_partitions. Maybe we can rejigger things by\n> examining its state before closing it: if relkind is not partitioned we\n> know leaf_partitions=0, and only if partitioned we count leaf partitions.\n> I think that would save some work. I also wonder if it's worth writing\n> a bespoke function for counting leaf partitions rather than relying on\n> find_all_inheritors.\n\nSure, added this condition to avoid the extra work here.",
"msg_date": "Wed, 1 Feb 2023 18:21:00 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, 1 Feb 2023 at 15:21, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n>\n> > 1 февр. 2023 г., в 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> >\n> > Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\n> > lookup for every single element therein ... this sounds slow.\n> >\n> > In one of the callsites, we already have the partition descriptor\n> > available. We could just scan partdesc->is_leaf[] and add one for each\n> > 'true' value we see there.\n>\n> The problem is that partdesc contains only direct children of the table and we need all the children down the inheritance tree to count the total number of leaf partitions in the first callsite.\n>\n> > In the other callsite, we had the table open just a few lines before the\n> > place you call count_leaf_partitions. Maybe we can rejigger things by\n> > examining its state before closing it: if relkind is not partitioned we\n> > know leaf_partitions=0, and only if partitioned we count leaf partitions.\n> > I think that would save some work. I also wonder if it's worth writing\n> > a bespoke function for counting leaf partitions rather than relying on\n> > find_all_inheritors.\n>\n> Sure, added this condition to avoid the extra work here.\n>\n\n> When creating an index on a partitioned table, this column is set to\n> - the total number of partitions on which the index is to be created.\n> + the total number of leaf partitions on which the index is to be created or attached.\n\nI think we should also add a note about the (now) non-constant nature\nof the value, something along the lines of \"This value is updated as\nwe're processing and discovering partitioned tables in the partition\nhierarchy\".\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 1 Feb 2023 16:21:35 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 04:21:35PM +0100, Matthias van de Meent wrote:\n> On Wed, 1 Feb 2023 at 15:21, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n> > > 1 февр. 2023 г., в 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> > > Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\n> > > lookup for every single element therein ... this sounds slow.\n> > >\n> > > In one of the callsites, we already have the partition descriptor\n> > > available. We could just scan partdesc->is_leaf[] and add one for each\n> > > 'true' value we see there.\n> >\n> > The problem is that partdesc contains only direct children of the table and we need all the children down the inheritance tree to count the total number of leaf partitions in the first callsite.\n> >\n> > > In the other callsite, we had the table open just a few lines before the\n> > > place you call count_leaf_partitions. Maybe we can rejigger things by\n> > > examining its state before closing it: if relkind is not partitioned we\n> > > know leaf_partitions=0, and only if partitioned we count leaf partitions.\n> > > I think that would save some work. I also wonder if it's worth writing\n> > > a bespoke function for counting leaf partitions rather than relying on\n> > > find_all_inheritors.\n> >\n> > Sure, added this condition to avoid the extra work here.\n> >\n> \n> > When creating an index on a partitioned table, this column is set to\n> > - the total number of partitions on which the index is to be created.\n> > + the total number of leaf partitions on which the index is to be created or attached.\n> \n> I think we should also add a note about the (now) non-constant nature\n> of the value, something along the lines of \"This value is updated as\n> we're processing and discovering partitioned tables in the partition\n> hierarchy\".\n\nBut the TOTAL is constant, right ? 
Updating the total when being called\nrecursively is the problem these patches fix.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Feb 2023 09:53:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, 1 Feb 2023 at 16:53, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Feb 01, 2023 at 04:21:35PM +0100, Matthias van de Meent wrote:\n> > On Wed, 1 Feb 2023 at 15:21, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n> > > > 1 февр. 2023 г., в 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n> > > > Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\n> > > > lookup for every single element therein ... this sounds slow.\n> > > >\n> > > > In one of the callsites, we already have the partition descriptor\n> > > > available. We could just scan partdesc->is_leaf[] and add one for each\n> > > > 'true' value we see there.\n> > >\n> > > The problem is that partdesc contains only direct children of the table and we need all the children down the inheritance tree to count the total number of leaf partitions in the first callsite.\n> > >\n> > > > In the other callsite, we had the table open just a few lines before the\n> > > > place you call count_leaf_partitions. Maybe we can rejigger things by\n> > > > examining its state before closing it: if relkind is not partitioned we\n> > > > know leaf_partitions=0, and only if partitioned we count leaf partitions.\n> > > > I think that would save some work. 
I also wonder if it's worth writing\n> > > > a bespoke function for counting leaf partitions rather than relying on\n> > > > find_all_inheritors.\n> > >\n> > > Sure, added this condition to avoid the extra work here.\n> > >\n> >\n> > > When creating an index on a partitioned table, this column is set to\n> > > - the total number of partitions on which the index is to be created.\n> > > + the total number of leaf partitions on which the index is to be created or attached.\n> >\n> > I think we should also add a note about the (now) non-constant nature\n> > of the value, something along the lines of \"This value is updated as\n> > we're processing and discovering partitioned tables in the partition\n> > hierarchy\".\n>\n> But the TOTAL is constant, right ? Updating the total when being called\n> recursively is the problem these patches fix.\n\nIf that's the case, then I'm not seeing the 'fix' part of the patch. I\nthought this patch was fixing the provably incorrect TOTAL value where\nDONE > TOTAL due to the recursive operation overwriting the DONE/TOTAL\nvalues instead of updating them.\n\nIn HEAD we set TOTAL to whatever number partitioned table we're\ncurrently processing has - regardless of whether we're the top level\nstatement.\nWith the patch we instead add the number of child relations to that\ncount, for which REL_HAS_STORAGE(child) -- or at least, in the v3\nposted by Ilya. 
Approximately immediately after updating that count we\nrecurse to the child relations, and that only returns once it is done\ncreating the indexes, so both TOTAL and DONE go up as we process more\npartitions in the hierarchy.\n\nAn example hierarchy:\nCREATE TABLE parent (a int, b int) partition by list (a);\nCREATE TABLE a1\n PARTITION OF parent FOR VALUES IN (1)\n PARTITION BY LIST (b);\nCREATE TABLE a1bd\n PARTITION OF a1 DEFAULT;\n\nCREATE TABLE a2\n PARTITION OF parent FOR VALUES IN (2)\n PARTITION BY LIST (b);\nCREATE TABLE a2bd\n PARTITION OF a2 DEFAULT;\n\nINSERT INTO parent (a, b) SELECT * from generate_series(1, 2) a(a)\ncross join generate_series(1, 100000) b(b);\nCREATE INDEX ON parent(a,b);\n\nThis will only discover that a2bd will need to be indexed after a1bd\nis done (or vice versa, depending on which order a1 and a2 are\nprocessed in).\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 1 Feb 2023 17:27:26 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "> 1 февр. 2023 г., в 20:27, Matthias van de Meent <boekewurm+postgres@gmail.com> написал(а):\n> \n> On Wed, 1 Feb 2023 at 16:53, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> \n>> On Wed, Feb 01, 2023 at 04:21:35PM +0100, Matthias van de Meent wrote:\n>>> On Wed, 1 Feb 2023 at 15:21, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n>>>>> 1 февр. 2023 г., в 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> написал(а):\n>>>>> Hmm, count_leaf_partitions has to scan pg_inherits and do a syscache\n>>>>> lookup for every single element therein ... this sounds slow.\n>>>>> \n>>>>> In one of the callsites, we already have the partition descriptor\n>>>>> available. We could just scan partdesc->is_leaf[] and add one for each\n>>>>> 'true' value we see there.\n>>>> \n>>>> The problem is that partdesc contains only direct children of the table and we need all the children down the inheritance tree to count the total number of leaf partitions in the first callsite.\n>>>> \n>>>>> In the other callsite, we had the table open just a few lines before the\n>>>>> place you call count_leaf_partitions. Maybe we can rejigger things by\n>>>>> examining its state before closing it: if relkind is not partitioned we\n>>>>> know leaf_partitions=0, and only if partitioned we count leaf partitions.\n>>>>> I think that would save some work. 
I also wonder if it's worth writing\n>>>>> a bespoke function for counting leaf partitions rather than relying on\n>>>>> find_all_inheritors.\n>>>> \n>>>> Sure, added this condition to avoid the extra work here.\n>>>> \n>>> \n>>>> When creating an index on a partitioned table, this column is set to\n>>>> - the total number of partitions on which the index is to be created.\n>>>> + the total number of leaf partitions on which the index is to be created or attached.\n>>> \n>>> I think we should also add a note about the (now) non-constant nature\n>>> of the value, something along the lines of \"This value is updated as\n>>> we're processing and discovering partitioned tables in the partition\n>>> hierarchy\".\n>> \n>> But the TOTAL is constant, right ? Updating the total when being called\n>> recursively is the problem these patches fix.\n> \n> If that's the case, then I'm not seeing the 'fix' part of the patch. I\n> thought this patch was fixing the provably incorrect TOTAL value where\n> DONE > TOTAL due to the recursive operation overwriting the DONE/TOTAL\n> values instead of updating them.\n> \n> In HEAD we set TOTAL to whatever number partitioned table we're\n> currently processing has - regardless of whether we're the top level\n> statement.\n> With the patch we instead add the number of child relations to that\n> count, for which REL_HAS_STORAGE(child) -- or at least, in the v3\n> posted by Ilya. Approximately immediately after updating that count we\n> recurse to the child relations, and that only returns once it is done\n> creating the indexes, so both TOTAL and DONE go up as we process more\n> partitions in the hierarchy.\n\nThe TOTAL in the patch is set only when processing the top-level parent and it is not updated when we recurse, so yes, it is constant. 
From v3:\n\n@@ -1219,8 +1243,14 @@ DefineIndex(Oid relationId,\n \t\t\tRelation\tparentIndex;\n \t\t\tTupleDesc\tparentDesc;\n \n-\t\t\tpgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_TOTAL,\n-\t\t\t\t\t\t\t\t\t\t nparts);\n+\t\t\tif (!OidIsValid(parentIndexId))\n+\t\t\t{\n+\t\t\t\tint total_parts;\n+\n+\t\t\t\ttotal_parts = count_leaf_partitions(relationId);\n+\t\t\t\tpgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_TOTAL,\n+\t\t\t\t\t\t\t\t\t\t\t total_parts);\n+\t\t\t}\n\n\nIt is set to the total number of children on all levels of the hierarchy, not just the current one, so the total value doesn’t need to be updated later, because it is set to the correct value from the very beginning. \n\nIt is the DONE counter that is updated, and when we attach an index of a partition that is itself a partitioned table (like a2 in your example, if it already had an index created), it will be updated by the number of children of the partition.\n\n@@ -1431,9 +1463,25 @@ DefineIndex(Oid relationId,\n \t\t\t\t\tSetUserIdAndSecContext(child_save_userid,\n \t\t\t\t\t\t\t\t\t\t child_save_sec_context);\n \t\t\t\t}\n+\t\t\t\telse\n+\t\t\t\t{\n+\t\t\t\t\tint attached_parts = 1;\n+\n+\t\t\t\t\tif (RELKIND_HAS_PARTITIONS(child_relkind))\n+\t\t\t\t\t\tattached_parts = count_leaf_partitions(childRelid);\n+\n+\t\t\t\t\t/*\n+\t\t\t\t\t * If the index was attached, we need to update progress\n+\t\t\t\t\t * here, in its parent. For a partitioned index, we need\n+\t\t\t\t\t * to mark all of its children that were included in\n+\t\t\t\t\t * PROGRESS_CREATEIDX_PARTITIONS_TOTAL as done. 
If the\n+\t\t\t\t\t * index was built by calling DefineIndex() recursively,\n+\t\t\t\t\t * the called function is responsible for updating the\n+\t\t\t\t\t * progress report for built indexes.\n+\t\t\t\t\t */\n+\t\t\t\t\tpgstat_progress_incr_param(PROGRESS_CREATEIDX_PARTITIONS_DONE, attached_parts);\n+\t\t\t\t}\n \n-\t\t\t\tpgstat_progress_update_param(PROGRESS_CREATEIDX_PARTITIONS_DONE,\n-\t\t\t\t\t\t\t\t\t\t\t i + 1);\n \t\t\t\tfree_attrmap(attmap);\n \t\t\t}",
"msg_date": "Wed, 1 Feb 2023 21:51:06 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, 1 Feb 2023 at 18:51, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n>\n> 1 февр. 2023 г., в 20:27, Matthias van de Meent <boekewurm+postgres@gmail.com> написал(а):\n>\n>> In HEAD we set TOTAL to whatever number partitioned table we're\n>> currently processing has - regardless of whether we're the top level\n>> statement.\n>> With the patch we instead add the number of child relations to that\n>> count, for which REL_HAS_STORAGE(child) -- or at least, in the v3\n>> posted by Ilya. Approximately immediately after updating that count we\n>> recurse to the child relations, and that only returns once it is done\n>> creating the indexes, so both TOTAL and DONE go up as we process more\n>> partitions in the hierarchy.\n>\n>\n> The TOTAL in the patch is set only when processing the top-level parent and it is not updated when we recurse, so yes, it is constant. From v3:\n\nUgh, I misread the patch, more specifically count_leaf_partitions and\nthe !OidIsValid(parentIndexId) condition changes.\n\nYou are correct, sorry for the noise.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 1 Feb 2023 19:24:48 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 07:24:48PM +0100, Matthias van de Meent wrote:\n> On Wed, 1 Feb 2023 at 18:51, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n> >\n> > 1 февр. 2023 г., в 20:27, Matthias van de Meent <boekewurm+postgres@gmail.com> написал(а):\n> >\n> >> In HEAD we set TOTAL to whatever number partitioned table we're\n> >> currently processing has - regardless of whether we're the top level\n> >> statement.\n> >> With the patch we instead add the number of child relations to that\n> >> count, for which REL_HAS_STORAGE(child) -- or at least, in the v3\n> >> posted by Ilya. Approximately immediately after updating that count we\n> >> recurse to the child relations, and that only returns once it is done\n> >> creating the indexes, so both TOTAL and DONE go up as we process more\n> >> partitions in the hierarchy.\n> >\n> >\n> > The TOTAL in the patch is set only when processing the top-level parent and it is not updated when we recurse, so yes, it is constant. From v3:\n> \n> Ugh, I misread the patch, more specifically count_leaf_partitions and\n> the !OidIsValid(parentIndexId) condition changes.\n> \n> You are correct, sorry for the noise.\n\nThat suggests that the comments could've been more clear. I added a\ncomment suggested by Tomas and adjusted some others and wrote a commit\nmessage. I even ran pgindent for about the 3rd time ever.\n\n002 are my changes as a separate patch, which you could apply to your\nlocal branch.\n\nAnd 003/4 are assertions that I wrote to demonstrate the problem and\nverify the fixes, but not being proposed for commit.\n\n-- \nJustin",
"msg_date": "Thu, 2 Feb 2023 09:18:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 09:18:07AM -0600, Justin Pryzby wrote:\n> On Wed, Feb 01, 2023 at 07:24:48PM +0100, Matthias van de Meent wrote:\n> > On Wed, 1 Feb 2023 at 18:51, Ilya Gladyshev <ilya.v.gladyshev@gmail.com> wrote:\n> > > 1 февр. 2023 г., в 20:27, Matthias van de Meent <boekewurm+postgres@gmail.com> написал(а):\n> > >\n> > >> In HEAD we set TOTAL to whatever number partitioned table we're\n> > >> currently processing has - regardless of whether we're the top level\n> > >> statement.\n> > >> With the patch we instead add the number of child relations to that\n> > >> count, for which REL_HAS_STORAGE(child) -- or at least, in the v3\n> > >> posted by Ilya. Approximately immediately after updating that count we\n> > >> recurse to the child relations, and that only returns once it is done\n> > >> creating the indexes, so both TOTAL and DONE go up as we process more\n> > >> partitions in the hierarchy.\n> > >\n> > > The TOTAL in the patch is set only when processing the top-level parent and it is not updated when we recurse, so yes, it is constant. From v3:\n> > \n> > Ugh, I misread the patch, more specifically count_leaf_partitions and\n> > the !OidIsValid(parentIndexId) condition changes.\n> > \n> > You are correct, sorry for the noise.\n> \n> That suggests that the comments could've been more clear. I added a\n> comment suggested by Tomas and adjusted some others and wrote a commit\n> message. I even ran pgindent for about the 3rd time ever.\n> \n> 002 are my changes as a separate patch, which you could apply to your\n> local branch.\n> \n> And 003/4 are assertions that I wrote to demonstrate the problem and the\n> verify the fixes, but not being proposed for commit.\n\nThat was probably a confusing way to present it - I should've sent the\nrelative diff as a .txt rather than as patch 002.\n\nThis squishes together 001/2 as the main patch.\nI believe it's ready.\n\n-- \nJustin",
"msg_date": "Wed, 8 Feb 2023 16:40:49 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 04:40:49PM -0600, Justin Pryzby wrote:\n> This squishes together 001/2 as the main patch.\n> I believe it's ready.\n\nUpdate to address a compiler warning in the supplementary patches adding\nassertions.",
"msg_date": "Thu, 16 Feb 2023 18:26:51 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Update to address a compiler warning in the supplementary patches adding\n> assertions.\n\nI took a look through this. It seems like basically a good solution,\nbut the count_leaf_partitions() function is bothering me, for two\nreasons:\n\n1. It seems like a pretty expensive thing to do. Don't we have the\ninfo at hand somewhere already?\n\n2. Is it really safe to do find_all_inheritors with NoLock? If so,\na comment explaining why would be good.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:36:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 03:36:10PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Update to address a compiler warning in the supplementary patches adding\n> > assertions.\n> \n> I took a look through this. It seems like basically a good solution,\n> but the count_leaf_partitions() function is bothering me, for two\n> reasons:\n> \n> 1. It seems like a pretty expensive thing to do. Don't we have the\n> info at hand somewhere already?\n\nI don't know where that would be. We need the list of both direct *and*\nindirect partitions. See:\nhttps://www.postgresql.org/message-id/5073D187-4200-4A2D-BAC0-91C657E3C22E%40gmail.com\n\nIf it would help to avoid the concern, then I might consider proposing\nnot to call get_rel_relkind() ...\n\n> 2. Is it really safe to do find_all_inheritors with NoLock? If so,\n> a comment explaining why would be good.\n\nIn both cases (both for the parent and for case of a partitioned child\nwith pre-existing indexes being ATTACHed), the table itself is already\nlocked by DefineIndex():\n\n lockmode = concurrent ? ShareUpdateExclusiveLock : ShareLock;\n rel = table_open(relationId, lockmode);\n\nand\n childrel = table_open(childRelid, lockmode);\n ...\n table_close(childrel, NoLock);\n\nAnd, find_all_inheritors() will also have been called by\nProcessUtilitySlow(). Maybe it's sufficient to mention that ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Mar 2023 15:09:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Mar 10, 2023 at 03:36:10PM -0500, Tom Lane wrote:\n>> I took a look through this. It seems like basically a good solution,\n>> but the count_leaf_partitions() function is bothering me, for two\n>> reasons:\n\n> ... find_all_inheritors() will also have been called by\n> ProcessUtilitySlow(). Maybe it's sufficient to mention that ?\n\nHm. Could we get rid of count_leaf_partitions by doing the work in\nProcessUtilitySlow? Or at least passing that OID list forward instead\nof recomputing it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Mar 2023 16:14:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 04:14:06PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Fri, Mar 10, 2023 at 03:36:10PM -0500, Tom Lane wrote:\n> >> I took a look through this. It seems like basically a good solution,\n> >> but the count_leaf_partitions() function is bothering me, for two\n> >> reasons:\n> \n> > ... find_all_inheritors() will also have been called by\n> > ProcessUtilitySlow(). Maybe it's sufficient to mention that ?\n> \n> Hm. Could we get rid of count_leaf_partitions by doing the work in\n> ProcessUtilitySlow? Or at least passing that OID list forward instead\n> of recomputing it?\n\ncount_leaf_partitions() is called in two places:\n\nOnce to get PROGRESS_CREATEIDX_PARTITIONS_TOTAL. It'd be easy enough to\npass an integer total via IndexStmt (but I think we wanted to avoid\nadding anything there, since it's not a part of the statement).\n\ncount_leaf_partitions() is also called for sub-partitions, in the case\nthat a matching \"partitioned index\" already exists, and the progress\nreport needs to be incremented by the number of leaves for which indexes\nwere ATTACHED. We'd need a mapping from OID => npartitions (or to\ncompile some data structure of all the partitioned partitions). I guess\nCreateIndex() could call CreatePartitionDirectory(). But it looks like\nthat would be *more* expensive.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Mar 2023 17:06:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Mar 12, 2023 at 04:14:06PM -0400, Tom Lane wrote:\n>> Hm. Could we get rid of count_leaf_partitions by doing the work in\n>> ProcessUtilitySlow? Or at least passing that OID list forward instead\n>> of recomputing it?\n\n> count_leaf_partitions() is called in two places:\n\n> Once to get PROGRESS_CREATEIDX_PARTITIONS_TOTAL. It'd be easy enough to\n> pass an integer total via IndexStmt (but I think we wanted to avoid\n> adding anything there, since it's not a part of the statement).\n\nI agree that adding such a field to IndexStmt would be a very bad idea.\nHowever, adding another parameter to DefineIndex doesn't seem like a\nproblem. Or could we just call pgstat_progress_update_param directly from\nProcessUtilitySlow, after counting the partitions in the existing loop?\n\n> count_leaf_partitions() is also called for sub-partitions, in the case\n> that a matching \"partitioned index\" already exists, and the progress\n> report needs to be incremented by the number of leaves for which indexes\n> were ATTACHED.\n\nCan't you increment progress by one at the point where the actual attach\nhappens?\n\nI also wonder whether leaving non-leaf partitions out of the total\nis making things more complicated rather than simpler ...\n\n> We'd need a mapping from OID => npartitions (or to\n> compile some data structure of all the partitioned partitions). I guess\n> CreateIndex() could call CreatePartitionDirectory(). But it looks like\n> that would be *more* expensive.\n\nThe reason I find this annoying is that the non-optional nature of the\nprogress reporting mechanism was sold on the basis that it would add\nonly negligible overhead. Adding extra pass(es) over pg_inherits\nbreaks that promise. Maybe it's cheap enough to not matter in the\nbig scheme of things, but we should not be having to make arguments\nlike that one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Mar 2023 18:25:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> count_leaf_partitions() is also called for sub-partitions, in the case\n>> that a matching \"partitioned index\" already exists, and the progress\n>> report needs to be incremented by the number of leaves for which indexes\n>> were ATTACHED.\n\n> Can't you increment progress by one at the point where the actual attach\n> happens?\n\nOh, never mind; now I realize that the point is that you didn't ever\niterate over those leaf indexes.\n\nHowever, consider a thought experiment: assume for whatever reason that\nall the actual index builds happen first, then all the cases where you\nsucceed in attaching a sub-partitioned index happen at the end of the\ncommand. In that case, the percentage-done indicator would go from\nsome-number to 100% more or less instantly.\n\nWhat if we simply do nothing at sub-partitioned indexes? Or if that's\nslightly too radical, just increase the PARTITIONS_DONE counter by 1?\nThat would look indistinguishable from the case where all the attaches\nhappen at the end.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Mar 2023 20:15:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sun, Mar 12, 2023 at 06:25:13PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Mar 12, 2023 at 04:14:06PM -0400, Tom Lane wrote:\n> >> Hm. Could we get rid of count_leaf_partitions by doing the work in\n> >> ProcessUtilitySlow? Or at least passing that OID list forward instead\n> >> of recomputing it?\n> \n> > count_leaf_partitions() is called in two places:\n> \n> > Once to get PROGRESS_CREATEIDX_PARTITIONS_TOTAL. It'd be easy enough to\n> > pass an integer total via IndexStmt (but I think we wanted to avoid\n> > adding anything there, since it's not a part of the statement).\n> \n> I agree that adding such a field to IndexStmt would be a very bad idea.\n> However, adding another parameter to DefineIndex doesn't seem like a\n> problem.\n\nIt's a problem since this is a bug and it's desirable to backpatch a\nfix, right ?\n\n> Or could we just call pgstat_progress_update_param directly from\n> ProcessUtilitySlow, after counting the partitions in the existing loop?\n\nThat'd be fine if it was only needed for TOTAL, but it doesn't handle\nthe 2nd call to count_leaf_partitions().\n\nOn Sun, Mar 12, 2023 at 08:15:28PM -0400, Tom Lane wrote:\n> I wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> count_leaf_partitions() is also called for sub-partitions, in the case\n> >> that a matching \"partitioned index\" already exists, and the progress\n> >> report needs to be incremented by the number of leaves for which indexes\n> >> were ATTACHED.\n> \n> > Can't you increment progress by one at the point where the actual attach\n> > happens?\n> \n> Oh, never mind; now I realize that the point is that you didn't ever\n> iterate over those leaf indexes.\n> \n> However, consider a thought experiment: assume for whatever reason that\n> all the actual index builds happen first, then all the cases where you\n> succeed in attaching a sub-partitioned index happen at the end of the\n> command. 
In that case, the percentage-done indicator would go from\n> some-number to 100% more or less instantly.\n> \n> What if we simply do nothing at sub-partitioned indexes? Or if that's\n> slightly too radical, just increase the PARTITIONS_DONE counter by 1?\n> That would look indistinguishable from the case where all the attaches\n> happen at the end.\n\nIncrementing by 0 sounds terrible, since someone who has intermediate\npartitioned tables is likely to always see 0% done. (It's true that\nintermediate partitioned tables don't seem to have been considered by\nthe original patch, and it's indisputable that progress reporting\ncurrently misbehaves in that case).\n\nAnd incrementing PARTITIONS_DONE by 1 could lead to bogus progress\nreporting with \"N_done > N_Total\" if an intermediate partitioned table\nhad no leaf partitions at all. That's one of the problems this thread\nis trying to fix (the other being \"total changing in the middle of the\ncommand\").\n\nMaybe your idea is usable though, since indirect partitioned indexes\n*can* be counted correctly during recursion. What's hard to fix is the\ncase that an index is both *partitioned* and *attached*. Maybe it's\nokay to count that case as 0. The consequence is that the command would\nend before the progress report got to 100%.\n\nThe other option seems to be to define the progress report to count only\n*direct* children.\nhttps://www.postgresql.org/message-id/20221213044331.GJ27893%40telsasoft.com\n\n> The reason I find this annoying is that the non-optional nature of the\n> progress reporting mechanism was sold on the basis that it would add\n> only negligible overhead. Adding extra pass(es) over pg_inherits\n> breaks that promise. Maybe it's cheap enough to not matter in the\n> big scheme of things, but we should not be having to make arguments\n> like that one.\n\nIf someone is running a DDL command involving nested partitions, I'm not\nso concerned about the cost of additional scans of pg_inherits. 
They\neither have enough data to justify partitioning partitions, or they're\ndoing something silly.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 13 Mar 2023 08:50:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Mar 12, 2023 at 06:25:13PM -0400, Tom Lane wrote:\n>> I agree that adding such a field to IndexStmt would be a very bad idea.\n>> However, adding another parameter to DefineIndex doesn't seem like a\n>> problem.\n\n> It's a problem since this is a bug and it's desirable to backpatch a\n> fix, right ?\n\nI do not think this is important enough to justify a back-patch.\n\n> Incrementing by 0 sounds terrible, since someone who has intermediate\n> partitioned tables is likely to always see 0% done.\n\nHow so? The counter will increase after there's some actual work done,\nie building an index. If there's no indexes to build then it hardly\nmatters, because the command will complete in very little time.\n\n> And incrementing PARTITIONS_DONE by 1 could lead to bogus progress\n> reporting with \"N_done > N_Total\" if an intermediate partitioned table\n> had no leaf partitions at all.\n\nWell, we could fix that if we made TOTAL be the total number of\ndescendants rather than just the leaves ;-). But I think not\nincrementing is probably better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Mar 2023 10:42:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 10:42:59AM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Sun, Mar 12, 2023 at 06:25:13PM -0400, Tom Lane wrote:\n> >> I agree that adding such a field to IndexStmt would be a very bad idea.\n> >> However, adding another parameter to DefineIndex doesn't seem like a\n> >> problem.\n> \n> > It's a problem since this is a bug and it's desirable to backpatch a\n> > fix, right ?\n> \n> I do not think this is important enough to justify a back-patch.\n\nThat's fine with me, but it comes as a surprise, and it might invalidate\nearlier discussions, which were working under the constraint of\nmaintaining a compatible ABI.\n\n> > Incrementing by 0 sounds terrible, since someone who has intermediate\n> > partitioned tables is likely to always see 0% done.\n> \n> How so? The counter will increase after there's some actual work done,\n> ie building an index. If there's no indexes to build then it hardly\n> matters, because the command will complete in very little time.\n\nI misunderstood your idea as suggesting to skip progress reporting for\n*every* intermediate partitioned index, and its leaves. But I guess\nwhat you meant is to skip progress reporting when ATTACHing an\nintermediate partitioned index. That seems okay, since 1) intermediate\npartitioned tables are probably rare, and 2) ATTACH is fast, so the\neffect is indistinguishable from querying the progress report a few\nmoments later.\n\nThe idea would be for:\n1) TOTAL to show the number of direct and indirect leaf partitions;\n2) update progress while building direct or indirect indexes;\n3) ATTACHing intermediate partitioned tables to increment by 0;\n4) ATTACHing a direct child should continue to increment by 1,\nsince that common case already works as expected and shouldn't be\nchanged.\n\nThe only change from the current patch is (3). (1) still calls\ncount_leaf_partitions(), but only once. 
I'd prefer that to rearranging\nthe progress reporting to set the TOTAL in ProcessUtilitySlow().\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 14 Mar 2023 09:34:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> The idea would be for:\n> 1) TOTAL to show the number of direct and indirect leaf partitions;\n> 2) update progress while building direct or indirect indexes;\n> 3) ATTACHing intermediate partitioned tables to increment by 0;\n> 4) ATTACHing a direct child should continue to increment by 1,\n> since that common case already works as expected and shouldn't be\n> changed.\n\nOK.\n\n> The only change from the current patch is (3). (1) still calls\n> count_leaf_partitions(), but only once. I'd prefer that to rearranging\n> the progress reporting to set the TOTAL in ProcessUtilitySlow().\n\nI don't agree with that. find_all_inheritors is fairly expensive\nand it seems completely silly to do it twice just to avoid adding\na parameter to DefineIndex.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Mar 2023 10:46:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "\n\n> 14 марта 2023 г., в 18:34, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> On Mon, Mar 13, 2023 at 10:42:59AM -0400, Tom Lane wrote:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> On Sun, Mar 12, 2023 at 06:25:13PM -0400, Tom Lane wrote:\n>>>> I agree that adding such a field to IndexStmt would be a very bad idea.\n>>>> However, adding another parameter to DefineIndex doesn't seem like a\n>>>> problem.\n>> \n>>> It's a problem since this is a bug and it's desirable to backpatch a\n>>> fix, right ?\n>> \n>> I do not think this is important enough to justify a back-patch.\n> \n> That's fine with me, but it comes as a surprise, and it might invalidate\n> earlier discussions, which were working under the constraint of\n> maintaining a compatible ABI.\n> \n>>> Incrementing by 0 sounds terrible, since someone who has intermediate\n>>> partitioned tables is likely to always see 0% done.\n>> \n>> How so? The counter will increase after there's some actual work done,\n>> ie building an index. If there's no indexes to build then it hardly\n>> matters, because the command will complete in very little time.\n> \n> I misunderstood your idea as suggesting to skip progress reporting for\n> *every* intermediate partitioned index, and its leaves. But I guess\n> what you meant is to skip progress reporting when ATTACHing an\n> intermediate partitioned index. That seems okay, since 1) intermediate\n> partitioned tables are probably rare, and 2) ATTACH is fast, so the\n> effect is indistinguisable from querying the progress report a few\n> moments later.\n\n+1 that counting attached partitioned indexes as 0 is fine. 
\n\n> The idea would be for:\n> 1) TOTAL to show the number of direct and indirect leaf partitions;\n> 2) update progress while building direct or indirect indexes;\n> 3) ATTACHing intermediate partitioned tables to increment by 0;\n> 4) ATTACHing a direct child should continue to increment by 1,\n> since that common case already works as expected and shouldn't be\n> changed.\n> \n> The only change from the current patch is (3). (1) still calls\n> count_leaf_partitions(), but only once. I'd prefer that to rearranging\n> the progress reporting to set the TOTAL in ProcessUtilitySlow().\n> \n> -- \n> Justin\n\nAs for reusing TOTAL calculated outside of DefineIndex, as I can see, ProcessUtilitySlow is not the only call site for DefineIndex (although, I don’t know whether all of them need progress tracking), for instance, there is ALTER TABLE that calls DefineIndex to create index for constraints. So I feel like rearranging progress reporting will result in unnecessary code duplication in those call sites, so passing in an optional parameter seems to be easier here, if we are going to optimize it, after all. Especially if back-patching is a non-issue.\n\n\n\n",
"msg_date": "Tue, 14 Mar 2023 18:58:14 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 06:58:14PM +0400, Ilya Gladyshev wrote:\n> > The only change from the current patch is (3). (1) still calls\n> > count_leaf_partitions(), but only once. I'd prefer that to rearranging\n> > the progress reporting to set the TOTAL in ProcessUtilitySlow().\n>\n> As for reusing TOTAL calculated outside of DefineIndex, as I can see, ProcessUtilitySlow is not the only call site for DefineIndex (although, I don’t know whether all of them need progress tracking), for instance, there is ALTER TABLE that calls DefineIndex to create index for constraints. So I feel like rearranging progress reporting will result in unnecessary code duplication in those call sites, so passing in an optional parameter seems to be easier here, if we are going to optimize it, after all. Especially if back-patching is a non-issue.\n\nYeah. See attached. I don't like duplicating the loop. Is this really\nthe right direction to go ?\n\nI haven't verified if the child tables are locked in all the paths which\nwould call count_leaf_partitions(). But why is it important to lock\nthem for this? If they weren't locked before, that'd be a pre-existing\nproblem...",
"msg_date": "Wed, 15 Mar 2023 19:07:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "\n\n> 16 марта 2023 г., в 04:07, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> \n> On Tue, Mar 14, 2023 at 06:58:14PM +0400, Ilya Gladyshev wrote:\n>>> The only change from the current patch is (3). (1) still calls\n>>> count_leaf_partitions(), but only once. I'd prefer that to rearranging\n>>> the progress reporting to set the TOTAL in ProcessUtilitySlow().\n>> \n>> As for reusing TOTAL calculated outside of DefineIndex, as I can see, ProcessUtilitySlow is not the only call site for DefineIndex (although, I don’t know whether all of them need progress tracking), for instance, there is ALTER TABLE that calls DefineIndex to create index for constraints. So I feel like rearranging progress reporting will result in unnecessary code duplication in those call sites, so passing in an optional parameter seems to be easier here, if we are going to optimize it, after all. Especially if back-patching is a non-issue.\n> \n> Yeah. See attached. I don't like duplicating the loop. Is this really\n> the right direction to go ?\n> \n> I haven't verified if the child tables are locked in all the paths which\n> would call count_leaf_partitions(). But why is it important to lock\n> them for this? If they weren't locked before, that'd be a pre-existing\n> problem...\n> <0001-fix-CREATE-INDEX-progress-report-with-nested-partiti.patch>\n\nI’m not sure what the general policy on locking is, but I have checked ALTER TABLE ADD INDEX, and the all the partitions seem to be locked on the first entry to DefineIndex there. All other call sites pass in the parentIndexId, which means the progress tracking machinery will not be initialized, so I think, we don’t need to do locking in count_leaf_partitions(). \n\nThe approach in the patch looks good to me. Some nitpicks on the patch: \n1. There’s an unnecessary second call to get_rel_relkind in ProcessUtilitySlow, we can just use what’s in the variable relkind.\n2. 
We can also combine else and if to have one less nested level like that:\n+\t\t\t\telse if (!RELKIND_HAS_PARTITIONS(child_relkind))\n\n3. There was a part of the comment saying \"If the index was built by calling DefineIndex() recursively, the called function is responsible for updating the progress report for built indexes.\", I think it is still useful to have it there.\n\n\n\n\n",
"msg_date": "Thu, 16 Mar 2023 19:04:16 +0400",
"msg_from": "Ilya Gladyshev <ilya.v.gladyshev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 07:04:16PM +0400, Ilya Gladyshev wrote:\n> > 16 марта 2023 г., в 04:07, Justin Pryzby <pryzby@telsasoft.com> написал(а):\n> > \n> > On Tue, Mar 14, 2023 at 06:58:14PM +0400, Ilya Gladyshev wrote:\n> >>> The only change from the current patch is (3). (1) still calls\n> >>> count_leaf_partitions(), but only once. I'd prefer that to rearranging\n> >>> the progress reporting to set the TOTAL in ProcessUtilitySlow().\n> >> \n> >> As for reusing TOTAL calculated outside of DefineIndex, as I can see, ProcessUtilitySlow is not the only call site for DefineIndex (although, I don’t know whether all of them need progress tracking), for instance, there is ALTER TABLE that calls DefineIndex to create index for constraints. So I feel like rearranging progress reporting will result in unnecessary code duplication in those call sites, so passing in an optional parameter seems to be easier here, if we are going to optimize it, after all. Especially if back-patching is a non-issue.\n> > \n> > Yeah. See attached. I don't like duplicating the loop. Is this really\n> > the right direction to go ?\n> > \n> > I haven't verified if the child tables are locked in all the paths which\n> > would call count_leaf_partitions(). But why is it important to lock\n> > them for this? If they weren't locked before, that'd be a pre-existing\n> > problem...\n> > <0001-fix-CREATE-INDEX-progress-report-with-nested-partiti.patch>\n> \n> I’m not sure what the general policy on locking is, but I have checked ALTER TABLE ADD INDEX, and the all the partitions seem to be locked on the first entry to DefineIndex there. All other call sites pass in the parentIndexId, which means the progress tracking machinery will not be initialized, so I think, we don’t need to do locking in count_leaf_partitions(). \n\n> The approach in the patch looks good to me. Some nitpicks on the patch: \n> 1. 
There’s an unnecessary second call to get_rel_relkind in ProcessUtilitySlow, we can just use what’s in the variable relkind.\n> 2. We can also combine else and if to have one less nested level like that:\n> +\t\t\t\telse if (!RELKIND_HAS_PARTITIONS(child_relkind))\n> \n> 3. There was a part of the comment saying \"If the index was built by calling DefineIndex() recursively, the called function is responsible for updating the progress report for built indexes.\", I think it is still useful to have it there.\n\nThanks, I addressed (1) and (3). (2) is deliberate, to allow a place to\nput the comment which is not specific to the !HAS_PARTITIONS case.\n\n-- \nJustin",
"msg_date": "Tue, 21 Mar 2023 13:43:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "So I'm still pretty desperately unhappy with count_leaf_partitions.\nI don't like expending significant cost purely for progress tracking\npurposes, I don't like the undocumented assumption that NoLock is\nsafe, and what's more, if it is safe then we've already traversed\nthe inheritance tree to lock everything so in principle we could\nhave the count already. However, it does seem like getting that\nknowledge from point A to point B would be a mess in most places.\n\nOne thing we could do to reduce the cost (and improve the safety)\nis to forget the idea of checking the relkinds and just set the\nPARTITIONS_TOTAL count to list_length() of the find_all_inheritors\nresult. We've already agreed that it's okay if the PARTITIONS_DONE\ncount never reaches PARTITIONS_TOTAL, so this would just be taking\nthat idea further. (Or we could increment PARTITIONS_DONE for\nnon-leaf partitions when we visit them, thus making that TOTAL\nmore nearly correct.) Furthermore, as things stand it's not hard\nfor PARTITIONS_TOTAL to be zero --- there's at least one such case\nin the regression tests --- and that seems just weird to me.\n\nBy the by, this is awful code:\n\n+\t\tif (RELKIND_HAS_STORAGE(get_rel_relkind(partrelid)))\n\nConsult the definition of RELKIND_HAS_STORAGE to see why.\nBut I want to get rid of that rather than fixing it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 04:35:46PM -0400, Tom Lane wrote:\n> So I'm still pretty desperately unhappy with count_leaf_partitions.\n> I don't like expending significant cost purely for progress tracking\n> purposes, I don't like the undocumented assumption that NoLock is\n> safe, and what's more, if it is safe then we've already traversed\n> the inheritance tree to lock everything so in principle we could\n> have the count already. However, it does seem like getting that\n> knowledge from point A to point B would be a mess in most places.\n> \n> One thing we could do to reduce the cost (and improve the safety)\n> is to forget the idea of checking the relkinds and just set the\n> PARTITIONS_TOTAL count to list_length() of the find_all_inheritors\n> result.\n\nActually list_length() minus 1 ...\n\n> We've already agreed that it's okay if the PARTITIONS_DONE\n> count never reaches PARTITIONS_TOTAL, so this would just be taking\n> that idea further. (Or we could increment PARTITIONS_DONE for\n> non-leaf partitions when we visit them, thus making that TOTAL\n> more nearly correct.)\n\nYes, I think that's actually more correct. If TOTAL is set without\nregard to relkind, then DONE ought to be set the same way.\n\nI updated the documentation to indicate that the counters include the\nintermediate partitioned rels, but I wonder if it's better to say\nnothing and leave that undefined.\n\n> Furthermore, as things stand it's not hard\n> for PARTITIONS_TOTAL to be zero --- there's at least one such case\n> in the regression tests --- and that seems just weird to me.\n\nI don't know why it'd seem weird. postgres doesn't create partitions\nautomatically, so by default there are none. If we create a table but\nnever load any data, it'll have no partitions. 
Also, the TOTAL=0 case\nwon't go away just because we start counting intermediate partitions.\n\n> By the by, this is awful code:\n> \n> +\t\tif (RELKIND_HAS_STORAGE(get_rel_relkind(partrelid)))\n> \n> Consult the definition of RELKIND_HAS_STORAGE to see why.\n> But I want to get rid of that rather than fixing it.\n\nGood point, but I'd burden-shift the blame to RELKIND_HAS_STORAGE().\n\nBTW, I promoted myself to a co-author of the patch. My interest here is\nto resolve this hoping to allow the CIC patch to progress.\n\n-- \nJustin",
"msg_date": "Fri, 24 Mar 2023 21:53:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Mar 23, 2023 at 04:35:46PM -0400, Tom Lane wrote:\n>> Furthermore, as things stand it's not hard\n>> for PARTITIONS_TOTAL to be zero --- there's at least one such case\n>> in the regression tests --- and that seems just weird to me.\n\n> I don't know why it'd seem weird. postgres doesn't create partitions\n> automatically, so by default there are none. If we create a table but\n> never load any data, it'll have no partitions.\n\nMy problem with it is that it's not clear how to tell \"no partitioned\nindex creation in progress\" from \"partitioned index creation in progress,\nbut total = 0\". Maybe there's some out-of-band way to tell that in the\nstats reporting system, but still it's a weird corner case.\n\n> Also, the TOTAL=0 case\n> won't go away just because we start counting intermediate partitions.\n\nThat's why I wanted list_length() not list_length() - 1. We are\ndoing *something* at the top partitioned table, it just doesn't\ninvolve a table scan, so I don't find this totally unreasonable.\nIf you agree we are doing work at intermediate partitioned tables,\nhow are we not doing work at the top one?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:55:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 11:55:13AM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Thu, Mar 23, 2023 at 04:35:46PM -0400, Tom Lane wrote:\n> >> Furthermore, as things stand it's not hard\n> >> for PARTITIONS_TOTAL to be zero --- there's at least one such case\n> >> in the regression tests --- and that seems just weird to me.\n> \n> > I don't know why it'd seem weird. postgres doesn't create partitions\n> > automatically, so by default there are none. If we create a table but\n> > never load any data, it'll have no partitions.\n> \n> My problem with it is that it's not clear how to tell \"no partitioned\n> index creation in progress\" from \"partitioned index creation in progress,\n> but total = 0\". Maybe there's some out-of-band way to tell that in the\n> stats reporting system, but still it's a weird corner case.\n> \n> > Also, the TOTAL=0 case\n> > won't go away just because we start counting intermediate partitions.\n> \n> That's why I wanted list_length() not list_length() - 1. We are\n> doing *something* at the top partitioned table, it just doesn't\n> involve a table scan, so I don't find this totally unreasonable.\n> If you agree we are doing work at intermediate partitioned tables,\n> how are we not doing work at the top one?\n\nWhat you're proposing would redefine the meaning of\nPARTITIONS_DONE/TOTAL, even in the absence of intermediate partitioned\ntables. Which might be okay, but the scope of this thread/patch was to\nfix the behavior involving intermediate partitioned tables.\n\nIt's somewhat weird to me that find_all_inheritors(rel) returns the rel\nitself. 
But it's an internal function, and evidently that's what's\nneeded/desirable to do, so that's fine.\n\nHowever, \"PARTITIONS_TOTAL\" has a certain user-facing definition, and\n\"Number of partitions\" is easier to explain than \"Number of partitions\nplus the rel itself\", and IMO an easier definition is a better one.\n\nYour complaint seems similar to something I've said a few times before:\nit's weird to expose macroscopic progress reporting of partitioned\ntables in the same view and in the same *row* as microscopic progress of\nits partitions. But changing that is a job for another patch. I won't\nbe opposed to it if someone were to propose a patch to remove\npartitions_{done,total}. See also:\nhttps://www.postgresql.org/message-id/flat/YCy5ZMt8xAyoOMmv%40paquier.xyz#b20d1be226a93dacd3fd40b402315105\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Mar 2023 11:36:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sat, Mar 25, 2023 at 11:55:13AM -0400, Tom Lane wrote:\n>> That's why I wanted list_length() not list_length() - 1. We are\n>> doing *something* at the top partitioned table, it just doesn't\n>> involve a table scan, so I don't find this totally unreasonable.\n>> If you agree we are doing work at intermediate partitioned tables,\n>> how are we not doing work at the top one?\n\n> What you're proposing would redefine the meaning of\n> PARTITIONS_DONE/TOTAL, even in the absence of intermediate partitioned\n> tables. Which might be okay, but the scope of this thread/patch was to\n> fix the behavior involving intermediate partitioned tables.\n\nI'm a little skeptical of that argument, because this patch is already\nredefining the meaning of PARTITIONS_TOTAL. The fact that the existing\ndocumentation is vague enough to be read either way doesn't make it not\na change.\n\nStill, in the interests of getting something done I'll drop the issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 13:11:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "I pushed 0001 with some cosmetic changes (for instance, trying to\nmake the style of the doc entries for partitions_total/partitions_done\nmatch the rest of their table).\n\nI'm not touching 0002 or 0003, because I think they're fundamentally\na bad idea. Progress reporting is inherently inexact, because it's\nso hard to predict the amount of work to be done in advance -- have\nyou ever seen a system anywhere whose progress bars reliably advance\nat a uniform rate? I think adding assertions that the estimates are\nerror-free is just going to cause headaches. As an example, I added\na comment pointing out that the current fix won't crash and burn if\nthe caller failed to lock all the child tables in advance: the\nfind_all_inheritors call should be safe anyway, so the worst consequence\nwould be an imprecise partitions_total estimate. But that argument\nfalls down if we're going to add assertions that partitions_total\nisn't in error.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 15:43:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 03:43:32PM -0400, Tom Lane wrote:\n> I pushed 0001 with some cosmetic changes (for instance, trying to\n> make the style of the doc entries for partitions_total/partitions_done\n> match the rest of their table).\n\nThanks.\n\n> I'm not touching 0002 or 0003, because I think they're fundamentally\n> a bad idea. Progress reporting is inherently inexact, because it's\n\nNobody could disagree that it's inexact. The assertions are for minimal\nsanity tests and consistency. Like if \"total\" is set multiple times (as\nin this patch), or if a progress value goes backwards. Anyway the\nassertions exposed two other issues that would need to be fixed before\nthe assertions themselves could be proposed.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 26 Mar 2023 09:08:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Progress report of CREATE INDEX for nested partitioned tables"
}
] |
[
{
"msg_contents": "Hi all,\n\nThere have been multiple threads in the past discussing infinite\nintervals:\nhttps://www.postgresql.org/message-id/flat/4EB095C8.1050703%40agliodbs.com\nhttps://www.postgresql.org/message-id/flat/200101241913.f0OJDUu45423%40hub.org\nhttps://www.postgresql.org/message-id/flat/CANP8%2BjKTxQh4Mj%2BU3mWO3JHYb11SeQX9FW8SENrGbTdVxu6NNA%40mail.gmail.com\n\nAs well as an entry in the TODO list:\nhttps://wiki.postgresql.org/wiki/Todo#Dates_and_Times\n\nHowever, it doesn't seem like this was ever implemented. Is there still\nany interest in this feature? If so, I'd like to try and implement it.\n\nThe proposed design from the most recent thread was to reserve\nINT32_MAX months for infinity and INT32_MIN months for negative\ninfinity. As pointed out in the thread, these are currently valid\nnon-infinite intervals, but they are out of the documented range.\n\nThanks,\nJoe Koshakow\n\n\n",
"msg_date": "Sat, 10 Dec 2022 14:21:12 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Infinite Interval"
},
{
"msg_contents": "Hi Joseph,\nI stumbled upon this requirement a few times. So I started working on\nthis support in my spare time as a hobby project to understand\nhorology code in PostgreSQL. This was sitting in my repositories for\nmore than an year. Now that I have someone else showing an interest,\nit's time for it to face the world. Rebased it, fixed conflicts.\n\nPFA patch implementing infinite interval. It's still WIP, there are\nTODOs in the code and also the commit message lists things that are\nknown to be incomplete. You might want to assess expected output\ncarefully\n\nOn Sun, Dec 11, 2022 at 12:51 AM Joseph Koshakow <koshy44@gmail.com> wrote:>\n> The proposed design from the most recent thread was to reserve\n> INT32_MAX months for infinity and INT32_MIN months for negative\n> infinity. As pointed out in the thread, these are currently valid\n> non-infinite intervals, but they are out of the documented range.\n\nThe patch uses both months and days together to avoid this problem.\n\nPlease feel free to complete the patch, work on review comments etc. I\nwill help as and when I find time.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 12 Dec 2022 18:35:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 8:05 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi Joseph,\n> I stumbled upon this requirement a few times. So I started working on\n> this support in my spare time as a hobby project to understand\n> horology code in PostgreSQL. This was sitting in my repositories for\n> more than an year. Now that I have someone else showing an interest,\n> it's time for it to face the world. Rebased it, fixed conflicts.\n>\n> PFA patch implementing infinite interval. It's still WIP, there are\n> TODOs in the code and also the commit message lists things that are\n> known to be incomplete. You might want to assess expected output\n> carefully\n\nThat's great! I was also planning to just work on it as a hobby\nproject, so I'll try and review and add updates as I find free\ntime as well.\n\n> > The proposed design from the most recent thread was to reserve\n> > INT32_MAX months for infinity and INT32_MIN months for negative\n> > infinity. As pointed out in the thread, these are currently valid\n> > non-infinite intervals, but they are out of the documented range.\n>\n> The patch uses both months and days together to avoid this problem.\n\nCan you expand on this part? I believe the full range of representable\nintervals are considered valid as of v15.\n\n- Joe Koshakow\n\n\n",
"msg_date": "Thu, 15 Dec 2022 18:43:29 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Ashutosh,\n\nI've added tests for all the operators and functions involving\nintervals and what I think the expected behaviors to be. The\nformatting might be slightly off and I've left the contents of the\nerror messages as TODOs. Hopefully it's a good reference for the\nimplementation.\n\n> Adding infinite interval to an infinite timestamp with opposite\n> direction is not going to yield 0 but some infinity. Since we are adding\n> interval to the timestamp the resultant timestamp is an infinity\n> preserving the direction.\n\nI think I disagree with this. Tom Lane in one of the previous threads\nsaid:\n> tl;dr: we should model it after the behavior of IEEE float infinities,\n> except we'll want to throw errors where those produce NaNs.\nand I agree with this opinion. I believe that means that adding an\ninfinite interval to an infinite timestamp with opposite directions\nshould yield an error instead of some infinity. Since with floats this\nwould yield a NaN.\n\n> Dividing infinite interval by finite number keeps it infinite.\n> TODO: Do we change the sign of infinity if factor is negative?\nAgain if we model this after the IEEE float behavior, then the answer\nis yes, we do change the sign of infinity.\n\n- Joe Koshakow",
"msg_date": "Sat, 17 Dec 2022 14:34:09 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 2:34 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> Hi Ashutosh,\n>\n> I've added tests for all the operators and functions involving\n> intervals and what I think the expected behaviors to be. The\n> formatting might be slightly off and I've left the contents of the\n> error messages as TODOs. Hopefully it's a good reference for the\n> implementation.\n>\n> > Adding infinite interval to an infinite timestamp with opposite\n> > direction is not going to yield 0 but some infinity. Since we are adding\n> > interval to the timestamp the resultant timestamp is an infinity\n> > preserving the direction.\n>\n> I think I disagree with this. Tom Lane in one of the previous threads\n> said:\n> > tl;dr: we should model it after the behavior of IEEE float infinities,\n> > except we'll want to throw errors where those produce NaNs.\n> and I agree with this opinion. I believe that means that adding an\n> infinite interval to an infinite timestamp with opposite directions\n> should yield an error instead of some infinity. Since with floats this\n> would yield a NaN.\n>\n> > Dividing infinite interval by finite number keeps it infinite.\n> > TODO: Do we change the sign of infinity if factor is negative?\n> Again if we model this after the IEEE float behavior, then the answer\n> is yes, we do change the sign of infinity.\n>\n> - Joe Koshakow\nI ended up doing some more work in the attached patch. Here are some\nupdates:\n\n- I modified the arithmetic operators to more closely match IEEE\nfloats. Error messages are still all TODO, and they may have the wrong\nerror code.\n- I implemented some more operators and functions.\n- I moved the helper functions you created into macros in timestamp.h\nto more closely match the implementation of infinite timestamps and\ndates. Also so dates.c could access them.\n- There seems to be an existing overflow error with interval\nsubtraction. 
Many of the arithmetic operators of the form\n`X - Interval` are converted to `X + (-Interval)`. This will overflow\nin the case that some interval field is INT32_MIN or INT64_MIN.\nAdditionally, negating a positive infinity interval won't result in a\nnegative infinity interval and vice versa. We'll have to come up with\nan efficient solution for this.\n\n- Joe Koshakow",
"msg_date": "Sat, 17 Dec 2022 18:32:24 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Ashutosh,\n\nI ended up doing some more work on this today. All of the major\nfeatures should be implemented now. Below are what I think are the\noutstanding TODOs:\n- Clean up error messages and error codes\n- Figure out how to correctly implement interval_part for infinite\nintervals. For now I pretty much copied the implementation of\ntimestamp_part, but I'm not convinced that's correct.\n- Fix horology tests.\n- Test consolidation. After looking through the interval tests, I\nrealized that I may have duplicated some test cases. It would probably\nbe best to remove those duplicate tests.\n- General cleanup, remove TODOs.\n\nAttached is my most recent patch.\n\n- Joe Koshakow",
"msg_date": "Fri, 23 Dec 2022 18:03:36 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "I have another update, I cleaned up some of the error messages, fixed\nthe horology tests, and ran pgindent.\n\n- Joe",
"msg_date": "Fri, 30 Dec 2022 12:17:36 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 10:47 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> I have another update, I cleaned up some of the error messages, fixed\n> the horology tests, and ran pgindent.\n>\n> - Joe\n>\n\nHi, there.\n\nSince in float8 you can use '+inf', '+infinity', So should we also make\ninterval '+infinity' valid?\nAlso in timestamp. '+infinity'::timestamp is invalid, should we make it\nvalid.\n\nIn float8, select float8 'inf' / float8 'inf' return NaN. Now in your patch\n select interval 'infinity' / float8 'infinity'; returns infinity.\nI am not sure it's right. I found this related post (\nhttps://math.stackexchange.com/questions/181304/what-is-infinity-divided-by-infinity\n).\n\n\n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian\n\nOn Fri, Dec 30, 2022 at 10:47 PM Joseph Koshakow <koshy44@gmail.com> wrote:I have another update, I cleaned up some of the error messages, fixed\nthe horology tests, and ran pgindent.\n\n- Joe\nHi, there.Since in float8 you can use '+inf', '+infinity', So should we also make interval '+infinity' valid?Also in timestamp. '+infinity'::timestamp is invalid, should we make it valid.In float8, select float8 'inf' / float8 'inf' return NaN. Now in your patch select interval 'infinity' / float8 'infinity'; returns infinity.I am not sure it's right. I found this related post (https://math.stackexchange.com/questions/181304/what-is-infinity-divided-by-infinity). I recommend David Deutsch's <<The Beginning of Infinity>> Jian",
"msg_date": "Sat, 31 Dec 2022 10:39:10 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On 12/31/22 06:09, jian he wrote:\n> \n> Since in float8 you can use '+inf', '+infinity', So should we also make\n> interval '+infinity' valid?\n\nYes.\n\n> Also in timestamp. '+infinity'::timestamp is invalid, should we make it\n> valid.\n\nYes, we should. I wrote a trivial patch for this a while ago but it \nappears I never posted it. I will post that in a new thread so as not \nto confuse the bots.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 1 Jan 2023 03:05:14 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 12:09 AM jian he <jian.universality@gmail.com> wrote:\n> In float8, select float8 'inf' / float8 'inf' return NaN. Now in your patch select interval 'infinity' / float8 'infinity'; returns infinity.\n> I am not sure it's right. I found this related post (https://math.stackexchange.com/questions/181304/what-is-infinity-divided-by-infinity).\n\nGood point, I agree this should return an error. We also need to\nproperly handle multiplication and division of infinite intervals by\nfloat8 'nan'. My patch is returning an infinite interval, but it should\nbe returning an error. I'll upload a new patch shortly.\n\n- Joe\n\n\n",
"msg_date": "Mon, 2 Jan 2023 13:21:50 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 1:21 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Dec 31, 2022 at 12:09 AM jian he <jian.universality@gmail.com> wrote:\n> > In float8, select float8 'inf' / float8 'inf' return NaN. Now in your patch select interval 'infinity' / float8 'infinity'; returns infinity.\n> > I am not sure it's right. I found this related post (https://math.stackexchange.com/questions/181304/what-is-infinity-divided-by-infinity).\n>\n> Good point, I agree this should return an error. We also need to\n> properly handle multiplication and division of infinite intervals by\n> float8 'nan'. My patch is returning an infinite interval, but it should\n> be returning an error. I'll upload a new patch shortly.\n>\n> - Joe\n\nAttached is the patch to handle these scenarios. Apparently dividing by\nNaN is currently broken:\n postgres=# SELECT INTERVAL '1 day' / float8 'nan';\n ?column?\n ---------------------------------------------------\n -178956970 years -8 mons -2562047788:00:54.775808\n (1 row)\n\nThis patch will fix the issue, but we may want a separate patch that\nhandles this specific, existing issue. Any thoughts?\n\n- Joe",
"msg_date": "Mon, 2 Jan 2023 13:53:08 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "I have another patch, this one adds validations to operations that\nreturn intervals and updated error messages. I tried to give all of the\nerror messages meaningful text, but I'm starting to think that almost all\nof them should just say \"interval out of range\". The current approach\nmay reveal some implementation details and lead to confusion. For\nexample, some subtractions are converted to additions which would lead\nto an error message about addition.\n\n SELECT date 'infinity' - interval 'infinity';\n ERROR: cannot add infinite values with opposite signs\n\nI've also updated the commit message to include the remaining TODOs,\nwhich I've copied below\n\n 1. Various TODOs in code.\n 2. Correctly implement interval_part for infinite intervals.\n 3. Test consolidation.\n 4. Should we just use the months field to test for infinity?",
"msg_date": "Mon, 2 Jan 2023 19:44:25 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 6:14 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> I have another patch, this one adds validations to operations that\n> return intervals and updated error messages. I tried to give all of the\n> error messages meaningful text, but I'm starting to think that almost all\n> of them should just say \"interval out of range\". The current approach\n> may reveal some implementation details and lead to confusion. For\n> example, some subtractions are converted to additions which would lead\n> to an error message about addition.\n>\n> SELECT date 'infinity' - interval 'infinity';\n> ERROR: cannot add infinite values with opposite signs\n>\n> I've also updated the commit message to include the remaining TODOs,\n> which I've copied below\n>\n> 1. Various TODOs in code.\n> 2. Correctly implement interval_part for infinite intervals.\n> 3. Test consolidation.\n> 4. Should we just use the months field to test for infinity?\n>\n\n\n3. Test consolidation.\nI used the DO command, reduced a lot of test sql code.\nI don't know how to generate an interval.out file.\nI hope the format is ok. I use https://sqlformat.darold.net/ format the sql\ncode.\nThen I saw on the internet that one line should be no more than 80 chars.\nso I slightly changed the format.\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Wed, 4 Jan 2023 22:13:46 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 10:13 PM jian he <jian.universality@gmail.com> wrote:\n\n>\n>\n> On Tue, Jan 3, 2023 at 6:14 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n>> I have another patch, this one adds validations to operations that\n>> return intervals and updated error messages. I tried to give all of the\n>> error messages meaningful text, but I'm starting to think that almost all\n>> of them should just say \"interval out of range\". The current approach\n>> may reveal some implementation details and lead to confusion. For\n>> example, some subtractions are converted to additions which would lead\n>> to an error message about addition.\n>>\n>> SELECT date 'infinity' - interval 'infinity';\n>> ERROR: cannot add infinite values with opposite signs\n>>\n>> I've also updated the commit message to include the remaining TODOs,\n>> which I've copied below\n>>\n>> 1. Various TODOs in code.\n>> 2. Correctly implement interval_part for infinite intervals.\n>> 3. Test consolidation.\n>> 4. Should we just use the months field to test for infinity?\n>>\n>\n>\n> 3. Test consolidation.\n> I used the DO command, reduced a lot of test sql code.\n> I don't know how to generate an interval.out file.\n> I hope the format is ok. I use https://sqlformat.darold.net/ format the\n> sql code.\n> Then I saw on the internet that one line should be no more than 80 chars.\n> so I slightly changed the format.\n>\n> --\n> I recommend David Deutsch's <<The Beginning of Infinity>>\n>\n> Jian\n>\n>\n>\n\n1. Various TODOs in code.\nlogic combine and clean up for functions in backend/utils/adt/timestamp.c\n(timestamp_pl_interval,timestamptz_pl_interval, interval_pl, interval_mi).\n3. Test consolidation in /regress/sql/interval.sql\n\nFor 1. I don't know how to format the code. I have a problem installing\npg_indent. If the format is wrong, please reformat.\n3. As the previous email thread.",
"msg_date": "Thu, 5 Jan 2023 15:50:42 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 5:20 AM jian he <jian.universality@gmail.com> wrote:\n>\n>\n>\n> On Wed, Jan 4, 2023 at 10:13 PM jian he <jian.universality@gmail.com> wrote:\n>>\n>>\n>>\n>> I don't know how to generate an interval.out file.\n\nPersonally I just write the .out files manually. I think it especially\nhelps as a way to double-check that the results are what you expected.\nAfter running make check a regressions.diff file will be generated with\nall the differences between your .out file and the results of the test.\n\n\n> logic combine and clean up for functions in backend/utils/adt/timestamp.c (timestamp_pl_interval,timestamptz_pl_interval, interval_pl, interval_mi).\n\nOne thing I was hoping to achieve was to avoid redundant checks if\npossible. For example, in the following code:\n> + if ((INTERVAL_IS_NOBEGIN(span1) && INTERVAL_IS_NOEND(span2))\n> + ||(INTERVAL_IS_NOBEGIN(span1) && !INTERVAL_NOT_FINITE(span2))\n> + ||(!INTERVAL_NOT_FINITE(span1) && INTERVAL_IS_NOEND(span2)))\n> + INTERVAL_NOBEGIN(result);\nIf `(INTERVAL_IS_NOBEGIN(span1) && INTERVAL_IS_NOEND(span2))` is false,\nthen we end up checking `INTERVAL_IS_NOBEGIN(span1)` twice\n\n> For 1. I don't know how to format the code. I have a problem installing pg_indent. If the format is wrong, please reformat.\n\nI'll run pg_indent and send an updated patch if anything changes.\n\nThanks for your help on this patch!\n\n- Joe Koshakow\n\n\n",
"msg_date": "Thu, 5 Jan 2023 10:39:51 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Jian,\n\nI incorporated your changes and updated interval.out and ran\npgindent. Looks like some of the error messages have changed and we\nhave some issues with parsing \"+infinity\" after rebasing.\n\n- Joe",
"msg_date": "Thu, 5 Jan 2023 20:24:38 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 6:54 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> Jian,\n>\n> I incorporated your changes and updated interval.out and ran\n> pgindent. Looks like some of the error messages have changed and we\n> have some issues with parsing \"+infinity\" after rebasing.\n>\n> - Joe\n>\n\nLooks like some of the error messages have changed and we\n> have some issues with parsing \"+infinity\" after rebasing.\n>\n\nThere is a commit 2ceea5adb02603ef52579b568ca2c5aebed87358\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2ceea5adb02603ef52579b568ca2c5aebed87358\nif you pull this commit then you can do select interval '+infinity', even\nthough I don't know why.",
"msg_date": "Fri, 6 Jan 2023 09:59:49 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 11:30 PM jian he <jian.universality@gmail.com> wrote:\n>\n>\n>\n> On Fri, Jan 6, 2023 at 6:54 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>>\n>> Looks like some of the error messages have changed and we\n>> have some issues with parsing \"+infinity\" after rebasing.\n>\n>\n> There is a commit 2ceea5adb02603ef52579b568ca2c5aebed87358\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2ceea5adb02603ef52579b568ca2c5aebed87358\n> if you pull this commit then you can do select interval '+infinity', even though I don't know why.\n\nIt turns out that I was just misreading the error. The test was\nexpecting us to fail on \"+infinity\" but we succeeded. I just removed\nthat test case.\n\n>> pgindent. Looks like some of the error messages have changed\n\nThe conditions for checking valid addition/subtraction between infinite\nvalues were missing some cases which explains the change in error\nmessages. I've updated the logic and removed duplicate checks.\n\nI removed the extract/date_part tests since they were duplicated in a\ntest above. I also converted the DO command tests to using SQL with\njoins so it more closely matches the existing tests.\n\nI've updated the extract/date_part logic for infinite intervals. Fields\nthat are monotonically increasing should return +/-infinity and all\nothers should return NULL. For Intervals, the fields are the same as\ntimestamps plus the hour and day fields since those don't overflow into\nthe next highest field.\n\nI think this patch is just about ready for review, except for the\nfollowing two questions:\n 1. Should finite checks on intervals only look at months or all three\n fields?\n 2. Should we make the error messages for adding/subtracting infinite\n values more generic or leave them as is?\n\nMy opinions are\n 1. We should only look at months.\n 2. We should make the errors more generic.\n\nAnyone else have any thoughts?\n\n- Joe\n\n\n",
"msg_date": "Sat, 7 Jan 2023 15:04:25 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 3:04 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Thu, Jan 5, 2023 at 11:30 PM jian he <jian.universality@gmail.com> wrote:\n> >\n> >\n> >\n> > On Fri, Jan 6, 2023 at 6:54 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n> >>\n> >> Looks like some of the error messages have changed and we\n> >> have some issues with parsing \"+infinity\" after rebasing.\n> >\n> >\n> > There is a commit 2ceea5adb02603ef52579b568ca2c5aebed87358\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=2ceea5adb02603ef52579b568ca2c5aebed87358\n> > if you pull this commit then you can do select interval '+infinity', even though I don't know why.\n>\n> It turns out that I was just misreading the error. The test was\n> expecting us to fail on \"+infinity\" but we succeeded. I just removed\n> that test case.\n>\n> >> pgindent. Looks like some of the error messages have changed\n>\n> The conditions for checking valid addition/subtraction between infinite\n> values were missing some cases which explains the change in error\n> messages. I've updated the logic and removed duplicate checks.\n>\n> I removed the extract/date_part tests since they were duplicated in a\n> test above. I also converted the DO command tests to using SQL with\n> joins so it more closely matches the existing tests.\n>\n> I've updated the extract/date_part logic for infinite intervals. Fields\n> that are monotonically increasing should return +/-infinity and all\n> others should return NULL. For Intervals, the fields are the same as\n> timestamps plus the hour and day fields since those don't overflow into\n> the next highest field.\n>\n> I think this patch is just about ready for review, except for the\n> following two questions:\n> 1. Should finite checks on intervals only look at months or all three\n> fields?\n> 2. Should we make the error messages for adding/subtracting infinite\n> values more generic or leave them as is?\n>\n> My opinions are\n> 1. 
We should only look at months.\n> 2. We should make the errors more generic.\n>\n> Anyone else have any thoughts?\n>\n> - Joe\n\nOops I forgot the actual patch. Please see attached.",
"msg_date": "Sat, 7 Jan 2023 15:05:14 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 3:05 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Jan 7, 2023 at 3:04 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n> >\n> > I think this patch is just about ready for review, except for the\n> > following two questions:\n> > 1. Should finite checks on intervals only look at months or all three\n> > fields?\n> > 2. Should we make the error messages for adding/subtracting infinite\n> > values more generic or leave them as is?\n> >\n> > My opinions are\n> > 1. We should only look at months.\n> > 2. We should make the errors more generic.\n> >\n> > Anyone else have any thoughts?\n\nHere's a patch with the more generic error messages.\n\n- Joe",
"msg_date": "Sat, 7 Jan 2023 17:52:16 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 4:22 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> On Sat, Jan 7, 2023 at 3:05 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n> >\n> > On Sat, Jan 7, 2023 at 3:04 PM Joseph Koshakow <koshy44@gmail.com>\n> wrote:\n> > >\n> > > I think this patch is just about ready for review, except for the\n> > > following two questions:\n> > > 1. Should finite checks on intervals only look at months or all three\n> > > fields?\n> > > 2. Should we make the error messages for adding/subtracting infinite\n> > > values more generic or leave them as is?\n> > >\n> > > My opinions are\n> > > 1. We should only look at months.\n> > > 2. We should make the errors more generic.\n> > >\n> > > Anyone else have any thoughts?\n>\n> Here's a patch with the more generic error messages.\n>\n> - Joe\n>\n\nHI.\n\nI just found out another problem.\n\nselect * from generate_series(timestamp'-infinity', timestamp 'infinity',\ninterval 'infinity');\nERROR: timestamp out of range\n\nselect * from generate_series(timestamp'-infinity',timestamp 'infinity',\ninterval '-infinity'); --return following\n\n generate_series\n-----------------\n(0 rows)\n\n\nselect * from generate_series(timestamp 'infinity',timestamp 'infinity',\ninterval 'infinity');\n--will run all the time.\n\nselect * from generate_series(timestamp 'infinity',timestamp 'infinity',\ninterval '-infinity');\nERROR: timestamp out of range\n\n select * from generate_series(timestamp'-infinity',timestamp'-infinity',\ninterval 'infinity');\nERROR: timestamp out of range\n\nselect * from generate_series(timestamp'-infinity',timestamp'-infinity',\ninterval '-infinity');\n--will run all the time.\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Mon, 9 Jan 2023 09:47:24 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 11:17 PM jian he <jian.universality@gmail.com> wrote:\n>\n>\n>\n> On Sun, Jan 8, 2023 at 4:22 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>>\n>> On Sat, Jan 7, 2023 at 3:05 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>> >\n>> > On Sat, Jan 7, 2023 at 3:04 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>> > >\n>> > > I think this patch is just about ready for review, except for the\n>> > > following two questions:\n>> > > 1. Should finite checks on intervals only look at months or all three\n>> > > fields?\n>> > > 2. Should we make the error messages for adding/subtracting infinite\n>> > > values more generic or leave them as is?\n>> > >\n>> > > My opinions are\n>> > > 1. We should only look at months.\n>> > > 2. We should make the errors more generic.\n>> > >\n>> > > Anyone else have any thoughts?\n>>\n>> Here's a patch with the more generic error messages.\n>>\n>> - Joe\n>\n>\n> HI.\n>\n> I just found out another problem.\n>\n> select * from generate_series(timestamp'-infinity', timestamp 'infinity', interval 'infinity');\n> ERROR: timestamp out of range\n>\n> select * from generate_series(timestamp'-infinity',timestamp 'infinity', interval '-infinity'); --return following\n>\n> generate_series\n> -----------------\n> (0 rows)\n>\n>\n> select * from generate_series(timestamp 'infinity',timestamp 'infinity', interval 'infinity');\n> --will run all the time.\n>\n> select * from generate_series(timestamp 'infinity',timestamp 'infinity', interval '-infinity');\n> ERROR: timestamp out of range\n>\n> select * from generate_series(timestamp'-infinity',timestamp'-infinity', interval 'infinity');\n> ERROR: timestamp out of range\n>\n> select * from generate_series(timestamp'-infinity',timestamp'-infinity', interval '-infinity');\n> --will run all the time.\n\nGood catch, I didn't think to check non date/time functions.\nUnfortunately, I think you may have opened Pandoras box. 
I went through\npg_proc.dat and found the following functions that all involve\nintervals. We should probably investigate all of them and make sure\nthat they handle infinite intervals properly.\n\n{ oid => '1026', descr => 'adjust timestamp to new time zone',\nproname => 'timezone', prorettype => 'timestamp',\nproargtypes => 'interval timestamptz', prosrc => 'timestamptz_izone' },\n\n{ oid => '4133', descr => 'window RANGE support',\nproname => 'in_range', prorettype => 'bool',\nproargtypes => 'date date interval bool bool',\nprosrc => 'in_range_date_interval' },\n\n{ oid => '1305', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprovolatile => 's', prorettype => 'bool',\nproargtypes => 'timestamptz interval timestamptz interval',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '1305', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprovolatile => 's', prorettype => 'bool',\nproargtypes => 'timestamptz interval timestamptz interval',\nprosrc => 'see system_functions.sql' },\n{ oid => '1306', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprovolatile => 's', prorettype => 'bool',\nproargtypes => 'timestamptz timestamptz timestamptz interval',\nprosrc => 'see system_functions.sql' },\n{ oid => '1307', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprovolatile => 's', prorettype => 'bool',\nproargtypes => 'timestamptz interval timestamptz timestamptz',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '1308', descr => 'intervals overlap?',\nproname => 'overlaps', proisstrict => 'f', prorettype => 'bool',\nproargtypes => 'time time time time', prosrc => 'overlaps_time' },\n{ oid => '1309', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'time interval time interval',\nprosrc => 'see 
system_functions.sql' },\n{ oid => '1310', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'time time time interval',\nprosrc => 'see system_functions.sql' },\n{ oid => '1311', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'time interval time time',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '1386',\ndescr => 'date difference from today preserving months and years',\nproname => 'age', prolang => 'sql', provolatile => 's',\nprorettype => 'interval', proargtypes => 'timestamptz',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '2042', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'timestamp interval timestamp interval',\nprosrc => 'see system_functions.sql' },\n{ oid => '2043', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'timestamp timestamp timestamp interval',\nprosrc => 'see system_functions.sql' },\n{ oid => '2044', descr => 'intervals overlap?',\nproname => 'overlaps', prolang => 'sql', proisstrict => 'f',\nprorettype => 'bool', proargtypes => 'timestamp interval timestamp timestamp',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '4134', descr => 'window RANGE support',\nproname => 'in_range', prorettype => 'bool',\nproargtypes => 'timestamp timestamp interval bool bool',\nprosrc => 'in_range_timestamp_interval' },\n{ oid => '4135', descr => 'window RANGE support',\nproname => 'in_range', provolatile => 's', prorettype => 'bool',\nproargtypes => 'timestamptz timestamptz interval bool bool',\nprosrc => 'in_range_timestamptz_interval' },\n{ oid => '4136', descr => 'window RANGE support',\nproname => 'in_range', prorettype => 'bool',\nproargtypes => 'interval interval interval bool bool',\nprosrc => 
'in_range_interval_interval' },\n{ oid => '4137', descr => 'window RANGE support',\nproname => 'in_range', prorettype => 'bool',\nproargtypes => 'time time interval bool bool',\nprosrc => 'in_range_time_interval' },\n{ oid => '4138', descr => 'window RANGE support',\nproname => 'in_range', prorettype => 'bool',\nproargtypes => 'timetz timetz interval bool bool',\nprosrc => 'in_range_timetz_interval' },\n\n{ oid => '2058', descr => 'date difference preserving months and years',\nproname => 'age', prorettype => 'interval',\nproargtypes => 'timestamp timestamp', prosrc => 'timestamp_age' },\n{ oid => '2059',\ndescr => 'date difference from today preserving months and years',\nproname => 'age', prolang => 'sql', provolatile => 's',\nprorettype => 'interval', proargtypes => 'timestamp',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '2070', descr => 'adjust timestamp to new time zone',\nproname => 'timezone', prorettype => 'timestamptz',\nproargtypes => 'interval timestamp', prosrc => 'timestamp_izone' },\n\n{ oid => '3935', descr => 'sleep for the specified interval',\nproname => 'pg_sleep_for', prolang => 'sql', provolatile => 'v',\nprorettype => 'void', proargtypes => 'interval',\nprosrc => 'see system_functions.sql' },\n\n{ oid => '2599', descr => 'get the available time zone abbreviations',\nproname => 'pg_timezone_abbrevs', prorows => '1000', proretset => 't',\nprovolatile => 's', prorettype => 'record', proargtypes => '',\nproallargtypes => '{text,interval,bool}', proargmodes => '{o,o,o}',\nproargnames => '{abbrev,utc_offset,is_dst}',\nprosrc => 'pg_timezone_abbrevs' },\n{ oid => '2856', descr => 'get the available time zone names',\nproname => 'pg_timezone_names', prorows => '1000', proretset => 't',\nprovolatile => 's', prorettype => 'record', proargtypes => '',\nproallargtypes => '{text,text,interval,bool}', proargmodes => '{o,o,o,o}',\nproargnames => '{name,abbrev,utc_offset,is_dst}',\nprosrc => 'pg_timezone_names' },\n\n{ oid => '939', descr => 
'non-persistent series generator',\nproname => 'generate_series', prorows => '1000', proretset => 't',\nprovolatile => 's', prorettype => 'timestamptz',\nproargtypes => 'timestamptz timestamptz interval',\nprosrc => 'generate_series_timestamptz' },\n\n{ oid => '3976', descr => 'continuous distribution percentile',\nproname => 'percentile_cont', prokind => 'a', proisstrict => 'f',\nprorettype => 'interval', proargtypes => 'float8 interval',\nprosrc => 'aggregate_dummy' },\n{ oid => '3977', descr => 'aggregate final function',\nproname => 'percentile_cont_interval_final', proisstrict => 'f',\nprorettype => 'interval', proargtypes => 'internal float8',\nprosrc => 'percentile_cont_interval_final' },\n\n{ oid => '3982', descr => 'multiple continuous percentiles',\nproname => 'percentile_cont', prokind => 'a', proisstrict => 'f',\nprorettype => '_interval', proargtypes => '_float8 interval',\nprosrc => 'aggregate_dummy' },\n{ oid => '3983', descr => 'aggregate final function',\nproname => 'percentile_cont_interval_multi_final', proisstrict => 'f',\nprorettype => '_interval', proargtypes => 'internal _float8',\nprosrc => 'percentile_cont_interval_multi_final' },\n\n- Joe\n\n\n",
"msg_date": "Tue, 10 Jan 2023 20:34:15 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Ok, I've updated the patch to handle every function that inputs or\noutputs intervals, as well as added some tests. In the process I\nnoticed that some of the existing date/timestamp/timestamptz don't\nhandle infinite values properly. For example,\npostgres=# SELECT age('infinity'::timestamp);\nage\n--------------------------------------------------\n-292253 years -11 mons -26 days -04:00:54.775807\n(1 row)\n\nIt might be worth going through all those functions separately\nand making sure they are correct.\n\nI also added some overflow handling to make_interval.\n\nI also added handling of infinite timestamp subtraction.\n\nAt this point the patch is ready for review again except for the one\noutstanding question of: Should finite checks on intervals only look at\nmonths or all three fields?\n\n- Joe",
"msg_date": "Sat, 14 Jan 2023 16:22:49 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Jan 14, 2023 at 4:22 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> At this point the patch is ready for review again except for the one\n> outstanding question of: Should finite checks on intervals only look at\n> months or all three fields?\n>\n> - Joe\n\nI've gone ahead and updated the patch to only look at the months field.\nI'll submit this email and patch to the Feb commitfest.\n\n- Joe",
"msg_date": "Sun, 15 Jan 2023 11:44:22 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, 15 Jan 2023 at 11:44, Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Jan 14, 2023 at 4:22 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> I've gone ahead and updated the patch to only look at the months field.\n> I'll submit this email and patch to the Feb commitfest.\n\n\nIt looks like this patch needs a (perhaps trivial) rebase.\n\nIt sounds like all the design questions are resolved so perhaps this\ncan be set to Ready for Committer once it's rebased?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:02:55 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 3:03 PM Gregory Stark (as CFM) <stark.cfm@gmail.com>\nwrote:\n>\n> It looks like this patch needs a (perhaps trivial) rebase.\n\nAttached is a rebased patch.\n\n> It sounds like all the design questions are resolved so perhaps this\n> can be set to Ready for Committer once it's rebased?\n\nThere hasn't really been a review of this patch yet. It's just been\nmostly me talking to myself in this thread, and a couple of\ncontributions from jian.\n\n- Joe Koshakow",
"msg_date": "Wed, 1 Mar 2023 17:21:03 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Joseph,\n\nThanks for working on the patch. Sorry for taking so long to review\nthis patch. But here's it finally, review of code changes\n\n static pg_tz *FetchDynamicTimeZone(TimeZoneAbbrevTable *tbl, const datetkn *tp,\n- DateTimeErrorExtra *extra);\n+ DateTimeErrorExtra * extra);\n\nThere are a lot of these diffs. PG code doesn't leave an extra space between\nvariable name and *.\n\n\n /* Handle the integer part */\n- if (!int64_multiply_add(val, scale, &itm_in->tm_usec))\n+ if (pg_mul_add_s64_overflow(val, scale, &itm_in->tm_usec))\n\nI think this is a good change, since we are moving the function to int.h where\nit belongs. We could separate these kind of changes into another patch for easy\nreview.\n\n+\n+ result->day = days;\n+ if (pg_mul_add_s32_overflow(weeks, 7, &result->day))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n\nI think such changes are also good, but probably a separate patch for ease of\nreview.\n\n secs = rint(secs * USECS_PER_SEC);\n- result->time = hours mj* ((int64) SECS_PER_HOUR * USECS_PER_SEC) +\n- mins * ((int64) SECS_PER_MINUTE * USECS_PER_SEC) +\n- (int64) secs;\n+\n+ result->time = secs;\n+ if (pg_mul_add_s64_overflow(mins, ((int64) SECS_PER_MINUTE *\nUSECS_PER_SEC), &result->time))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n+ if (pg_mul_add_s64_overflow(hours, ((int64) SECS_PER_HOUR *\nUSECS_PER_SEC), &result->time))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n\n shouldn't this be\n secs = rint(secs);\n result->time = 0;\n pg_mul_add_s64_overflow(secs, USECS_PER_SEC, &result->time) to catch\n overflow error early?\n\n+ if TIMESTAMP_IS_NOBEGIN\n+ (dt2)\n\nBetter be written as if (TIMESTAMP_IS_NOBEGIN(dt2))? 
There are more corrections\nlike this.\n\n+ if (INTERVAL_NOT_FINITE(result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n\nProbably, I added these kind of checks. But I don't remember if those are\ndefensive checks or whether it's really possible that the arithmetic above\nthese lines can yield an non-finite interval.\n\n\n+ else\n+ {\n+ result->time = -interval->time;\n+ result->day = -interval->day;\n+ result->month = -interval->month;\n+\n+ if (INTERVAL_NOT_FINITE(result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n\nIf this error ever gets to the user, it could be confusing. Can we elaborate by\nadding context e.g. errcontext(\"while negating an interval\") or some such?\n\n-\n- result->time = -interval->time;\n- /* overflow check copied from int4um */\n- if (interval->time != 0 && SAMESIGN(result->time, interval->time))\n- ereport(ERROR,\n- (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n- errmsg(\"interval out of range\")));\n- result->day = -interval->day;\n- if (interval->day != 0 && SAMESIGN(result->day, interval->day))\n- ereport(ERROR,\n- (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n- errmsg(\"interval out of range\")));\n- result->month = -interval->month;\n- if (interval->month != 0 && SAMESIGN(result->month, interval->month))\n- ereport(ERROR,\n- (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n- errmsg(\"interval out of range\")));\n+ interval_um_internal(interval, result);\n\nShouldn't we incorporate these checks in interval_um_internal()? I don't think\nINTERVAL_NOT_FINITE() covers all of those.\n\n+ /*\n+ * Subtracting two infinite intervals with different signs results in an\n+ * infinite interval with the same sign as the left operand. 
Subtracting\n+ * two infinite intervals with the same sign results in an error.\n+ */\n\nI think we need someone to validate these assumptions and similar assumptions\nin interval_pl(). Googling gives confusing results in some cases. I have not\nlooked for the IEEE standard around this specifically.\n\n+ if (INTERVAL_NOT_FINITE(interval))\n+ {\n+ double r = NonFiniteIntervalPart(type, val, lowunits,\n+ INTERVAL_IS_NOBEGIN(interval),\n+ false);\n+\n+ if (r)\n\nI see that this code is very similar to the corresponding code in timestamp and\ntimestamptz, so it's bound to be correct. But I always thought float equality\nis unreliable. if (r) is equivalent to if (r == 0.0) so it will not work as\nintended. But maybe (float) 0.0 is a special value for which equality holds\ntrue.\n\n+static inline bool\n+pg_mul_add_s64_overflow(int64 val, int64 multiplier, int64 *sum)\n\nI think this needs a prologue similar to int64_multiply_add(), that the patch\nremoves. Similarly for pg_mul_add_s32_overflow().\n\nOn Thu, Mar 2, 2023 at 3:51 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Wed, Mar 1, 2023 at 3:03 PM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> >\n> > It looks like this patch needs a (perhaps trivial) rebase.\n>\n> Attached is a rebased patch.\n>\n> > It sounds like all the design questions are resolved so perhaps this\n> > can be set to Ready for Committer once it's rebased?\n>\n> There hasn't really been a review of this patch yet. It's just been\n> mostly me talking to myself in this thread, and a couple of\n> contributions from jian.\n>\n> - Joe Koshakow\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 9 Mar 2023 23:12:22 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
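The review above centers on pg_mul_add_s64_overflow(), an overflow-detecting multiply-and-accumulate moved into int.h. A minimal sketch of that contract, built on the GCC/Clang checked-arithmetic builtins; the name mirrors the patch, but whether common/int.h is implemented exactly this way is not shown in the thread.

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Sketch of the pg_mul_add_s64_overflow() semantics discussed above:
 * compute *sum += val * multiplier, returning true if either the
 * multiply or the add overflows a signed 64-bit integer.
 */
static bool mul_add_s64_overflow(int64_t val, int64_t multiplier, int64_t *sum)
{
    int64_t product;

    /* __builtin_mul_overflow returns true when the product wraps. */
    if (__builtin_mul_overflow(val, multiplier, &product))
        return true;
    /* Accumulate, again reporting wraparound to the caller. */
    return __builtin_add_overflow(*sum, product, sum);
}
```

Callers in the patch raise an "interval out of range" error whenever such a function returns true, instead of silently wrapping.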
{
"msg_contents": "On Thu, Mar 9, 2023 at 12:42 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n>\n>\n> static pg_tz *FetchDynamicTimeZone(TimeZoneAbbrevTable *tbl, const\ndatetkn *tp,\n> - DateTimeErrorExtra *extra);\n> + DateTimeErrorExtra * extra);\n>\n> There are a lot of these diffs. PG code doesn't leave an extra space\nbetween\n> variable name and *.\n\nThose appeared from running pg_indent. I've removed them all.\n\n> /* Handle the integer part */\n> - if (!int64_multiply_add(val, scale, &itm_in->tm_usec))\n> + if (pg_mul_add_s64_overflow(val, scale, &itm_in->tm_usec))\n>\n> I think this is a good change, since we are moving the function to\nint.h where\n> it belongs. We could separate these kind of changes into another patch\nfor easy\n> review.\n\nI've separated this out into another patch attached to this email.\nShould I start a new email thread or is it ok to include it in this\none?\n\n> +\n> + result->day = days;\n> + if (pg_mul_add_s32_overflow(weeks, 7, &result->day))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n>\n> I think such changes are also good, but probably a separate patch for\nease of\n> review.\n\nI've separated a patch for this too, which I've also included in this\nemail.\n\n> secs = rint(secs * USECS_PER_SEC);\n> - result->time = hours mj* ((int64) SECS_PER_HOUR * USECS_PER_SEC) +\n> - mins * ((int64) SECS_PER_MINUTE * USECS_PER_SEC) +\n> - (int64) secs;\n> +\n> + result->time = secs;\n> + if (pg_mul_add_s64_overflow(mins, ((int64) SECS_PER_MINUTE *\n> USECS_PER_SEC), &result->time))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n> + if (pg_mul_add_s64_overflow(hours, ((int64) SECS_PER_HOUR *\n> USECS_PER_SEC), &result->time))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n>\n> shouldn't this be\n> secs = rint(secs);\n> 
result->time = 0;\n> pg_mul_add_s64_overflow(secs, USECS_PER_SEC, &result->time) to\ncatch\n> overflow error early?\n\nThe problem is that `secs = rint(secs)` rounds the seconds too early\nand loses any fractional seconds. Do we have an overflow detecting\nmultiplication function for floats?\n\n> + if TIMESTAMP_IS_NOBEGIN\n> + (dt2)\n>\n> Better be written as if (TIMESTAMP_IS_NOBEGIN(dt2))? There are more\ncorrections\n> like this.\n\nI think this may have also been done by pg_indent, I've reverted all\nthe examples of this.\n\n> + if (INTERVAL_NOT_FINITE(result))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n>\n> Probably, I added these kind of checks. But I don't remember if those\nare\n> defensive checks or whether it's really possible that the arithmetic\nabove\n> these lines can yield an non-finite interval.\n\nThese checks appear in `make_interval`, `justify_X`,\n`interval_um_internal`, `interval_pl`, `interval_mi`, `interval_mul`,\n`interval_div`. For all of these it's possible that the interval\noverflows/underflows the non-finite ranges, but does not\noverflow/underflow the data type. For example\n`SELECT INTERVAL '2147483646 months' + INTERVAL '1 month'` would error\non this check.\n\n\n> + else\n> + {\n> + result->time = -interval->time;\n> + result->day = -interval->day;\n> + result->month = -interval->month;\n> +\n> + if (INTERVAL_NOT_FINITE(result))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n>\n> If this error ever gets to the user, it could be confusing. Can we\nelaborate by\n> adding context e.g. 
errcontext(\"while negating an interval\") or some\nsuch?\n\nDone.\n\n> -\n> - result->time = -interval->time;\n> - /* overflow check copied from int4um */\n> - if (interval->time != 0 && SAMESIGN(result->time, interval->time))\n> - ereport(ERROR,\n> - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> - errmsg(\"interval out of range\")));\n> - result->day = -interval->day;\n> - if (interval->day != 0 && SAMESIGN(result->day, interval->day))\n> - ereport(ERROR,\n> - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> - errmsg(\"interval out of range\")));\n> - result->month = -interval->month;\n> - if (interval->month != 0 && SAMESIGN(result->month,\ninterval->month))\n> - ereport(ERROR,\n> - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> - errmsg(\"interval out of range\")));\n> + interval_um_internal(interval, result);\n>\n> Shouldn't we incorporate these checks in interval_um_internal()? I\ndon't think\n> INTERVAL_NOT_FINITE() covers all of those.\n\nI replaced these checks with the following:\n\n+ else if (interval->time == PG_INT64_MIN || interval->day == PG_INT32_MIN\n|| interval->month == PG_INT32_MIN)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n\nI think this covers the same overflow check but is maybe a bit more\nobvious. Unless, there's something I'm missing?\n\n> + /*\n> + * Subtracting two infinite intervals with different signs\nresults in an\n> + * infinite interval with the same sign as the left operand.\nSubtracting\n> + * two infinte intervals with the same sign results in an error.\n> + */\n>\n> I think we need someone to validate these assumptions and similar\nassumptions\n> in interval_pl(). Googling gives confusing results in some cases. 
I\nhave not\n> looked for IEEE standard around this specificallly.\n\nI used to Python and Rust to check my assumptions on the IEEE standard:\n\nPython:\n>>> float('inf') + float('inf')\ninf\n>>> float('-inf') + float('inf')\nnan\n>>> float('inf') + float('-inf')\nnan\n>>> float('-inf') + float('-inf')\n-inf\n\n>>> float('inf') - float('inf')\nnan\n>>> float('-inf') - float('inf')\n-inf\n>>> float('inf') - float('-inf')\ninf\n>>> float('-inf') - float('-inf')\nnan\n\nRust:\ninf + inf = inf\n-inf + inf = NaN\ninf + -inf = NaN\n-inf + -inf = -inf\n\ninf - inf = NaN\n-inf - inf = -inf\ninf - -inf = inf\n-inf - -inf = NaN\n\nI'll try and look up the actual standard and see what it says.\n\n> + if (INTERVAL_NOT_FINITE(interval))\n> + {\n> + double r = NonFiniteIntervalPart(type, val, lowunits,\n> +\n INTERVAL_IS_NOBEGIN(interval),\n> + false);\n> +\n> + if (r)\n>\n> I see that this code is very similar to the corresponding code in\ntimestamp and\n> timestamptz, so it's bound to be correct. But I always thought float\nequality\n> is unreliable. if (r) is equivalent to if (r == 0.0) so it will not\nwork as\n> intended. But may be (float) 0.0 is a special value for which equality\nholds\n> true.\n\nI'm not familiar with float equality being unreliable, but I'm by no\nmeans a C or float expert. Can you link me to some docs/explanation?\n\n> +static inline bool\n> +pg_mul_add_s64_overflow(int64 val, int64 multiplier, int64 *sum)\n>\n> I think this needs a prologue similar to int64_multiply_add(), that\nthe patch\n> removes. Similarly for pg_mul_add_s32_overflow().\n\nI've added this to the first patch.\n\nThanks for the review! Sorry for the delayed response.\n\n- Joe Koshakow",
"msg_date": "Sat, 18 Mar 2023 14:48:33 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
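Joe's replacement for the copied SAMESIGN tests relies on a two's-complement fact: only the type's minimum value has no representable negation, so guarding on that single value is sufficient. A tiny sketch of the guard; the helper name is ours for illustration, since the patch inlines the comparison directly in interval_um.

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * In two's complement, -INT64_MIN does not fit in int64, so unary
 * negation overflows exactly when the input is INT64_MIN (likewise
 * INT32_MIN for the day and month fields). Checking for that one
 * value up front replaces the after-the-fact SAMESIGN tests.
 */
static bool negate_s64(int64_t v, int64_t *result)
{
    if (v == INT64_MIN)
        return true;            /* negation would overflow */
    *result = -v;
    return false;
}
```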
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Thu, Mar 9, 2023 at 12:42 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\n> wrote:\n>> There are a lot of these diffs. PG code doesn't leave an extra space\n>> between variable name and *.\n\n> Those appeared from running pg_indent. I've removed them all.\n\nMore specifically, those are from running pg_indent with an obsolete\ntypedefs list. Good practice is to fetch an up-to-date list from\nthe buildfarm:\n\ncurl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o .../typedefs.list\n\nand use that. (If your patch adds any typedefs, you can then add them\nto that list.) There's been talk of trying harder to keep\nsrc/tools/pgindent/typedefs.list up to date, but not much has happened\nyet.\n\n> I've separated this out into another patch attached to this email.\n> Should I start a new email thread or is it ok to include it in this\n> one?\n\nHaving separate threads with interdependent patches is generally a\nbad idea :-( ... the cfbot certainly won't cope.\n\n>> I see that this code is very similar to the corresponding code in\n>> timestamp and\n>> timestamptz, so it's bound to be correct. But I always thought float\n>> equality\n>> is unreliable. if (r) is equivalent to if (r == 0.0) so it will not\n>> work as\n>> intended. But may be (float) 0.0 is a special value for which equality\n>> holds\n>> true.\n\n> I'm not familiar with float equality being unreliable, but I'm by no\n> means a C or float expert. Can you link me to some docs/explanation?\n\nThe specific issue with float zero is that plus zero and minus zero\nare distinct concepts with distinct bit patterns, but the IEEE spec\nsays that they compare as equal. The C standard says about \"if\":\n\n [#1] The controlling expression of an if statement shall\n have scalar type.\n [#2] In both forms, the first substatement is executed if\n the expression compares unequal to 0. 
In the else form, the\n second substatement is executed if the expression compares\n equal to 0.\n\nso it sure looks to me like a float control expression is valid and\nminus zero should be treated as \"false\". Nonetheless, personally\nI'd consider this to be poor style and would write \"r != 0\" or\n\"r != 0.0\" rather than depending on that.\n\nBTW, this may already need a rebase over 75bd846b6.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Mar 2023 15:08:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
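Tom's point about minus zero can be checked in a few lines: +0.0 and -0.0 carry distinct bit patterns, yet IEEE 754 makes them compare equal, so a bare double used as an if-condition treats -0.0 as false. A small sketch (the helper names are ours, for illustration):

```c
#include <stdbool.h>
#include <string.h>

/* Mirrors "if (r)": true iff r compares unequal to zero. */
static bool double_is_truthy(double r)
{
    return r ? true : false;
}

/* True iff the two doubles have identical bit patterns. */
static bool same_bits(double a, double b)
{
    return memcmp(&a, &b, sizeof(double)) == 0;
}
```

So the code is technically valid, which is why writing the explicit "r != 0.0" is a readability preference rather than a behavior change.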
{
"msg_contents": "On Sat, Mar 18, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Joseph Koshakow <koshy44@gmail.com> writes:\n>> On Thu, Mar 9, 2023 at 12:42 PM Ashutosh Bapat <\nashutosh.bapat.oss@gmail.com>\n>> wrote:\n>>> There are a lot of these diffs. PG code doesn't leave an extra space\n>>> between variable name and *.\n>\n>> Those appeared from running pg_indent. I've removed them all.\n>\n> More specifically, those are from running pg_indent with an obsolete\n> typedefs list. Good practice is to fetch an up-to-date list from\n> the buildfarm:\n>\n> curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o\n.../typedefs.list\n>\n> and use that. (If your patch adds any typedefs, you can then add them\n> to that list.) There's been talk of trying harder to keep\n> src/tools/pgindent/typedefs.list up to date, but not much has happened\n> yet.\n\nI must be doing something wrong because even after doing that I get the\nsame strange formatting. Specifically from the root directory I ran\n curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o\nsrc/tools/pgindent/typedefs.list\n src/tools/pgindent/pgindent src/backend/utils/adt/datetime.c\nsrc/include/common/int.h src/backend/utils/adt/timestamp.c\nsrc/backend/utils/adt/date.c src/backend/utils/adt/formatting.c\nsrc/backend/utils/adt/selfuncs.c src/include/datatype/timestamp.h\nsrc/include/utils/timestamp.h\n\n> The specific issue with float zero is that plus zero and minus zero\n> are distinct concepts with distinct bit patterns, but the IEEE spec\n> says that they compare as equal. The C standard says about \"if\":\n>\n> [#1] The controlling expression of an if statement shall\n> have scalar type.\n> [#2] In both forms, the first substatement is executed if\n> the expression compares unequal to 0. 
In the else form, the\n> second substatement is executed if the expression compares\n> equal to 0.\n>\n> so it sure looks to me like a float control expression is valid and\n> minus zero should be treated as \"false\". Nonetheless, personally\n> I'd consider this to be poor style and would write \"r != 0\" or\n> \"r != 0.0\" rather than depending on that.\n\nThanks for the info, I've updated the three instances of the check to\nbe \"r != 0.0\"\n\n> BTW, this may already need a rebase over 75bd846b6.\n\nThe patches in this email should be rebased over master.\n\n- Joe Koshakow",
"msg_date": "Sat, 18 Mar 2023 15:33:58 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Sat, Mar 18, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> More specifically, those are from running pg_indent with an obsolete\n>> typedefs list.\n\n> I must be doing something wrong because even after doing that I get the\n> same strange formatting. Specifically from the root directory I ran\n\nHmm, I dunno what's going on there. When I do this:\n\n> curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o\n> src/tools/pgindent/typedefs.list\n\nI end up with a plausible set of updates, notably\n\n$ git diff\ndiff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list\nindex 097f42e1b3..667f8e13ed 100644\n--- a/src/tools/pgindent/typedefs.list\n+++ b/src/tools/pgindent/typedefs.list\n...\n@@ -545,10 +548,12 @@ DataDumperPtr\n DataPageDeleteStack\n DatabaseInfo\n DateADT\n+DateTimeErrorExtra\n Datum\n DatumTupleFields\n DbInfo\n DbInfoArr\n+DbLocaleInfo\n DeClonePtrType\n DeadLockState\n DeallocateStmt\n\nso it sure ought to know DateTimeErrorExtra is a typedef.\nI then tried pgindent'ing datetime.c and timestamp.c,\nand it did not want to change either file. I do get\ndiffs like\n\n DecodeDateTime(char **field, int *ftype, int nf,\n int *dtype, struct pg_tm *tm, fsec_t *fsec, int *tzp,\n- DateTimeErrorExtra *extra)\n+ DateTimeErrorExtra * extra)\n {\n int fmask = 0,\n\nif I try to pgindent datetime.c with typedefs.list as it\nstands in HEAD. That's pretty much pgindent's normal\nbehavior when it doesn't recognize a name as a typedef.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Mar 2023 15:55:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 3:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > On Sat, Mar 18, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> More specifically, those are from running pg_indent with an obsolete\n> >> typedefs list.\n>\n> > I must be doing something wrong because even after doing that I get\nthe\n> > same strange formatting. Specifically from the root directory I ran\n>\n> Hmm, I dunno what's going on there. When I do this:\n>\n> > curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o\n> > src/tools/pgindent/typedefs.list\n>\n> I end up with a plausible set of updates, notably\n>\n> $ git diff\n> diff --git a/src/tools/pgindent/typedefs.list\nb/src/tools/pgindent/typedefs.list\n> index 097f42e1b3..667f8e13ed 100644\n> --- a/src/tools/pgindent/typedefs.list\n> +++ b/src/tools/pgindent/typedefs.list\n> ...\n> @@ -545,10 +548,12 @@ DataDumperPtr\n> DataPageDeleteStack\n> DatabaseInfo\n> DateADT\n> +DateTimeErrorExtra\n> Datum\n> DatumTupleFields\n> DbInfo\n> DbInfoArr\n> +DbLocaleInfo\n> DeClonePtrType\n> DeadLockState\n> DeallocateStmt\n>\n> so it sure ought to know DateTimeErrorExtra is a typedef.\n> I then tried pgindent'ing datetime.c and timestamp.c,\n> and it did not want to change either file. I do get\n> diffs like\n\n> DecodeDateTime(char **field, int *ftype, int nf,\n> int *dtype, struct pg_tm *tm, fsec_t *fsec, int *tzp,\n> - DateTimeErrorExtra *extra)\n> + DateTimeErrorExtra * extra)\n> {\n> int fmask = 0,\n>\n> if I try to pgindent datetime.c with typedefs.list as it\n> stands in HEAD. That's pretty much pgindent's normal\n> behavior when it doesn't recognize a name as a typedef.\n\nI must have been doing something wrong because I tried again today and\nit worked fine. 
However, I go get a lot of changes like the following:\n\n - if TIMESTAMP_IS_NOBEGIN(dt2)\n - ereport(ERROR,\n -\n(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n - errmsg(\"timestamp out of\nrange\")));\n + if TIMESTAMP_IS_NOBEGIN\n + (dt2)\n + ereport(ERROR,\n +\n(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n + errmsg(\"timestamp out of\nrange\")));\n\nShould I keep these pgindent changes or keep it the way I have it?\n\n- Joe Koshakow\n\nOn Sat, Mar 18, 2023 at 3:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> Joseph Koshakow <koshy44@gmail.com> writes:> > On Sat, Mar 18, 2023 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> >> More specifically, those are from running pg_indent with an obsolete> >> typedefs list.>> > I must be doing something wrong because even after doing that I get the> > same strange formatting. Specifically from the root directory I ran>> Hmm, I dunno what's going on there. When I do this:>> > curl https://buildfarm.postgresql.org/cgi-bin/typedefs.pl -o> > src/tools/pgindent/typedefs.list>> I end up with a plausible set of updates, notably>> $ git diff> diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list> index 097f42e1b3..667f8e13ed 100644> --- a/src/tools/pgindent/typedefs.list> +++ b/src/tools/pgindent/typedefs.list> ...> @@ -545,10 +548,12 @@ DataDumperPtr> DataPageDeleteStack> DatabaseInfo> DateADT> +DateTimeErrorExtra> Datum> DatumTupleFields> DbInfo> DbInfoArr> +DbLocaleInfo> DeClonePtrType> DeadLockState> DeallocateStmt>> so it sure ought to know DateTimeErrorExtra is a typedef.> I then tried pgindent'ing datetime.c and timestamp.c,> and it did not want to change either file. I do get> diffs like> DecodeDateTime(char **field, int *ftype, int nf,> int *dtype, struct pg_tm *tm, fsec_t *fsec, int *tzp,> - DateTimeErrorExtra *extra)> + DateTimeErrorExtra * extra)> {> int fmask = 0,>> if I try to pgindent datetime.c with typedefs.list as it> stands in HEAD. 
That's pretty much pgindent's normal> behavior when it doesn't recognize a name as a typedef.I must have been doing something wrong because I tried again today andit worked fine. However, I go get a lot of changes like the following: - if TIMESTAMP_IS_NOBEGIN(dt2) - ereport(ERROR, - (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), - errmsg(\"timestamp out of range\"))); + if TIMESTAMP_IS_NOBEGIN + (dt2) + ereport(ERROR, + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE), + errmsg(\"timestamp out of range\")));Should I keep these pgindent changes or keep it the way I have it?- Joe Koshakow",
"msg_date": "Sun, 19 Mar 2023 16:46:58 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> I must have been doing something wrong because I tried again today and\n> it worked fine. However, I go get a lot of changes like the following:\n\n> - if TIMESTAMP_IS_NOBEGIN(dt2)\n> - ereport(ERROR,\n> -\n> (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> - errmsg(\"timestamp out of\n> range\")));\n> + if TIMESTAMP_IS_NOBEGIN\n> + (dt2)\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"timestamp out of\n> range\")));\n\n> Should I keep these pgindent changes or keep it the way I have it?\n\nDid you actually write \"if TIMESTAMP_IS_NOBEGIN(dt2)\" and not\n\"if (TIMESTAMP_IS_NOBEGIN(dt2))\"? If the former, I'm not surprised\nthat pgindent gets confused. The parentheses are required by the\nC standard. Your code might accidentally work because the macro\nhas parentheses internally, but call sites have no business\nknowing that. For example, it would be completely legit to change\nTIMESTAMP_IS_NOBEGIN to be a plain function, and then this would be\nsyntactically incorrect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Mar 2023 17:13:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 5:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Did you actually write \"if TIMESTAMP_IS_NOBEGIN(dt2)\" and not\n> \"if (TIMESTAMP_IS_NOBEGIN(dt2))\"? If the former, I'm not surprised\n> that pgindent gets confused. The parentheses are required by the\n> C standard. Your code might accidentally work because the macro\n> has parentheses internally, but call sites have no business\n> knowing that. For example, it would be completely legit to change\n> TIMESTAMP_IS_NOBEGIN to be a plain function, and then this would be\n> syntactically incorrect.\n\nOh duh. I've been doing too much Rust development and did this without\nthinking. I've attached a patch with a fix.\n\n- Joe Koshakow",
"msg_date": "Sun, 19 Mar 2023 17:46:02 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 1:04 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> The patches in this email should be rebased over master.\n>\n\nReviewed 0001 -\nLooks good to me. The new function is properly placed along with other\nsigned 64 bit functions. All existing calls to int64_multiply_add()\nhave been replaced with the new function and negated the result.\n\nReviewed 0002\n+ result->day = days;\n+ if (pg_mul_add_s32_overflow(weeks, 7, &result->day))\n\nYou don't need to do this, but looks like we can add DAYS_PER_WEEK macro and\nuse it here.\n\nThe first two patches look good to me; ready for a committer. Can be\ncommitted independent of the third patch.\n\nWill look at the third patch soon.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 24 Mar 2023 19:13:39 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 9:43 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n>\n> You don't need to do this, but looks like we can add DAYS_PER_WEEK\nmacro and\n> use it here.\n\nI've attached a patch with this new macro. There's probably tons of\nplaces it can be used instead of hardcoding the number 7, but I'll save\nthat for a future patch.\n\n- Joe Koshakow",
"msg_date": "Sat, 25 Mar 2023 11:42:58 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "In terms of adding/subtracting infinities, the IEEE standard is pay\nwalled and I don't have a copy. I tried finding information online but\nI also wasn't able to find anything useful. I additionally checked to see\nthe results of C++, C, and Java, and they all match which increases my\nconfidence that we're doing the right thing. Does anyone happen to have\na copy of the standard and can confirm?\n\n- Joe Koshakow\n\nIn terms of adding/subtracting infinities, the IEEE standard is paywalled and I don't have a copy. I tried finding information online butI also wasn't able to find anything useful. I additionally checked to seethe results of C++, C, and Java, and they all match which increases myconfidence that we're doing the right thing. Does anyone happen to havea copy of the standard and can confirm?- Joe Koshakow",
"msg_date": "Sat, 25 Mar 2023 12:02:35 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> In terms of adding/subtracting infinities, the IEEE standard is pay\n> walled and I don't have a copy. I tried finding information online but\n> I also wasn't able to find anything useful. I additionally checked to see\n> the results of C++, C, and Java, and they all match which increases my\n> confidence that we're doing the right thing. Does anyone happen to have\n> a copy of the standard and can confirm?\n\nI think you can take it as read that simple C test programs on modern\nplatforms will exhibit IEEE-compliant handling of float infinities.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 15:58:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
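Tom's claim is easy to confirm in a few lines of C. The spot-check below matches the Python/Rust tables posted earlier in the thread: same-signed additions (and opposite-signed subtractions) stay infinite with that sign, while the conflicting combinations yield NaN. These are the rules the interval_pl()/interval_mi() comments encode for infinite intervals.

```c
#include <math.h>
#include <stdbool.h>

/*
 * Spot-check of IEEE-754 double infinity arithmetic:
 *   inf + inf   = inf      inf - inf    = NaN
 *   -inf + inf  = NaN      inf - -inf   = inf
 *   -inf + -inf = -inf     -inf - -inf  = NaN
 */
static bool ieee_inf_rules_hold(void)
{
    double inf = INFINITY;

    return isinf(inf + inf) && (inf + inf) > 0 &&
           isinf(-inf + -inf) && (-inf + -inf) < 0 &&
           isnan(inf + -inf) &&
           isnan(-inf + inf) &&
           isnan(inf - inf) &&
           isinf(inf - -inf) && (inf - -inf) > 0 &&
           isinf(-inf - inf) && (-inf - inf) < 0 &&
           isnan(-inf - -inf);
}
```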
{
"msg_contents": "On Sat, 25 Mar 2023 at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > In terms of adding/subtracting infinities, the IEEE standard is pay\n> > walled and I don't have a copy. I tried finding information online but\n> > I also wasn't able to find anything useful. I additionally checked to see\n> > the results of C++, C, and Java, and they all match which increases my\n> > confidence that we're doing the right thing. Does anyone happen to have\n> > a copy of the standard and can confirm?\n>\n> I think you can take it as read that simple C test programs on modern\n> platforms will exhibit IEEE-compliant handling of float infinities.\n>\n\nAdditionally, the Java language specification claims to follow IEEE 754:\n\nhttps://docs.oracle.com/javase/specs/jls/se11/html/jls-15.html#jls-15.18.2\n\nSo either C and Java agree with each other and with the spec, or they\ndisagree in the same way even while at least one of them explicitly claims\nto be following the spec. I think you're on pretty firm ground.\n\nOn Sat, 25 Mar 2023 at 15:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:Joseph Koshakow <koshy44@gmail.com> writes:\n> In terms of adding/subtracting infinities, the IEEE standard is pay\n> walled and I don't have a copy. I tried finding information online but\n> I also wasn't able to find anything useful. I additionally checked to see\n> the results of C++, C, and Java, and they all match which increases my\n> confidence that we're doing the right thing. 
Does anyone happen to have\n> a copy of the standard and can confirm?\n\nI think you can take it as read that simple C test programs on modern\nplatforms will exhibit IEEE-compliant handling of float infinities.\nAdditionally, the Java language specification claims to follow IEEE 754:https://docs.oracle.com/javase/specs/jls/se11/html/jls-15.html#jls-15.18.2So either C and Java agree with each other and with the spec, or they disagree in the same way even while at least one of them explicitly claims to be following the spec. I think you're on pretty firm ground.",
"msg_date": "Sat, 25 Mar 2023 16:25:19 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 9:13 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Fri, Mar 24, 2023 at 9:43 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > You don't need to do this, but looks like we can add DAYS_PER_WEEK macro and\n> > use it here.\n>\n> I've attached a patch with this new macro. There's probably tons of\n> places it can be used instead of hardcoding the number 7, but I'll save\n> that for a future patch.\n\nThanks. Yes, changing other existing usages is out of scope for this patch.\n\nLooks good to me.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 27 Mar 2023 19:12:35 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 12:18 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> The problem is that `secs = rint(secs)` rounds the seconds too early\n> and loses any fractional seconds. Do we have an overflow detecting\n> multiplication function for floats?\n\nWe have float8_mul() which checks for overflow. typedef double float8;\n\n>\n> > + if (INTERVAL_NOT_FINITE(result))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"interval out of range\")));\n> >\n> > Probably, I added these kind of checks. But I don't remember if those are\n> > defensive checks or whether it's really possible that the arithmetic above\n> > these lines can yield an non-finite interval.\n>\n> These checks appear in `make_interval`, `justify_X`,\n> `interval_um_internal`, `interval_pl`, `interval_mi`, `interval_mul`,\n> `interval_div`. For all of these it's possible that the interval\n> overflows/underflows the non-finite ranges, but does not\n> overflow/underflow the data type. For example\n> `SELECT INTERVAL '2147483646 months' + INTERVAL '1 month'` would error\n> on this check.\n\nWithout this patch\npostgres@64807=#SELECT INTERVAL '2147483646 months' + INTERVAL '1 month';\n ?column?\n------------------------\n 178956970 years 7 mons\n(1 row)\n\nThat result looks correct\n\npostgres@64807=#select 178956970 * 12 + 7;\n ?column?\n------------\n 2147483647\n(1 row)\n\nSo some backward compatibility break. I don't think we can avoid the\nbackward compatibility break without expanding interval structure and\nthus causing on-disk breakage. 
But we can reduce the chances of\nbreaking, if we change INTERVAL_NOT_FINITE to check all the three\nfields, instead of just month.\n\n>\n>\n> > + else\n> > + {\n> > + result->time = -interval->time;\n> > + result->day = -interval->day;\n> > + result->month = -interval->month;\n> > +\n> > + if (INTERVAL_NOT_FINITE(result))\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"interval out of range\")));\n> >\n> > If this error ever gets to the user, it could be confusing. Can we elaborate by\n> > adding context e.g. errcontext(\"while negating an interval\") or some such?\n>\n> Done.\n\nThanks. Can we add relevant contexts at similar other places?\n\nAlso if we use all the three fields, we will need to add such checks\nin interval_justify_hours()\n\n>\n> I replaced these checks with the following:\n>\n> + else if (interval->time == PG_INT64_MIN || interval->day == PG_INT32_MIN || interval->month == PG_INT32_MIN)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n>\n> I think this covers the same overflow check but is maybe a bit more\n> obvious. Unless, there's something I'm missing?\n\nThanks. Your current version is closer to int4um().\n\nSome more review comments in the following email.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 28 Mar 2023 19:08:40 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 1:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I think you can take it as read that simple C test programs on modern\n> platforms will exhibit IEEE-compliant handling of float infinities.\n>\n\nFor the record, I tried the attached. It gives a warning at compilation time.\n\n$gcc float_inf.c\nfloat_inf.c: In function ‘main’:\nfloat_inf.c:10:17: warning: division by zero [-Wdiv-by-zero]\n 10 | float inf = 1.0/0;\n | ^\nfloat_inf.c:11:20: warning: division by zero [-Wdiv-by-zero]\n 11 | float n_inf = -1.0/0;\n | ^\n$ ./a.out\ninf = inf\n-inf = -inf\ninf + inf = inf\ninf + -inf = -nan\n-inf + inf = -nan\n-inf + -inf = -inf\ninf - inf = -nan\ninf - -inf = inf\n-inf - inf = -inf\n-inf - -inf = -nan\nfloat 0.0 equals 0.0\nfloat 1.0 equals 1.0\n 5.0 * inf = inf\n 5.0 * - inf = -inf\n 5.0 / inf = 0.000000\n 5.0 / - inf = -0.000000\n inf / 5.0 = inf\n - inf / 5.0 = -inf\n\nThe changes in the patch are compliant with the observations above.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 28 Mar 2023 19:13:54 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 3:16 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n>\n>\n> On Sun, Mar 19, 2023 at 5:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Did you actually write \"if TIMESTAMP_IS_NOBEGIN(dt2)\" and not\n> > \"if (TIMESTAMP_IS_NOBEGIN(dt2))\"? If the former, I'm not surprised\n> > that pgindent gets confused. The parentheses are required by the\n> > C standard. Your code might accidentally work because the macro\n> > has parentheses internally, but call sites have no business\n> > knowing that. For example, it would be completely legit to change\n> > TIMESTAMP_IS_NOBEGIN to be a plain function, and then this would be\n> > syntactically incorrect.\n>\n> Oh duh. I've been doing too much Rust development and did this without\n> thinking. I've attached a patch with a fix.\n>\n\nThanks for fixing this.\n\nOn this latest patch, I have one code comment\n\n@@ -3047,7 +3180,30 @@ timestamptz_pl_interval_internal(TimestampTz timestamp,\n TimestampTz result;\n int tz;\n\n- if (TIMESTAMP_NOT_FINITE(timestamp))\n+ /*\n+ * Adding two infinites with the same sign results in an infinite\n+ * timestamp with the same sign. Adding two infintes with different signs\n+ * results in an error.\n+ */\n+ if (INTERVAL_IS_NOBEGIN(span))\n+ {\n+ if TIMESTAMP_IS_NOEND(timestamp)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n+ else\n+ TIMESTAMP_NOBEGIN(result);\n+ }\n+ else if (INTERVAL_IS_NOEND(span))\n+ {\n+ if TIMESTAMP_IS_NOBEGIN(timestamp)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n+ errmsg(\"interval out of range\")));\n+ else\n+ TIMESTAMP_NOEND(result);\n+ }\n+ else if (TIMESTAMP_NOT_FINITE(timestamp))\n\nThis code is duplicated in timestamp_pl_interval(). We could create a function\nto encode the infinity handling rules and then call it in these two places. The\nargument types are different, Timestamp and TimestampTz viz. 
which map to int64,\nso shouldn't be a problem. But it will be slightly unreadable. Or use macros\nbut then it will be difficult to debug.\n\nWhat do you think?\n\nNext I will review the test changes and also make sure that every\noperator that has interval as one of its operands or the result has been\ncovered in the code. This is the following list\n\n#select oprname, oprcode from pg_operator where oprleft =\n'interval'::regtype or oprright = 'interval'::regtype or oprresult =\n'interval'::regtype;\n oprname | oprcode\n---------+-------------------------\n + | date_pl_interval\n - | date_mi_interval\n + | timestamptz_pl_interval\n - | timestamptz_mi\n - | timestamptz_mi_interval\n = | interval_eq\n <> | interval_ne\n < | interval_lt\n <= | interval_le\n > | interval_gt\n >= | interval_ge\n - | interval_um\n + | interval_pl\n - | interval_mi\n - | time_mi_time\n * | interval_mul\n * | mul_d_interval\n / | interval_div\n + | time_pl_interval\n - | time_mi_interval\n + | timetz_pl_interval\n - | timetz_mi_interval\n + | interval_pl_time\n + | timestamp_pl_interval\n - | timestamp_mi\n - | timestamp_mi_interval\n + | interval_pl_date\n + | interval_pl_timetz\n + | interval_pl_timestamp\n + | interval_pl_timestamptz\n(30 rows)\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 28 Mar 2023 19:17:48 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 7:17 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n > make sure that every\n> operator that interval as one of its operands or the result has been\n> covered in the code.\n\ntime_mi_time - do we want to add an Assert to make sure that this\nfunction does not produce an Interval structure which looks like\nnon-finite interval?\n\nmultiplying an interval by infinity throws an error\n#select '5 days'::interval * 'infinity'::float8;\n2023-03-29 19:40:15.797 IST [136240] ERROR: interval out of range\n2023-03-29 19:40:15.797 IST [136240] STATEMENT: select '5\ndays'::interval * 'infinity'::float8;\nERROR: interval out of range\n\nI think this should produce an infinite interval now. Attached patch\nto fix this, to be applied on top of your patch. With the patch\n#select '5 days'::interval * 'infinity'::float8;\n ?column?\n----------\n infinity\n(1 row)\n\nGoing through the tests now.\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 31 Mar 2023 15:46:36 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "I hurried too much on the previous patch. It introduced other\nproblems. Attached is a better patch and also fixes problem below\n#select 'infinity'::interval * 0;\n ?column?\n----------\n infinity\n(1 row)\n\nwith the patch we see\n#select 'infinity'::interval * 0;\n2023-03-31 18:00:43.131 IST [240892] ERROR: interval out of range\n2023-03-31 18:00:43.131 IST [240892] STATEMENT: select\n'infinity'::interval * 0;\nERROR: interval out of range\n\nwhich looks more appropriate given 0 * inf = Nan for float.\n\nThere's some way to avoid separate checks for infinite-ness of\ninterval and factor and use a single block using some integer\narithmetic. But I think this is more readable. So I avoided doing\nthat. Let me know if this works for you.\n\nAlso added some test cases.\n\n--\nBest Wishes,\nAshutosh Bapat\n\nOn Fri, Mar 31, 2023 at 3:46 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Mar 28, 2023 at 7:17 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > make sure that every\n> > operator that interval as one of its operands or the result has been\n> > covered in the code.\n>\n> time_mi_time - do we want to add an Assert to make sure that this\n> function does not produce an Interval structure which looks like\n> non-finite interval?\n>\n> multiplying an interval by infinity throws an error\n> #select '5 days'::interval * 'infinity'::float8;\n> 2023-03-29 19:40:15.797 IST [136240] ERROR: interval out of range\n> 2023-03-29 19:40:15.797 IST [136240] STATEMENT: select '5\n> days'::interval * 'infinity'::float8;\n> ERROR: interval out of range\n>\n> I think this should produce an infinite interval now. Attached patch\n> to fix this, to be applied on top of your patch. With the patch\n> #select '5 days'::interval * 'infinity'::float8;\n> ?column?\n> ----------\n> infinity\n> (1 row)\n>\n> Going through the tests now.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat",
"msg_date": "Fri, 31 Mar 2023 18:16:12 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "> > The problem is that `secs = rint(secs)` rounds the seconds too early\n> > and loses any fractional seconds. Do we have an overflow detecting\n> > multiplication function for floats?\n>\n> We have float8_mul() which checks for overflow. typedef double float8;\n\nI've updated patch 2 to use this. I also realized that the implicit\ncast from double to int64 can also result in an overflow. For example,\neven after adding float8_mul() we can see this:\nSELECT make_interval(0, 0, 0, 0, 0,\n0,17976931348623157000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000);\n make_interval\n--------------------------\n -2562047788:00:54.775808\n(1 row)\n\nSo I added a check for FLOAT8_FITS_IN_INT64() and a test with this\nscenario.\n\n> Without this patch\n> postgres@64807=#SELECT INTERVAL '2147483646 months' + INTERVAL '1 month';\n> ?column?\n> ------------------------\n> 178956970 years 7 mons\n> (1 row)\n>\n> That result looks correct\n>\n> postgres@64807=#select 178956970 * 12 + 7;\n> ?column?\n> ------------\n> 2147483647\n>\n> (1 row)\n>\n> So some backward compatibility break. I don't think we can avoid the\n> backward compatibility break without expanding interval structure and\n> thus causing on-disk breakage. But we can reduce the chances of\n> breaking, if we change INTERVAL_NOT_FINITE to check all the three\n> fields, instead of just month.\n\nFor what it's worth I think that 2147483647 months only became a valid\ninterval in v15 as part of this commit [0]. 
It's also outside of the\ndocumented valid range [1], which is\n[-178000000 years, 178000000 years] or\n[-14833333 months, 14833333 months].\n\nThe rationale for only checking the month's field is that it's faster\nthan checking all three fields, though I'm not entirely sure if it's\nthe right trade-off. Any thoughts on this?\n\n> >\n> >\n> > > + else\n> > > + {\n> > > + result->time = -interval->time;\n> > > + result->day = -interval->day;\n> > > + result->month = -interval->month;\n> > > +\n> > > + if (INTERVAL_NOT_FINITE(result))\n> > > + ereport(ERROR,\n> > > + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> > > + errmsg(\"interval out of range\")));\n> > >\n> > > If this error ever gets to the user, it could be confusing. Can we\nelaborate by\n> > > adding context e.g. errcontext(\"while negating an interval\") or\nsome such?\n> >\n> > Done.\n>\n> Thanks. Can we add relevant contexts at similar other places?\n\nI've added an errcontext to all the errors of the form \"X out of\nrange\". My one concern is that some of the messages can be slightly\nconfusing. For example date arithmetic is converted to timestamp\narithmetic, so the errcontext talks about timestamps even though the\nactual operation used dates. For example,\n\nSELECT date 'infinity' + interval '-infinity';\nERROR: interval out of range\nCONTEXT: while adding an interval and timestamp\n\n> Also if we use all the three fields, we will need to add such checks\n> in interval_justify_hours()\n\nI added these for now because even if we stick to just using the month\nfield, it will be good future proofing.\n\n> @@ -3047,7 +3180,30 @@ timestamptz_pl_interval_internal(TimestampTz\ntimestamp,\n> TimestampTz result;\n> int tz;\n>\n> - if (TIMESTAMP_NOT_FINITE(timestamp))\n> + /*\n> + * Adding two infinites with the same sign results in an infinite\n> + * timestamp with the same sign. 
Adding two infintes with different\nsigns\n> + * results in an error.\n> + */\n> + if (INTERVAL_IS_NOBEGIN(span))\n> + {\n> + if TIMESTAMP_IS_NOEND(timestamp)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n> + else\n> + TIMESTAMP_NOBEGIN(result);\n> + }\n> + else if (INTERVAL_IS_NOEND(span))\n> + {\n> + if TIMESTAMP_IS_NOBEGIN(timestamp)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> + errmsg(\"interval out of range\")));\n> + else\n> + TIMESTAMP_NOEND(result);\n> + }\n> + else if (TIMESTAMP_NOT_FINITE(timestamp))\n>\n> This code is duplicated in timestamp_pl_interval(). We could create > a\nfunction\n> to encode the infinity handling rules and then call it in these two >\nplaces. The\n> argument types are different, Timestamp and TimestampTz viz. which map to\nin64,\n> so shouldn't be a problem. But it will be slightly unreadable. Or use\nmacros\n> but then it will be difficult to debug.\n>\n> What do you think?\n\nI was hoping that I could come up with a macro that we could re-use for\nall the similar logic. If that doesn't work then I'll try the helper\nfunctions. I'll update the patch in a follow-up email to give myself some\ntime to think about this.\n\n> time_mi_time - do we want to add an Assert to make sure that this\n> function does not produce an Interval structure which looks like\n> non-finite interval?\n\nSince the month and day field of the interval result is hard-coded as\n0, it's not possible to produce a non-finite interval result, but I\ndon't think it would hurt. I've added an assert to the end.\n\n> There's some way to avoid separate checks for infinite-ness of\n> interval and factor and use a single block using some integer\n> arithmetic. But I think this is more readable. So I avoided doing\n> that. 
Let me know if this works for you.\n\nI think the patch looks good, I've combined it with the existing patch.\n\n> Also added some test cases.\n\nI didn't see any tests in the patch, did you forget to include it?\n\nI've attached the updated patches. I've also rebased them against main.\n\nThanks,\nJoe Koshakow\n\n[0]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e39f9904671082c5ad3a2c5acbdbd028fa93bf35\n[1] https://www.postgresql.org/docs/15/datatype-datetime.html\n\nOn Fri, Mar 31, 2023 at 8:46 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> I hurried too much on the previous patch. It introduced other\n> problems. Attached is a better patch and also fixes problem below\n> #select 'infinity'::interval * 0;\n> ?column?\n> ----------\n> infinity\n> (1 row)\n>\n> with the patch we see\n> #select 'infinity'::interval * 0;\n> 2023-03-31 18:00:43.131 IST [240892] ERROR: interval out of range\n> 2023-03-31 18:00:43.131 IST [240892] STATEMENT: select\n> 'infinity'::interval * 0;\n> ERROR: interval out of range\n>\n> which looks more appropriate given 0 * inf = Nan for float.\n>\n> There's some way to avoid separate checks for infinite-ness of\n> interval and factor and use a single block using some integer\n> arithmetic. But I think this is more readable. So I avoided doing\n> that. 
Let me know if this works for you.\n>\n> Also added some test cases.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n> On Fri, Mar 31, 2023 at 3:46 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Mar 28, 2023 at 7:17 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > make sure that every\n> > > operator that interval as one of its operands or the result has been\n> > > covered in the code.\n> >\n> > time_mi_time - do we want to add an Assert to make sure that this\n> > function does not produce an Interval structure which looks like\n> > non-finite interval?\n> >\n> > multiplying an interval by infinity throws an error\n> > #select '5 days'::interval * 'infinity'::float8;\n> > 2023-03-29 19:40:15.797 IST [136240] ERROR: interval out of range\n> > 2023-03-29 19:40:15.797 IST [136240] STATEMENT: select '5\n> > days'::interval * 'infinity'::float8;\n> > ERROR: interval out of range\n> >\n> > I think this should produce an infinite interval now. Attached patch\n> > to fix this, to be applied on top of your patch. With the patch\n> > #select '5 days'::interval * 'infinity'::float8;\n> > ?column?\n> > ----------\n> > infinity\n> > (1 row)\n> >\n> > Going through the tests now.\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n>",
"msg_date": "Sat, 1 Apr 2023 13:23:46 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "> > This code is duplicated in timestamp_pl_interval(). We could create a\nfunction\n> > to encode the infinity handling rules and then call it in these two\nplaces. The\n> > argument types are different, Timestamp and TimestampTz viz. which map\nto in64,\n> > so shouldn't be a problem. But it will be slightly unreadable. Or use\nmacros\n> > but then it will be difficult to debug.\n> >\n> > What do you think?\n>\n> I was hoping that I could come up with a macro that we could re-use for\n> all the similar logic. If that doesn't work then I'll try the helper\n> functions. I'll update the patch in a follow-up email to give myself some\n> time to think about this.\n\nSo I checked where are all the places that we do arithmetic between two\npotentially infinite values, and it's at the top of the following\nfunctions:\n\n- timestamp_mi()\n- timestamp_pl_interval()\n- timestamptz_pl_interval_internal()\n- interval_pl()\n- interval_mi()\n- timestamp_age()\n- timestamptz_age()\n\nI was able to get an extremely generic macro to work, but it was very\nugly and unmaintainable in my view. Instead I took the following steps\nto clean this up:\n\n- I rewrote interval_mi() to be implemented in terms of interval_um()\nand interval_pl().\n- I abstracted the infinite arithmetic from timestamp_mi(),\ntimestamp_age(), and timestamptz_age() into a helper function called\ninfinite_timestamp_mi_internal()\n- I abstracted the infinite arithmetic from timestamp_pl_interval() and\ntimestamptz_pl_interval_internal() into a helper function called\ninfinite_timestamp_pl_interval_internal()\n\nThe helper functions return a bool to indicate if they set the result.\nAn alternative approach would be to check for finiteness in either of\nthe inputs, then call the helper function which would have a void\nreturn type. 
I think this alternative approach would be slightly more\nreadable, but involve duplicate finiteness checks before and during the\nhelper function.\n\nI've attached a patch with these changes that is meant to be applied\nover the previous three patches. Let me know what you think.\n\nWith this patch I believe that I've addressed all open comments except\nfor the discussion around whether we should check just the months field\nor all three fields for finiteness. Please let me know if I've missed\nsomething.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sun, 2 Apr 2023 17:25:50 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> I've attached a patch with these changes that is meant to be applied\n> over the previous three patches. Let me know what you think.\n\nDoes not really seem like an improvement to me --- I think it's\nadding more complexity than it removes. The changes in CONTEXT\nmessages are definitely not an improvement; you might as well\nnot have the context messages at all as give misleading ones.\n(Those context messages are added by the previous patches, no?\nThey do not really seem per project style, and I'm not sure\nthat they are helpful.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Apr 2023 17:36:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": ">On Sun, Apr 2, 2023 at 5:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > I've attached a patch with these changes that is meant to be applied\n> > over the previous three patches. Let me know what you think.\n>\n> Does not really seem like an improvement to me --- I think it's\n> adding more complexity than it removes. The changes in CONTEXT\n> messages are definitely not an improvement; you might as well\n> not have the context messages at all as give misleading ones.\n> (Those context messages are added by the previous patches, no?\n> They do not really seem per project style, and I'm not sure\n> that they are helpful.)\n\nYes they were added in the previous patch,\nv17-0003-Add-infinite-interval-values.patch. I also had the following\nnote about them.\n\n> I've added an errcontext to all the errors of the form \"X out of\n> range\". My one concern is that some of the messages can be slightly\n> confusing. For example date arithmetic is converted to timestamp\n> arithmetic, so the errcontext talks about timestamps even though the\n> actual operation used dates. For example,\n>\n> SELECT date 'infinity' + interval '-infinity';\n> ERROR: interval out of range\n> CONTEXT: while adding an interval and timestamp\n\nI would be OK with removing all of the context messages or maybe only\nkeeping a select few, like the ones in interval_um.\n\nHow do you feel about redefining interval_mi in terms of interval_um\nand interval_pl? That one felt like an improvement to me even outside\nof the context of this change.\n\nThanks,\nJoe Koshakow\n\n>On Sun, Apr 2, 2023 at 5:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> Joseph Koshakow <koshy44@gmail.com> writes:> > I've attached a patch with these changes that is meant to be applied> > over the previous three patches. Let me know what you think.>> Does not really seem like an improvement to me --- I think it's> adding more complexity than it removes. 
The changes in CONTEXT> messages are definitely not an improvement; you might as well> not have the context messages at all as give misleading ones.> (Those context messages are added by the previous patches, no?> They do not really seem per project style, and I'm not sure> that they are helpful.)Yes they were added in the previous patch, v17-0003-Add-infinite-interval-values.patch. I also had the followingnote about them.> I've added an errcontext to all the errors of the form \"X out of> range\". My one concern is that some of the messages can be slightly> confusing. For example date arithmetic is converted to timestamp> arithmetic, so the errcontext talks about timestamps even though the> actual operation used dates. For example,> > SELECT date 'infinity' + interval '-infinity';> ERROR: interval out of range> CONTEXT: while adding an interval and timestampI would be OK with removing all of the context messages or maybe onlykeeping a select few, like the ones in interval_um.How do you feel about redefining interval_mi in terms of interval_umand interval_pl? That one felt like an improvement to me even outsideof the context of this change.Thanks,Joe Koshakow",
"msg_date": "Sun, 2 Apr 2023 18:34:20 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n>> I've added an errcontext to all the errors of the form \"X out of\n>> range\".\n\nPlease note the style guidelines [1]:\n\n errcontext(const char *msg, ...) is not normally called directly from\n an ereport message site; rather it is used in error_context_stack\n callback functions to provide information about the context in which\n an error occurred, such as the current location in a PL function.\n\nIf we should have this at all, which I doubt, it's probably\nerrdetail not errcontext.\n\n> How do you feel about redefining interval_mi in terms of interval_um\n> and interval_pl? That one felt like an improvement to me even outside\n> of the context of this change.\n\nI did not think so. For one thing, it introduces integer-overflow\nhazards that you would not have otherwise; ie, interval_um might have\nto throw an error for INT_MIN input, even though the end result of\nthe calculation would have been in range.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/error-message-reporting.html\n\n\n",
"msg_date": "Sun, 02 Apr 2023 18:54:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Apr 2, 2023 at 6:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> >> I've added an errcontext to all the errors of the form \"X out of\n> >> range\".\n>\n> Please note the style guidelines [1]:\n>\n> errcontext(const char *msg, ...) is not normally called directly\nfrom\n> an ereport message site; rather it is used in error_context_stack\n> callback functions to provide information about the context in\nwhich\n> an error occurred, such as the current location in a PL function.\n>\n> If we should have this at all, which I doubt, it's probably\n> errdetail not errcontext.\n\nI've attached a patch with all of the errcontext calls removed. None of\nthe existing out of range errors have an errdetail call so I think this\nis more consistent. If we do want to add errdetail, then we should\nprobably do it in a later patch and add it to all out of range errors,\nnot just the ones related to infinity.\n\n> > How do you feel about redefining interval_mi in terms of interval_um\n> > and interval_pl? That one felt like an improvement to me even outside\n> > of the context of this change.\n>\n> I did not think so. For one thing, it introduces integer-overflow\n> hazards that you would not have otherwise; ie, interval_um might have\n> to throw an error for INT_MIN input, even though the end result of\n> the calculation would have been in range.\n\nGood point, I didn't think of that.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sun, 2 Apr 2023 20:32:26 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Joseph,\nthanks for addressing comments.\n\nOn Sat, Apr 1, 2023 at 10:53 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n> So I added a check for FLOAT8_FITS_IN_INT64() and a test with this\n> scenario.\n\nI like that. Thanks.\n\n>\n> For what it's worth I think that 2147483647 months only became a valid\n> interval in v15 as part of this commit [0]. It's also outside of the\n> documented valid range [1], which is\n> [-178000000 years, 178000000 years] or\n> [-14833333 months, 14833333 months].\n\nyou mean +/-2136000000 months :). In that sense the current code\nactually fixes a bug introduced in v15. So I am fine with it.\n\n>\n> The rationale for only checking the month's field is that it's faster\n> than checking all three fields, though I'm not entirely sure if it's\n> the right trade-off. Any thoughts on this?\n\nHmm, comparing one integer is certainly faster than comparing three.\nWe do that check at least once per interval operation. So the thrice\nCPU cycles might show some impact when millions of rows are processed.\n\nGiven that we have clear documentation of bounds, just using months\nfield is fine. If needed we can always expand it later.\n\n>\n> > There's some way to avoid separate checks for infinite-ness of\n> > interval and factor and use a single block using some integer\n> > arithmetic. But I think this is more readable. So I avoided doing\n> > that. Let me know if this works for you.\n>\n> I think the patch looks good, I've combined it with the existing patch.\n>\n> > Also added some test cases.\n>\n> I didn't see any tests in the patch, did you forget to include it?\n\nSorry I forgot to include those. Attached.\n\nPlease see my reply to your latest email as well.\n\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 3 Apr 2023 19:37:34 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Joseph,\n\n\nOn Mon, Apr 3, 2023 at 6:02 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n>\n> I've attached a patch with all of the errcontext calls removed. None of\n> the existing out of range errors have an errdetail call so I think this\n> is more consistent. If we do want to add errdetail, then we should\n> probably do it in a later patch and add it to all out of range errors,\n> not just the ones related to infinity.\n\nHmm, I realize my errcontext suggestion was in wrong direction. We can\nuse errdetail if required in future. But not for this patch.\n\nHere are comments on the test and output.\n\n+ infinity | | |\n | | Infinity | Infinity | | | Infinity |\nInfinity | Infinity | Infinity | Infinity\n+ -infinity | | |\n | | -Infinity | -Infinity | | | -Infinity |\n-Infinity | -Infinity | -Infinity | -Infinity\n\nThis is more for my education. It looks like for oscillating units we report\nNULL here but for monotonically increasing units we report infinity. I came\nacross those terms in the code. But I didn't find definitions of those terms.\nCan you please point me to the document/resources defining those terms.\n\ndiff --git a/src/test/regress/sql/horology.sql\nb/src/test/regress/sql/horology.sql\nindex f7f8c8d2dd..1d0ab322c0 100644\n--- a/src/test/regress/sql/horology.sql\n+++ b/src/test/regress/sql/horology.sql\n@@ -207,14 +207,17 @@ SELECT t.d1 AS t, i.f1 AS i, t.d1 + i.f1 AS\n\"add\", t.d1 - i.f1 AS \"subtract\"\n FROM TIMESTAMP_TBL t, INTERVAL_TBL i\n WHERE t.d1 BETWEEN '1990-01-01' AND '2001-01-01'\n AND i.f1 BETWEEN '00:00' AND '23:00'\n+ AND isfinite(i.f1)\n\nI removed this and it did not have any effect on results. 
I think the\nisfinite(i.f1) is already covered by the two existing conditions.\n\n SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS \"subtract\"\n FROM TIME_TBL t, INTERVAL_TBL i\n+ WHERE isfinite(i.f1)\n ORDER BY 1,2;\n\n SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS \"subtract\"\n FROM TIMETZ_TBL t, INTERVAL_TBL i\n+ WHERE isfinite(i.f1)\n ORDER BY 1,2;\n\n -- SQL9x OVERLAPS operator\n@@ -287,11 +290,12 @@ SELECT f1 AS \"timestamp\"\n\n SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 + t.f1 AS plus\n FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n+ WHERE isfinite(t.f1)\n ORDER BY plus, \"timestamp\", \"interval\";\n\n SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 - t.f1 AS minus\n FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n- WHERE isfinite(d.f1)\n+ WHERE isfinite(t.f1)\n ORDER BY minus, \"timestamp\", \"interval\";\n\nIIUC, the isfinite() conditions are added to avoid any changes to the\noutput due to new\nvalues added to INTERVAL_TBL. Instead, it might be a good idea to not add these\nconditions and avoid extra queries testing infinity arithmetic in interval.sql,\ntimestamptz.sql and timestamp.sql like below\n\n+\n+-- infinite intervals\n\n... some lines folded\n\n+\n+SELECT date '1995-08-06' + interval 'infinity';\n+SELECT date '1995-08-06' + interval '-infinity';\n+SELECT date '1995-08-06' - interval 'infinity';\n+SELECT date '1995-08-06' - interval '-infinity';\n\n... block truncated\n\nWith that I have reviewed the entire patch-set. Once you address these\ncomments, we can mark it as ready for committer. I already see Tom\nlooking at the patch. So that might be just a formality.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 3 Apr 2023 19:41:37 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 10:11 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n>\n> + infinity | | |\n> | | Infinity | Infinity | | | Infinity |\n> Infinity | Infinity | Infinity | Infinity\n> + -infinity | | |\n> | | -Infinity | -Infinity | | | -Infinity |\n> -Infinity | -Infinity | -Infinity | -Infinity\n>\n> This is more for my education. It looks like for oscillating units we\nreport\n> NULL here but for monotonically increasing units we report infinity. I\ncame\n> across those terms in the code. But I didn't find definitions of those\nterms.\n> Can you please point me to the document/resources defining those terms.\n\nI was also unable to find a definition of oscillating or monotonically\nincreasing in this context. I used the existing timestamps and dates\ncode to form my own definition:\n\nIf there exists an two intervals with the same sign, such that adding\nthem together results in an interval with a unit that is less than the\nunit of at least one of the original intervals, then that unit is\noscillating. Otherwise it is monotonically increasing.\n\nSo for example `INTERVAL '30 seconds' + INTERVAL '30 seconds'` results\nin an interval with 0 seconds, so seconds are oscillating. You couldn't\nfind a similar example for days or hours, so they're monotonically\nincreasing.\n\n> diff --git a/src/test/regress/sql/horology.sql\n> b/src/test/regress/sql/horology.sql\n> index f7f8c8d2dd..1d0ab322c0 100644\n> --- a/src/test/regress/sql/horology.sql\n> +++ b/src/test/regress/sql/horology.sql\n> @@ -207,14 +207,17 @@ SELECT t.d1 AS t, i.f1 AS i, t.d1 + i.f1 AS\n> \"add\", t.d1 - i.f1 AS \"subtract\"\n> FROM TIMESTAMP_TBL t, INTERVAL_TBL i\n> WHERE t.d1 BETWEEN '1990-01-01' AND '2001-01-01'\n> AND i.f1 BETWEEN '00:00' AND '23:00'\n> + AND isfinite(i.f1)\n>\n> I removed this and it did not have any effect on results. 
I think the\n> isfinite(i.f1) is already covered by the two existing conditions.\n\nThanks for pointing this out, I've removed this in the attached patch.\n\n> SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS\n\"subtract\"\n> FROM TIME_TBL t, INTERVAL_TBL i\n> + WHERE isfinite(i.f1)\n> ORDER BY 1,2;\n>\n> SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS\n\"subtract\"\n> FROM TIMETZ_TBL t, INTERVAL_TBL i\n> + WHERE isfinite(i.f1)\n> ORDER BY 1,2;\n>\n> -- SQL9x OVERLAPS operator\n> @@ -287,11 +290,12 @@ SELECT f1 AS \"timestamp\"\n>\n> SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 + t.f1 AS plus\n> FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n> + WHERE isfinite(t.f1)\n> ORDER BY plus, \"timestamp\", \"interval\";\n>\n> SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 - t.f1 AS minus\n> FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n> - WHERE isfinite(d.f1)\n> + WHERE isfinite(t.f1)\n> ORDER BY minus, \"timestamp\", \"interval\";\n>\n> IIUC, the isfinite() conditions are added to avoid any changes to the\n> output due to new\n> values added to INTERVAL_TBL. Instead, it might be a good idea to not\nadd these\n> conditions and avoid extra queries testing infinity arithmetic in\ninterval.sql,\n> timestamptz.sql and timestamp.sql like below\n>\n> +\n> +-- infinite intervals\n>\n> ... some lines folded\n>\n> +\n> +SELECT date '1995-08-06' + interval 'infinity';\n> +SELECT date '1995-08-06' + interval '-infinity';\n> +SELECT date '1995-08-06' - interval 'infinity';\n> +SELECT date '1995-08-06' - interval '-infinity';\n>\n> ... block truncated\n\nI originally tried that, but the issue here is that errors propagate\nthrough the whole query. So if one row produces an error then no rows\nare produced and instead a single error is returned. So the rows that\nwould execute, for example,\nSELECT date 'infinity' + interval '-infinity' would cause the entire\nquery to error out. 
If you have any suggestions to get around this\nplease let me know.\n\n> With that I have reviewed the entire patch-set. Once you address these\n> comments, we can mark it as ready for committer. I already see Tom\n> looking at the patch. So that might be just a formality.\n\nThanks so much for taking the time to review this!\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Apr 2023 11:23:59 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Apr 8, 2023 at 8:54 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> I was also unable to find a definition of oscillating or monotonically\n> increasing in this context. I used the existing timestamps and dates\n> code to form my own definition:\n>\n> If there exists an two intervals with the same sign, such that adding\n> them together results in an interval with a unit that is less than the\n> unit of at least one of the original intervals, then that unit is\n> oscillating. Otherwise it is monotonically increasing.\n>\n> So for example `INTERVAL '30 seconds' + INTERVAL '30 seconds'` results\n> in an interval with 0 seconds, so seconds are oscillating. You couldn't\n> find a similar example for days or hours, so they're monotonically\n> increasing.\n\nThanks for the explanation with an example. Makes sense considering\nthat the hours and days are not convertible to their wider units\nwithout temporal context.\n\n>\n> > SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS \"subtract\"\n> > FROM TIME_TBL t, INTERVAL_TBL i\n> > + WHERE isfinite(i.f1)\n> > ORDER BY 1,2;\n> >\n> > SELECT t.f1 AS t, i.f1 AS i, t.f1 + i.f1 AS \"add\", t.f1 - i.f1 AS \"subtract\"\n> > FROM TIMETZ_TBL t, INTERVAL_TBL i\n> > + WHERE isfinite(i.f1)\n> > ORDER BY 1,2;\n\nThose two are operations with Time which does not allow infinity. 
So I\nthink this is fine.\n\n> >\n> > -- SQL9x OVERLAPS operator\n> > @@ -287,11 +290,12 @@ SELECT f1 AS \"timestamp\"\n> >\n> > SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 + t.f1 AS plus\n> > FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n> > + WHERE isfinite(t.f1)\n> > ORDER BY plus, \"timestamp\", \"interval\";\n> >\n> > SELECT d.f1 AS \"timestamp\", t.f1 AS \"interval\", d.f1 - t.f1 AS minus\n> > FROM TEMP_TIMESTAMP d, INTERVAL_TBL t\n> > - WHERE isfinite(d.f1)\n> > + WHERE isfinite(t.f1)\n> > ORDER BY minus, \"timestamp\", \"interval\";\n> I originally tried that, but the issue here is that errors propagate\n> through the whole query. So if one row produces an error then no rows\n> are produced and instead a single error is returned. So the rows that\n> would execute, for example,\n> SELECT date 'infinity' + interval '-infinity' would cause the entire\n> query to error out. If you have any suggestions to get around this\n> please let me know.\n\nI modified this to WHERE isfinite(t.f1) or isfinite(d.f1). The output\ncontains a lot of additions with infinity::interval but that might be\nok. No errors. We could further improve it to allow operations between\ninfinity which do not result in error e.g, both operands being same\nsigned for plus and opposite signed for minus. But I think we can\nleave this to the committer's judgement. Which route to choose.\n\n>\n> > With that I have reviewed the entire patch-set. Once you address these\n> > comments, we can mark it as ready for committer. I already see Tom\n> > looking at the patch. So that might be just a formality.\n>\n> Thanks so much for taking the time to review this!\n\nMy pleasure. I am very much interested to see this being part of code.\nGiven that the last commit fest for v16 has ended, let's target this\nfor v17. I will mark this as ready for committer now.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 12 Apr 2023 18:41:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 9:11 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> I modified this to WHERE isfinite(t.f1) or isfinite(d.f1). The output\n> contains a lot of additions with infinity::interval but that might be\n> ok. No errors.\n\nAttached is a patch with this testing change.\n\nThanks,\nJoe Koshakow",
"msg_date": "Wed, 12 Apr 2023 14:35:02 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Looks like cfbot didn't like the names of these patches. It tried to\napply v20-0003 first and that failed. Attached patches with names in\nsequential order. Let's see if that makes cfbot happy.\n\nOn Thu, Apr 13, 2023 at 12:05 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n>\n>\n> On Wed, Apr 12, 2023 at 9:11 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>\n> > I modified this to WHERE isfinite(t.f1) or isfinite(d.f1). The output\n> > contains a lot of additions with infinity::interval but that might be\n> > ok. No errors.\n>\n> Attached is a patch with this testing change.\n>\n> Thanks,\n> Joe Koshakow\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 23 Jun 2023 12:57:33 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Resending with .patch at the end in case cfbot needs that too.\n\nOn Fri, Jun 23, 2023 at 12:57 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Looks like cfbot didn't like the names of these patches. It tried to\n> apply v20-0003 first and that failed. Attached patches with names in\n> sequential order. Let's see if that makes cfbot happy.\n>\n> On Thu, Apr 13, 2023 at 12:05 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n> >\n> >\n> >\n> > On Wed, Apr 12, 2023 at 9:11 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > > I modified this to WHERE isfinite(t.f1) or isfinite(d.f1). The output\n> > > contains a lot of additions with infinity::interval but that might be\n> > > ok. No errors.\n> >\n> > Attached is a patch with this testing change.\n> >\n> > Thanks,\n> > Joe Koshakow\n>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 23 Jun 2023 13:13:44 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Fixed assertion in time_mi_time(). It needed to assert that the result\nis FINITE but it was doing the other way round and that triggered some\nfailures in cfbot.\n\nOn Fri, Jun 23, 2023 at 1:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Resending with .patch at the end in case cfbot needs that too.\n>\n> On Fri, Jun 23, 2023 at 12:57 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Looks like cfbot didn't like the names of these patches. It tried to\n> > apply v20-0003 first and that failed. Attached patches with names in\n> > sequential order. Let's see if that makes cfbot happy.\n> >\n> > On Thu, Apr 13, 2023 at 12:05 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n> > >\n> > >\n> > >\n> > > On Wed, Apr 12, 2023 at 9:11 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > > I modified this to WHERE isfinite(t.f1) or isfinite(d.f1). The output\n> > > > contains a lot of additions with infinity::interval but that might be\n> > > > ok. No errors.\n> > >\n> > > Attached is a patch with this testing change.\n> > >\n> > > Thanks,\n> > > Joe Koshakow\n> >\n> >\n> >\n> > --\n> > Best Wishes,\n> > Ashutosh Bapat\n>\n>\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 27 Jun 2023 16:43:41 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> Fixed assertion in time_mi_time(). It needed to assert that the result\n> is FINITE but it was doing the other way round and that triggered some\n> failures in cfbot.\n\nIt's still not passing in the cfbot, at least not on any non-Linux\nplatforms. I believe the reason is that the patch thinks isinf()\ndelivers a three-way result, but per POSIX you can only expect\nzero or nonzero (ie, finite or not).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jul 2023 13:50:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 1:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n>> Fixed assertion in time_mi_time(). It needed to assert that the result\n>> is FINITE but it was doing the other way round and that triggered some\n>> failures in cfbot.\n\n> It's still not passing in the cfbot, at least not on any non-Linux\n> platforms. I believe the reason is that the patch thinks isinf()\n> delivers a three-way result, but per POSIX you can only expect\n> zero or nonzero (ie, finite or not).\n\nThat looks right to me. I've updated the patch to fix this issue.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Jul 2023 14:38:39 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 2:38 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n\n> I've updated the patch to fix this issue.\n\nThat seems to have fixed the cfbot failures. Though I left in an\nunused variable. Here's another set of patches with the compiler\nwarnings fixed.\n\nThanks,\nJoe Koshakow",
"msg_date": "Sat, 8 Jul 2023 15:27:51 -0400",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "The patches still apply. But here's a rebased version with one white\nspace error fixed. Also ran pgindent.\n\nOn Sun, Jul 9, 2023 at 12:58 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Jul 8, 2023 at 2:38 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> > I've updated the patch to fix this issue.\n>\n> That seems to have fixed the cfbot failures. Though I left in an\n> unused variable. Here's another set of patches with the compiler\n> warnings fixed.\n>\n> Thanks,\n> Joe Koshakow\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 24 Aug 2023 19:20:59 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, 24 Aug 2023 at 14:51, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> The patches still apply. But here's a rebased version with one white\n> space error fixed. Also ran pgindent.\n>\n\nThis needs another rebase, and it looks like the infinite interval\ninput code is broken.\n\nI took a quick look, and had a couple of other review comments:\n\n1). In interval_mul(), I think \"result_sign\" would be a more accurate\nname than \"result_is_inf\" for the local variable.\n\n2). interval_accum() and interval_accum_inv() don't work correctly\nwith infinite intervals. To make them work, they need to count the\nnumber of infinities seen, to allow them to be subtracted off by the\ninverse function (similar to the code in numeric.c, except for the\nNaN-handling, which will need to be different). Consider, for example:\n\nSELECT x, avg(x) OVER(ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)\n FROM (VALUES ('1 day'::interval),\n ('3 days'::interval),\n ('infinity'::timestamptz - now()),\n ('4 days'::interval),\n ('6 days'::interval)) v(x);\nERROR: interval out of range\n\nas compared to:\n\nSELECT x, avg(x) OVER(ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)\n FROM (VALUES (1::numeric),\n (3::numeric),\n ('infinity'::numeric),\n (4::numeric),\n (6::numeric)) v(x);\n\n x | avg\n----------+--------------------\n 1 | 2.0000000000000000\n 3 | Infinity\n Infinity | Infinity\n 4 | 5.0000000000000000\n 6 | 6.0000000000000000\n(5 rows)\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 12 Sep 2023 10:09:17 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, Sep 12, 2023 at 2:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 24 Aug 2023 at 14:51, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > The patches still apply. But here's a rebased version with one white\n> > space error fixed. Also ran pgindent.\n> >\n>\n> This needs another rebase,\n\nFixed.\n\n> and it looks like the infinite interval\n> input code is broken.\n>\n\nThe code required to handle 'infinity' as an input value was removed by\nd6d1430f404386162831bc32906ad174b2007776. I have added a separate\ncommit which reverts that commit as 0004, which should be merged into\n0003.\n\n> I took a quick look, and had a couple of other review comments:\n>\n> 1). In interval_mul(), I think \"result_sign\" would be a more accurate\n> name than \"result_is_inf\" for the local variable.\n\nFixed as part of 0003.\n\n>\n> 2). interval_accum() and interval_accum_inv() don't work correctly\n> with infinite intervals. To make them work, they need to count the\n> number of infinities seen, to allow them to be subtracted off by the\n> inverse function (similar to the code in numeric.c, except for the\n> NaN-handling, which will need to be different). Consider, for example:\n>\n> SELECT x, avg(x) OVER(ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)\n> FROM (VALUES ('1 day'::interval),\n> ('3 days'::interval),\n> ('infinity'::timestamptz - now()),\n> ('4 days'::interval),\n> ('6 days'::interval)) v(x);\n> ERROR: interval out of range\n>\n> as compared to:\n>\n> SELECT x, avg(x) OVER(ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING)\n> FROM (VALUES (1::numeric),\n> (3::numeric),\n> ('infinity'::numeric),\n> (4::numeric),\n> (6::numeric)) v(x);\n>\n> x | avg\n> ----------+--------------------\n> 1 | 2.0000000000000000\n> 3 | Infinity\n> Infinity | Infinity\n> 4 | 5.0000000000000000\n> 6 | 6.0000000000000000\n> (5 rows)\n\nNice catch. I agree that we need to do something similar to\nnumeric_accum and numeric_accum_inv. 
As part of that also add test for\nwindow aggregates on interval data type. We might also need some fix\nto sum(). I am planning to work on this next week but in case somebody\nelse wants to pick this up here are patches with other things fixed.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 13 Sep 2023 15:43:09 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Sep 13, 2023 at 6:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> to sum(). I am planning to work on this next week but in case somebody\n> else wants to pick this up here are patches with other things fixed.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\nhi. some doc issues.\n\n- <literal>decade</literal>, <literal>century</literal>, and\n<literal>millennium</literal>).\n+ <literal>decade</literal>, <literal>century</literal>, and\n<literal>millennium</literal>\n+ for all types and <literal>hour</literal> and\n<literal>day</literal> just for <type>interval</type>).\n\nThe above part seems not right. some fields do not apply to interval data types.\ntest case:\nSELECT EXTRACT(epoch FROM interval 'infinity') as epoch\n ,EXTRACT(YEAR FROM interval 'infinity') as year\n ,EXTRACT(decade FROM interval 'infinity') as decade\n ,EXTRACT(century FROM interval 'infinity') as century\n ,EXTRACT(millennium FROM interval 'infinity') as millennium\n ,EXTRACT(month FROM interval 'infinity') as mon\n ,EXTRACT(day FROM interval 'infinity') as day\n ,EXTRACT(hour FROM interval 'infinity') as hour\n ,EXTRACT(min FROM interval 'infinity') as min\n ,EXTRACT(second FROM interval 'infinity') as sec;\n\n--------------------\n\n- <entry><type>date</type>, <type>timestamp</type></entry>\n+ <entry><type>date</type>, <type>timestamp</type>,\n<type>interval</type></entry>\n <entry>later than all other time stamps</entry>\n\nit seems we have forgotten to mention the -infinity case, we can fix\nthe doc together, since <type>timestamptz</type> also applies to\n+/-infinity.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 14:28:09 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "hi.\n\nfixed the doc special value inf/-inf reference. didn't fix the EXTRACT\nfunction doc issue.\n\nI refactor the avg(interval), sum(interval), so moving aggregate,\nplain aggregate both work with +inf/-inf.\nno performance degradation, in fact, some performance gains.\n\n--setup for test performance.\ncreate unlogged table interval_aggtest AS\nselect g::int as a\n ,make_interval(years => g % 100, days => g % 100, hours => g %\n200 , secs => random()::numeric(3,2) *100 ) as b\nfrom generate_series(1, 100_000) g;\n--use foreign data wrapper to copy exact content to interval_aggtest_no_patch\ncreate unlogged table interval_aggtest_no_patch AS\nselect * from interval_aggtest;\n\n--queryA\nexplain (analyze, costs off, buffers)\nSELECT a, avg(b) OVER(ROWS BETWEEN 1 preceding AND 2 FOLLOWING)\nfrom interval_aggtest \\watch i=0.1 c=10\n\n--queryB\nexplain (analyze, costs off, buffers)\nSELECT a, avg(b) OVER(ROWS BETWEEN 1 preceding AND 2 FOLLOWING)\nfrom interval_aggtest_no_patch \\watch i=0.1 c=10\n\n--queryC\nexplain (analyze, costs off, buffers)\nSELECT a, sum(b) OVER(ROWS BETWEEN 1 preceding AND 2 FOLLOWING)\nfrom interval_aggtest \\watch i=0.1 c=10\n\n--queryD\nexplain (analyze, costs off, buffers)\nSELECT a, sum(b) OVER(ROWS BETWEEN 1 preceding AND 2 FOLLOWING)\nfrom interval_aggtest_no_patch \\watch i=0.1 c=10\n\n--queryE\nexplain (analyze, costs off, buffers)\nSELECT sum(b), avg(b)\nfrom interval_aggtest \\watch i=0.1 c=10\n\n--queryF\nexplain (analyze, costs off, buffers)\nSELECT sum(b), avg(b)\nfrom interval_aggtest_no_patch \\watch i=0.1 c=10\n\nqueryA execute 10 time, last executed time(ms) 748.258\nqueryB execute 10 time, last executed time(ms) 1059.750\n\nqueryC execute 10 time, last executed time(ms) 697.887\nqueryD execute 10 time, last executed time(ms) 708.462\n\nqueryE execute 10 time, last executed time(ms) 156.237\nqueryF execute 10 time, last executed time(ms) 
405.451\n---------------------------------------------------------------------\nThe result seems right, I am not %100 sure the code it's correct.\nThat's the best I can think of. You can work based on that.",
"msg_date": "Sat, 16 Sep 2023 08:00:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 11:58 AM jian he <jian.universality@gmail.com> wrote:\n>\n> - <literal>decade</literal>, <literal>century</literal>, and\n> <literal>millennium</literal>).\n> + <literal>decade</literal>, <literal>century</literal>, and\n> <literal>millennium</literal>\n> + for all types and <literal>hour</literal> and\n> <literal>day</literal> just for <type>interval</type>).\n\nIt seems you have changed a paragraph from\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT.\nBut that section is only for interval \"8.5.4. Interval Input \". So\nmentioning \" ... for all types ...\" wouldn't fit the section's title.\nI don't see why it needs to be changed.\n\n>\n> The above part seems not right. some fields do not apply to interval data types.\n> test case:\n> SELECT EXTRACT(epoch FROM interval 'infinity') as epoch\n> ,EXTRACT(YEAR FROM interval 'infinity') as year\n> ,EXTRACT(decade FROM interval 'infinity') as decade\n> ,EXTRACT(century FROM interval 'infinity') as century\n> ,EXTRACT(millennium FROM interval 'infinity') as millennium\n> ,EXTRACT(month FROM interval 'infinity') as mon\n> ,EXTRACT(day FROM interval 'infinity') as day\n> ,EXTRACT(hour FROM interval 'infinity') as hour\n> ,EXTRACT(min FROM interval 'infinity') as min\n> ,EXTRACT(second FROM interval 'infinity') as sec;\n\nFor this query, I get output\n#SELECT EXTRACT(epoch FROM interval 'infinity') as epoch\n ,EXTRACT(YEAR FROM interval 'infinity') as year\n ,EXTRACT(decade FROM interval 'infinity') as decade\n ,EXTRACT(century FROM interval 'infinity') as century\n ,EXTRACT(millennium FROM interval 'infinity') as millennium\n ,EXTRACT(month FROM interval 'infinity') as mon\n ,EXTRACT(day FROM timestamp 'infinity') as day\n ,EXTRACT(hour FROM interval 'infinity') as hour\n ,EXTRACT(min FROM interval 'infinity') as min\n ,EXTRACT(second FROM interval 'infinity') as sec;\n epoch | year | decade | century | millennium | mon | day |\n hour | min 
| sec\n----------+----------+----------+----------+------------+-----+-----+----------+-----+-----\n Infinity | Infinity | Infinity | Infinity | Infinity | | |\nInfinity | |\n\nEXTRACT( .... FROM interval '[-]infinity') is implemented similar to\nEXTRACT (... FROM timestamp '[-]infinity). Hence this is the output.\nThis has been discussed earlier [1].\n\n>\n> --------------------\n>\n> - <entry><type>date</type>, <type>timestamp</type></entry>\n> + <entry><type>date</type>, <type>timestamp</type>,\n> <type>interval</type></entry>\n> <entry>later than all other time stamps</entry>\n>\n> it seems we have forgotten to mention the -infinity case, we can fix\n> the doc together, since <type>timestamptz</type> also applies to\n> +/-infinity.\n\nYour point about -infinity is right. But timestamp corresponds to both\ntimestamp with and without timezone as per table 8.9 on the same page\n. https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-TABLE.\nSo I don't see a need to specify timestamptz separately.\n\n[1] https://www.postgresql.org/message-id/CAExHW5ut4bR4KSNWAhXb_EZ8PyY=J100guA6ZumNhvoia1ZRjw@mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 18 Sep 2023 14:49:16 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 13 Sept 2023 at 11:13, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Sep 12, 2023 at 2:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > and it looks like the infinite interval\n> > input code is broken.\n>\n> The code required to handle 'infinity' as an input value was removed by\n> d6d1430f404386162831bc32906ad174b2007776. I have added a separate\n> commit which reverts that commit as 0004, which should be merged into\n> 0003.\n>\n\nI think that simply reverting d6d1430f404386162831bc32906ad174b2007776\nis not sufficient. This does not make it clear what the point is of\nthe code in the \"case RESERV\" block. That code really should check the\nvalue returned by DecodeSpecial(), otherwise invalid inputs are not\ncaught until later, and the error reported is not ideal. For example:\n\nselect interval 'now';\nERROR: unexpected dtype 12 while parsing interval \"now\"\n\nSo DecodeInterval() should return DTERR_BAD_FORMAT in such cases (see\nsimilar code in DecodeTimeOnly(), for example).\n\nI'd also suggest a comment to indicate why itm_in isn't updated in\nthis case (see similar case in DecodeDateTime(), for example).\n\n\nAnother point to consider is what should happen if \"ago\" is specified\nwith infinite inputs. As it stands, it is accepted, but does nothing:\n\nselect interval 'infinity ago';\n interval\n----------\n infinity\n(1 row)\n\nselect interval '-infinity ago';\n interval\n-----------\n -infinity\n(1 row)\n\nThis could be made to invert the sign, as it does for finite inputs,\nbut I think perhaps it would be better to simply reject such inputs.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 18 Sep 2023 12:39:35 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sat, 16 Sept 2023 at 01:00, jian he <jian.universality@gmail.com> wrote:\n>\n> I refactor the avg(interval), sum(interval), so moving aggregate,\n> plain aggregate both work with +inf/-inf.\n> no performance degradation, in fact, some performance gains.\n>\n\nI haven't reviewed this part in any detail yet, but I can confirm that\nthere are some impressive performance improvements for avg(). However,\nfor me, sum() seems to be consistently a few percent slower with this\npatch.\n\nThe introduction of an internal transition state struct seems like a\npromising approach, but I think there is more to be gained by\neliminating per-row pallocs, and IntervalAggState's MemoryContext\n(interval addition, unlike numeric addition, doesn't require memory\nallocation, right?).\n\nAlso, this needs to include serialization and deserialization\nfunctions, otherwise these aggregates will no longer be able to use\nparallel workers. That makes a big difference to queryE, if the size\nof the test data is scaled up.\n\nThis comment:\n\n+ int64 N; /* count of processed numbers */\n\nshould be \"count of processed intervals\".\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 19 Sep 2023 12:14:20 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, Sep 19, 2023 at 7:14 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n>\n> I haven't reviewed this part in any detail yet, but I can confirm that\n> there are some impressive performance improvements for avg(). However,\n> for me, sum() seems to be consistently a few percent slower with this\n> patch.\n\nNow there should be no degradation.\n\n> The introduction of an internal transition state struct seems like a\n> promising approach, but I think there is more to be gained by\n> eliminating per-row pallocs, and IntervalAggState's MemoryContext\n> (interval addition, unlike numeric addition, doesn't require memory\n> allocation, right?).\n\n\"eliminating per-row pallocs\"\nI guess I understand. If not , please point it out.\n\n> IntervalAggState's MemoryContext\n> (interval addition, unlike numeric addition, doesn't require memory\n> allocation, right?).\n\nif I remove IntervalAggState's element: MemoryContext, it will not work.\nso I don't understand what the above sentence means...... Sorry. (it's\nmy problem)\n\n> Also, this needs to include serialization and deserialization\n> functions, otherwise these aggregates will no longer be able to use\n> parallel workers. That makes a big difference to queryE, if the size\n> of the test data is scaled up.\n>\nI tried, but failed. sum(interval) result is correct, but\navg(interval) result is wrong.\n\nDatum\ninterval_avg_serialize(PG_FUNCTION_ARGS)\n{\nIntervalAggState *state;\nStringInfoData buf;\nbytea *result;\n/* Ensure we disallow calling when not in aggregate context */\nif (!AggCheckCallContext(fcinfo, NULL))\nelog(ERROR, \"aggregate function called in non-aggregate context\");\nstate = (IntervalAggState *) PG_GETARG_POINTER(0);\npq_begintypsend(&buf);\n/* N */\npq_sendint64(&buf, state->N);\n/* Interval struct elements, one by one. 
*/\npq_sendint64(&buf, state->sumX.time);\npq_sendint32(&buf, state->sumX.day);\npq_sendint32(&buf, state->sumX.month);\n/* pInfcount */\npq_sendint64(&buf, state->pInfcount);\n/* nInfcount */\npq_sendint64(&buf, state->nInfcount);\nresult = pq_endtypsend(&buf);\nPG_RETURN_BYTEA_P(result);\n}\n\nSELECT sum(b) ,avg(b)\n ,avg(b) = sum(b)/count(*) as should_be_true\n ,avg(b) * count(*) = sum(b) as should_be_true_too\nfrom interval_aggtest_1m; -- 1 million rows.\nThe above query expects both bool columns to return true, but actually\nboth return false (I spent some time finding out that parallel mode makes\nthe number of rows 1_000_002 when it should be 1_000_000).\n\n\n> This comment:\n>\n> + int64 N; /* count of processed numbers */\n>\n> should be \"count of processed intervals\".\n\nFixed.",
"msg_date": "Wed, 20 Sep 2023 18:27:16 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 11:27, jian he <jian.universality@gmail.com> wrote:\n>\n> if I remove IntervalAggState's element: MemoryContext, it will not work.\n> so I don't understand what the above sentence means...... Sorry. (it's\n> my problem)\n>\n\nI don't see why it won't work. The point is to try to simplify\ndo_interval_accum() as much as possible. Looking at the current code,\nI see a few places that could be simpler:\n\n+ X.day = newval->day;\n+ X.month = newval->month;\n+ X.time = newval->time;\n+\n+ temp.day = state->sumX.day;\n+ temp.month = state->sumX.month;\n+ temp.time = state->sumX.time;\n\nWhy do we need these local variables X and temp? It could just add the\nvalues from newval directly to those in state->sumX.\n\n+ /* The rest of this needs to work in the aggregate context */\n+ old_context = MemoryContextSwitchTo(state->agg_context);\n\nWhy? It's not allocating any memory here, so I don't see a need to\nswitch context.\n\nSo basically, do_interval_accum() could be simplified to:\n\nstatic void\ndo_interval_accum(IntervalAggState *state, Interval *newval)\n{\n /* Count infinite intervals separately from all else */\n if (INTERVAL_IS_NOBEGIN (newval))\n {\n state->nInfcount++;\n return;\n }\n if (INTERVAL_IS_NOEND(newval))\n {\n state->pInfcount++;\n return;\n }\n\n /* Update count of finite intervals */\n state->N++;\n\n /* Update sum of finite intervals */\n if (unlikely(pg_add_s32_overflow(state->sumX.month, newval->month,\n &state->sumX.month)))\n ereport(ERROR,\n errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"interval out of range\"));\n\n if (unlikely(pg_add_s32_overflow(state->sumX.day, newval->day,\n &state->sumX.day)))\n ereport(ERROR,\n errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"interval out of range\"));\n\n if (unlikely(pg_add_s64_overflow(state->sumX.time, newval->time,\n &state->sumX.time)))\n ereport(ERROR,\n errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n errmsg(\"interval out of 
range\"));\n\n return;\n}\n\nand that can be further refactored, as described below, and similarly\nfor do_interval_discard(), except using pg_sub_s32/64_overflow().\n\n\n> > Also, this needs to include serialization and deserialization\n> > functions, otherwise these aggregates will no longer be able to use\n> > parallel workers. That makes a big difference to queryE, if the size\n> > of the test data is scaled up.\n> >\n> I tried, but failed. sum(interval) result is correct, but\n> avg(interval) result is wrong.\n>\n> SELECT sum(b) ,avg(b)\n> ,avg(b) = sum(b)/count(*) as should_be_true\n> ,avg(b) * count(*) = sum(b) as should_be_true_too\n> from interval_aggtest_1m; --1million row.\n> The above query expects two bool columns to return true, but actually\n> both returns false.(spend some time found out parallel mode will make\n> the number of rows to 1_000_002, should be 1_000_0000).\n>\n\nI think the reason for your wrong results is this code in\ninterval_avg_combine():\n\n+ if (state2->N > 0)\n+ {\n+ /* The rest of this needs to work in the aggregate context */\n+ old_context = MemoryContextSwitchTo(agg_context);\n+\n+ /* Accumulate interval values */\n+ do_interval_accum(state1, &state2->sumX);\n+\n+ MemoryContextSwitchTo(old_context);\n+ }\n\nThe problem is that using do_interval_accum() to add the 2 sums\ntogether also adds 1 to the count N, making it incorrect. This code\nshould only be adding state2->sumX to state1->sumX, not touching\nstate1->N. And, as in do_interval_accum(), there is no need to switch\nmemory context.\n\nGiven that there are multiple places in this file that need to add\nintervals, I think it makes sense to further refactor, and add a local\nfunction to add 2 finite intervals, along the lines of the code above.\nThis can then be called from do_interval_accum(),\ninterval_avg_combine(), and interval_pl(). And similarly for\nsubtracting 2 finite intervals.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 20 Sep 2023 13:09:00 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 13:09, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> So basically, do_interval_accum() could be simplified to:\n>\n\nOh, and I guess it also needs an INTERVAL_NOT_FINITE() check, to make\nsure that finite values don't sum to our representation of infinity,\nas in interval_pl().\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 20 Sep 2023 13:13:48 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Sep 18, 2023 at 5:09 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 13 Sept 2023 at 11:13, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Sep 12, 2023 at 2:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > >\n> > > and it looks like the infinite interval\n> > > input code is broken.\n> >\n> > The code required to handle 'infinity' as an input value was removed by\n> > d6d1430f404386162831bc32906ad174b2007776. I have added a separate\n> > commit which reverts that commit as 0004, which should be merged into\n> > 0003.\n> >\n>\n> I think that simply reverting d6d1430f404386162831bc32906ad174b2007776\n> is not sufficient. This does not make it clear what the point is of\n> the code in the \"case RESERV\" block. That code really should check the\n> value returned by DecodeSpecial(), otherwise invalid inputs are not\n> caught until later, and the error reported is not ideal. For example:\n>\n> select interval 'now';\n> ERROR: unexpected dtype 12 while parsing interval \"now\"\n>\n> So DecodeInterval() should return DTERR_BAD_FORMAT in such cases (see\n> similar code in DecodeTimeOnly(), for example).\n\nSince the code was there earlier, I missed that part. Sorry.\n\n>\n> I'd also suggest a comment to indicate why itm_in isn't updated in\n> this case (see similar case in DecodeDateTime(), for example).\n>\n\nAdded, but in the function prologue, since it's part of the API.\n\n>\n> Another point to consider is what should happen if \"ago\" is specified\n> with infinite inputs. As it stands, it is accepted, but does nothing:\n>\n> select interval 'infinity ago';\n> interval\n> ----------\n> infinity\n> (1 row)\n>\n> select interval '-infinity ago';\n> interval\n> -----------\n> -infinity\n> (1 row)\n>\n> This could be made to invert the sign, as it does for finite inputs,\n> but I think perhaps it would be better to simply reject such inputs.\n\nFixed this. 
After studying what DecodeInterval() is doing, I think it's\nbetter to treat all infinity specifications similarly to \"ago\": they\nneed to be the last part of the input string. The rest of the code makes\nsure that nothing precedes the infinity specification, since the other\ncase blocks do not handle RESERV, DTK_LATE or DTK_EARLY. This means that\n\"+infinity\", \"-infinity\" or \"infinity\" must be the only word in a\nvalid interval input when one of them is specified.\n\nI will post these changes in another email along with other patches.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 20 Sep 2023 18:59:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 5:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 20 Sept 2023 at 11:27, jian he <jian.universality@gmail.com> wrote:\n> >\n> > if I remove IntervalAggState's element: MemoryContext, it will not work.\n> > so I don't understand what the above sentence means...... Sorry. (it's\n> > my problem)\n> >\n>\n> I don't see why it won't work. The point is to try to simplify\n> do_interval_accum() as much as possible. Looking at the current code,\n> I see a few places that could be simpler:\n>\n> + X.day = newval->day;\n> + X.month = newval->month;\n> + X.time = newval->time;\n> +\n> + temp.day = state->sumX.day;\n> + temp.month = state->sumX.month;\n> + temp.time = state->sumX.time;\n>\n> Why do we need these local variables X and temp? It could just add the\n> values from newval directly to those in state->sumX.\n>\n> + /* The rest of this needs to work in the aggregate context */\n> + old_context = MemoryContextSwitchTo(state->agg_context);\n>\n> Why? 
It's not allocating any memory here, so I don't see a need to\n> switch context.\n>\n> So basically, do_interval_accum() could be simplified to:\n>\n> static void\n> do_interval_accum(IntervalAggState *state, Interval *newval)\n> {\n> /* Count infinite intervals separately from all else */\n> if (INTERVAL_IS_NOBEGIN (newval))\n> {\n> state->nInfcount++;\n> return;\n> }\n> if (INTERVAL_IS_NOEND(newval))\n> {\n> state->pInfcount++;\n> return;\n> }\n>\n> /* Update count of finite intervals */\n> state->N++;\n>\n> /* Update sum of finite intervals */\n> if (unlikely(pg_add_s32_overflow(state->sumX.month, newval->month,\n> &state->sumX.month)))\n> ereport(ERROR,\n> errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> errmsg(\"interval out of range\"));\n>\n> if (unlikely(pg_add_s32_overflow(state->sumX.day, newval->day,\n> &state->sumX.day)))\n> ereport(ERROR,\n> errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> errmsg(\"interval out of range\"));\n>\n> if (unlikely(pg_add_s64_overflow(state->sumX.time, newval->time,\n> &state->sumX.time)))\n> ereport(ERROR,\n> errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> errmsg(\"interval out of range\"));\n>\n> return;\n> }\n>\n> and that can be further refactored, as described below, and similarly\n> for do_interval_discard(), except using pg_sub_s32/64_overflow().\n>\n>\n> > > Also, this needs to include serialization and deserialization\n> > > functions, otherwise these aggregates will no longer be able to use\n> > > parallel workers. That makes a big difference to queryE, if the size\n> > > of the test data is scaled up.\n> > >\n> > I tried, but failed. 
sum(interval) result is correct, but\n> > avg(interval) result is wrong.\n> >\n> > SELECT sum(b) ,avg(b)\n> > ,avg(b) = sum(b)/count(*) as should_be_true\n> > ,avg(b) * count(*) = sum(b) as should_be_true_too\n> > from interval_aggtest_1m; --1million row.\n> > The above query expects two bool columns to return true, but actually\n> > both returns false.(spend some time found out parallel mode will make\n> > the number of rows to 1_000_002, should be 1_000_0000).\n> >\n>\n> I think the reason for your wrong results is this code in\n> interval_avg_combine():\n>\n> + if (state2->N > 0)\n> + {\n> + /* The rest of this needs to work in the aggregate context */\n> + old_context = MemoryContextSwitchTo(agg_context);\n> +\n> + /* Accumulate interval values */\n> + do_interval_accum(state1, &state2->sumX);\n> +\n> + MemoryContextSwitchTo(old_context);\n> + }\n>\n> The problem is that using do_interval_accum() to add the 2 sums\n> together also adds 1 to the count N, making it incorrect. This code\n> should only be adding state2->sumX to state1->sumX, not touching\n> state1->N. And, as in do_interval_accum(), there is no need to switch\n> memory context.\n>\n> Given that there are multiple places in this file that need to add\n> intervals, I think it makes sense to further refactor, and add a local\n> function to add 2 finite intervals, along the lines of the code above.\n> This can then be called from do_interval_accum(),\n> interval_avg_combine(), and interval_pl(). And similarly for\n> subtracting 2 finite intervals.\n\nI was working on refactoring Jian's patches but forgot to mention it\nthere. I think the patchset attached has addressed all your comments.\nBut they do not implement serialization and deserialization yet. I\nwill take a look at Jian's patch for the same and incorporate/refactor\nthose changes.\n\nJian,\nI don't understand why there's two sets of test queries, one with\nORDER BY and one without? 
Does ORDER BY specification make any\ndifference in the testing?\n\nPatches are thus\n0001, 0002 are same\n0003 - earlier 0003 + incorporated doc changes suggested by Jian\n0004 - fixes DecodeInterval()\n0005 - Refactored Jian's code fixing window functions. Does not\ncontain the changes for serialization and deserialization. Jian,\nplease let me know if I have missed anything else.\n\nIn my testing, I saw the timings for queries as below\nQuery A: 145.164 ms\nQuery B: 222.419 ms\n\nQuery C: 136.995 ms\nQuery D: 146.893 ms\n\nQuery E: 38.053 ms\nQuery F: 80.112 ms\n\nI didn't see degradation in case of sum().\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Wed, 20 Sep 2023 19:30:25 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 5:44 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 20 Sept 2023 at 13:09, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > So basically, do_interval_accum() could be simplified to:\n> >\n>\n> Oh, and I guess it also needs an INTERVAL_NOT_FINITE() check, to make\n> sure that finite values don't sum to our representation of infinity,\n> as in interval_pl().\n\nFixed in the latest patch set.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 20 Sep 2023 19:30:49 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 15:00, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> 0005 - Refactored Jian's code fixing window functions. Does not\n> contain the changes for serialization and deserialization. Jian,\n> please let me know if I have missed anything else.\n>\n\nThat looks a lot neater. One thing I don't care for is this code\npattern in finite_interval_pl():\n\n+ result->month = span1->month + span2->month;\n+ /* overflow check copied from int4pl */\n+ if (SAMESIGN(span1->month, span2->month) &&\n+ !SAMESIGN(result->month, span1->month))\n+ ereport(ERROR,\n\nThe problem is that this is a bug waiting to happen for anyone who\nuses this function with \"result\" pointing to the same Interval struct\nas \"span1\" or \"span2\". I understand that the current code avoids this\nby careful use of temporary Interval structs, but it's still a pretty\nugly pattern. This can be avoided by using pg_add_s32/64_overflow(),\nwhich then allows the callers to be simplified, getting rid of the\ntemporary Interval structs and memcpy()'s.\n\nAlso, in do_interval_discard(), this seems a bit risky:\n\n+ neg_val.day = -newval->day;\n+ neg_val.month = -newval->month;\n+ neg_val.time = -newval->time;\n\nbecause it could in theory turn a finite large negative interval into\nan infinite one (-INT_MAX -> INT_MAX), leading to an assertion failure\nin finite_interval_pl(). Now maybe that's not possible for some other\nreasons, but I think we may as well do the same refactoring for\ninterval_mi() as we're doing for interval_pl() -- i.e., introduce a\nfinite_interval_mi() function, making the addition and subtraction\ncode match, and removing the need for neg_val in\ndo_interval_discard().\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 20 Sep 2023 15:53:31 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "> On Wed, 20 Sept 2023 at 15:00, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > 0005 - Refactored Jian's code fixing window functions. Does not\n> > contain the changes for serialization and deserialization. Jian,\n> > please let me know if I have missed anything else.\n> >\n\nAttached are the serialization and deserialization functions.\n\n\n>\n> Also, in do_interval_discard(), this seems a bit risky:\n>\n> + neg_val.day = -newval->day;\n> + neg_val.month = -newval->month;\n> + neg_val.time = -newval->time;\n>\n\nWe already have an interval negate function, so I changed this to use\ninterval_um_internal(), based on the 20230920 patches. I have made the\nattached changes.\n\nSerialization does make a big difference when configured for parallel mode.",
"msg_date": "Thu, 21 Sep 2023 11:05:26 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 8:23 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 20 Sept 2023 at 15:00, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > 0005 - Refactored Jian's code fixing window functions. Does not\n> > contain the changes for serialization and deserialization. Jian,\n> > please let me know if I have missed anything else.\n> >\n>\n> That looks a lot neater. One thing I don't care for is this code\n> pattern in finite_interval_pl():\n>\n> + result->month = span1->month + span2->month;\n> + /* overflow check copied from int4pl */\n> + if (SAMESIGN(span1->month, span2->month) &&\n> + !SAMESIGN(result->month, span1->month))\n> + ereport(ERROR,\n>\n> The problem is that this is a bug waiting to happen for anyone who\n> uses this function with \"result\" pointing to the same Interval struct\n> as \"span1\" or \"span2\". I understand that the current code avoids this\n> by careful use of temporary Interval structs, but it's still a pretty\n> ugly pattern. This can be avoided by using pg_add_s32/64_overflow(),\n> which then allows the callers to be simplified, getting rid of the\n> temporary Interval structs and memcpy()'s.\n\nThat's a good idea. Done.\n\n>\n> Also, in do_interval_discard(), this seems a bit risky:\n>\n> + neg_val.day = -newval->day;\n> + neg_val.month = -newval->month;\n> + neg_val.time = -newval->time;\n>\n> because it could in theory turn a finite large negative interval into\n> an infinite one (-INT_MAX -> INT_MAX), leading to an assertion failure\n> in finite_interval_pl(). Now maybe that's not possible for some other\n> reasons, but I think we may as well do the same refactoring for\n> interval_mi() as we're doing for interval_pl() -- i.e., introduce a\n> finite_interval_mi() function, making the addition and subtraction\n> code match, and removing the need for neg_val in\n> do_interval_discard().\n\nYour suspicion is correct. It did throw an error. Added tests for the\nsame. 
Introduced finite_interval_mi() which uses\npg_sub_s32/s64_overflow() functions.\n\nI will send updated patches with my next reply.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 21 Sep 2023 19:07:04 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 8:35 AM jian he <jian.universality@gmail.com> wrote:\n>\n> > On Wed, 20 Sept 2023 at 15:00, Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > 0005 - Refactored Jian's code fixing window functions. Does not\n> > > contain the changes for serialization and deserialization. Jian,\n> > > please let me know if I have missed anything else.\n> > >\n>\n> attached serialization and deserialization function.\n>\n\nThanks. They look good. I have incorporated those in the attached patch set.\n\nOne thing I didn't understand though is the use of\nmakeIntervalAggState() in interval_avg_deserialize(). In all other\ndeserialization functions like numeric_avg_deserialize() we create the\nAgg State in CurrentMemoryContext but makeIntervalAggState() creates\nit in aggcontext. And it works. We could change the code to allocate\nagg state in aggcontext. Not a big change. But I did not find any\nexplanation as to why we use CurrentMemoryContext in other places.\nDean, do you have any idea?\n\n>\n> >\n> > Also, in do_interval_discard(), this seems a bit risky:\n> >\n> > + neg_val.day = -newval->day;\n> > + neg_val.month = -newval->month;\n> > + neg_val.time = -newval->time;\n> >\n>\n> we already have interval negate function, So I changed to interval_um_internal.\n> based on 20230920 patches. I have made the attached changes.\n\nI didn't use this since it still requires the neg_val variable, and the\nimplementation of finite interval subtraction would still differ\nbetween interval_mi() and do_interval_discard().\n\n>\n> The serialization do make big difference when configure to parallel mode.\n\nYes. On my machine queryE shows the following timings; that's a huge\nchange because of parallel query.\nwith the ser/deser functions: 112.193 ms\nwithout those functions: 272.759 ms.\n\nBefore the introduction of the internal IntervalAggState, there were no\nserialize/deserialize functions. I wonder how parallel query\nworked. 
Did it just use serialize/deserialize functions of _interval?\n\nThe attached patches are thus\n0001 - 0005 - same as the last patch set.\nDean, if you are fine with the changes in 0004, I would like to merge\nthat into 0003.\n\n0006 - uses pg_add/sub_s32/64_overflow functions in finite_interval_pl\nand also introduces finite_interval_mi as suggested by Dean.\n0007 - implements serialization and deserialization functions, but\nuses aggcontext for deser.\n\nOnce we are fine with the last three patches, they need to be merged into 0003.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 21 Sep 2023 19:21:30 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 7:21 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> One thing I didn't understand though is the use of\n> makeIntervalAggState() in interval_avg_deserialize(). In all other\n> deserialization functions like numeric_avg_deserialize() we create the\n> Agg State in CurrentMemoryContext but makeIntervalAggState() creates\n> it in aggcontext. And it works. We could change the code to allocate\n> agg state in aggcontext. Not a big change. But I did not find any\n> explanation as to why we use CurrentMemoryContext in other places.\n> Dean, do you have any idea?\n\nThe following code in ExecInterpExpr() makes it clear that the\ndeserialization function is executed in the per-tuple memory context,\nwhereas the aggregate's context is different from this context and may\nlive longer than the context in which deserialization is expected to\nhappen.\n\n/* evaluate aggregate deserialization function (non-strict portion) */\nEEO_CASE(EEOP_AGG_DESERIALIZE)\n{\nFunctionCallInfo fcinfo = op->d.agg_deserialize.fcinfo_data;\nAggState *aggstate = castNode(AggState, state->parent);\nMemoryContext oldContext;\n\n/*\n* We run the deserialization functions in per-input-tuple memory\n* context.\n*/\noldContext = MemoryContextSwitchTo(aggstate->tmpcontext->ecxt_per_tuple_memory);\nfcinfo->isnull = false;\n*op->resvalue = FunctionCallInvoke(fcinfo);\n*op->resnull = fcinfo->isnull;\nMemoryContextSwitchTo(oldContext);\n\nHence I have changed interval_avg_deserialize() in 0007 to use\nCurrentMemoryContext instead of aggcontext. The rest of the patches are\nthe same as in the previous set.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Fri, 22 Sep 2023 13:18:51 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, 22 Sept 2023 at 08:49, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> The following code in ExecInterpExpr() makes it clear that the\n> deserialization function is executed in the per-tuple memory context,\n> whereas the aggregate's context is different from this context and may\n> live longer than the context in which deserialization is expected to\n> happen.\n>\n\nRight. I was about to reply, saying much the same thing, but it's\nalways better when you see it for yourself.\n\n> Hence I have changed interval_avg_deserialize() in 0007 to use\n> CurrentMemoryContext instead of aggcontext.\n\n+1. And consistency with other deserialisation functions is good.\n\n> The rest of the patches are\n> the same as in the previous set.\n>\n\nOK, I'll take a look.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 22 Sep 2023 09:09:12 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 3:49 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Sep 21, 2023 at 7:21 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> Hence I have changed interval_avg_deserialize() in 0007 to use\n> CurrentMemoryContext instead of aggcontext. Rest of the patches are\n> same as previous set.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\nRegarding /* TODO: Handle NULL inputs? */:\nsince interval_avg_serialize() is strict, handling NULL would look like:\nif (PG_ARGISNULL(0))\n    PG_RETURN_NULL();\n\n\n",
"msg_date": "Fri, 22 Sep 2023 17:05:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Sep 22, 2023 at 2:35 PM jian he <jian.universality@gmail.com> wrote:\n>\n> /* TODO: Handle NULL inputs? */\n> since interval_avg_serialize is strict, so handle null would be like:\n> if (PG_ARGISNULL(0)) then PG_RETURN_NULL();\n\nThat's automatically taken care of by the executor. Functions need to\nhandle NULL inputs if they are *not* strict.\n\n#select proisstrict from pg_proc where proname = 'interval_avg_serialize';\n proisstrict\n-------------\n t\n(1 row)\n\n#select proisstrict from pg_proc where proname = 'interval_avg_deserialize';\n proisstrict\n-------------\n t\n(1 row)\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 22 Sep 2023 15:42:53 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, 22 Sept 2023 at 09:09, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 22 Sept 2023 at 08:49, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Rest of the patches are\n> > same as previous set.\n>\n> OK, I'll take a look.\n>\n\nI've been going over this in more detail, and I'm attaching my review\ncomments as an incremental patch to make it easier to see the changes.\n\nAside from some cosmetic stuff, I've made the following more\nsubstantive changes:\n\n1. I don't think it's really worth adding the\npg_mul_add_sNN_overflow() functions to int.h.\n\nI thought about this for a while, and it looks to me as though they're\nreally only of very limited general use, and even this patch only used\nthem in a couple of places.\n\nLooking around more widely, the majority of places that multiply then\nadd actually require a 4-argument function that computes result = a *\nb + c. But even then, such functions would offer no performance\nbenefit, and wouldn't really do much to improve code readability at\ncall sites.\n\nAnd there's another issue: someone using these functions might\nreasonably expect them to return true if the result overflows, but\nactually, as written, they also return true if the intermediate\nproduct overflows, which isn't necessarily what's expected / wanted.\n\nSo I think it's best to just drop these functions, getting rid of\n0001, and rewriting 0002 to just use the existing int.h functions.\nAfter a little more copy-editing, I think that actually makes the code\nin make_interval() more readable.\n\nI think that part is now ready to commit, and I plan to push this fix\nto make_interval() separately, since it's really a bug-fix, not\nrelated to support for infinite intervals. In line with recent\nprecedent, I don't think it's worth back-patching though, since such\ninputs are pretty unlikely in production.\n\n\n2. 
The various in_range() functions needed adjusting to handle\ninfinite interval offsets.\n\nFor timestamp values, I followed the precedent set by the equivalent\nfloat/numeric code. I.e., all (finite and non-finite) timestamps are\nregarded as infinitely following -infinity and infinitely preceding\n+infinity.\n\nFor time values, it's a bit different because no time values precede\nor follow any other by more than 24 hours, so a window frame between\n+inf following and +inf following is empty (whereas in the timestamp\ncase it contains +inf). Put another way, such a window frame is empty\nbecause a time value can't be infinity.\n\n\n3. I got rid of interval2timestamp_no_overflow() because I don't think\nit really makes much sense to convert an interval to a timestamp, and\nit's a bit of a hack anyway (as selfuncs.c itself admits). Actually, I\nthink it's OK to just leave selfuncs.c as it is. The existing code\nwill cope just fine with infinite intervals, since they aren't really\ninfinite, just larger than any others.\n\n\n4. I tested pg_upgrade on a table with an interval with INT_MAX\nmonths, and it was silently converted to infinity. I think that's\nprobably the best outcome (better than failing). However, this means\nthat we really should require all 3 fields of an interval to be\nINT_MIN/MAX for it to be considered infinite, otherwise it would be\npossible to have multiple internal representations of infinity that do\nnot compare as equal.\n\nSimilarly, interval_in() needs to accept such inputs, otherwise things\nlike pg_dump/restore from pre-17 databases could fail. 
But since it\nnow requires all 3 fields of the interval to be INT_MIN/MAX for it to\nbe infinite, the odds of that happening by accident are vanishingly\nsmall in practice.\n\nThis approach also means that the range of allowed finite intervals is\nonly reduced by 1 microsecond at each end of the range, rather than a\nwhole month.\n\nAlso, it means that it is no longer necessary to change a number of\nthe regression tests (such as the justify_interval() tests) for values\nnear INT_MIN/MAX.\n\n\nOverall, I think this is now pretty close to being ready for commit.\n\nRegards,\nDean",
"msg_date": "Fri, 29 Sep 2023 08:12:56 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Sep 29, 2023 at 12:43 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> I think that part is now ready to commit, and I plan to push this fix\n> to make_interval() separately, since it's really a bug-fix, not\n> related to support for infinite intervals. In line with recent\n> precedent, I don't think it's worth back-patching though, since such\n> inputs are pretty unlikely in production.\n\nThe changes look good to me. I am not a fan of goto construct. But\nthis looks nicer.\n\nI think we should introduce interval_out_of_range_error() function on\nthe lines of float_overflow_error(). Later patches introduce more\nplaces where we raise that error. We can introduce the function as\npart of those patches.\n\n>\n>\n> 2. The various in_range() functions needed adjusting to handle\n> infinite interval offsets.\n>\n> For timestamp values, I followed the precedent set by the equivalent\n> float/numeric code. I.e., all (finite and non-finite) timestamps are\n> regarded as infinitely following -infinity and infinitely preceding\n> +infinity.\n>\n> For time values, it's a bit different because no time values precede\n> or follow any other by more than 24 hours, so a window frame between\n> +inf following and +inf following is empty (whereas in the timestamp\n> case it contains +inf). Put another way, such a window frame is empty\n> because a time value can't be infinity.\n>\n\nI will review and test this. I will also take a look at what else we\nmight be missing in the patch. [5] did mention that in_range()\nfunctions need to be assessed but I don't see corresponding changes in\nthe subsequent patches. I will go over that list again.\n\n>\n> 3. I got rid of interval2timestamp_no_overflow() because I don't think\n> it really makes much sense to convert an interval to a timestamp, and\n> it's a bit of a hack anyway (as selfuncs.c itself admits). Actually, I\n> think it's OK to just leave selfuncs.c as it is. 
The existing code\n> will cope just fine with infinite intervals, since they aren't really\n> infinite, just larger than any others.\n>\n\nThis looks odd next to date2timestamp_no_overflow() which returns\n-DBL_MIN/DBL_MAX for infinite value. But it's in agreement with what\nwe do with timestamp i.e. we don't convert infinities to DBL_MIN/MAX.\nSo I am fine with just adding a comment, the way you have done it.\nDon't have much preference here.\n\n>\n> 4. I tested pg_upgrade on a table with an interval with INT_MAX\n> months, and it was silently converted to infinity. I think that's\n> probably the best outcome (better than failing).\n\n[1] mentions that Interval with month = INT_MAX is a valid finite\nvalue but out of documented range of interval [2]. The highest value\nof Interval = 178000000 (years) * 12 = 2136000000 months which is less\nthan (2^32 - 1). But we do not prohibit such a value from entering the\ndatabase, albeit very less probable.\n\n> However, this means\n> that we really should require all 3 fields of an interval to be\n> INT_MIN/MAX for it to be considered infinite, otherwise it would be\n> possible to have multiple internal representations of infinity that do\n> not compare as equal.\n>\n> Similarly, interval_in() needs to accept such inputs, otherwise things\n> like pg_dump/restore from pre-17 databases could fail. But since it\n> now requires all 3 fields of the interval to be INT_MIN/MAX for it to\n> be infinite, the odds of that happening by accident are vanishingly\n> small in practice.\n>\n> This approach also means that the range of allowed finite intervals is\n> only reduced by 1 microsecond at each end of the range, rather than a\n> whole month.\n>\n> Also, it means that it is no longer necessary to change a number of\n> the regression tests (such as the justify_interval() tests) for values\n> near INT_MIN/MAX.\n\nMy first patch was comparing all the three fields to determine whether\na given Interval value represents infinity. 
[3] changed that to use\nonly the month field. I guess that was based on the discussion at [4].\nYou may want to review that discussion if not already done. I am fine\neither way. We should be able to change the comparison code later if\nwe see performance getting impacted.\n\n>\n>\n> Overall, I think this is now pretty close to being ready for commit.\n\nThanks.\n\n[1] https://www.postgresql.org/message-id/CAAvxfHea4%2BsPybKK7agDYOMo9N-Z3J6ZXf3BOM79pFsFNcRjwA%40mail.gmail.com\n[2] Table 8.9 at\nhttps://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME\n[3] https://www.postgresql.org/message-id/CAAvxfHf0-T99i%3DOrve_xfonVCvsCuPy7C4avVm%3D%2Byu128ujSGg%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/26022.1545087636%40sss.pgh.pa.us\n[5] https://www.postgresql.org/message-id/CAAvxfHdzd5JLRBXDAW7OPhsNNACvhsCP3f5R4LNhRVaDuQG0gg%40mail.gmail.com\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 4 Oct 2023 18:59:16 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "Hi Dean,\n\nOn Wed, Oct 4, 2023 at 6:59 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> I think we should introduce interval_out_of_range_error() function on\n> the lines of float_overflow_error(). Later patches introduce more\n> places where we raise that error. We can introduce the function as\n> part of those patches.\n>\n\nNot done yet. The error is raised from multiple places and we might\nfind improving error message at some of those places. This refactoring\nwill make that work difficult. Let me know if you think otherwise.\n\n> >\n> >\n> > 2. The various in_range() functions needed adjusting to handle\n> > infinite interval offsets.\n> >\n> > For timestamp values, I followed the precedent set by the equivalent\n> > float/numeric code. I.e., all (finite and non-finite) timestamps are\n> > regarded as infinitely following -infinity and infinitely preceding\n> > +infinity.\n> >\n> > For time values, it's a bit different because no time values precede\n> > or follow any other by more than 24 hours, so a window frame between\n> > +inf following and +inf following is empty (whereas in the timestamp\n> > case it contains +inf). Put another way, such a window frame is empty\n> > because a time value can't be infinity.\n> >\n\nI think this code is reasonable. But users may find it inconsistent\nwith the other ways to achieve the same outcome. For example, We don't\nallow non-finite intervals to be added or subtracted from time or\ntimetz. So if someone tries to compare the rows using val >/<= base\n+/- offset, those queries will fail whereas similar implied conditions\nin window specification will not throw an error. If you have\nconsidered this already, I am fine with the code as is.\n\nThis code doesn't handle non-finite intervals explicitly. But that's\ninline with the interval comparison functions (interval_le/ge etc.)\nwhich rely on infinities being represented by extreme values.\n\n>\n> I will review and test this. 
I will also take a look at what else we\n> might be missing in the patch. [5] did mention that in_range()\n> functions need to be assessed but I don't see corresponding changes in\n> the subsequent patches. I will go over that list again.\n\nAdded a separate patch (0009) to fix\nbrin_minmax_multi_distance_interval(). The fix is inconsistent with\nthe way infinite timestamp and date is handled in that file. But I\nthink infinite timestamp and date handling itself is inconsistent with\nthe way infinite values of float are handled. I have tried to be\nconsistent with float. Maybe we should fix date and timestamp\nfunctions as well.\n\nI also changed brin_multi.sql to test infinite interval values in BRIN\nindex. This required some further changes to existing queries.\nI thought about combining these two INSERTs but decided against that\nsince we would lose NULL interval values.\n-- throw in some NULL's and different values\nINSERT INTO brintest_multi (inetcol, cidrcol) SELECT\ninet 'fe80::6e40:8ff:fea9:8c46' + tenthous,\ncidr 'fe80::6e40:8ff:fea9:8c46' + tenthous\nFROM tenk1 ORDER BY thousand, tenthous LIMIT 25;\n\nINSERT INTO brintest_multi(intervalcol) VALUES ('-infinity'), ('+infinity');\n\nI took some time to understand BRIN before making any changes.\n\nOn your patches.\nI like the eval() function you have used and its usage. It's a bit\nharder to understand it initially but makes the query and output\ncrisp. Saves some SQL and output lines too. I tried to use the same\ntrick for time and timetz:\nselect t as time, i as interval,\n eval(format('time %L + interval %L', t, i)) AS time_pl,\n eval(format('time %L - interval %L', t, i)) AS time_mi,\n eval(format('timetz %L + interval %L', t, i)) AS timetz_pl,\n eval(format('timetz %L - interval %L', t, i)) AS timetz_mi\n from (values ('11:27:42')) t1(t),\n (values ('infinity'),\n ('-infinity')) as t2(i);\nThe query and output take the same space. 
So I decided against using it.\n\nI have added a separate patch (0008) to test negative interval values,\nincluding -infinity, in preceding and following specification.\n\nPatches from 0001 to 0007 are same as what you attached but rebased on\nthe latest HEAD.\n\nI think we should squash 0002 to 0007.\n\n--\nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 10 Oct 2023 17:06:38 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 4 Oct 2023 at 14:29, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> I think we should introduce interval_out_of_range_error() function on\n> the lines of float_overflow_error(). Later patches introduce more\n> places where we raise that error. We can introduce the function as\n> part of those patches.\n>\n\nI'm not convinced that it is really worth it. Note also that even with\nthis patch, there are still more places that throw \"timestamp out of\nrange\" errors than \"interval out of range\" errors.\n\n> > 4. I tested pg_upgrade on a table with an interval with INT_MAX\n> > months, and it was silently converted to infinity. I think that's\n> > probably the best outcome (better than failing). However, this means\n> > that we really should require all 3 fields of an interval to be\n> > INT_MIN/MAX for it to be considered infinite, otherwise it would be\n> > possible to have multiple internal representations of infinity that do\n> > not compare as equal.\n> >\n> My first patch was comparing all the three fields to determine whether\n> a given Interval value represents infinity. [3] changed that to use\n> only the month field. I guess that was based on the discussion at [4].\n> You may want to review that discussion if not already done. I am fine\n> either way. We should be able to change the comparison code later if\n> we see performance getting impacted.\n>\n\nBefore looking at the details more closely, I might have agreed with\nthat earlier discussion. However, given that things like pg_upgrade\nhave the possibility of turning formerly allowed, finite intervals\ninto infinity, we really need to ensure that there is only one value\nequal to infinity, otherwise the results are likely to be very\nconfusing and counter-intuitive. That means that we have to continue\nto regard intervals like INT32_MAX months + 10 days as finite.\n\nWhile I haven't done any performance testing, I wouldn't expect this\nto have much impact. 
In a 64-bit build, this actually generates 2\ncomparisons rather than 3 -- one comparing the combined month and day\nfields against a 64-bit value containing 2 copies of INT32_MAX, and\none testing the time field. In practice, only the first test will be\nexecuted in the vast majority of cases.\n\n\nSomething that perhaps does need discussing is the fact that\n'2147483647 months 2147483647 days 9223372036854775807 usecs' is now\naccepted by interval_in() and gives infinity. That's a bit ugly, but I\nthink it's defensible as a measure to prevent dump/restore errors from\nolder databases, and in any case, such an interval is outside the\ndocumented range of supported intervals, and is a highly contrived\nexample, vanishingly improbable in practice.\n\nAlternatively, we could have interval_in() reject this, which would\nopen up the possibility of dump/restore errors. It could be argued\nthat that's OK, for similar reasons -- the failing value is highly\nunlikely/contrived, and out of the documented range. I don't like that\nthough. I don't think dump/restore should fail under any\ncircumstances, however unlikely.\n\nAnother alternative is to accept this input, but emit a WARNING. I\ndon't particularly like that either, since it's forcing a check on\nevery input value, just to cater for this one specific highly unlikely\ninput. In fact, both these alternative approaches (rejecting the\nvalue, or emitting a warning), would impose a small performance\npenalty on every interval input, which I don't think is really worth\nit.\n\nSo overall, my preference is to just accept it. Anything else is more\nwork, for no practical benefit.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 27 Oct 2023 09:37:23 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, 10 Oct 2023 at 12:36, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> > > 2. The various in_range() functions needed adjusting to handle\n> > > infinite interval offsets.\n> > >\n> I think this code is reasonable. But users may find it inconsistent\n> with the other ways to achieve the same outcome. For example, We don't\n> allow non-finite intervals to be added or subtracted from time or\n> timetz. So if someone tries to compare the rows using val >/<= base\n> +/- offset, those queries will fail whereas similar implied conditions\n> in window specification will not throw an error. If you have\n> considered this already, I am fine with the code as is.\n>\n\nIt's consistent with the documented contract of the in_range support\nfunctions. See https://www.postgresql.org/docs/current/btree-support-funcs.html\n\nIn particular, this part:\n\n An additional expectation is that in_range functions should, if\n practical, avoid throwing an error if base + offset or base - offset\n would overflow. The correct comparison result can be determined even\n if that value would be out of the data type's range. 
Note that if the\n data type includes concepts such as \"infinity\" or \"NaN\", extra\n care may be needed to ensure that in_range's results agree with the\n normal sort order of the operator family.\n\n> Added a separate patch (0009) to fix\n> brin_minmax_multi_distance_interval().\n>\n\nI think we can drop this from this thread now, given the discussion\nover on the other thread\n(https://www.postgresql.org/message-id/eef0ea8c-4aaa-8d0d-027f-58b1f35dd170%40enterprisedb.com)\n\n> I have added a separate patch (0008) to test negative interval values,\n> including -infinity, in preceding and following specification.\n>\n> Patches from 0001 to 0007 are same as what you attached but rebased on\n> the latest HEAD.\n>\n\nI'm attaching another update, with a minor change to the aggregate\ndeserialization function, in line with the recent change to how these\nnow work elsewhere (see 0c882a298881056176a27ccc44c5c3bb7c8f308c).\n\n0008 seems reasonable. I have added some comments to indicate that\nthose tests are expected to fail, and why.\n\n> I think we should squash 0002 to 0007.\n>\n\nYes, let's do that with the next update. In fact, we may as well\nsquash 0002 to 0008.\n\nRegards,\nDean",
"msg_date": "Fri, 27 Oct 2023 09:38:29 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 2:07 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 4 Oct 2023 at 14:29, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > I think we should introduce interval_out_of_range_error() function on\n> > the lines of float_overflow_error(). Later patches introduce more\n> > places where we raise that error. We can introduce the function as\n> > part of those patches.\n> >\n>\n> I'm not convinced that it is really worth it. Note also that even with\n> this patch, there are still more places that throw \"timestamp out of\n> range\" errors than \"interval out of range\" errors.\n\nFine with me.\n\n>\n> > > 4. I tested pg_upgrade on a table with an interval with INT_MAX\n> > > months, and it was silently converted to infinity. I think that's\n> > > probably the best outcome (better than failing). However, this means\n> > > that we really should require all 3 fields of an interval to be\n> > > INT_MIN/MAX for it to be considered infinite, otherwise it would be\n> > > possible to have multiple internal representations of infinity that do\n> > > not compare as equal.\n> > >\n> > My first patch was comparing all the three fields to determine whether\n> > a given Interval value represents infinity. [3] changed that to use\n> > only the month field. I guess that was based on the discussion at [4].\n> > You may want to review that discussion if not already done. I am fine\n> > either way. We should be able to change the comparison code later if\n> > we see performance getting impacted.\n> >\n>\n> Before looking at the details more closely, I might have agreed with\n> that earlier discussion. However, given that things like pg_upgrade\n> have the possibility of turning formerly allowed, finite intervals\n> into infinity, we really need to ensure that there is only one value\n> equal to infinity, otherwise the results are likely to be very\n> confusing and counter-intuitive. 
That means that we have to continue\n> to regard intervals like INT32_MAX months + 10 days as finite.\n>\n> While I haven't done any performance testing, I wouldn't expect this\n> to have much impact. In a 64-bit build, this actually generates 2\n> comparisons rather than 3 -- one comparing the combined month and day\n> fields against a 64-bit value containing 2 copies of INT32_MAX, and\n> one testing the time field. In practice, only the first test will be\n> executed in the vast majority of cases.\n>\n\nThanks for the analysis.\n\n>\n> Something that perhaps does need discussing is the fact that\n> '2147483647 months 2147483647 days 9223372036854775807 usecs' is now\n> accepted by interval_in() and gives infinity. That's a bit ugly, but I\n> think it's defensible as a measure to prevent dump/restore errors from\n> older databases, and in any case, such an interval is outside the\n> documented range of supported intervals, and is a highly contrived\n> example, vanishingly improbable in practice.\n\nAgreed.\n\n>\n> Alternatively, we could have interval_in() reject this, which would\n> open up the possibility of dump/restore errors. It could be argued\n> that that's OK, for similar reasons -- the failing value is highly\n> unlikely/contrived, and out of the documented range. I don't like that\n> though. I don't think dump/restore should fail under any\n> circumstances, however unlikely.\n\nI agree that dump/restore shouldn't fail, especially when restore on\none major version succeeds and fails on another.\n\n>\n> Another alternative is to accept this input, but emit a WARNING. I\n> don't particularly like that either, since it's forcing a check on\n> every input value, just to cater for this one specific highly unlikely\n> input. 
In fact, both these alternative approaches (rejecting the\n> value, or emitting a warning), would impose a small performance\n> penalty on every interval input, which I don't think is really worth\n> it.\n\nAgreed.\n\n>\n> So overall, my preference is to just accept it. Anything else is more\n> work, for no practical benefit.\n>\n\nOk.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 27 Oct 2023 17:39:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Fri, 27 Oct 2023 at 09:38, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 10 Oct 2023 at 12:36, Ashutosh Bapat\n>\n> > I think we should squash 0002 to 0007.\n>\n> Yes, let's do that with the next update. In fact, we may as well\n> squash 0002 to 0008.\n>\n\nI have pushed 0001. Here is 0002-0008, squashed down to one commit,\nplus the change discussed to use INTERVAL_NOBEGIN() in the btree_gin\ncode.\n\nIt could use another read-through, and then I think it will be ready for commit.\n\nRegards,\nDean",
"msg_date": "Sun, 29 Oct 2023 16:39:31 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Sun, Oct 29, 2023 at 10:09 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 27 Oct 2023 at 09:38, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > On Tue, 10 Oct 2023 at 12:36, Ashutosh Bapat\n> >\n> > > I think we should squash 0002 to 0007.\n> >\n> > Yes, let's do that with the next update. In fact, we may as well\n> > squash 0002 to 0008.\n> >\n>\n> I have pushed 0001. Here is 0002-0008, squashed down to one commit,\n> plus the change discussed to use INTERVAL_NOBEGIN() in the btree_gin\n> code.\n\nThanks. I had to leave this halfway on Friday because of severe tooth ache.\n\nI was actually working on the test part of 0009. Though the code\nchanges are not required, I think it's better to have a test case\nalong the same lines as the tests added by Tomas (now committed in\n8da86d62a11269e926765c0d6ef6f532b2b8b749). I have attached 0002 for\nthe same along with your v28.\n\n>\n> It could use another read-through, and then I think it will be ready for commit.\n>\nThanks. I went through the whole patch again and am quite fine with it.\n\nHere's my version of commit message\n\n```\nSupport Infinite interval values\n\nInterval datatype uses the same input and output representation for\ninfinite intervals as other datatypes representing time that support\ninfinity. An interval larger than any other interval is represented by\nstring literal 'infinity' or '+infinity'. An interval which is smaller\nthan any other interval is represented as '-infinity'. Internally\npositive infinity is represented as maximum values supported by all\nthe member types of Interval datastructure and negative infinity is\nrepresented as minimum values set to all the members. INTERVAL_NOBEGIN\nand INTERVAL_NOEND macros can be used to set an Interval structure to\nnegative and positive infinity respectively. 
INTERVAL_IS_NOBEGIN and\nINTERVAL_IS_NOEND macros are used to test respective values.\nINTERVAL_NOT_FINITE macro is used to test whether a given Interval\nvalue is infinite.\n\nImplementation of all known operators now handles infinite interval\nvalues along with operations related to BRIN index, windowing and\nselectivity. Regression tests are added to test these implementation.\n\nIf a user has stored interval values '-2147483648 months -2147483648\ndays -9223372036854775807 us' and '2147483647 months 2147483647 days\n9223372036854775806 us' in PostgreSQL versions 16 or earlier. Those\nvalues will turn into '-infinity' and 'infinity' respectively after\nupgrading to v17. These values are outside the documented range\nsupported by interval datatype and thus there's almost no possibility\nof this occurrence. But it will be good to watch for these values\nduring upgrade.\n```\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Mon, 30 Oct 2023 15:30:56 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 6:01 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n>\n> Here's my version of commit message\n>\n> ```\n> Support Infinite interval values\n>\n> Interval datatype uses the same input and output representation for\n> infinite intervals as other datatypes representing time that support\n> infinity. An interval larger than any other interval is represented by\n> string literal 'infinity' or '+infinity'. An interval which is smaller\n> than any other interval is represented as '-infinity'. Internally\n> positive infinity is represented as maximum values supported by all\n> the member types of Interval datastructure and negative infinity is\n> represented as minimum values set to all the members. INTERVAL_NOBEGIN\n> and INTERVAL_NOEND macros can be used to set an Interval structure to\n> negative and positive infinity respectively. INTERVAL_IS_NOBEGIN and\n> INTERVAL_IS_NOEND macros are used to test respective values.\n> INTERVAL_NOT_FINITE macro is used to test whether a given Interval\n> value is infinite.\n>\n> Implementation of all known operators now handles infinite interval\n> values along with operations related to BRIN index, windowing and\n> selectivity. Regression tests are added to test these implementation.\n>\n> If a user has stored interval values '-2147483648 months -2147483648\n> days -9223372036854775807 us' and '2147483647 months 2147483647 days\n> 9223372036854775806 us' in PostgreSQL versions 16 or earlier. Those\n> values will turn into '-infinity' and 'infinity' respectively after\n> upgrading to v17. These values are outside the documented range\n> supported by interval datatype and thus there's almost no possibility\n> of this occurrence. But it will be good to watch for these values\n> during upgrade.\n> ```\n>\nthe message is plain enough. I can understand it. thanks!\n\n\n",
"msg_date": "Tue, 31 Oct 2023 08:00:00 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, 30 Oct 2023 at 10:01, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Thanks. I went through the whole patch again and am quite fine with it.\n>\n> Here's my version of commit message\n>\n\nGoing over this again, I noticed a pre-existing integer overflow\nproblem in interval_time(), which patch 0001 attached fixes.\n\nI squashed the other 2 patches (main patch + new BRIN tests) together\ninto 0002, and did another copy-editing pass over it. I found one\nother issue, which is that overflow checking in interval_um() had gone\nmissing, and there didn't seem to be any regression test coverage for\nthat, so I added some.\n\nI also changed the error message in interval_time to \"cannot convert\ninfinite interval to time\", which is slightly more informative, and\nmore consistent with the nearby error messages in time_pl_interval()\nand time_mi_interval().\n\nFinally, I rewrote the commit message in slightly higher-level terms,\nbut that's really up to the committer to decide on.\n\nI'm marking this as ready-for-committer. I'll probably pick it up\nmyself in a few days, unless another committer claims it first.\n\nRegards,\nDean",
"msg_date": "Mon, 6 Nov 2023 17:08:56 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Mon, 6 Nov 2023 at 17:08, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> I'm marking this as ready-for-committer. I'll probably pick it up\n> myself in a few days, unless another committer claims it first.\n>\n\nAh, it seems that one of this patch's new OIDs conflicts with a recent\ncommit. The best way to avoid that (or at least make it much less\nlikely) is by using the suggestion at the end of the unused_oids\nscript output, which is a random value in the 8000-9999 range.\n\nNew version attached doing that, to run it past the cfbot again.\n\nRegards,\nDean",
"msg_date": "Tue, 7 Nov 2023 14:33:55 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, 7 Nov 2023 at 14:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> New version attached doing that, to run it past the cfbot again.\n>\n\nAh, Windows Server didn't like that. Trying again with \"INT64CONST(0)\"\ninstead of just \"0\" in interval_um().\n\nRegards,\nDean",
"msg_date": "Tue, 7 Nov 2023 23:42:13 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 7:42 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 7 Nov 2023 at 14:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > New version attached doing that, to run it past the cfbot again.\n> >\n>\n> Ah, Windows Server didn't like that. Trying again with \"INT64CONST(0)\"\n> instead of just \"0\" in interval_um().\n>\n> Regards,\n> Dean\n\nI found this:\nhttps://developercommunity.visualstudio.com/t/please-implement-integer-overflow-detection/409051\nmaybe related.\n\n\n",
"msg_date": "Wed, 8 Nov 2023 14:56:09 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, 8 Nov 2023 at 06:56, jian he <jian.universality@gmail.com> wrote:\n>\n> > On Tue, 7 Nov 2023 at 14:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > Ah, Windows Server didn't like that. Trying again with \"INT64CONST(0)\"\n> > instead of just \"0\" in interval_um().\n> >\n> I found this:\n> https://developercommunity.visualstudio.com/t/please-implement-integer-overflow-detection/409051\n> maybe related.\n\nHmm, actually, this has revealed a bug in our 64-bit integer\nsubtraction code, on platforms that don't have builtins or 128-bit\ninteger support. I have created a new thread for that, since it's\nnothing to do with this patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 8 Nov 2023 12:02:22 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Wed, Nov 8, 2023 at 5:32 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Wed, 8 Nov 2023 at 06:56, jian he <jian.universality@gmail.com> wrote:\n> >\n> > > On Tue, 7 Nov 2023 at 14:33, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > >\n> > > Ah, Windows Server didn't like that. Trying again with \"INT64CONST(0)\"\n> > > instead of just \"0\" in interval_um().\n> > >\n> > I found this:\n> > https://developercommunity.visualstudio.com/t/please-implement-integer-overflow-detection/409051\n> > maybe related.\n>\n> Hmm, actually, this has revealed a bug in our 64-bit integer\n> subtraction code, on platforms that don't have builtins or 128-bit\n> integer support. I have created a new thread for that, since it's\n> nothing to do with this patch.\n\nJust to test whether that bug fix also fixes the failure seen with\nthis patchset, I am attaching the patchset including the patch with\nthe fix.\n\n0001 - fix in other thread\n0002 and 0003 are 0001 and 0002 in the previous patch set.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 9 Nov 2023 12:45:01 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 07:15, Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Just to test whether that bug fix also fixes the failure seen with\n> this patchset, I am attaching the patchset including the patch with\n> the fix.\n>\n> 0001 - fix in other thread\n> 0002 and 0003 are 0001 and 0002 in the previous patch set.\n>\n\nThanks. That's confirmed, it has indeed turned green!\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 9 Nov 2023 08:37:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 08:37, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 9 Nov 2023 at 07:15, Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Just to test whether that bug fix also fixes the failure seen with\n> > this patchset, I am attaching the patchset including the patch with\n> > the fix.\n> >\n> > 0001 - fix in other thread\n> > 0002 and 0003 are 0001 and 0002 in the previous patch set.\n>\n> Thanks. That's confirmed, it has indeed turned green!\n>\n\nOK, I have pushed 0001 and 0002. Here's the remaining (main) patch.\n\nI couldn't resist making one more cosmetic change -- I moved\nfinite_interval_pl() and finite_interval_mi() up to where they're\nfirst used, which is where that code was originally, making it\nslightly easier to compare old-vs-new code side-by-side, and because I\nthink that's the more natural place for them.\n\nRegards,\nDean",
"msg_date": "Thu, 9 Nov 2023 12:49:53 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, 9 Nov 2023 at 12:49, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> OK, I have pushed 0001 and 0002. Here's the remaining (main) patch.\n>\n\nOK, I have now pushed the main patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 14 Nov 2023 11:09:41 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Tue, Nov 14, 2023 at 4:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 9 Nov 2023 at 12:49, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > OK, I have pushed 0001 and 0002. Here's the remaining (main) patch.\n> >\n>\n> OK, I have now pushed the main patch.\n\nThanks a lot Dean.\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 16 Nov 2023 12:32:53 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Infinite Interval"
},
{
"msg_contents": "On Thu, Nov 16, 2023 at 2:03 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n>\n> On Tue, Nov 14, 2023 at 4:39 PM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n> >\n> > On Thu, 9 Nov 2023 at 12:49, Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n> > >\n> > > OK, I have pushed 0001 and 0002. Here's the remaining (main) patch.\n> > >\n> >\n> > OK, I have now pushed the main patch.\n>\n> Thanks a lot Dean.\n\nYes, thanks Dean!\n\nOn Thu, Nov 16, 2023 at 2:03 AM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:>> On Tue, Nov 14, 2023 at 4:39 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:> >> > On Thu, 9 Nov 2023 at 12:49, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:> > >> > > OK, I have pushed 0001 and 0002. Here's the remaining (main) patch.> > >> >> > OK, I have now pushed the main patch.>> Thanks a lot Dean.Yes, thanks Dean!",
"msg_date": "Sat, 18 Nov 2023 11:53:47 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Infinite Interval"
}
] |
[
{
"msg_contents": "While browsing through varsup.c, I noticed this comment on GetNewObjectId:\n\n* Hence, this routine should generally not be used directly. The only direct\n* callers should be GetNewOidWithIndex() and GetNewRelFileNumber() in\n* catalog/catalog.c.\n\nBut AddRoleMems in user.c appears to also call the function directly:\n\n/* get an OID for the new row and insert it */\nobjectId = GetNewObjectId();\nnew_record[Anum_pg_auth_members_oid - 1] = objectId;\ntuple = heap_form_tuple(pg_authmem_dsc,\n new_record, new_record_nulls);\nCatalogTupleInsert(pg_authmem_rel, tuple);\n\nI'm not sure if that call is right, but this seems inconsistent.\nShould that caller be using GetNewOidWithIndex instead? Or should the\ncomment be updated?\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Sat, 10 Dec 2022 15:17:22 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": true,
"msg_subject": "GetNewObjectId question"
},
{
"msg_contents": "Maciek Sakrejda <m.sakrejda@gmail.com> writes:\n> While browsing through varsup.c, I noticed this comment on GetNewObjectId:\n> * Hence, this routine should generally not be used directly. The only direct\n> * callers should be GetNewOidWithIndex() and GetNewRelFileNumber() in\n> * catalog/catalog.c.\n\n> But AddRoleMems in user.c appears to also call the function directly:\n\n> /* get an OID for the new row and insert it */\n> objectId = GetNewObjectId();\n\nYeah, that looks like somebody didn't read the memo.\nWant to submit a patch?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 10 Dec 2022 19:11:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GetNewObjectId question"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 07:11:13PM -0500, Tom Lane wrote:\n> Yeah, that looks like somebody didn't read the memo.\n> Want to submit a patch?\n\nThe comment has been added in e3ce2de but the call originates from\n6566133, so that's a HEAD-only issue.\n--\nMichael",
"msg_date": "Sun, 11 Dec 2022 10:31:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GetNewObjectId question"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Dec 10, 2022 at 07:11:13PM -0500, Tom Lane wrote:\n>> Yeah, that looks like somebody didn't read the memo.\n>> Want to submit a patch?\n\n> The comment has been added in e3ce2de but the call originates from\n> 6566133, so that's a HEAD-only issue.\n\nAh, good that the bug hasn't made it to a released version yet.\nBut that comment is *way* older than e3ce2de; it's been touched\na couple of times, but it dates to 721e53785 AFAICS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 11 Dec 2022 01:12:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GetNewObjectId question"
},
{
"msg_contents": "Sure. My C is pretty limited, but I think it's just the attached? I\npatterned the usage on the way this is done in CreateRole. It passes\ncheck-world here.",
"msg_date": "Sat, 10 Dec 2022 23:03:38 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GetNewObjectId question"
},
{
"msg_contents": "On Sat, Dec 10, 2022 at 11:03:38PM -0800, Maciek Sakrejda wrote:\n> Sure. My C is pretty limited, but I think it's just the attached? I\n> patterned the usage on the way this is done in CreateRole. It passes\n> check-world here.\n\nLooks OK seen from here. Thanks for the patch! I don't have much\nfreshness and time today, but I can get back to this thread tomorrow.\n--\nMichael",
"msg_date": "Sun, 11 Dec 2022 16:20:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GetNewObjectId question"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 04:20:17PM +0900, Michael Paquier wrote:\n> Looks OK seen from here. Thanks for the patch! I don't have much\n> freshness and time today, but I can get back to this thread tomorrow.\n\nApplied as of eae7fe4.\n--\nMichael",
"msg_date": "Mon, 12 Dec 2022 09:31:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: GetNewObjectId question"
}
] |
[
{
"msg_contents": "Hi all,\n\nAttached is a patch to fix a parsing error for date-time types that\nallow dangling units in the input. For example,\n`date '1995-08-06 m y d'` was considered a valid date and the dangling\nunits were ignored.\n\nIntervals also suffer from a similar issue, but the attached patch\ndoesn't fix that issue. For example,\n`interval '1 day second month 6 hours days years ago'` is parsed as a\nvalid interval with -1 days and -6 hours. I'm hoping to fix that in a\nlater patch, but it will likely be more complicated than the other\ndate-time fixes.\n\n- Joe Koshakow",
"msg_date": "Sun, 11 Dec 2022 10:29:23 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Date-Time dangling unit fix"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 10:29 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> Hi all,\n>\n> Attached is a patch to fix a parsing error for date-time types that\n> allow dangling units in the input. For example,\n> `date '1995-08-06 m y d'` was considered a valid date and the dangling\n> units were ignored.\n>\n> Intervals also suffer from a similar issue, but the attached patch\n> doesn't fix that issue. For example,\n> `interval '1 day second month 6 hours days years ago'` is parsed as a\n> valid interval with -1 days and -6 hours. I'm hoping to fix that in a\n> later patch, but it will likely be more complicated than the other\n> date-time fixes.\n>\n> - Joe Koshakow\n\nI think I sent that to the wrong email address.",
"msg_date": "Sun, 11 Dec 2022 10:41:27 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "I just found another class of this bug that the submitted patch does\nnot fix. If the units are at the beginning of the string, then they are\nalso ignored. For example, `date 'm d y2020m11d3'` is also valid. I\nthink the fix here is to check and make sure that ptype is 0 before\nreassigning the value to a non-zero number. I'll send an updated patch\nwith this tonight.\n\nI just found another class of this bug that the submitted patch doesnot fix. If the units are at the beginning of the string, then they arealso ignored. For example, `date 'm d y2020m11d3'` is also valid. Ithink the fix here is to check and make sure that ptype is 0 beforereassigning the value to a non-zero number. I'll send an updated patchwith this tonight.",
"msg_date": "Mon, 12 Dec 2022 10:55:36 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 10:55 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> I just found another class of this bug that the submitted patch does\n> not fix. If the units are at the beginning of the string, then they are\n> also ignored. For example, `date 'm d y2020m11d3'` is also valid. I\n> think the fix here is to check and make sure that ptype is 0 before\n> reassigning the value to a non-zero number. I'll send an updated patch\n> with this tonight.\n\nAttached is the described patch.\n\n- Joe Koshakow",
"msg_date": "Mon, 12 Dec 2022 19:11:16 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Mon, Dec 12, 2022 at 10:55 AM Joseph Koshakow <koshy44@gmail.com> wrote:\n>> I just found another class of this bug that the submitted patch does\n>> not fix. If the units are at the beginning of the string, then they are\n>> also ignored. For example, `date 'm d y2020m11d3'` is also valid. I\n>> think the fix here is to check and make sure that ptype is 0 before\n>> reassigning the value to a non-zero number. I'll send an updated patch\n>> with this tonight.\n\n> Attached is the described patch.\n\nI started to look at this, and soon noticed that while we have test cases\nmatching this sort of date input, there is no documentation for it. The\ncode claims it's an \"ISO\" (presumably ISO 8601) format, and maybe it is\nbecause it looks a lot like the ISO 8601 format for intervals (durations).\nBut I don't have a copy of ISO 8601, and some googling fails to find any\nindication that anybody else believes this is a valid datetime format.\nWikipedia for example documents a lot of variants of ISO 8601 [1],\nbut nothing that looks like this.\n\nI wonder if we should just rip this code out instead of fixing it.\nI suspect its real-world usage is not different from zero. We'd\nhave to keep the \"Jnnn\" Julian-date case, though, so maybe there's\nlittle to be saved.\n\nIf we do keep it, there's documentation work to be done. But the\nfirst bit of doco I'd want to see is a pointer to a standard.\n\n\t\t\tregards, tom lane\n\n[1] https://en.wikipedia.org/wiki/ISO_8601\n\n\n",
"msg_date": "Sat, 04 Mar 2023 16:05:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I started to look at this, and soon noticed that while we have test\ncases\n> matching this sort of date input, there is no documentation for it.\nThe\n> code claims it's an \"ISO\" (presumably ISO 8601) format, and maybe it is\n> because it looks a lot like the ISO 8601 format for intervals\n(durations).\n> But I don't have a copy of ISO 8601, and some googling fails to find\nany\n> indication that anybody else believes this is a valid datetime format.\n> Wikipedia for example documents a lot of variants of ISO 8601 [1],\n> but nothing that looks like this.\n>\n> I wonder if we should just rip this code out instead of fixing it.\n> I suspect its real-world usage is not different from zero. We'd\n> have to keep the \"Jnnn\" Julian-date case, though, so maybe there's\n> little to be saved.\n>\n> If we do keep it, there's documentation work to be done. But the\n> first bit of doco I'd want to see is a pointer to a standard.\n\nI also don't have a copy of ISO 8601 and wasn't able to find anything\nabout this variant on Google. I did find this comment in datetime.c\n\n/*\n* Was this an \"ISO date\" with embedded field labels? An\n* example is \"y2001m02d04\" - thomas 2001-02-04\n*/\n\nwhich comes from this commit [1], which was authored by Thomas Lockhart\n(presumably the same thomas from the comment). I've CC'ed Thomas in\ncase the email still exists and they happen to remember. The commit\nmessage mentions ISO, but not the variant mentioned in the comment.\nThe mailing list thread can be found here [2], but it doesn't provide\nmuch more information. I also found the following thread [3], which\nhappens to have you in it in case you remember it, which seemed to be\nthe motivation for commit [1]. 
It only contains the following line\nabout ISO:\n\n> o support for \"ISO variants\" on input, including embedded \"T\" preceeding\nthe time fields\n\nAll that seems to imply the \"y2001m02d04\" ISO variant was never really\ndiscussed in much detail and it's probably fine to remove it. Though,\nit has been around for 22 years which makes it a bit scary to remove.\n\n- Joe Koshakow\n\n[1]\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6f58115dddfa8ca63004c4784f57ef660422861d\n[2]\nhttps://www.postgresql.org/message-id/flat/3BB433D5.3CB4164E%40fourpalms.org\n[3]\nhttps://www.postgresql.org/message-id/flat/3B970FF8.B9990807%40fourpalms.org#c57d83c80d295bfa19887c92122369c3\n\nOn Sat, Mar 4, 2023 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> I started to look at this, and soon noticed that while we have test cases> matching this sort of date input, there is no documentation for it. The> code claims it's an \"ISO\" (presumably ISO 8601) format, and maybe it is> because it looks a lot like the ISO 8601 format for intervals (durations).> But I don't have a copy of ISO 8601, and some googling fails to find any> indication that anybody else believes this is a valid datetime format.> Wikipedia for example documents a lot of variants of ISO 8601 [1],> but nothing that looks like this.>> I wonder if we should just rip this code out instead of fixing it.> I suspect its real-world usage is not different from zero. We'd> have to keep the \"Jnnn\" Julian-date case, though, so maybe there's> little to be saved.>> If we do keep it, there's documentation work to be done. But the> first bit of doco I'd want to see is a pointer to a standard.I also don't have a copy of ISO 8601 and wasn't able to find anythingabout this variant on Google. I did find this comment in datetime.c\t/*\t * Was this an \"ISO date\" with embedded field labels? 
An\t * example is \"y2001m02d04\" - thomas 2001-02-04\t */which comes from this commit [1], which was authored by Thomas Lockhart(presumably the same thomas from the comment). I've CC'ed Thomas incase the email still exists and they happen to remember. The commitmessage mentions ISO, but not the variant mentioned in the comment.The mailing list thread can be found here [2], but it doesn't providemuch more information. I also found the following thread [3], whichhappens to have you in it in case you remember it, which seemed to bethe motivation for commit [1]. It only contains the following lineabout ISO:> o support for \"ISO variants\" on input, including embedded \"T\" preceedingthe time fieldsAll that seems to imply the \"y2001m02d04\" ISO variant was never reallydiscussed in much detail and it's probably fine to remove it. Though,it has been around for 22 years which makes it a bit scary to remove.- Joe Koshakow[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6f58115dddfa8ca63004c4784f57ef660422861d[2] https://www.postgresql.org/message-id/flat/3BB433D5.3CB4164E%40fourpalms.org[3] https://www.postgresql.org/message-id/flat/3B970FF8.B9990807%40fourpalms.org#c57d83c80d295bfa19887c92122369c3",
"msg_date": "Sat, 4 Mar 2023 18:31:36 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Hello,\n\n05.03.2023 02:31, Joseph Koshakow wrote:\n> I also don't have a copy of ISO 8601 and wasn't able to find anything\n> about this variant on Google. I did find this comment in datetime.c\n>\n> /*\n> * Was this an \"ISO date\" with embedded field labels? An\n> * example is \"y2001m02d04\" - thomas 2001-02-04\n> */\n>\n> which comes from this commit [1], which was authored by Thomas Lockhart\n> (presumably the same thomas from the comment).\n\nI've also seen another interesting comment in datetime.c:\n /*\n * Was this an \"ISO time\" with embedded field labels? An\n * example is \"h04mm05s06\" - thomas 2001-02-04\n */\nIn fact,\nSELECT time 'h04mm05s06';\ndoesn't work for many years, but\nSELECT time 'h04mm05s06.0';\nstill does.\n\nI've just found that I mentioned it some time ago:\nhttps://www.postgresql.org/message-id/dff75442-2468-f74f-568c-6006e141062f%40gmail.com\n\nBest regards,\nAlexander\n\n\n\n\n\nHello,\n\n 05.03.2023 02:31, Joseph Koshakow wrote:\n\n\n\nI also don't have a copy of ISO 8601 and wasn't\n able to find anything\n about this variant on Google. I did find this comment in\n datetime.c\n\n /*\n * Was this an \"ISO date\" with embedded field labels? An\n * example is \"y2001m02d04\" - thomas 2001-02-04\n */\n\n which comes from this commit [1], which was authored by Thomas\n Lockhart\n (presumably the same thomas from the comment).\n\n\n\n I've also seen another interesting comment in datetime.c:\n /*\n * Was this an \"ISO time\" with embedded field\n labels? An\n * example is \"h04mm05s06\" - thomas 2001-02-04\n */\n In fact,\n SELECT time 'h04mm05s06';\n doesn't work for many years, but\n SELECT time 'h04mm05s06.0';\n still does.\n\n I've just found that I mentioned it some time ago:\nhttps://www.postgresql.org/message-id/dff75442-2468-f74f-568c-6006e141062f%40gmail.com\n\n Best regards,\n Alexander",
"msg_date": "Sun, 5 Mar 2023 17:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Attached is a patch for removing the discussed format of date-times.",
"msg_date": "Sun, 5 Mar 2023 11:39:58 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "[ I removed Lockhart, because he's taken no part in Postgres work for\n more than twenty years; if that address even still works, you're\n just bugging him ]\n\nAlexander Lakhin <exclusion@gmail.com> writes:\n> In fact,\n> SELECT time 'h04mm05s06';\n> doesn't work for many years, but\n> SELECT time 'h04mm05s06.0';\n> still does.\n\nI traced that down to this in DecodeTimeOnly:\n\n\tif ((fmask & DTK_TIME_M) != DTK_TIME_M)\n\t\treturn DTERR_BAD_FORMAT;\n\nwhere we have\n\n#define DTK_ALL_SECS_M\t(DTK_M(SECOND) | DTK_M(MILLISECOND) | DTK_M(MICROSECOND))\n#define DTK_TIME_M\t(DTK_M(HOUR) | DTK_M(MINUTE) | DTK_ALL_SECS_M)\n\nSo in other words, this test insists on seeing hour, minute, second,\n*and* fractional-second fields. That seems obviously too picky.\nIt might not matter if we rip out this syntax, but I see other similar\ntests so I suspect some of them will still be reachable.\n\nPersonally I'd say that hh:mm is a plenty complete enough time, and\nwhether you write seconds is optional, let alone fractional seconds.\nWe do accept this:\n\n=> select '12:34'::time;\n time \n----------\n 12:34:00\n(1 row)\n\nso that must be going through a different code path, which I didn't\ntry to identify yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Mar 2023 12:54:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "On Sun, Mar 5, 2023 at 12:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> We do accept this:\n>\n> => select '12:34'::time;\n> time\n> ----------\n> 12:34:00\n> (1 row)\n>\n> so that must be going through a different code path, which I didn't\n> try to identify yet.\n\nThat query will contain a single field of \"12:34\" with ftype DTK_TIME.\nThat will call into DecodeTime(), which calls into DecodeTimeCommon(),\nwhere we have:\n\n*tmask = DTK_TIME_M;\n\n- Joe Koshakow\n\nOn Sun, Mar 5, 2023 at 12:54 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> We do accept this:>> => select '12:34'::time;> time > ----------> 12:34:00> (1 row)>> so that must be going through a different code path, which I didn't> try to identify yet.That query will contain a single field of \"12:34\" with ftype DTK_TIME.That will call into DecodeTime(), which calls into DecodeTimeCommon(),where we have:*tmask = DTK_TIME_M;- Joe Koshakow",
"msg_date": "Sun, 5 Mar 2023 16:10:30 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Also I removed some dead code from the previous patch.\n\n- Joe Koshakow",
"msg_date": "Sun, 5 Mar 2023 16:14:48 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> Also I removed some dead code from the previous patch.\n\nThis needs a rebase over bcc704b52, so I did that and made a\ncouple of other adjustments.\n\nI'm inclined to think that you removed too much from DecodeTimeOnly.\nThat does accept date specs (at least for timetz), and I see no very\ngood reason for it not to accept a Julian date spec. I also wonder\nwhy you removed the UNITS: case there. Seems like we want these\nfunctions to accept the same syntax as much as possible.\n\nI think the code is still a bit schizophrenic about placement of\nptype specs. In the UNITS: case, we don't insist that a unit\napply to exactly the very next field; instead it applies to the next\none where it disambiguates. So for instead this is accepted:\n\nregression=# select 'J PM 1234567 1:23'::timestamp;\n timestamp \n------------------------\n 1333-01-11 13:23:00 BC\n\nThat's a little weird, or maybe even a lot weird, but it's not\ninherently nonsensical so I'm hesitant to stop accepting it.\nHowever, if UNITS acts that way, then why is ISOTIME different?\nSo I'm inclined to remove ISOTIME's lookahead check\n\n if (i >= nf - 1 ||\n (ftype[i + 1] != DTK_NUMBER &&\n ftype[i + 1] != DTK_TIME &&\n ftype[i + 1] != DTK_DATE))\n return DTERR_BAD_FORMAT;\n\nand rely on the ptype-still-set error at the bottom of the loop\nto complain about nonsensical cases.\n\nAlso, if we do keep the lookahead checks, the one in DecodeTimeOnly\ncould be simplified --- it's accepting some cases that actually\naren't supported there.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 09 Mar 2023 19:26:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "10.03.2023 03:26, Tom Lane wrote:\n> Joseph Koshakow<koshy44@gmail.com> writes:\n>> Also I removed some dead code from the previous patch.\n> That's a little weird, or maybe even a lot weird, but it's not\n> inherently nonsensical so I'm hesitant to stop accepting it.\n> However, if UNITS acts that way, then why is ISOTIME different?\n> So I'm inclined to remove ISOTIME's lookahead check\n>\n> if (i >= nf - 1 ||\n> (ftype[i + 1] != DTK_NUMBER &&\n> ftype[i + 1] != DTK_TIME &&\n> ftype[i + 1] != DTK_DATE))\n> return DTERR_BAD_FORMAT;\n>\n> and rely on the ptype-still-set error at the bottom of the loop\n> to complain about nonsensical cases.\n\nI also wonder how the units affect time zone parsing.\nWith the patch:\nSELECT time with time zone '010203m+3';\nERROR: invalid input syntax for type time with time zone: \"010203m+3\"\nBut without the patch:\nSELECT time with time zone '010203m+3';\n 01:02:03+03\n\nThough with \"non-unit\" spec:\nSELECT time with time zone '010203mmm+3';\n 01:02:03-03\n(With or without the patch.)\nIt seems like \"units\" were just ignored in a time zone specification,\nbut now they are rejected.\n\nAt the same time, I see that the time zone specification allows for any\nletters with the +/- sign following:\nSELECT time with time zone '010203anyletters+3';\n 01:02:03-03\n\nIt's definitely a separate issue, I just want to note a new erroneous\ncondition.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 10 Mar 2023 08:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "On 04.03.23 22:05, Tom Lane wrote:\n> Joseph Koshakow<koshy44@gmail.com> writes:\n>> On Mon, Dec 12, 2022 at 10:55 AM Joseph Koshakow<koshy44@gmail.com> wrote:\n>>> I just found another class of this bug that the submitted patch does\n>>> not fix. If the units are at the beginning of the string, then they are\n>>> also ignored. For example, `date 'm d y2020m11d3'` is also valid. I\n>>> think the fix here is to check and make sure that ptype is 0 before\n>>> reassigning the value to a non-zero number. I'll send an updated patch\n>>> with this tonight.\n>> Attached is the described patch.\n> I started to look at this, and soon noticed that while we have test cases\n> matching this sort of date input, there is no documentation for it. The\n> code claims it's an \"ISO\" (presumably ISO 8601) format, and maybe it is\n> because it looks a lot like the ISO 8601 format for intervals (durations).\n> But I don't have a copy of ISO 8601, and some googling fails to find any\n> indication that anybody else believes this is a valid datetime format.\n> Wikipedia for example documents a lot of variants of ISO 8601 [1],\n> but nothing that looks like this.\n\nThere are additional formats in (the lesser known) ISO 8601-2, one of \nwhich looks like this:\n\n '1985Y4M12D', calendar year 1985, April 12th\n\nBut that is entirely incompatible with the above example, because it has \nthe units after the numbers.\n\nEven more reason not to support the earlier example.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:17:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> I also wonder how the units affect time zone parsing.\n> With the patch:\n> SELECT time with time zone '010203m+3';\n> ERROR: invalid input syntax for type time with time zone: \"010203m+3\"\n> But without the patch:\n> SELECT time with time zone '010203m+3';\n> 01:02:03+03\n\nYeah, I think this is the ptype-still-set-at-end-of-loop check.\nI'm fine with throwing an error for that.\n\n> Though with \"non-unit\" spec:\n> SELECT time with time zone '010203mmm+3';\n> 01:02:03-03\n> (With or without the patch.)\n> It seems like \"units\" were just ignored in a time zone specification,\n> but now they are rejected.\n\nI think it's reading \"mmm+3\" as a POSIX timezone spec. From memory,\nPOSIX allows any sequence of 3 or more letters as a zone abbreviation.\nIt looks like we're being lax and not enforcing the \"3 or more\" part:\n\nregression=# set time zone 'foobar+3';\nSET\nregression=# select timeofday();\n timeofday \n----------------------------------------\n Fri Mar 10 12:08:24.484853 2023 FOOBAR\n(1 row)\n\nregression=# set time zone 'fo+3';\nSET\nregression=# select timeofday();\n timeofday \n------------------------------------\n Fri Mar 10 12:08:38.207311 2023 FO\n(1 row)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:09:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
},
{
"msg_contents": "Hearing no further comments on this, I adjusted DecodeTimeOnly to\nlook more like DecodeDateTime as I recommended upthread, and pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:20:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-Time dangling unit fix"
}
] |
[
{
"msg_contents": "Hi all,\n\nAttached is a patch to fix another parsing error for date-time types\nthat allow extraneous fields with certain reserved keywords. For\nexample both `date '1995-08-06 epoch'` and `date 'today epoch'` were\nconsidered valid dates that both resolve to 1970-01-01.\n\n- Joe Koshakow",
"msg_date": "Sun, 11 Dec 2022 17:30:09 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi Joseph,\r\n\r\nGood catch.\r\nOf the reserved words that are special values of type Date/Time, \r\n'now', 'today', 'tomorrow', 'yesterday', and 'allballs', \r\nI get an error even before applying the patch.\r\n\r\nOne thing I noticed is that the following SQL \r\nreturns normal results even after applying the patch.\r\n\r\npostgres=# select timestamp 'epoch 01:01:01';\r\n timestamp\r\n---------------------\r\n 1970-01-01 00:00:00\r\n(1 row)\r\n\r\nWhen 'epoch','infinity','-infinity' and time are specified together, \r\nthe time specified in the SQL is not included in result.\r\nI think it might be better to assume that this pattern is also an error.\r\nWhat do you think?\r\n\r\nAs a side note,\r\nreserved words such as 'today', 'tomorrow', and 'yesterday'\r\ncan be used to specify a time.\r\n\r\npostgres=# select timestamp 'today 01:01:01';\r\n timestamp\r\n---------------------\r\n 2023-03-03 01:01:01\r\n(1 row)\r\n\r\nBest Regards,\r\nKeisuke Kuroda\r\nNTT Comware\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Fri, 03 Mar 2023 04:41:23 +0000",
"msg_from": "Keisuke Kuroda <kuroda.keisuke@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 11:23 AM Keisuke Kuroda <kuroda.keisuke@nttcom.co.jp>\nwrote:\n>\n> Good catch.\n> Of the reserved words that are special values of type Date/Time,\n> 'now', 'today', 'tomorrow', 'yesterday', and 'allballs',\n> I get an error even before applying the patch.\n\nThanks for pointing this out. After taking a look\nat the code, 'now', 'today', 'tomorrow',\n'yesterday', and 'allballs' all set the\nappropriate tmask field which is what causes them\nto error.\n\n case DTK_NOW:\n tmask = (DTK_DATE_M | DTK_TIME_M | DTK_M(TZ));\n\n case DTK_YESTERDAY:\n tmask = DTK_DATE_M;\n\n case DTK_TODAY:\n tmask = DTK_DATE_M;\n\n case DTK_TOMORROW:\n tmask = DTK_DATE_M;\n\n case DTK_ZULU:\n tmask = (DTK_TIME_M | DTK_M(TZ));\n\n\nwhile 'epoch', 'infinity', and '-infinity' do not\nset tmask (note the default below handles all of\nthese fields)\n\n default:\n *dtype = val;\n\nSo I think a better fix here would be to also set\ntmask for those three reserved keywords.\n\n\n> One thing I noticed is that the following SQL\n> returns normal results even after applying the patch.\n>\n> postgres=# select timestamp 'epoch 01:01:01';\n> timestamp\n> ---------------------\n> 1970-01-01 00:00:00\n> (1 row)\n>\n> When 'epoch','infinity','-infinity' and time are specified together,\n> the time specified in the SQL is not included in result.\n> I think it might be better to assume that this pattern is also an\nerror.\n> What do you think?\n\nI agree this pattern should also be an error. I\nthink that the tmask approach will cause an error\nfor this pattern as well.\n\nThanks,\nJoe Koshakow\n\nOn Sat, Mar 4, 2023 at 11:23 AM Keisuke Kuroda <kuroda.keisuke@nttcom.co.jp> wrote:>> Good catch.> Of the reserved words that are special values of type Date/Time,> 'now', 'today', 'tomorrow', 'yesterday', and 'allballs',> I get an error even before applying the patch.Thanks for pointing this out. 
After taking a lookat the code, 'now', 'today', 'tomorrow','yesterday', and 'allballs' all set theappropriate tmask field which is what causes themto error. case DTK_NOW: tmask = (DTK_DATE_M | DTK_TIME_M | DTK_M(TZ)); case DTK_YESTERDAY: tmask = DTK_DATE_M; case DTK_TODAY: tmask = DTK_DATE_M; case DTK_TOMORROW: tmask = DTK_DATE_M; case DTK_ZULU: tmask = (DTK_TIME_M | DTK_M(TZ));while 'epoch', 'infinity', and '-infinity' do notset tmask (note the default below handles all ofthese fields) default: *dtype = val;So I think a better fix here would be to also settmask for those three reserved keywords.> One thing I noticed is that the following SQL> returns normal results even after applying the patch.>> postgres=# select timestamp 'epoch 01:01:01';> timestamp> ---------------------> 1970-01-01 00:00:00> (1 row)>> When 'epoch','infinity','-infinity' and time are specified together,> the time specified in the SQL is not included in result.> I think it might be better to assume that this pattern is also an error.> What do you think?I agree this pattern should also be an error. Ithink that the tmask approach will cause an errorfor this pattern as well.Thanks,Joe Koshakow",
"msg_date": "Sat, 4 Mar 2023 11:33:02 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "Attached is the described patch. I have two notes\nafter implementing it:\n - It feels like a bit of an abstraction break to\n set tmask without actually setting any fields in\n tm.\n - I'm not sure if we should hard code in those\n three specific reserved keywords or set tmask\n in the default case.\n\nAny thoughts?\n\n- Joe Koshakow",
"msg_date": "Sat, 4 Mar 2023 12:29:09 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> - I'm not sure if we should hard code in those\n> three specific reserved keywords or set tmask\n> in the default case.\n\nI think we should tread very carefully about disallowing inputs that\nhave been considered acceptable for 25 years. I agree with disallowing\nnumeric fields along with 'epoch' and 'infinity', but for example\nthis seems perfectly useful and sensible:\n\n# select timestamptz 'today 12:34';\n timestamptz \n------------------------\n 2023-03-04 12:34:00-05\n(1 row)\n\n> Any thoughts?\n\nWhy do you want to skip ValidateDate in some cases? If we've not\nhad to do that before, I don't see why it's a good idea now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Mar 2023 13:56:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 1:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> I think we should tread very carefully about disallowing inputs that
> have been considered acceptable for 25 years. I agree with disallowing
> numeric fields along with 'epoch' and 'infinity', but for example
> this seems perfectly useful and sensible:
>
> # select timestamptz 'today 12:34';
> timestamptz
> ------------------------
> 2023-03-04 12:34:00-05
> (1 row)

Yeah, that makes sense. I'll leave it as is with
the explicit case for 'epoch', 'infinity', and
'-infinity'.

> Why do you want to skip ValidateDate in some cases? If we've not
> had to do that before, I don't see why it's a good idea now.

This goes back to the abstraction break of
setting tmask without updating tm. Certain
validations will check that if a field is set in
fmask (which is an accumulation of tmask from
every iteration) then its value in tm is valid.
For example:

 if (fmask & DTK_M(YEAR))
 {
 // ...
 else
 {
 /* there is no year zero in AD/BC notation */
 if (tm->tm_year <= 0)
 return DTERR_FIELD_OVERFLOW;
 }
 }

As far as I can tell dtype always equals DTK_DATE
except when the timestamp/date is 'epoch',
'infinity', '-infinity', and none of the
validations apply to those date/timestamps.
Though, I think you're right this is probably
not a good idea. I'll try and brainstorm a
different approach, unless you have some ideas.",
"msg_date": "Sat, 4 Mar 2023 14:32:18 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Sat, Mar 4, 2023 at 1:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Why do you want to skip ValidateDate in some cases? If we've not\n>> had to do that before, I don't see why it's a good idea now.\n\n> This goes back to the abstraction break of\n> setting tmask without updating tm. Certain\n> validations will check that if a field is set in\n> fmask (which is an accumulation of tmask from\n> every iteration) then it's value in tm is valid.\n\nAh. Another way could be to fill tm with something that would\nsatisfy ValidateDate, but that seems pretty silly.\n\n> As far as I can tell dtype always equals DTK_DATE\n> except when the timestamp/date is 'epoch',\n> 'infinity', '-infinity', and none of the\n> validations apply to those date/timestamps.\n\nRight. So really we ought to move the ValidateDate call as\nwell as the next half-dozen lines about \"mer\" down into\nthe subsequent \"do additional checking\" stanza. It's all\nonly relevant to normal date specs.\n\nBTW, looking at the set of RESERV tokens in datetktbl[],\nit looks to me like this change renders the final \"default:\"\ncase unreachable, so probably we could just make that an error.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Mar 2023 14:48:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Right. So really we ought to move the ValidateDate call as\n> well as the next half-dozen lines about \"mer\" down into\n> the subsequent \"do additional checking\" stanza. It's all\n> only relevant to normal date specs.\n>\n> BTW, looking at the set of RESERV tokens in datetktbl[],\n> it looks to me like this change renders the final \"default:\"\n> case unreachable, so probably we could just make that an error.\n\nPlease see the attached patch with these changes.\n\n- Joe Koshakow",
"msg_date": "Sat, 4 Mar 2023 15:05:08 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nThank you for the response and new patch.\n\nThe scope of impact is limited to 'epoch' and 'infinity'.\nAlso, it is unlikely that these reserved words will be\nused in combination with time/date, so this patch is appropriate.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Wed, 08 Mar 2023 05:42:15 +0000",
"msg_from": "Keisuke Kuroda <kuroda.keisuke@nttcom.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
},
{
"msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> Please see the attached patch with these changes.\n\nPushed with a couple of cosmetic adjustments.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 16:50:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Date-time extraneous fields with reserved keywords"
}
] |
[
{
"msg_contents": "Hi,\n\nThe README in nbtree mentions that L&Y algorithm must couple\nlocks when moving right during ascent for insertion. However,\nit's hard to see why that's necessary. Since L&Y mostly\ndiscussed concurrent insertions and searches, what can go wrong\nif inserters only acquire one lock at a time?\n\nThe Lanin&ShaSha paper cited in README also agrees that B-link\nstructure allows inserts and searches to lock only one node at a\ntime although it's not apparent in L&Y itself.\n\n\n\nThanks,\n\nHong",
"msg_date": "Sun, 11 Dec 2022 17:38:31 -0800",
"msg_from": "Oliver Yang <olilent2ctw@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why does L&Y Blink Tree need lock coupling?"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 5:38 PM Oliver Yang <olilent2ctw@gmail.com> wrote:
> The README in nbtree mentions that L&Y algorithm must couple
> locks when moving right during ascent for insertion. However,
> it's hard to see why that's necessary.

You're not the first person to ask about this exact point in the last
few years. The last time, somebody asked me about it via private email.
I'm surprised that there is even that level of interest.

It's not necessary to couple locks on internal levels of the tree in
nbtree, of course. Which is why we don't couple locks in
_bt_getstackbuf().

Note that we have two \"move right\" functions here --
_bt_getstackbuf(), and _bt_moveright(). Whereas the L&Y pseudocode has
one function for both things. Perhaps just because it's only abstract
pseudocode -- it needs to be understood in context.

You have to be realistic about how faithfully a real world system can
be expected to implement something like L&Y. Some of the assumptions
made by the paper just aren't reasonable, especially the assumption
about atomic page reads (I always thought that that assumption was
odd). Plus nbtree does plenty of things that L&Y don't consider,
including things that are obviously related to the stuff they do talk
about. For example, nbtree has left links, while L&Y does not (nor
does Lanin & Shasha, though they do have something weirdly similar
that they call \"out links\").

> Since L&Y mostly
> discussed concurrent insertions and searches, what can go wrong
> if inserters only acquire one lock at a time?

You have to consider that we don't match on the separator key during
the ascent of the B-tree structure following a split. That's another
big difference between nbtree and the paper -- we store a block number
in our descent stack instead.

Imagine if PostgreSQL's nbtree did match on a separator key, like
Lehman and Yao. Think about this scenario:

* We descend the tree and follow a downlink, while remembering the
associated separator key on our descent stack. It is a simple 3 level
B-Tree.

* We split a leaf page, and have to relocate the separator 1 level up
(in level 1).

* The high key of the internal page on level 1 exactly matches our
separator key -- so it must be that the separator key is to the right.

* We release our lock on the original parent, and then lock and read
its right sibling. But where do we insert our new separator key?

We cannot match the original separator key because it doesn't exist in
this other internal page to the right -- there is no first separator
key in any internal page, including when it isn't the root of the
tree. Actually, you could say that there is a separator key associated
with the downlink, but it's only a negative infinity sentinel key.
Negative infinity keys represent \"absolute negative infinity\" when
they're from the root of the entire B-Tree, but in any other internal
page it represents \"relative negative infinity\" -- it's only lower
than everything in that particular subtree.

At the very least it seems risky to assume that it's safe to match on
the separator key without lock coupling so that we see that the
downlink really does come after the matches-descent-stack high key
separator in the original parent page. You could probably make it work
if you had to, but it's annoying to explain, and not actually that
valuable -- moving right within _bt_getstackbuf() is a rare case in
general (and trying to relocate the downlink that happens to become
the first downlink following a concurrent internal page split is even
rarer).

In PostgreSQL it's not annoying to understand why it's okay, because
it's obviously okay to just match on the downlink/block number
directly, which is how it has always worked. It only becomes a problem
when you try to understand what Lehman and Yao meant.
It's unfortunate\nthat they say \"at most 3 locks\", and therefore draw attention to this\nnot-very-important issue. Lehman and Yao probably found it easier to\nsay \"let's try to keep our paper simple by making the move right\nroutine couple locks in very rare cases where it is actually necessary\nto move right\".\n\n> The Lanin&ShaSha paper cited in README also agrees that B-link\n> structure allows inserts and searches to lock only one node at a\n> time although it's not apparent in L&Y itself.\n\nBut the Lanin & Shasha paper has a far more optimistic approach. They\nmake rather bold claims about how many locks they can get away with\nholding at any one time. That makes it significantly different to L&Y\nas well as nbtree (nbtree is far closer to L&Y than it is to Lanin &\nShasha).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 11 Dec 2022 18:00:40 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Why does L&Y Blink Tree need lock coupling?"
},
{
"msg_contents": "On Sun, Dec 11, 2022 at 6:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Dec 11, 2022 at 5:38 PM Oliver Yang <olilent2ctw@gmail.com> wrote:\n> > The README in nbtree mentions that L&Y algorithm must couple\n> > locks when moving right during ascent for insertion. However,\n> > it's hard to see why that's necessary.\n>\n> You're not the first person to ask about this exact point in the last\n> few years. The last time somebody asked me about it via private email.\n> I'm surprised that there is even that level of interest.\n>\n> It's not necessary to couple locks on internal levels of the tree in\n> nbtree, of course. Which is why we don't couple locks in\n> _bt_getstackbuf().\n>\n> Note that we have two \"move right\" functions here --\n> _bt_getstackbuf(), and _bt_moveright(). Whereas the L&Y pseudocode has\n> one function for both things. Perhaps just because it's only abstract\n> pseudocode -- it needs to be understood in context.\n>\n> You have to be realistic about how faithfully a real world system can\n> be expected to implement something like L&Y. Some of the assumptions\n> made by the paper just aren't reasonable, especially the assumption\n> about atomic page reads (I always thought that that assumption was\n> odd). Plus nbtree does plenty of things that L&Y don't consider,\n> including things that are obviously related to the stuff they do talk\n> about. For example, nbtree has left links, while L&Y does not (nor\n> does Lanin & Shasha, though they do have something weirdly similar\n> that they call \"out links\").\n>\n> > Since L&Y mostly\n> > discussed concurrent insertions and searches, what can go wrong\n> > if inserters only acquire one lock at a time?\n>\n> You have to consider that we don't match on the separator key during\n> the ascent of the B-tree structure following a split. 
That's another a\n> big difference between nbtree and the paper -- we store a block number\n> in our descent stack instead.\n>\n> Imagine if PostgreSQL's nbtree did match on a separator key, like\n> Lehman and Yao. Think about this scenario:\n>\n> * We descend the tree and follow a downlink, while remembering the\n> associated separator key on our descent stack. It is a simple 3 level\n> B-Tree.\n>\n> * We split a leaf page, and have to relocate the separator 1 level up\n> (in level 1).\n>\n> * The high key of the internal page on level 1 exactly matches our\n> separator key -- so it must be that the separator key is to the right.\n>\n> * We release our lock on the original parent, and then lock and read\n> its right sibling. But where do we insert our new separator key?\n>\n> We cannot match the original separator key because it doesn't exist in\n> this other internal page to the right -- there is no first separator\n> key in any internal page, including when it isn't the root of the\n> tree. Actually, you could say that there is a separator key associated\n> with the downlink, but it's only a negative infinity sentinel key.\n> Negative infinity keys represent \"absolute negative infinity\" when\n> they're from the root of the entire B-Tree, but in any other internal\n> page it represents \"relative negative infinity\" -- it's only lower\n> than everything in that particular subtree only.\n>\n> At the very least it seems risky to assume that it's safe to match on\n> the separator key without lock coupling so that we see that the\n> downlink really does come after the matches-descent-stack high key\n> separator in the original parent page. 
You could probably make it work
> if you had to, but it's annoying to explain, and not actually that
> valuable -- moving right within _bt_getstackbuf() is a rare case in
> general (and trying to relocate the downlink that happens to become
> the first downlink following a concurrent internal page split is even
> rarer).
>
> In PostgreSQL it's not annoying to understand why it's okay, because
> it's obviously okay to just match on the downlink/block number
> directly, which is how it has always worked. It only becomes a problem
> when you try to understand what Lehman and Yao meant. It's unfortunate
> that they say \"at most 3 locks\", and therefore draw attention to this
> not-very-important issue. Lehman and Yao probably found it easier to
> say \"let's try to keep our paper simple by making the move right
> routine couple locks in very rare cases where it is actually necessary
> to move right\".

As you suggested, the coupling lock during moveright could be
avoided by tracking the downlink instead of the separator key during
descent. In a sense, this isn't a fundamental issue and the L&Y
paper could be easily tweaked to track the downlink so that it
doesn't require a coupling lock in moveright.

However, it's hard to see why a coupling lock is needed during
the ascent from child level to parent level in the L&Y setting. What can
go wrong if L&Y's algorithm releases the lock on a child page before
acquiring the lock on its parent? The correctness proof in L&Y
doesn't use the assumption of lock coupling anywhere. It appears
that one lock at a time is sufficient in principle.

> > The Lanin&ShaSha paper cited in README also agrees that B-link
> > structure allows inserts and searches to lock only one node at a
> > time although it's not apparent in L&Y itself.
>
> But the Lanin & Shasha paper has a far more optimistic approach. They
> make rather bold claims about how many locks they can get away with
> holding at any one time. That makes it significantly different to L&Y
> as well as nbtree (nbtree is far closer to L&Y than it is to Lanin &
> Shasha).

The direct quote from section 1.2 of the Lanin & Shasha
paper: \"Although it is not apparent in itself, the B-link
structure allows inserts and searches to lock only one node at a
time.\" It seems to be an assertion on a property of the L&Y
algorithm. It doesn't seem to be related to the optimistic approach
employed in Lanin & Shasha's own algorithm.


Best,

Hong


",
"msg_date": "Mon, 12 Dec 2022 11:42:55 -0800",
"msg_from": "Oliver Yang <olilent2ctw@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Why does L&Y Blink Tree need lock coupling?"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 11:43 AM Oliver Yang <olilent2ctw@gmail.com> wrote:\n> In a sense, this isn't a fundamental issue and L&Y\n> paper could be easily tweaked to track downlink so that it\n> doesn't require coupling lock in moveright.\n\nI suppose that's true, but having 3 locks at a time is such a rare\ncase for classic L&Y that it hardly matters (it's far rarer than\nhaving only 2, which is itself very rare). They emphasized it because\nit is the highest number of locks, not because it's necessarily\nimportant.\n\nI'm confused about whether you're talking about what L&Y could do, in\ntheory, or what nbtree could do, in theory, or what nbtree should\nactually be doing. These are 3 different subjects, really.\n\n> However, it's hard to see why coupling lock is needed during\n> ascent from child level to parent level in the L&Y setting. What can\n> go wrong if L&Y's algorithm releases lock on child page before\n> acquiring lock on its parent? The correctness proof in L&Y\n> doesn't use the assumption of coupling lock anywhere. It appears\n> that a lock at a time is sufficient in principle.\n\nThat may be true, but the words \"in principle\" are doing a lot of work\nfor you here. Travelling at a speed that approaches the speed of light\nis also possible in principle.\n\nImagine that there is no lock coupling at all. What happens when there\nis a page split that doesn't complete its second phase due to a hard\ncrash? Do inserters complete the split in passing, like in nbtree? Now\nyou have to think about incomplete splits across multiple levels. That\ngets very complicated, but it needs to be addressed in a real system\nlike Postgres. Academic papers can just ignore corner cases like this.\n\nWhy do you think that consistently only holding one lock (as opposed\nto only holding one lock at a time during 99%+ of all inserts) is\ntruly valuable in a practical setting? Maybe it is valuable, but\nthat's rather unclear. 
It is not an area that I would choose to work\non, given that uncertainty.\n\n> The direct quote from section 1.2 of Lanin & Shasha\n> paper: \"Although it is not apparent in itself, the B-link\n> structure allows inserts and searches to lock only one node at a\n> time.\" It seems to be an assertion on the property of the L&Y\n> algorithm. It doesn't seem to be related the optimistic approach\n> employed in Lanin & Shasha own algorithm.\n\nI don't know how you can reach that conclusion. It directly\ncontradicts the claim made by the L&Y paper about requiring at most 3\nlocks. And they even say \"although it's not apparent in itself\",\npresenting it as new information.\n\nThey seem to be saying that the same basic B-Link data structure (or\none like it, with the addition of outlinks) could do that -- but\nthat's not the same as the L&Y algorithm (the original design). That\ndoes seem possible, though I doubt that it would be particularly\ncompelling, since L&Y/nbtree don't need to do lock coupling for the\nvast majority of individual inserts or searches.\n\nI don't think that the L&Y paper is particularly clear, or\nparticularly well written. It needs to be interpreted in its original\ncontext, which is quite far removed from the current concerns of\nnbtree. It's a 41 year old paper.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Dec 2022 12:13:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Why does L&Y Blink Tree need lock coupling?"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile doing some hackery on SCRAM, I have noticed $subject giving the\nattached. I guess that this is not going to cause any objections, but\nfeel free to comment just in case.\n\nThanks,\n--\nMichael",
"msg_date": "Tue, 13 Dec 2022 13:57:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Remove SHA256_HMAC_B from scram-common.h"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 8:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n> While doing some hackery on SCRAM, I have noticed $subject giving the\n> attached. I guess that this is not going to cause any objections, but\n> feel free to comment just in case.\n\nYeah, no objection :D That cryptohash refactoring was quite nice.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 13 Dec 2022 09:27:50 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove SHA256_HMAC_B from scram-common.h"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 09:27:50AM -0800, Jacob Champion wrote:\n> On Mon, Dec 12, 2022 at 8:57 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> While doing some hackery on SCRAM, I have noticed $subject giving the\n>> attached. I guess that this is not going to cause any objections, but\n>> feel free to comment just in case.\n> \n> Yeah, no objection :D That cryptohash refactoring was quite nice.\n\nThanks. I have much more refactoring work coming up in this area, and\nthe cryptohash move is helping quite a lot in terms of error handling.\n--\nMichael",
"msg_date": "Wed, 14 Dec 2022 07:13:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Remove SHA256_HMAC_B from scram-common.h"
}
] |
[
{
"msg_contents": "Hi,\n\nA review comment in another thread [1] by Michael Paquier about the\nusage of get_call_result_type() instead of explicit building of\nTupleDesc made me think about using it more widely. Actually, the\nget_call_result_type() looks at the function definitions to figure the\ncolumn names and build the required TupleDesc, usage of which avoids\nduplication of the column names between pg_proc.dat/function\ndefinitions and source code. Also, it saves a good number of LOC ~415\n[2] and the size of all the object files put together gets reduced by\n~4MB, which means, the postgres binary becomes leaner by ~4MB [3]. I'm\nattaching a patch for these changes.\n\nWhile on this, I observed that BlessTupleDesc() is called in many\n(~12) places right after get_call_result_type() which actually does\nthe job of BlessTupleDesc() before returning the TupleDesc. I think we\ncan get rid of BlessTupleDesc() after get_call_result_type(). I'm\nattaching a patch for these changes too.\n\ncirrus-ci members are happy with these patches, please see here\nhttps://github.com/BRupireddy/postgres/tree/use_get_call_result_type()_more_widely_v1.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/Y41De5NnF2sxmJPI%40paquier.xyz\n\n[2] 21 files changed, 97 insertions(+), 514 deletions(-)\n\n[3] Source code is built with CFLAGS = -O3.\nPATCHED:\n text data bss dec hex filename\n 1043 0 0 1043 413 contrib/old_snapshot/time_mapping.o\n 7192 0 0 7192 1c18 contrib/pg_visibility/pg_visibility.o\n 7144 0 120 7264 1c60 src/backend/access/transam/commit_ts.o\n 19681 24 248 19953 4df1 src/backend/access/transam/multixact.o\n 20595 0 88 20683 50cb src/backend/access/transam/twophase.o\n 6162 0 24 6186 182a src/backend/access/transam/xlogfuncs.o\n 45540 2736 8 48284 bc9c src/backend/catalog/objectaddress.o\n 9943 0 0 9943 26d7 src/backend/catalog/pg_publication.o\n 18239 0 16 18255 474f src/backend/commands/sequence.o\n 6429 0 0 6429 191d src/backend/tsearch/wparser.o\n 47049 1840 
52 48941 bf2d src/backend/utils/adt/acl.o\n 43066 168 784 44018 abf2 src/backend/utils/adt/datetime.o\n 6843 0 0 6843 1abb src/backend/utils/adt/genfile.o\n 6904 120 0 7024 1b70 src/backend/utils/adt/lockfuncs.o\n 10512 7008 0 17520 4470 src/backend/utils/adt/misc.o\n 1569 0 0 1569 621 src/backend/utils/adt/partitionfuncs.o\n 16266 0 0 16266 3f8a src/backend/utils/adt/pgstatfuncs.o\n 40985 0 0 40985 a019 src/backend/utils/adt/tsvector_op.o\n 8322 0 0 8322 2082 src/backend/utils/misc/guc_funcs.o\n 2109 0 0 2109 83d src/backend/utils/misc/pg_controldata.o\n 2354 0 0 2354 932\nsrc/test/modules/test_predtest/test_predtest.o\n 9586047 226936 205536 10018519 98ded7 src/backend/postgres\n\nHEAD:\n text data bss dec hex filename\n 1019 0 0 1019 3fb contrib/old_snapshot/time_mapping.o\n 7159 0 0 7159 1bf7 contrib/pg_visibility/pg_visibility.o\n 6655 0 120 6775 1a77 src/backend/access/transam/commit_ts.o\n 19636 24 248 19908 4dc4 src/backend/access/transam/multixact.o\n 20663 0 88 20751 510f src/backend/access/transam/twophase.o\n 6206 0 24 6230 1856 src/backend/access/transam/xlogfuncs.o\n 45700 2736 8 48444 bd3c src/backend/catalog/objectaddress.o\n 9952 0 0 9952 26e0 src/backend/catalog/pg_publication.o\n 18487 0 16 18503 4847 src/backend/commands/sequence.o\n 6143 0 0 6143 17ff src/backend/tsearch/wparser.o\n 47123 1840 52 49015 bf77 src/backend/utils/adt/acl.o\n 43099 168 784 44051 ac13 src/backend/utils/adt/datetime.o\n 7016 0 0 7016 1b68 src/backend/utils/adt/genfile.o\n 7413 120 0 7533 1d6d src/backend/utils/adt/lockfuncs.o\n 10698 7008 0 17706 452a src/backend/utils/adt/misc.o\n 1593 0 0 1593 639 src/backend/utils/adt/partitionfuncs.o\n 17194 0 0 17194 432a src/backend/utils/adt/pgstatfuncs.o\n 40798 0 0 40798 9f5e src/backend/utils/adt/tsvector_op.o\n 8871 0 0 8871 22a7 src/backend/utils/misc/guc_funcs.o\n 3918 0 0 3918 f4e src/backend/utils/misc/pg_controldata.o\n 2636 0 0 2636 a4c\nsrc/test/modules/test_predtest/test_predtest.o\n 9589943 226936 205536 10022415 
98ee0f src/backend/postgres\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 13 Dec 2022 13:06:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Use get_call_result_type() more widely"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 01:06:48PM +0530, Bharath Rupireddy wrote:
> A review comment in another thread [1] by Michael Paquier about the
> usage of get_call_result_type() instead of explicit building of
> TupleDesc made me think about using it more widely. Actually, the
> get_call_result_type() looks at the function definitions to figure the
> column names and build the required TupleDesc, usage of which avoids
> duplication of the column names between pg_proc.dat/function
> definitions and source code. Also, it saves a good number of LOC ~415
> [2] and the size of all the object files put together gets reduced by
> ~4MB, which means, the postgres binary becomes leaner by ~4MB [3]. I'm
> attaching a patch for these changes.

I have wanted to look at that when poking at the interface for
materialized SRFs but lacked steam back then. Even after this
change, we still have coverage for CreateTemplateTupleDesc() and
TupleDescInitEntry() through the GUCs/SHOW or even WAL sender, so the
coverage does not worry me much. Backpatch conflicts may be a point
of contention, but that's pretty much in the same spirit as
SetSingleFuncCall()/InitMaterializedSRF().

All in that, +1 (still need to check in detail what you have here,
looks rather fine at a quick glance).
--
Michael",
"msg_date": "Tue, 13 Dec 2022 17:13:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> A review comment in another thread [1] by Michael Paquier about the\n> usage of get_call_result_type() instead of explicit building of\n> TupleDesc made me think about using it more widely. Actually, the\n> get_call_result_type() looks at the function definitions to figure the\n> column names and build the required TupleDesc, usage of which avoids\n> duplication of the column names between pg_proc.dat/function\n> definitions and source code. Also, it saves a good number of LOC ~415\n> [2] and the size of all the object files put together gets reduced by\n> ~4MB, which means, the postgres binary becomes leaner by ~4MB [3].\n\nSaving code is nice, but I'd assume the result is slower, because\nget_call_result_type has to do a pretty substantial amount of work\nto get the data to construct the tupdesc from. Have you tried to\nquantify how much overhead this'd add? Which of these functions\ncan we safely consider to be non-performance-critical?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Dec 2022 10:42:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
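{
"msg_contents": "[Editor's note] The trade-off Tom raises — deriving the result descriptor from function metadata on every call versus spelling it out inline — can be sketched in a language-agnostic way. The snippet below is an illustrative Python analogy, not PostgreSQL source; the toy `CATALOG` dict and both helper functions are hypothetical stand-ins for pg_proc lookups and for a hand-built TupleDesc, respectively.\n\n```python\n# Toy stand-in for pg_proc: function name -> declared OUT columns.\n# This loosely mimics what get_call_result_type() must consult on\n# every call, versus a caller that spells out its TupleDesc inline.\nCATALOG = {\n    \"pg_lock_status\": [(\"locktype\", \"text\"), (\"database\", \"oid\"), (\"pid\", \"int4\")],\n}\n\ndef result_type_from_catalog(funcname):\n    # \"get_call_result_type\" style: derive the descriptor from metadata;\n    # costs a lookup (in PostgreSQL, syscache/catalog work) per call.\n    return tuple(CATALOG[funcname])\n\nINLINE_DESC = ((\"locktype\", \"text\"), (\"database\", \"oid\"), (\"pid\", \"int4\"))\n\ndef result_type_inline():\n    # \"CreateTemplateTupleDesc + TupleDescInitEntry\" style: the caller\n    # duplicates the column list in source code, but pays no lookup.\n    return INLINE_DESC\n\n# Both routes must agree on the descriptor; the difference is only\n# where the column list lives and what each call costs.\nassert result_type_from_catalog(\"pg_lock_status\") == result_type_inline()\n```\n\nThe duplication-versus-lookup tension in this sketch is exactly what the thread weighs for hot versus non-hot code paths."
},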
{
"msg_contents": "On Tue, Dec 13, 2022 at 9:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> > A review comment in another thread [1] by Michael Paquier about the\n> > usage of get_call_result_type() instead of explicit building of\n> > TupleDesc made me think about using it more widely. Actually, the\n> > get_call_result_type() looks at the function definitions to figure the\n> > column names and build the required TupleDesc, usage of which avoids\n> > duplication of the column names between pg_proc.dat/function\n> > definitions and source code. Also, it saves a good number of LOC ~415\n> > [2] and the size of all the object files put together gets reduced by\n> > ~4MB, which means, the postgres binary becomes leaner by ~4MB [3].\n>\n> Saving code is nice, but I'd assume the result is slower, because\n> get_call_result_type has to do a pretty substantial amount of work\n> to get the data to construct the tupdesc from. Have you tried to\n> quantify how much overhead this'd add? Which of these functions\n> can we safely consider to be non-performance-critical?\n\nAFAICS, most of these functions have no direct source code callers,\nthey're user-facing functions and not in a hot code path. 
I measured\nthe test times of these functions and I don't see much difference [1].\n\n[1]\npg_old_snapshot_time_mapping() - an extension function with no\ninternal source code callers, no test coverage.\npg_visibility_map_summary() - an extension function with no internal\nsource code callers, test coverage exists, test times on HEAD:25 ms\nPATCHED:25 ms\npg_last_committed_xact() and pg_xact_commit_timestamp_origin() - no\ninternal source code callers, test coverage exists, test times on\nHEAD:10 ms PATCHED:10 ms\npg_get_multixact_members() - no internal source code callers, no test coverage.\npg_prepared_xact() - no internal source code callers, test coverage\nexists, test times on HEAD:50 ms, subscription 108 wallclock secs,\nrecovery 111 wallclock secs PATCHED:48 ms, subscription 110 wallclock\nsecs, recovery 112 wallclock secs\npg_walfile_name_offset() - no internal source code callers, no test coverage.\npg_get_object_address() - no internal source code callers, test\ncoverage exists, test times on HEAD:66 ms PATCHED:60 ms\npg_identify_object() - no internal source code callers, test coverage\nexists, test times on HEAD:17 ms PATCHED:18 ms\npg_identify_object_as_address() - no internal source code callers,\ntest coverage exists, test times on HEAD:66 ms PATCHED:60 ms\npg_get_publication_tables() - internal source code callers exist, test\ncoverage exists, test times on HEAD:159 ms, subscription 108 wallclock\nsecs PATCHED:167 ms, subscription 110 wallclock secs\npg_sequence_parameters() - no internal source code callers, test\ncoverage exists, test times on HEAD:96 ms PATCHED:98 ms\nts_token_type_byid(), ts_token_type_byname(), ts_parse_byid() and\nts_parse_byname() - internal source code callers exists, test coverage\nexists, test times on HEAD:195 ms, pg_dump 10 wallclock secs\nPATCHED:186 ms, pg_dump 10 wallclock secs\naclexplode() - internal callers exists information_schema.sql,\nindirect test coverage exists.\npg_timezone_abbrevs() - no internal source 
code callers, test coverage\nexists, test times on HEAD:40 ms PATCHED:36 ms\npg_stat_file() - no internal source code callers, test coverage\nexists, test times on HEAD:42 ms PATCHED:46 ms\npg_lock_status() - no internal source code callers, test coverage\nexists, test times on HEAD:16 ms PATCHED:22 ms\npg_get_keywords() - no internal source code callers, test coverage\nexists, test times on HEAD:129 ms PATCHED:130 ms\npg_get_catalog_foreign_keys() - no internal source code callers, test\ncoverage exists, test times on HEAD:114 ms PATCHED:111 ms\npg_partition_tree() - no internal source code callers, test coverage\nexists, test times on HEAD:30 ms PATCHED:32 ms\npg_stat_get_wal(), pg_stat_get_archiver() and\npg_stat_get_replication_slot() - no internal source code callers, test\ncoverage exists, test times on HEAD:479 ms PATCHED:483 ms\npg_stat_get_subscription_stats() - no internal source code callers,\ntest coverage exists, test times on HEAD:subscription 108 wallclock\nsecs PATCHED:subscription 110 wallclock secs\ntsvector_unnest() - no internal source code callers, test coverage\nexists, test times on HEAD:26 ms PATCHED:26 ms\nts_setup_firstcall() - test coverage exists, test times on HEAD:195 ms\nPATCHED:186 ms\nshow_all_settings(), pg_control_system(), pg_control_checkpoint(),\npg_control_recovery() and pg_control_init() - test coverage exists,\ntest times on HEAD:42 ms PATCHED:44 ms\ntest_predtest() - no internal source code callers, test coverage\nexists, test times on HEAD:18 ms PATCHED:18 ms\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 11:14:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 11:14:59AM +0530, Bharath Rupireddy wrote:\n> AFAICS, most of these functions have no direct source code callers,\n> they're user-facing functions and not in a hot code path. I measured\n> the test times of these functions and I don't see much difference [1].\n\nThanks for the summary. It looks like your tests involve single\nruns. What is the difference in run-time when invoking this\nrepeatedly with a large generate_series() for example when few or no\ntuples are returned? Do you see a difference in perf profile? Some\nof the functions could have their time mostly eaten while looking at\nthe syscache on repeated calls, but you could see the actual work this\ninvolves with a dummy function that returns a large number of\nattributes on a single record in the worst case possible?\n\nSeparating things into two buckets..\n\n> [1]\n> pg_old_snapshot_time_mapping() - an extension function with no\n> internal source code callers, no test coverage.\n> pg_visibility_map_summary() - an extension function with no internal\n> source code callers, test coverage exists, test times on HEAD:25 ms\n> PATCHED:25 ms\n> pg_last_committed_xact() and pg_xact_commit_timestamp_origin() - no\n> internal source code callers, test coverage exists, test times on\n> HEAD:10 ms PATCHED:10 ms> pg_get_multixact_members() - no internal source code callers, no test coverage.\n> pg_control_recovery() and pg_control_init() - test coverage exists,\n> test times on HEAD:42 ms PATCHED:44 ms\n> pg_identify_object() - no internal source code callers, test coverage\n> exists, test times on HEAD:17 ms PATCHED:18 ms\n> pg_identify_object_as_address() - no internal source code callers,\n> test coverage exists, test times on HEAD:66 ms PATCHED:60 ms\n> pg_get_object_address() - no internal source code callers, test\n> coverage exists, test times on HEAD:66 ms PATCHED:60 ms\n> pg_sequence_parameters() - no internal source code callers, test\n> coverage exists, test 
times on HEAD:96 ms PATCHED:98 ms\n> ts_token_type_byid(), ts_token_type_byname(), ts_parse_byid() and\n> ts_parse_byname() - internal source code callers exists, test coverage\n> exists, test times on HEAD:195 ms, pg_dump 10 wallclock secs\n> PATCHED:186 ms, pg_dump 10 wallclock secs\n> pg_get_keywords() - no internal source code callers, test coverage\n> exists, test times on HEAD:129 ms PATCHED:130 ms\n> pg_get_catalog_foreign_keys() - no internal source code callers, test\n> coverage exists, test times on HEAD:114 ms PATCHED:111 ms\n> tsvector_unnest() - no internal source code callers, test coverage\n> exists, test times on HEAD:26 ms PATCHED:26 ms\n> ts_setup_firstcall() - test coverage exists, test times on HEAD:195 ms\n> PATCHED:186 ms\n> pg_partition_tree() - no internal source code callers, test coverage\n> exists, test times on HEAD:30 ms PATCHED:32 ms\n> pg_timezone_abbrevs() - no internal source code callers, test coverage\n> exists, test times on HEAD:40 ms PATCHED:36 ms\n\nThese ones don't worry me much, TBH.\n\n> pg_stat_get_wal(), pg_stat_get_archiver() and\n> pg_stat_get_replication_slot() - no internal source code callers, test\n> coverage exists, test times on HEAD:479 ms PATCHED:483 ms\n> pg_prepared_xact() - no internal source code callers, test coverage\n> exists, test times on HEAD:50 ms, subscription 108 wallclock secs,\n> recovery 111 wallclock secs PATCHED:48 ms, subscription 110 wallclock\n> secs, recovery 112 wallclock secs\n> show_all_settings(), pg_control_system(), pg_control_checkpoint(),\n> test_predtest() - no internal source code callers, test coverage\n> exists, test times on HEAD:18 ms PATCHED:18 ms\n> pg_walfile_name_offset() - no internal source code callers, no test coverage.\n> aclexplode() - internal callers exists information_schema.sql,\n> indirect test coverage exists.\n> pg_stat_file() - no internal source code callers, test coverage\n> exists, test times on HEAD:42 ms PATCHED:46 ms\n> pg_get_publication_tables() - 
internal source code callers exist, test\n> coverage exists, test times on HEAD:159 ms, subscription 108 wallclock\n> secs PATCHED:167 ms, subscription 110 wallclock secs\n> pg_lock_status() - no internal source code callers, test coverage\n> exists, test times on HEAD:16 ms PATCHED:22 ms\n> pg_stat_get_subscription_stats() - no internal source code callers,\n> test coverage exists, test times on HEAD:subscription 108 wallclock\n> secs PATCHED:subscription 110 wallclock secs\n\nThese ones could be involved in monitoring queries run on a periodic\nbasis.\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 15:11:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 11:41 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 14, 2022 at 11:14:59AM +0530, Bharath Rupireddy wrote:\n> > AFAICS, most of these functions have no direct source code callers,\n> > they're user-facing functions and not in a hot code path. I measured\n> > the test times of these functions and I don't see much difference [1].\n>\n> Thanks for the summary. It looks like your tests involve single\n> runs. What is the difference in run-time when invoking this\n> repeatedly with a large generate_series() for example when few or no\n> tuples are returned? Do you see a difference in perf profile? Some\n> of the functions could have their time mostly eaten while looking at\n> the syscache on repeated calls, but you could see the actual work this\n> involves with a dummy function that returns a large number of\n> attributes on a single record in the worst case possible?\n\nThanks. Yes, using get_call_result_type() for a function that gets\ncalled repeatedly does have some cost as the comment around\nget_call_result_type() says - I found in my testing that\nget_call_result_type() does seem to cost 45% increase in execution\ntimes over quick iterations of a function returning a single row with\n36 columns.\n\n> Separating things into two buckets..\n>\n> > [1]\n> > pg_old_snapshot_time_mapping() - an extension function with no\n> > internal source code callers, no test coverage.\n> > pg_visibility_map_summary() - an extension function with no internal\n> > source code callers, test coverage exists, test times on HEAD:25 ms\n> > PATCHED:25 ms\n> > pg_last_committed_xact() and pg_xact_commit_timestamp_origin() - no\n> > internal source code callers, test coverage exists, test times on\n> > HEAD:10 ms PATCHED:10 ms> pg_get_multixact_members() - no internal source code callers, no test coverage.\n> > pg_control_recovery() and pg_control_init() - test coverage exists,\n> > test times on HEAD:42 ms PATCHED:44 ms\n> > 
pg_identify_object() - no internal source code callers, test coverage\n> > exists, test times on HEAD:17 ms PATCHED:18 ms\n> > pg_identify_object_as_address() - no internal source code callers,\n> > test coverage exists, test times on HEAD:66 ms PATCHED:60 ms\n> > pg_get_object_address() - no internal source code callers, test\n> > coverage exists, test times on HEAD:66 ms PATCHED:60 ms\n> > pg_sequence_parameters() - no internal source code callers, test\n> > coverage exists, test times on HEAD:96 ms PATCHED:98 ms\n> > ts_token_type_byid(), ts_token_type_byname(), ts_parse_byid() and\n> > ts_parse_byname() - internal source code callers exists, test coverage\n> > exists, test times on HEAD:195 ms, pg_dump 10 wallclock secs\n> > PATCHED:186 ms, pg_dump 10 wallclock secs\n> > pg_get_keywords() - no internal source code callers, test coverage\n> > exists, test times on HEAD:129 ms PATCHED:130 ms\n> > pg_get_catalog_foreign_keys() - no internal source code callers, test\n> > coverage exists, test times on HEAD:114 ms PATCHED:111 ms\n> > tsvector_unnest() - no internal source code callers, test coverage\n> > exists, test times on HEAD:26 ms PATCHED:26 ms\n> > ts_setup_firstcall() - test coverage exists, test times on HEAD:195 ms\n> > PATCHED:186 ms\n> > pg_partition_tree() - no internal source code callers, test coverage\n> > exists, test times on HEAD:30 ms PATCHED:32 ms\n> > pg_timezone_abbrevs() - no internal source code callers, test coverage\n> > exists, test times on HEAD:40 ms PATCHED:36 ms\n>\n> These ones don't worry me much, TBH.\n>\n> > pg_stat_get_wal(), pg_stat_get_archiver() and\n> > pg_stat_get_replication_slot() - no internal source code callers, test\n> > coverage exists, test times on HEAD:479 ms PATCHED:483 ms\n> > pg_prepared_xact() - no internal source code callers, test coverage\n> > exists, test times on HEAD:50 ms, subscription 108 wallclock secs,\n> > recovery 111 wallclock secs PATCHED:48 ms, subscription 110 wallclock\n> > secs, recovery 112 
wallclock secs\n> > show_all_settings(), pg_control_system(), pg_control_checkpoint(),\n> > test_predtest() - no internal source code callers, test coverage\n> > exists, test times on HEAD:18 ms PATCHED:18 ms\n> > pg_walfile_name_offset() - no internal source code callers, no test coverage.\n> > aclexplode() - internal callers exists information_schema.sql,\n> > indirect test coverage exists.\n> > pg_stat_file() - no internal source code callers, test coverage\n> > exists, test times on HEAD:42 ms PATCHED:46 ms\n> > pg_get_publication_tables() - internal source code callers exist, test\n> > coverage exists, test times on HEAD:159 ms, subscription 108 wallclock\n> > secs PATCHED:167 ms, subscription 110 wallclock secs\n> > pg_lock_status() - no internal source code callers, test coverage\n> > exists, test times on HEAD:16 ms PATCHED:22 ms\n> > pg_stat_get_subscription_stats() - no internal source code callers,\n> > test coverage exists, test times on HEAD:subscription 108 wallclock\n> > secs PATCHED:subscription 110 wallclock secs\n>\n> These ones could be involved in monitoring queries run on a periodic\n> basis.\n\nI agree with the bucketization. Please see the attached patches. 0001\n- gets rid of explicit tuple desc creation using\nget_call_result_type() for functions thought to be not-so-frequently\ncalled. 0002 - gets rid of an unnecessary call to BlessTupleDesc()\nafter get_call_result_type().\n\nPlease find the attached patches.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Dec 2022 19:41:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 10:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Saving code is nice, but I'd assume the result is slower, because\n> get_call_result_type has to do a pretty substantial amount of work\n> to get the data to construct the tupdesc from. Have you tried to\n> quantify how much overhead this'd add? Which of these functions\n> can we safely consider to be non-performance-critical?\n\nHere's a modest proposal: let's do nothing about this. There's no\nevidence of a real problem here, so we're going to be trying to judge\nthe performance benefits against the code size savings without any\nreal data indicating that either one is an issue. I bet we could\nconvert all of these to one style or the other and it would make very\nlittle real world difference, but deciding which ones to change and in\nwhich direction will take up time and energy that could otherwise be\nspent on more worthwhile projects, and could possibly complicate\nback-patching, too.\n\nBasically, I think this is nit-picking. Let's just accept that both\nstyles have some advantages and leave it up to patch authors to pick\none that they prefer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:43:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On 2022-Dec-19, Robert Haas wrote:\n\n> Here's a modest proposal: let's do nothing about this. There's no\n> evidence of a real problem here, so we're going to be trying to judge\n> the performance benefits against the code size savings without any\n> real data indicating that either one is an issue. I bet we could\n> convert all of these to one style or the other and it would make very\n> little real world difference, but deciding which ones to change and in\n> which direction will take up time and energy that could otherwise be\n> spent on more worthwhile projects, and could possibly complicate\n> back-patching, too.\n> \n> Basically, I think this is nit-picking. Let's just accept that both\n> styles have some advantages and leave it up to patch authors to pick\n> one that they prefer.\n\nThe code savings are substantial actually, so I think bloating things\nfor cases where performance is not an issue is not good. Some other\ndeveloper is sure to cargo-cult that stuff in the future, and that's not\ngreat.\n\nOn the other hand, the measurements have shown that going through the\nfunction is significantly slower. So I kinda like the judgement call\nthat Michael and Bharath have made: change to use the function when\nperformance is not an issue, and keep the verbose coding otherwise.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 19 Dec 2022 20:07:44 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 2:07 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On the other hand, the measurements have shown that going through the\n> function is significantly slower. So I kinda like the judgement call\n> that Michael and Bharath have made: change to use the function when\n> performance is not an issue, and keep the verbose coding otherwise.\n\nSeems fairly arbitrary to me. The ones used for monitoring queries\naren't likely to be run often enough that it matters, but in theory\nit's possible that they could be. Many of the ones supposedly not used\nfor monitoring queries could reasonably be so used, too. You can get\nany answer you want by making arbitrary assumptions about which ones\nare likely to be used frequently and how frequently they're likely to\nbe used, and I think different people evaluating the list\nindependently of each other and with no knowledge of each others work\nwould likely reach substantially different conclusions, ranging all\nthe way from \"do them all this way\" to \"do them all the other way\" and\nvarious positions in the middle.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 14:33:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 19, 2022 at 2:07 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> On the other hand, the measurements have shown that going through the\n>> function is significantly slower. So I kinda like the judgement call\n>> that Michael and Bharath have made: change to use the function when\n>> performance is not an issue, and keep the verbose coding otherwise.\n\n> Seems fairly arbitrary to me.\n\nAgreed ... but the decisions embodied in the code-as-it-stands are\neven more arbitrary, being no doubt mostly based on \"which function\ndid you copy to start from\" not on any thought about performance.\n\nNow that somebody's made an effort to identify which places are\npotentially performance-critical, I don't see why we wouldn't use\nthe fruits of their labor. Yes, somebody else might draw the line\ndifferently, but drawing a line at all seems like a step forward\nto me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 16:21:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 4:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now that somebody's made an effort to identify which places are\n> potentially performance-critical, I don't see why we wouldn't use\n> the fruits of their labor. Yes, somebody else might draw the line\n> differently, but drawing a line at all seems like a step forward\n> to me.\n\nAll right, well, I just work here. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 17:50:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 05:50:03PM -0500, Robert Haas wrote:\n> All right, well, I just work here. :-)\n\nJust to give some numbers: the original version of the patch doing\nthe full switch removed 500 lines of code. The second version that\nswitches the \"non-critical\" paths removes ~200 lines.\n--\nMichael",
"msg_date": "Tue, 20 Dec 2022 16:32:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 07:41:27PM +0530, Bharath Rupireddy wrote:\n> I agree with the bucketization. Please see the attached patches. 0001\n> - gets rid of explicit tuple desc creation using\n> get_call_result_type() for functions thought to be not-so-frequently\n> called.\n\nIt looks like I am OK with the code paths updated here, which refer to\nnone of the \"critical\" function paths.\n\n> 0002 - gets rid of an unnecessary call to BlessTupleDesc()\n> after get_call_result_type().\n\nHmm. I am not sure whether this is right, actually..\n--\nMichael",
"msg_date": "Tue, 20 Dec 2022 16:38:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Dec 19, 2022 at 07:41:27PM +0530, Bharath Rupireddy wrote:\n>> 0002 - gets rid of an unnecessary call to BlessTupleDesc()\n>> after get_call_result_type().\n\n> Hmm. I am not sure whether this is right, actually..\n\nHmm ... at least one of the paths through internal_get_result_type\nis intentionally blessing the result tupdesc:\n\n if (tupdesc->tdtypeid == RECORDOID &&\n tupdesc->tdtypmod < 0)\n assign_record_type_typmod(tupdesc);\n\nbut it's not clear if they all do, and the comments certainly\naren't promising it.\n\nI'd be in favor of making this a documented API promise,\nbut it isn't that right now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Dec 2022 03:11:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Dec 19, 2022 at 07:41:27PM +0530, Bharath Rupireddy wrote:\n> >> 0002 - gets rid of an unnecessary call to BlessTupleDesc()\n> >> after get_call_result_type().\n>\n> > Hmm. I am not sure whether this is right, actually..\n>\n> Hmm ... at least one of the paths through internal_get_result_type\n> is intentionally blessing the result tupdesc:\n>\n> if (tupdesc->tdtypeid == RECORDOID &&\n> tupdesc->tdtypmod < 0)\n> assign_record_type_typmod(tupdesc);\n>\n> but it's not clear if they all do, and the comments certainly\n> aren't promising it.\n\nIt looks to be safe to get rid of BlessTupleDesc() after\nget_call_result_type() for the functions that have OUT parameters and\nreturn 'record' type. This is because, the\nget_call_result_type()->internal_get_result_type()->build_function_result_tupdesc_t()\nreturns non-NULL tupdesc for such functions and all the functions that\n0002 patch touches are having OUT parameters and their return type is\n'record'. I've also verified with Assert(tupdesc->tdtypmod >= 0); -\nhttps://github.com/BRupireddy/postgres/tree/test_for_tdypmod_init_v1.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 16:23:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, Dec 20, 2022 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... at least one of the paths through internal_get_result_type\n>> is intentionally blessing the result tupdesc:\n>> but it's not clear if they all do, and the comments certainly\n>> aren't promising it.\n\n> It looks to be safe to get rid of BlessTupleDesc() after\n> get_call_result_type() for the functions that have OUT parameters and\n> return 'record' type.\n\nI think it's an absolutely horrid idea for callers to depend on\nsuch details of get_call_result_type's behavior --- especially\nwhen there is no function documentation promising it.\n\nIf we want to do something here, the thing to do would be to\nguarantee in get_call_result_type's API spec that any returned\ntupledesc is blessed. However, that might make some other\ncases slower, if they don't need that.\n\nOn the whole, I'm content to leave the BlessTupleDesc calls in\nthese callers. They are cheap enough if the tupdesc is already\nblessed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Dec 2022 11:12:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
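{
"msg_contents": "[Editor's note] Tom's point — that BlessTupleDesc() is \"cheap enough if the tupdesc is already blessed\" — rests on blessing being idempotent: registering a record descriptor assigns it an id (a typmod) once, and re-blessing just returns the same id. The sketch below illustrates that idempotence in Python; the names `bless`, `_registry`, and `UNASSIGNED` are illustrative inventions, not PostgreSQL's API.\n\n```python\n_registry = {}   # descriptor -> assigned \"typmod\" (registration order)\nUNASSIGNED = -1  # mirrors the tdtypmod < 0 \"not yet blessed\" convention\n\ndef bless(desc, typmod=UNASSIGNED):\n    # Already blessed: nothing to do, return the existing id (cheap path).\n    if typmod >= 0:\n        return typmod\n    # First sighting: assign the next id, analogous to\n    # assign_record_type_typmod() registering the descriptor.\n    if desc not in _registry:\n        _registry[desc] = len(_registry)\n    return _registry[desc]\n\ndesc = ((\"name\", \"text\"), (\"size\", \"int8\"))\nfirst = bless(desc)                  # registers, assigns id 0\nagain = bless(desc, typmod=first)    # cheap no-op, same id\nassert first == again == 0\n```\n\nThis is why leaving the extra BlessTupleDesc() calls in the callers is harmless even when get_call_result_type() already blessed the descriptor: the second call takes the cheap path."
},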
{
"msg_contents": "On Tue, Dec 20, 2022 at 11:12:09AM -0500, Tom Lane wrote:\n> On the whole, I'm content to leave the BlessTupleDesc calls in\n> these callers. They are cheap enough if the tupdesc is already\n> blessed.\n\nYeah, agreed.\n\nI have applied v2-0001, after fixing one error in wparser.c where some\nof the previous style was not removed, leading to unnecessary work and\nthe same TupleDesc being built twice for the two ts_token_type()'s\n(input of OID or text).\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 10:13:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Use get_call_result_type() more widely"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 6:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I have applied v2-0001.\n\nThanks for taking care of this.\n\nBy seeing the impact that get_call_result_type() can have for the\nfunctions that are possibly called repeatedly, I couldn't resist\nsharing a patch (attached herewith) that adds a note of caution and\nanother way to build TupleDesc in the documentation to help developers\nout there. Thoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 21 Dec 2022 12:49:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Use get_call_result_type() more widely"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17717\nLogged by: Gunnar L\nEmail address: postgresql@taljaren.se\nPostgreSQL version: 15.0\nOperating system: Ubuntu Linux\nDescription: \n\nWe have observed a significant slowdown in vacuumdb performance between\ndifferent versions of postgresql. And possibly also a memory issue.\r\n\r\nWe run a specific data model, where each customer has its own schema with\nits own set of tables. Each database server hosts 16 databases, each\ncontaining around 250 customer schemas. Due to postgres creating a new file\nfor each database object, we end up with around 5 million files on each\ndatabase server. This may or may not be related to the issue we're seeing\n(new algorithms with new time complexity?)\r\n\r\nWe upgraded from postgresql 9.5 to postgresql 13, and noticed a significant\nslowdown in how vacuumdb performs. Before, we could run a vacuumdb -a -z\neach night, taking around 2 hours to complete. After the upgrade, we see a\nconstant 100% CPU utilization during the vacuumdb process (almost no I/O\nactivity), and vacuumdb cannot complete within a reasonable time. 
We're able\nto vacuum about 3-4 databases each night.\r\n\r\nWe are able to recreate this issue, using a simple bash script to generate a\nsimilar setup.\r\n\r\nFrom local testing, here are our findings:\r\n\r\nConcerning speed:\r\n* Version 9.5, 10, 11 are fast (9.5 slower than 10 and 11)\r\n* Version 12, 13, 14 are very, very slow\r\n* Version 15 is faster (a lot faster than 12,13,14) but not nearly as fast\nas 10 or 11.\r\n\r\nConcerning memory usage:\r\n* Version 15 is using a lot more shared memory OR it might not be releasing\nit properly after vacuuming a db.\r\n\r\nThese are the timings for vacuuming the 16 dbs.\r\n\r\nVersion Seconds Completed\r\n------------------------------\r\n9.5 412 16/16\r\n10 178 16/16\r\n11 166 16/16\r\n12 8319 1/16 or 2/16 (manually aborted)\r\n13 18853 3/16 or 4/16 (manually aborted)\r\n14 16857 3/16 or 4/16 (manually aborted)\r\n15 617 1/16 (crashed!)\r\n15 4158 6/16 (crashed! --shm-size=256mb)\r\n15 9500 16/16 (--shm-size=4096mb)\r\n\r\nThe timing of the only successful run for postgres 15 is somewhat flaky,\nsince the machine was suspended for about 1-1.5 hours so 9500 is only an\nestimate, but the first run (1 db completed in 10 minutes) gives that it is\nfaster than 12-14 but slower than 10 and 11 (3 minutes to complete\neverything)\r\n\r\n\r\nThe following describes our setup\r\nThis is the script (called setup.sh) we’re using to populate the databases\n(we give a port number as parameter)\r\n\r\n##### start of setup.sh\r\nexport PGPASSWORD=mysecretpassword\r\nPORT=$1\r\n\r\necho \"\"> tables_$PORT.sql\r\nfor schema in `seq -w 1 250`; do\r\n echo \"create schema schema$schema;\" >> tables_$PORT.sql\r\n for table in `seq -w 1 500`; do\r\n echo \"create table schema$schema.table$table (id int);\" >>\ntables_$PORT.sql\r\n done\r\ndone\r\n\r\necho \"Setting up db: 01\"\r\ncreatedb -h localhost -U postgres -p $PORT db01\r\npsql -q -h localhost -U postgres -p $PORT db01 -f tables_$PORT.sql\r\n\r\n# This seems to be the fastest 
way to create the databases\r\nfor db in `seq -w 2 16`; do\r\n echo \"Setting up db: $db\"\r\n createdb -h localhost -U postgres -p $PORT --template db01 db$db\r\ndone\r\n####### end of setup.sh\r\n\r\n\r\n\r\nTo execute a test for a particular postgres version (in this example PG\n9.5), we run the following. It will setup PG 9.5 on port 15432.\r\n\r\ndocker run --rm --name pg95 -e POSTGRES_PASSWORD=mysecretpassword -p\n15432:5432 -d postgres:9.5\r\n./setup.sh 15432\r\ndate; time docker exec -it pg95 bash -c \"vacuumdb -a -z -U postgres\"; date\r\n\r\n(The date commands are added to keep track of when tasks were started).\r\n\r\n\r\n\r\n\r\n\r\nHere are complete set of commands and output and comments \r\n(We use different ports for different versions of PG)\r\n\r\ndate; time docker exec -it pg95 bash -c \"vacuumdb -a -z -U postgres\"; date\r\n(The date commands since it takes some time to run)\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\ntime docker exec -it pg95 bash -c \"vacuumdb -a -z -U postgres\"\r\nvacuumdb: vacuuming database \"db01\"\r\n…<snip>...\r\nvacuumdb: vacuuming database \"db16\"\r\nvacuumdb: vacuuming database \"postgres\"\r\nvacuumdb: vacuuming database \"template1\"\r\n\r\nreal\t6m52,070s\r\nuser\t0m0,048s\r\nsys\t0m0,029s\r\n\r\n\r\ntime docker exec -it pg10 bash -c \"vacuumdb -a -z -U postgres\"\r\nvacuumdb: vacuuming database \"db01\"\r\n…<snip>...\r\nvacuumdb: vacuuming database \"db16\"\r\nvacuumdb: vacuuming database \"postgres\"\r\nvacuumdb: vacuuming database \"template1\"\r\n\r\nreal\t2m58,354s\r\nuser\t0m0,043s\r\nsys\t0m0,013s\r\n\r\n\r\n\r\n\r\n\r\ntime docker exec -it pg11 bash -c \"vacuumdb -a -z -U postgres\"\r\nvacuumdb: vacuuming database \"db01\"\r\n…<snip>...\r\nvacuumdb: vacuuming database \"db16\"\r\nvacuumdb: vacuuming database \"postgres\"\r\nvacuumdb: vacuuming database \"template1\"\r\n\r\nreal\t2m46,181s\r\nuser\t0m0,047s\r\nsys\t0m0,012s\r\n\r\n\r\n\r\n\r\ndate; time docker exec -it pg12 bash -c \"vacuumdb -a -z -U 
postgres\"; date\r\nlör 10 dec 2022 18:57:43 CET\r\nvacuumdb: vacuuming database \"db01\"\r\nvacuumdb: vacuuming database \"db02\"\r\n^CCancel request sent\r\nvacuumdb: error: vacuuming of table \"schema241.table177\" in database \"db02\"\nfailed: ERROR: canceling statement due to user request\r\n\r\nreal\t138m39,600s\r\nuser\t0m0,177s\r\nsys\t0m0,418s\r\nlör 10 dec 2022 21:16:22 CET\r\n\r\n\r\n\r\n\r\ndate;time docker exec -it pg13 bash -c \"vacuumdb -a -z -U postgres\"\r\nlör 10 dec 2022 07:22:32 CET\r\nvacuumdb: vacuuming database \"db01\"\r\nvacuumdb: vacuuming database \"db02\"\r\nvacuumdb: vacuuming database \"db03\"\r\nvacuumdb: vacuuming database \"db04\"\r\n^CCancel request sent\r\n\r\nreal\t314m13,172s\r\nuser\t0m0,551s\r\nsys\t0m0,663s\r\nlör 10 dec 2022 12:37:03 CET\r\n\r\n\r\n\r\ndate;time docker exec -it pg14 bash -c \"vacuumdb -a -z -U postgres\"; date\r\nlör 10 dec 2022 14:15:37 CET\r\nvacuumdb: vacuuming database \"db01\"\r\nvacuumdb: vacuuming database \"db02\"\r\nvacuumdb: vacuuming database \"db03\"\r\nvacuumdb: vacuuming database \"db04\"\r\n^CCancel request sent\r\n\r\nreal\t280m57,172s\r\nuser\t0m0,586s\r\nsys\t0m0,559s\r\nlör 10 dec 2022 18:56:34 CET\r\n\r\n\r\n\r\ndate;time docker exec -it pg15 bash -c \"vacuumdb -a -z -U postgres\"; date\r\nlör 10 dec 2022 12:50:25 CET\r\nvacuumdb: vacuuming database \"db01\"\r\nvacuumdb: vacuuming database \"db02\"\r\nvacuumdb: error: processing of database \"db02\" failed: ERROR: could not\nresize shared memory segment \"/PostgreSQL.2952321776\" to 27894720 bytes: No\nspace left on device\r\n\r\nreal\t10m17,913s\r\nuser\t0m0,030s\r\nsys\t0m0,049s\r\nlör 10 dec 2022 13:00:43 CET\r\n\r\n# it was faster, but we need to extend shared memory to make it work\r\n\r\n\r\ndocker run --rm --name pg15 --shm-size=256mb -e\nPOSTGRES_PASSWORD=mysecretpassword -p 55555:5432 -d postgres:15\r\n\r\ndate;time docker exec -it pg15 bash -c \"vacuumdb -a -z -U postgres\"; date\r\nmån 12 dec 2022 08:56:17 CET\r\nvacuumdb: 
vacuuming database \"db01\"\r\n…<snip>...\r\nvacuumdb: vacuuming database \"db07\"\r\nvacuumdb: error: processing of database \"db07\" failed: ERROR: could not\nresize shared memory segment \"/PostgreSQL.1003084622\" to 27894720 bytes: No\nspace left on device\r\n\r\nreal\t69m18,345s\r\nuser\t0m0,217s\r\nsys\t0m0,086s\r\nmån 12 dec 2022 10:05:36 CET\r\n\r\n\r\n\r\ndocker run --rm --name pg15 --shm-size=4096mb -e\nPOSTGRES_PASSWORD=mysecretpassword -p 55555:5432 -d postgres:15\r\n\r\ndate;time docker exec -it pg15 bash -c \"vacuumdb -a -z -U postgres\"; date\r\nmån 12 dec 2022 11:16:11 CET\r\nvacuumdb: vacuuming database \"db01\"\r\n…<snip>...\r\nvacuumdb: vacuuming database \"db16\"\r\nvacuumdb: vacuuming database \"postgres\"\r\nvacuumdb: vacuuming database \"template1\"\r\n\r\nreal\t232m46,168s\r\nuser\t0m0,227s\r\nsys\t0m0,467s\r\nmån 12 dec 2022 15:08:57 CET\r\n\r\n\r\n\r\nHere is the hardware that was used\r\nAMD Ryzen 7 PRO 5850U with Radeon Graphics\r\n8 Cores, 16 threads\r\n\r\n$ free\r\n total used free shared buff/cache \navailable\r\nMem: 28562376 5549716 752624 1088488 22260036 \n21499752\r\nSwap: 999420 325792 673628\r\n\r\nDisk:\tNVMe device, Samsung SSD 980 1TB",
"msg_date": "Tue, 13 Dec 2022 10:57:39 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "PG Bug reporting form <noreply@postgresql.org> writes:\n> We run a specific data model, where each customer has its own schema with\n> its own set of tables. Each database server hosts 16 databases, each\n> containing around 250 customer schemas. Due to postgres creating a new file\n> for each database object, we end up with around 5 million files on each\n> database server. This may or may not be related to the issue we're seeing\n> (new algorithms with new time complexity?)\n\n> We upgraded from postgresql 9.5 to postgresql 13, and noticed a significant\n> slowdown in how vacuumdb performs. Before, we could run a vacuumdb -a -z\n> each night, taking around 2 hours to complete. After the upgrade, we see a\n> constant 100% CPU utilization during the vacuumdb process (almost no I/O\n> activity), and vacuumdb cannot complete within a reasonable time. We're able\n> to vacuum about 3-4 databases each night.\n\nI poked into this a little bit. On HEAD, watching things with \"perf\"\nidentifies vac_update_datfrozenxid() as the main time sink. It's not\nhard to see why: that does a seqscan of pg_class, and it's invoked\nat the end of each vacuum() call. So if you try to vacuum each table\nin the DB separately, you're going to end up spending O(N^2) time\nin often-useless rescans of pg_class. This isn't a huge problem in\nordinary-sized DBs, but with 125000 small tables in the DB it becomes\nthe dominant cost.\n\n> Concerning speed:\n> * Version 9.5, 10, 11 are fast (9.5 slower than 10 and 11)\n> * Version 12, 13, 14 are very, very slow\n> * Version 15 is faster (a lot faster than 12,13,14) but not nearly as fast\n> as 10 or 11.\n\nThe reason for the v12 performance change is that up through v11,\n\"vacuumdb -a -z\" would just issue \"VACUUM (ANALYZE);\" in each DB.\nSo vac_update_datfrozenxid only ran once. 
Beginning in v12 (commit\ne0c2933a7), vacuumdb issues a separate VACUUM command for each\ntargeted table, which causes the problem.\n\nI'm not sure why there's a performance delta from 14 to 15.\nIt doesn't look like vacuumdb itself had any material changes,\nso we must have done something different on the backend side.\nThis may indicate that there's another O(N^2) behavior that\nwe got rid of in v15. Anyway, that change isn't bad, so I did\nnot poke into it too much.\n\nConclusions:\n\n* As a short-term fix, you could try using vacuumdb from v11\nwith the newer servers. Or just do \"psql -c 'vacuum analyze'\"\nand not bother with vacuumdb at all. (On HEAD, with this\nexample database, 'vacuum analyze' takes about 7 seconds per DB\nfor me, versus ~10 minutes using vacuumdb.)\n\n* To fix vacuumdb properly, it might be enough to get it to\nbatch VACUUMs, say by naming up to 1000 tables per command\ninstead of just one. I'm not sure how that would interact\nwith its parallelization logic, though. It's not really\nsolving the O(N^2) issue either, just pushing it further out.\n\n* A better idea, though sadly not very back-patchable, could\nbe to expose a VACUUM option to control whether it runs\nvac_update_datfrozenxid, so that vacuumdb can do that just\nonce at the end. Considering that vac_update_datfrozenxid\nrequires an exclusive lock, the current behavior is poison for\nparallel vacuuming quite aside from the O(N^2) issue. This\nmight tie into some work Peter G. has been pursuing, too.\n\n\t\t\tregards, tom lane\n\n\n",
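The batching idea above can be sketched in a few lines of shell. This is a hypothetical illustration only — the table names and batch size are made up, nothing here is vacuumdb's actual implementation, and real use would pipe the generated statements into psql. Grouping relations so that each VACUUM command names many tables amortizes the per-command overhead, including the pg_class scan in vac_update_datfrozenxid:

```shell
# Hedged sketch: emit one VACUUM per group of tables instead of one per
# table.  batch_size and the relation list are illustrative.
batch_size=3
tables="schema1.t1 schema1.t2 schema2.t1 schema2.t2 schema3.t1"
cmds=""
batch=""
count=0
for t in $tables; do
    batch="${batch:+$batch, }$t"      # comma-separate tables in a batch
    count=$((count + 1))
    if [ "$count" -eq "$batch_size" ]; then
        cmds="$cmds VACUUM (ANALYZE) $batch;"
        batch=""
        count=0
    fi
done
# Flush the final, possibly short, batch.
[ -n "$batch" ] && cmds="$cmds VACUUM (ANALYZE) $batch;"
echo "$cmds"
```

With 125000 tables and a batch size of 1000 this cuts the number of commands (and datfrozenxid recomputations) by three orders of magnitude, though as noted above it only pushes the O(N^2) behavior further out rather than removing it.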
"msg_date": "Thu, 15 Dec 2022 13:56:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * A better idea, though sadly not very back-patchable, could\n> be to expose a VACUUM option to control whether it runs\n> vac_update_datfrozenxid, so that vacuumdb can do that just\n> once at the end. Considering that vac_update_datfrozenxid\n> requires an exclusive lock, the current behavior is poison for\n> parallel vacuuming quite aside from the O(N^2) issue. This\n> might tie into some work Peter G. has been pursuing, too.\n\nThat sounds like a good idea to me. But do we actually need a VACUUM\noption for this? I wonder if we could get away with having the VACUUM\ncommand never call vac_update_datfrozenxid(), except when run in\nsingle-user mode. It would be nice to make pg_xact/clog truncation\nautovacuum's responsibility.\n\nAutovacuum already does things differently to the VACUUM command, and\nfor reasons that seem related to this complaint about vacuumdb.\nBesides, autovacuum is already on the hook to call\nvac_update_datfrozenxid() for the benefit of databases that haven't\nactually been vacuumed, per the do_autovacuum() comments right above\nits vac_update_datfrozenxid() call.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:06:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Dec 15, 2022 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * A better idea, though sadly not very back-patchable, could\n>> be to expose a VACUUM option to control whether it runs\n>> vac_update_datfrozenxid, so that vacuumdb can do that just\n>> once at the end. Considering that vac_update_datfrozenxid\n>> requires an exclusive lock, the current behavior is poison for\n>> parallel vacuuming quite aside from the O(N^2) issue. This\n>> might tie into some work Peter G. has been pursuing, too.\n\n> That sounds like a good idea to me. But do we actually need a VACUUM\n> option for this? I wonder if we could get away with having the VACUUM\n> command never call vac_update_datfrozenxid(), except when run in\n> single-user mode. It would be nice to make pg_xact/clog truncation\n> autovacuum's responsibility.\n\nI could get behind manual VACUUM not invoking vac_update_datfrozenxid\nby default, perhaps. But if it can never call it, then that is a\nfairly important bit of housekeeping that is unreachable except by\nautovacuum. No doubt the people who turn off autovacuum are benighted,\nbut they're still out there.\n\nCould we get somewhere by saying that manual VACUUM calls\nvac_update_datfrozenxid only if it's a full-DB vacuum (ie, no table\nwas specified)? That would fix the problem at hand. However, it'd\nmean (since v12) that a vacuumdb run never calls vac_update_datfrozenxid\nat all, which would result in horrible problems for any poor sods\nwho think that a cronjob running \"vacuumdb -a\" is an adequate substitute\nfor autovacuum.\n\nOr maybe we could modify things so that \"autovacuum = off\" doesn't prevent\noccasional cycles of vac_update_datfrozenxid-and-nothing-else?\n\nIn the end I feel like a manual way to call vac_update_datfrozenxid\nwould be the least magical way of running this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Dec 2022 16:57:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I could get behind manual VACUUM not invoking vac_update_datfrozenxid\n> by default, perhaps. But if it can never call it, then that is a\n> fairly important bit of housekeeping that is unreachable except by\n> autovacuum. No doubt the people who turn off autovacuum are benighted,\n> but they're still out there.\n\nI wouldn't mind adding another option for this to VACUUM. We already\nhave a couple of VACUUM options that are only really needed as escape\nhatches, or perhaps as testing tools used by individual Postgres\nhackers. Another one doesn't seem too bad. The VACUUM command should\neventually become totally niche, so I'm not too concerned about going\noverboard here.\n\n> Could we get somewhere by saying that manual VACUUM calls\n> vac_update_datfrozenxid only if it's a full-DB vacuum (ie, no table\n> was specified)? That would fix the problem at hand.\n\nThat definitely seems reasonable.\n\n> Or maybe we could modify things so that \"autovacuum = off\" doesn't prevent\n> occasional cycles of vac_update_datfrozenxid-and-nothing-else?\n\nThat's what I was thinking of. It seems like a more natural approach\nto me, at least offhand.\n\nI have to imagine that the vast majority of individual calls to\nvac_update_datfrozenxid have just about zero chance of updating\ndatfrozenxid or datminmxid as things stand. There is bound to be some\nnumber of completely static tables in every database (maybe just\nsystem catalogs). Those static tables are bound to be the tables that\nhold back datfrozenxid/datminmxid approximately all the time. 
To me\nthis suggests that vac_update_datfrozenxid should fully own the fact\nthat it's supposed to be called out of band, possibly only in\nautovacuum.\n\nSeparately, I wonder if it would make sense to invent a new fast-path\nfor the VACUUM command that is designed to inexpensively determine\nthat it cannot possibly matter if vac_update_datfrozenxid is never\ncalled, given the specifics (the details of the target rel and its\nTOAST rel).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Dec 2022 20:39:54 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Dec 15, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Or maybe we could modify things so that \"autovacuum = off\" doesn't prevent\n>> occasional cycles of vac_update_datfrozenxid-and-nothing-else?\n\n> That's what I was thinking of. It seems like a more natural approach\n> to me, at least offhand.\n\nSeems worth looking into. But I suppose the launch frequency would\nhave to be more often than the current behavior for autovacuum = off,\nso it would complicate the logic in that area.\n\n> I have to imagine that the vast majority of individual calls to\n> vac_update_datfrozenxid have just about zero chance of updating\n> datfrozenxid or datminmxid as things stand.\n\nThat is a really good point. How about teaching VACUUM to track\nthe oldest original relfrozenxid and relminmxid among the table(s)\nit processed, and skip vac_update_datfrozenxid unless at least one\nof those matches the database's values? For extra credit, also\nskip if we didn't successfully advance the source rel's value.\n\nThis might lead to a fix that solves the OP's problem while not\nchanging anything fundamental, which would make it reasonable\nto back-patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Dec 2022 09:49:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I have to imagine that the vast majority of individual calls to\n> > vac_update_datfrozenxid have just about zero chance of updating\n> > datfrozenxid or datminmxid as things stand.\n>\n> That is a really good point. How about teaching VACUUM to track\n> the oldest original relfrozenxid and relminmxid among the table(s)\n> it processed, and skip vac_update_datfrozenxid unless at least one\n> of those matches the database's values? For extra credit, also\n> skip if we didn't successfully advance the source rel's value.\n\nHmm. I think that that would probably work.\n\nIt would certainly work on 15+, because there tends to be \"natural\ndiversity\" among the relfrozenxid values seen for each table, due to\nthe \"track oldest extant XID\" work; we no longer see many tables that\nall have the same relfrozenxid, that advance in lockstep. But even\nthat factor probably doesn't matter, since we only need one \"laggard\nrelfrozenxid\" static table for the scheme to work and work well. That\nis probably a safe bet on all versions, though I'd have to check to be\nsure.\n\n> This might lead to a fix that solves the OP's problem while not\n> changing anything fundamental, which would make it reasonable\n> to back-patch.\n\nThat's a big plus. This is a nasty regression. I wouldn't call it a\nmust-fix, but it's bad enough to be worth fixing if we can come up\nwith a reasonably non-invasive approach.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 16 Dec 2022 10:47:07 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Dec 16, 2022 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That is a really good point. How about teaching VACUUM to track\n>> the oldest original relfrozenxid and relminmxid among the table(s)\n>> it processed, and skip vac_update_datfrozenxid unless at least one\n>> of those matches the database's values? For extra credit, also\n>> skip if we didn't successfully advance the source rel's value.\n\n> Hmm. I think that that would probably work.\n\n> It would certainly work on 15+, because there tends to be \"natural\n> diversity\" among the relfrozenxid values seen for each table, due to\n> the \"track oldest extant XID\" work; we no longer see many tables that\n> all have the same relfrozenxid, that advance in lockstep. But even\n> that factor probably doesn't matter, since we only need one \"laggard\n> relfrozenxid\" static table for the scheme to work and work well. That\n> is probably a safe bet on all versions, though I'd have to check to be\n> sure.\n\nOh, I see your point: if a whole lot of tables have the same relfrozenxid\nand it matches datfrozenxid, this won't help. Still, we can hope that\nthat's an uncommon situation. If we postulate somebody trying to use\nscheduled \"vacuumdb -z\" in place of autovacuum, they shouldn't really have\nthat situation. Successively vacuuming many tables should normally\nresult in the tables' relfrozenxids not being all the same, unless they\nwere unlucky enough to have a very long-running transaction holding back\nthe global xmin horizon the whole time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 16 Dec 2022 15:33:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 01:56:30PM -0500, Tom Lane wrote:\n> * To fix vacuumdb properly, it might be enough to get it to\n> batch VACUUMs, say by naming up to 1000 tables per command\n> instead of just one. I'm not sure how that would interact\n> with its parallelization logic, though. It's not really\n> solving the O(N^2) issue either, just pushing it further out.\n\nI have been thinking about this part, and using a hardcoded rule for\nthe batches would be tricky. The list of relations returned by the\nscan of pg_class are ordered by relpages, so depending on the\ndistribution of the sizes (few tables with a large size and a lot of\ntable with small sizes, exponential distribution of table sizes), we\nmay finish with more downsides than upsides in some cases, even if we\nuse a linear rule based on the number of relations, or even if we\ndistribute the relations across the slots in a round robin fashion for\nexample.\n\nIn order to control all that, rather than a hardcoded rule, could it\nbe as simple as introducing an option like vacuumdb --batch=N\ndefaulting to 1 to let users control the number of relations grouped\nin a single command with a round robin distribution for each slot?\n--\nMichael",
"msg_date": "Sun, 18 Dec 2022 11:21:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "\n\n> In order to control all that, rather than a hardcoded rule, could it\n> be as simple as introducing an option like vacuumdb --batch=N\n> defaulting to 1 to let users control the number of relations grouped\n> in a single command with a round robin distribution for each slot?\n\nMy first reaction to that is: Is it possible to explain to a DBA what N should be for a particular cluster?\n\n",
"msg_date": "Sat, 17 Dec 2022 18:23:27 -0800",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 08:39:54PM -0800, Peter Geoghegan wrote:\n> On Thu, Dec 15, 2022 at 1:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I could get behind manual VACUUM not invoking vac_update_datfrozenxid\n>> by default, perhaps. But if it can never call it, then that is a\n>> fairly important bit of housekeeping that is unreachable except by\n>> autovacuum. No doubt the people who turn off autovacuum are benighted,\n>> but they're still out there.\n> \n> I wouldn't mind adding another option for this to VACUUM. We already\n> have a couple of VACUUM options that are only really needed as escape\n> hatches, or perhaps as testing tools used by individual Postgres\n> hackers. Another one doesn't seem too bad. The VACUUM command should\n> eventually become totally niche, so I'm not too concerned about going\n> overboard here.\n\nPerhaps there could also be an update-datfrozenxid function that vacuumdb\ncalls when finished with a database. Even if vacuum becomes smarter about\ncalling vac_update_datfrozenxid, this might still be worth doing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 18 Dec 2022 15:55:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 06:23:27PM -0800, Christophe Pettus wrote:\n> My first reaction to that is: Is it possible to explain to a DBA\n> what N should be for a particular cluster?\n\nAssuming that we can come up with a rather straight-forward still\nportable rule for the distribution of the relations across of the\nslots like something I mentioned above (which is not the best thing\ndepending on the sizes and the number of tables), that would be quite\ntricky IMO.\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 12:21:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "[ redirecting to -hackers because patch attached ]\n\nPeter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Dec 16, 2022 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That is a really good point. How about teaching VACUUM to track\n>> the oldest original relfrozenxid and relminmxid among the table(s)\n>> it processed, and skip vac_update_datfrozenxid unless at least one\n>> of those matches the database's values? For extra credit, also\n>> skip if we didn't successfully advance the source rel's value.\n\n> Hmm. I think that that would probably work.\n\nI poked into that idea some more and concluded that getting VACUUM to\nmanage it behind the user's back is not going to work very reliably.\nThe key problem is explained by this existing comment in autovacuum.c:\n\n * Even if we didn't vacuum anything, it may still be important to do\n * this, because one indirect effect of vac_update_datfrozenxid() is to\n * update ShmemVariableCache->xidVacLimit. That might need to be done\n * even if we haven't vacuumed anything, because relations with older\n * relfrozenxid values or other databases with older datfrozenxid values\n * might have been dropped, allowing xidVacLimit to advance.\n\nThat is, if the table that's holding back datfrozenxid gets dropped\nbetween VACUUM runs, VACUUM would never think that it might have\nadvanced the global minimum.\n\nI'm forced to the conclusion that we have to expose some VACUUM\noptions if we want this to work well. Attached is a draft patch\nthat invents SKIP_DATABASE_STATS and ONLY_DATABASE_STATS options\n(name bikeshedding welcome) and teaches vacuumdb to use them.\n\nLight testing says that this is a win: even on the regression\ndatabase, which isn't all that big, I see a drop in vacuumdb's\nruntime from ~260 ms to ~175 ms. Of course this is a case where\nVACUUM doesn't really have anything to do, so it's a best-case\nscenario ... 
but still, I was expecting the effect to be barely\nabove noise with this many tables, yet it's a good bit more.\n\n\t\t\tregards, tom lane",
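Assuming the option names from the draft patch, the statement stream vacuumdb would send looks roughly like the sketch below (table names are made up): every per-table command skips the pg_class scan, and a single closing command performs it once per database.

```shell
# Hedged sketch of the proposed vacuumdb command sequence using the
# draft patch's SKIP_DATABASE_STATS / ONLY_DATABASE_STATS options.
cmds=""
for t in schema1.t1 schema1.t2 schema2.t1; do
    cmds="$cmds VACUUM (SKIP_DATABASE_STATS, ANALYZE) $t;"
done
# One datfrozenxid/datminmxid update per database, at the very end.
cmds="$cmds VACUUM (ONLY_DATABASE_STATS);"
echo "$cmds"
```

This turns the per-table cost from a full pg_class scan back into O(1), with the single ONLY_DATABASE_STATS pass paying for that scan once per database.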
"msg_date": "Wed, 28 Dec 2022 15:13:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 03:13:23PM -0500, Tom Lane wrote:\n> I'm forced to the conclusion that we have to expose some VACUUM\n> options if we want this to work well. Attached is a draft patch\n> that invents SKIP_DATABASE_STATS and ONLY_DATABASE_STATS options\n> (name bikeshedding welcome) and teaches vacuumdb to use them.\n\nThis is the conclusion I arrived at, too. In fact, I was just about to\npost a similar patch set. I'm attaching it here anyway, but I'm fine with\nproceeding with your version.\n\nI think the main difference between your patch and mine is that I've\nexposed vac_update_datfrozenxid() via a function instead of a VACUUM\noption. IMHO that feels a little more natural, but I can't say I feel too\nstrongly about it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 28 Dec 2022 13:12:53 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I think the main difference between your patch and mine is that I've\n> exposed vac_update_datfrozenxid() via a function instead of a VACUUM\n> option. IMHO that feels a little more natural, but I can't say I feel too\n> strongly about it.\n\nI thought about that but it seems fairly unsafe, because that means\nthat vac_update_datfrozenxid is executing inside a user-controlled\ntransaction. I don't think it will hurt us if the user does a\nROLLBACK afterward --- but if he sits on the open transaction,\nthat would be bad, if only because we're still holding the\nLockDatabaseFrozenIds lock which will block other VACUUMs.\nThere might be more hazards besides that; certainly no one has ever\ntried to run vac_update_datfrozenxid that way before.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:20:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 03:13:23PM -0500, Tom Lane wrote:\n> +\t/* If we used SKIP_DATABASE_STATS, mop up with ONLY_DATABASE_STATS */\n> +\tif (vacopts->skip_database_stats && stage == ANALYZE_NO_STAGE && !failed)\n> +\t{\n> +\t\texecuteCommand(conn, \"VACUUM (ONLY_DATABASE_STATS);\", echo);\n> +\t}\n\nWhen I looked at this, I thought it would be better to send the command\nthrough the parallel slot machinery so that failures would use the same\ncode path as the rest of the VACUUM commands. However, you also need to\nadjust ParallelSlotsWaitCompletion() to mark the slots as idle so that the\nslot array can be reused after it is called.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Dec 2022 13:21:50 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 04:20:19PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I think the main difference between your patch and mine is that I've\n>> exposed vac_update_datfrozenxid() via a function instead of a VACUUM\n>> option. IMHO that feels a little more natural, but I can't say I feel too\n>> strongly about it.\n> \n> I thought about that but it seems fairly unsafe, because that means\n> that vac_update_datfrozenxid is executing inside a user-controlled\n> transaction. I don't think it will hurt us if the user does a\n> ROLLBACK afterward --- but if he sits on the open transaction,\n> that would be bad, if only because we're still holding the\n> LockDatabaseFrozenIds lock which will block other VACUUMs.\n> There might be more hazards besides that; certainly no one has ever\n> tried to run vac_update_datfrozenxid that way before.\n\nThat's a good point.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Dec 2022 13:23:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Dec 28, 2022 at 03:13:23PM -0500, Tom Lane wrote:\n>> +\t\texecuteCommand(conn, \"VACUUM (ONLY_DATABASE_STATS);\", echo);\n\n> When I looked at this, I thought it would be better to send the command\n> through the parallel slot machinery so that failures would use the same\n> code path as the rest of the VACUUM commands. However, you also need to\n> adjust ParallelSlotsWaitCompletion() to mark the slots as idle so that the\n> slot array can be reused after it is called.\n\nHm. I was just copying the way commands are issued further up in the\nsame function. But I think you're right: once we've done\n\n\tParallelSlotsAdoptConn(sa, conn);\n\nit's probably not entirely kosher to use the conn directly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 17:05:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 03:13:23PM -0500, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Fri, Dec 16, 2022 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That is a really good point. How about teaching VACUUM to track\n> >> the oldest original relfrozenxid and relminmxid among the table(s)\n> >> it processed, and skip vac_update_datfrozenxid unless at least one\n> >> of those matches the database's values? For extra credit, also\n> >> skip if we didn't successfully advance the source rel's value.\n> \n> > Hmm. I think that that would probably work.\n> \n> I'm forced to the conclusion that we have to expose some VACUUM\n> options if we want this to work well. Attached is a draft patch\n> that invents SKIP_DATABASE_STATS and ONLY_DATABASE_STATS options\n> (name bikeshedding welcome) and teaches vacuumdb to use them.\n\nI was surprised to hear that this added *two* options.\n\nI assumed it would look like:\n\nVACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Dec 2022 19:23:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 11:21:47AM +0900, Michael Paquier wrote:\n> On Thu, Dec 15, 2022 at 01:56:30PM -0500, Tom Lane wrote:\n> > * To fix vacuumdb properly, it might be enough to get it to\n> > batch VACUUMs, say by naming up to 1000 tables per command\n> > instead of just one. I'm not sure how that would interact\n> > with its parallelization logic, though. It's not really\n> > solving the O(N^2) issue either, just pushing it further out.\n> \n> I have been thinking about this part, and using a hardcoded rule for\n> the batches would be tricky. The list of relations returned by the\n> scan of pg_class are ordered by relpages, so depending on the\n> distribution of the sizes (few tables with a large size and a lot of\n> table with small sizes, exponential distribution of table sizes), we\n> may finish with more downsides than upsides in some cases, even if we\n> use a linear rule based on the number of relations, or even if we\n> distribute the relations across the slots in a round robin fashion for\n> example.\n\nI've always found it weird that it uses \"ORDER BY relpages\".\n\nI'd prefer if it could ORDER BY age(relfrozenxid) or\nGREATEST(age(relfrozenxid), age(relminmxid)), at least if you specify\none of the --min-*age parms. Or something less hardcoded and\nunconfigurable.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Dec 2022 19:29:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Dec 28, 2022 at 03:13:23PM -0500, Tom Lane wrote:\n>> I'm forced to the conclusion that we have to expose some VACUUM\n>> options if we want this to work well. Attached is a draft patch\n>> that invents SKIP_DATABASE_STATS and ONLY_DATABASE_STATS options\n>> (name bikeshedding welcome) and teaches vacuumdb to use them.\n\n> I assumed it would look like:\n> VACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n\nMeh. We could do it like that, but I think options that look like\nbooleans but aren't are messy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 21:17:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 15:13:23 -0500, Tom Lane wrote:\n> [ redirecting to -hackers because patch attached ]\n> \n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Fri, Dec 16, 2022 at 6:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That is a really good point. How about teaching VACUUM to track\n> >> the oldest original relfrozenxid and relminmxid among the table(s)\n> >> it processed, and skip vac_update_datfrozenxid unless at least one\n> >> of those matches the database's values? For extra credit, also\n> >> skip if we didn't successfully advance the source rel's value.\n> \n> > Hmm. I think that that would probably work.\n> \n> I poked into that idea some more and concluded that getting VACUUM to\n> manage it behind the user's back is not going to work very reliably.\n> The key problem is explained by this existing comment in autovacuum.c:\n> \n> * Even if we didn't vacuum anything, it may still be important to do\n> * this, because one indirect effect of vac_update_datfrozenxid() is to\n> * update ShmemVariableCache->xidVacLimit. That might need to be done\n> * even if we haven't vacuumed anything, because relations with older\n> * relfrozenxid values or other databases with older datfrozenxid values\n> * might have been dropped, allowing xidVacLimit to advance.\n> \n> That is, if the table that's holding back datfrozenxid gets dropped\n> between VACUUM runs, VACUUM would never think that it might have\n> advanced the global minimum.\n\nI wonder if a less aggressive version of this idea might still work. Perhaps\nwe could use ShmemVariableCache->latestCompletedXid or\nShmemVariableCache->nextXid to skip at least some updates?\n\nObviously this isn't going to help if there's a lot of concurrent activity,\nbut the case of just running vacuumdb -a might be substantially improved.\n\n\nSeparately I wonder if it's worth micro-optimizing vac_update_datfrozenxid() a\nbit. I e.g. 
see a noticable speedup bypassing systable_getnext() and using\nheap_getnext(). It's really too bad that we want to check for \"in the future\"\nxids, otherwise we could use a ScanKey to filter at a lower level.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Dec 2022 19:03:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> I assumed it would look like:\n>> VACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n\n> Meh. We could do it like that, but I think options that look like\n> booleans but aren't are messy.\n\nNote that I'm not necessarily objecting to there being just one option,\nonly to its values being a superset-of-boolean. Perhaps this'd work:\n\nVACUUM (DATABASE_STATS {UPDATE,SKIP,ONLY})\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 12:22:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 12:22:58PM -0500, Tom Lane wrote:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> VACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n> VACUUM (DATABASE_STATS {UPDATE,SKIP,ONLY})\n\n+1 for only introducing one option. IMHO UPDATE_DATABASE_STATS fits a\nlittle better since it states the action like most of the other options,\nbut I think both choices are sufficiently clear.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Dec 2022 10:26:57 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> +1 for only introducing one option. IMHO UPDATE_DATABASE_STATS fits a\n> little better since it states the action like most of the other options,\n> but I think both choices are sufficiently clear.\n\nConsistency of VACUUM's options seems like a lost cause already :-(.\nBetween them, DISABLE_PAGE_SKIPPING, SKIP_LOCKED, and PROCESS_TOAST\ncover just about the entire set of possibilities for describing a\nboolean option --- we couldn't even manage to be consistent about\nwhether ON or OFF is the default, let alone where the verb is.\nAnd it's hard to argue that FULL, VERBOSE, or PARALLEL is a verb.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 13:45:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 07:03:29PM -0800, Andres Freund wrote:\n> Separately I wonder if it's worth micro-optimizing vac_update_datfrozenxid() a\n> bit. I e.g. see a noticable speedup bypassing systable_getnext() and using\n> heap_getnext(). It's really too bad that we want to check for \"in the future\"\n> xids, otherwise we could use a ScanKey to filter at a lower level.\n\nAnother thing I'm exploring is looking up the datfrozenxid/datminmxid\nbefore starting the pg_class scan so that the scan can be stopped early if\nit sees that we cannot possibly advance the values. The\noverwrite-corrupt-values logic might make this a little more complicated,\nbut I think it'd be sufficient to force the pg_class scan to complete if we\never see a value \"in the future.\" Overwriting the corrupt value might be\ndelayed, but it would eventually happen once the table ages advance.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Dec 2022 10:52:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Dec 29, 2022 at 12:22:58PM -0500, Tom Lane wrote:\n>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>> VACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n>>>> VACUUM (DATABASE_STATS {UPDATE,SKIP,ONLY})\n\n> +1 for only introducing one option. IMHO UPDATE_DATABASE_STATS fits a\n> little better since it states the action like most of the other options,\n> but I think both choices are sufficiently clear.\n\nI tried to make a patch along these lines, and soon hit a stumbling\nblock: ONLY is a fully-reserved SQL keyword. I don't think this\nsyntax is attractive enough to justify requiring people to\ndouble-quote the option, so we are back to square one. Anybody\nhave a different suggestion?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 15:29:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 03:29:15PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Thu, Dec 29, 2022 at 12:22:58PM -0500, Tom Lane wrote:\n>>> Justin Pryzby <pryzby@telsasoft.com> writes:\n>>>> VACUUM (UPDATE_DATABASE_STATS {yes,no,only})\n>>>>> VACUUM (DATABASE_STATS {UPDATE,SKIP,ONLY})\n> \n>> +1 for only introducing one option. IMHO UPDATE_DATABASE_STATS fits a\n>> little better since it states the action like most of the other options,\n>> but I think both choices are sufficiently clear.\n> \n> I tried to make a patch along these lines, and soon hit a stumbling\n> block: ONLY is a fully-reserved SQL keyword. I don't think this\n> syntax is attractive enough to justify requiring people to\n> double-quote the option, so we are back to square one. Anybody\n> have a different suggestion?\n\nHm. I thought about using PreventInTransactionBlock() for the function,\nbut that probably won't work for a few reasons. AFAICT we'd need to\nrestrict it to only be callable via \"SELECT update_database_stats()\", which\nfeels a bit unnatural.\n\nThere was some discussion elsewhere [0] about adding a\nPROCESS_MAIN_RELATION option or expanding PROCESS_TOAST to simplify\nvacuuming the TOAST table directly. If such an option existed, you could\ncall\n\n\tVACUUM (PROCESS_MAIN_RELATION FALSE, PROCESS_TOAST FALSE, UPDATE_DATABASE_STATES TRUE) pg_class;\n\nto achieve roughly what we need. I'll admit this is hacky, though.\n\nSo, adding both SKIP_DATABASE_STATS and ONLY_DATABASE_STATS might be the\nbest bet.\n\n[0] https://postgr.es/m/20221215191246.GA252861%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Dec 2022 13:37:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Dec 29, 2022 at 03:29:15PM -0500, Tom Lane wrote:\n>> I tried to make a patch along these lines, and soon hit a stumbling\n>> block: ONLY is a fully-reserved SQL keyword. I don't think this\n>> syntax is attractive enough to justify requiring people to\n>> double-quote the option, so we are back to square one. Anybody\n>> have a different suggestion?\n\n> ... adding both SKIP_DATABASE_STATS and ONLY_DATABASE_STATS might be the\n> best bet.\n\nNobody has proposed a different bikeshed color, so I'm going to\nproceed with that syntax. I'll incorporate the parallel-machinery\nfix from your patch and push to HEAD only (since it's hard to argue\nthis isn't a new feature).\n\nThis needn't foreclose pursuing the various ideas about making\nvac_update_datfrozenxid faster; but none of those would eliminate\nthe fundamental O(N^2) issue AFAICS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Jan 2023 12:53:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17717: Regression in vacuumdb (15 is slower than 10/11 and\n possible memory issue)"
}
]
[
{
"msg_contents": "The standard only defines an ORDER BY clause inside of an aggregate for \nARRAY_AGG(). As an extension to the standard, we allow it for all \naggregates, which is very convenient for non-standard things like \nstring_agg().\n\nHowever, it is completely useless for things like AVG() or SUM(). If \nyou include it, the aggregate will do the sort even though it is neither \nrequired nor desired.\n\nI am proposing something like pg_aggregate.aggordering which would be an \nenum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently \nall aggregates would have 'a' but I am thinking that a lot of them could \nbe switched to 'f'. In that case, if a user supplies an ordering, an \nerror is raised.\n\nMy main motivation behind this is to be able to optimize aggregates that \ncould stop early such as ANY_VALUE(), but also to self-optimize queries \nwritten in error (or ignorance).\n\nThere is recurring demand for a first_agg() of some sort, and that one \n(whether implemented in core or left to extensions) would use 'r' so \nthat an error is raised if the user does not supply an ordering.\n\nI have not started working on this because I envision quite a lot of \nbikeshedding, but this is the approach I am aiming for.\n\nThoughts?\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 13 Dec 2022 13:50:48 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Ordering behavior for aggregates"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 1:51 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> The standard only defines an ORDER BY clause inside of an aggregate for\n> ARRAY_AGG(). As an extension to the standard, we allow it for all\n> aggregates, which is very convenient for non-standard things like\n> string_agg().\n>\n> However, it is completely useless for things like AVG() or SUM(). If\n> you include it, the aggregate will do the sort even though it is neither\n> required nor desired.\n>\n> I am proposing something like pg_aggregate.aggordering which would be an\n> enum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently\n> all aggregates would have 'a' but I am thinking that a lot of them could\n> be switched to 'f'. In that case, if a user supplies an ordering, an\n> error is raised.\n>\n\nShould there perhaps also be an option for \"ignored\" where we'd allow the\nuser to specify it, but not actually do the sort because we know it's\npointless? Or maybe that should be the behaviour of \"forbidden\", which\nshould then perhaps have a different name?\n\n\nMy main motivation behind this is to be able to optimize aggregates that\n> could stop early such as ANY_VALUE(), but also to self-optimize queries\n> written in error (or ignorance).\n>\n> There is recurring demand for a first_agg() of some sort, and that one\n> (whether implemented in core or left to extensions) would use 'r' so\n> that an error is raised if the user does not supply an ordering.\n>\n> I have not started working on this because I envision quite a lot of\n> bikeshedding, but this is the approach I am aiming for.\n>\n> Thoughts?\n>\n\n For consistency, should we have a similar flag for DISITNCT? That could be\ninteresting to forbid for something like first_agg() wouldn't it? 
I'm not\nsure what the usecase would be to require it, but maybe there is one?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 13 Dec 2022 13:55:27 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "On 12/13/22 13:55, Magnus Hagander wrote:\n> On Tue, Dec 13, 2022 at 1:51 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> \n>> The standard only defines an ORDER BY clause inside of an aggregate for\n>> ARRAY_AGG(). As an extension to the standard, we allow it for all\n>> aggregates, which is very convenient for non-standard things like\n>> string_agg().\n>>\n>> However, it is completely useless for things like AVG() or SUM(). If\n>> you include it, the aggregate will do the sort even though it is neither\n>> required nor desired.\n>>\n>> I am proposing something like pg_aggregate.aggordering which would be an\n>> enum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently\n>> all aggregates would have 'a' but I am thinking that a lot of them could\n>> be switched to 'f'. In that case, if a user supplies an ordering, an\n>> error is raised.\n>>\n> \n> Should there perhaps also be an option for \"ignored\" where we'd allow the\n> user to specify it, but not actually do the sort because we know it's\n> pointless? Or maybe that should be the behaviour of \"forbidden\", which\n> should then perhaps have a different name?\n\n\nI did think about that but I can't think of any reason we would want to \nsilently ignore something the user has written. If the ordering doesn't \nmake sense, we should forbid it.\n\n\n> My main motivation behind this is to be able to optimize aggregates that\n>> could stop early such as ANY_VALUE(), but also to self-optimize queries\n>> written in error (or ignorance).\n>>\n>> There is recurring demand for a first_agg() of some sort, and that one\n>> (whether implemented in core or left to extensions) would use 'r' so\n>> that an error is raised if the user does not supply an ordering.\n>>\n>> I have not started working on this because I envision quite a lot of\n>> bikeshedding, but this is the approach I am aiming for.\n>>\n>> Thoughts?\n>>\n> \n> For consistency, should we have a similar flag for DISITNCT? 
That could be\n> interesting to forbid for something like first_agg() wouldn't it? I'm not\n> sure what the usecase would be to require it, but maybe there is one?\n\n\nI thought about that too, but decided it could be a separate patch \nbecause far fewer aggregates would need it.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 14:05:10 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "On Tue, 13 Dec 2022 at 07:50, Vik Fearing <vik@postgresfriends.org> wrote:\n\nI am proposing something like pg_aggregate.aggordering which would be an\n> enum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently\n> all aggregates would have 'a' but I am thinking that a lot of them could\n> be switched to 'f'. In that case, if a user supplies an ordering, an\n> error is raised.\n>\n\nAlthough I find \"r\" attractive, I have two concerns about it:\n\n1) Do we really want to require ordering? I know it's weird and partially\nundefined to call something like string_agg without an ordering, but what\nif in the specific application it doesn’t matter in what order the items\nappear?\n\n2) There is a backward compatibility issue here; it’s not clear to me we\ncould apply \"r\" to any existing aggregate.\n\nActually upon consideration, I think I have similar concerns about \"f\". We\ndon’t usually forbid \"dumb\" things; e.g., I can write a function which\nignores its inputs. And in some situations, \"dumb\" things make sense. For\nexample, if I’m specifying a function to use as a filter, it could be\nreasonable in a particular instance to provide a function which ignores one\nor more of its inputs.",
"msg_date": "Tue, 13 Dec 2022 08:25:04 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "On 12/13/22 14:25, Isaac Morland wrote:\n> On Tue, 13 Dec 2022 at 07:50, Vik Fearing <vik@postgresfriends.org> wrote:\n> \n> I am proposing something like pg_aggregate.aggordering which would be an\n>> enum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently\n>> all aggregates would have 'a' but I am thinking that a lot of them could\n>> be switched to 'f'. In that case, if a user supplies an ordering, an\n>> error is raised.\n>>\n> \n> Although I find \"r\" attractive, I have two concerns about it:\n> \n> 1) Do we really want to require ordering? I know it's weird and partially\n> undefined to call something like string_agg without an ordering, but what\n> if in the specific application it doesn’t matter in what order the items\n> appear?\n> \n> 2) There is a backward compatibility issue here; it’s not clear to me we\n> could apply \"r\" to any existing aggregate.\n\n\nI do not intend to add 'r' to any existing aggregate. I included it in \nthe hypothetical enum for future aggregates and extensions. It isn't \nperfect either because first_value((x, y) ORDER BY x) can still give a \nsemi-random result.\n\n\n> Actually upon consideration, I think I have similar concerns about \"f\". We\n> don’t usually forbid \"dumb\" things; e.g., I can write a function which\n> ignores its inputs. And in some situations, \"dumb\" things make sense. For\n> example, if I’m specifying a function to use as a filter, it could be\n> reasonable in a particular instance to provide a function which ignores one\n> or more of its inputs.\n\n\nSure, but this isn't a function; this is syntax. And in your example \nyou are ignoring the input whereas currently aggregates *do not* ignore \nthe ordering when they could/should.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 14:44:32 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "Le mardi 13 décembre 2022, 14:05:10 CET Vik Fearing a écrit :\n> On 12/13/22 13:55, Magnus Hagander wrote:\n> > On Tue, Dec 13, 2022 at 1:51 PM Vik Fearing <vik@postgresfriends.org> \nwrote:\n> >> However, it is completely useless for things like AVG() or SUM(). If\n> >> you include it, the aggregate will do the sort even though it is neither\n> >> required nor desired.\n\nI'm not sure about this. For AVG and SUM, if you want reproducible results \nwith floating point numbers, you may want it. And if you disallow it for most \navg and sum implementations except for floating point types, it's not a very \nconsistent user experience.\n\n\n> >> \n> >> I am proposing something like pg_aggregate.aggordering which would be an\n> >> enum of behaviors such as f=Forbidden, a=Allowed, r=Required. Currently\n> >> all aggregates would have 'a' but I am thinking that a lot of them could\n> >> be switched to 'f'. In that case, if a user supplies an ordering, an\n> >> error is raised.\n> > \n> > Should there perhaps also be an option for \"ignored\" where we'd allow the\n> > user to specify it, but not actually do the sort because we know it's\n> > pointless? Or maybe that should be the behaviour of \"forbidden\", which\n> > should then perhaps have a different name?\n> \n> I did think about that but I can't think of any reason we would want to\n> silently ignore something the user has written. If the ordering doesn't\n> make sense, we should forbid it.\n\nIt is allowed as of now, and so it would be a compatibility issue for queries \nexisting in the wild. Ignoring it is just an optimization, just how we \noptimize away some joins entirely. \n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 15:06:49 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> Le mardi 13 décembre 2022, 14:05:10 CET Vik Fearing a écrit :\n>> On 12/13/22 13:55, Magnus Hagander wrote:\n>>> On Tue, Dec 13, 2022 at 1:51 PM Vik Fearing <vik@postgresfriends.org> \n>>> wrote:\n>>>> However, it is completely useless for things like AVG() or SUM(). If\n>>>> you include it, the aggregate will do the sort even though it is neither\n>>>> required nor desired.\n\n> I'm not sure about this. For AVG and SUM, if you want reproducible results \n> with floating point numbers, you may want it.\n\nYeah, I was about to mention the floating-point issue. IIRC, we went\nover exactly this ground when we introduced aggregate ORDER BY, and\ndecided that it was not our business to legislate whether particular\naggregates need ordering or not. We don't try to second-guess users'\ninclusion of ORDER BY in subqueries either, and that's just about\nthe same thing. (Indeed, if you're feeding the subquery output to\nan aggregate, it's exactly the same thing.)\n\nAccordingly, I find nothing at all attractive in this proposal.\nI think the main thing it'd accomplish is to drive users back to\nthe bad old days of ordering-by-subquery, if they have a requirement\nwe failed to account for.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Dec 2022 10:13:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "Le mardi 13 décembre 2022, 16:13:34 CET Tom Lane a écrit :\n> Accordingly, I find nothing at all attractive in this proposal.\n> I think the main thing it'd accomplish is to drive users back to\n> the bad old days of ordering-by-subquery, if they have a requirement\n> we failed to account for.\n\nI think the ability to mark certain aggregates as being able to completely \nignore the ordering because they produce exactly the same results is still a \nuseful optimization.\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 17:45:46 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "Ronan Dunklau <ronan.dunklau@aiven.io> writes:\n> Le mardi 13 décembre 2022, 16:13:34 CET Tom Lane a écrit :\n>> Accordingly, I find nothing at all attractive in this proposal.\n>> I think the main thing it'd accomplish is to drive users back to\n>> the bad old days of ordering-by-subquery, if they have a requirement\n>> we failed to account for.\n\n> I think the ability to mark certain aggregates as being able to completely \n> ignore the ordering because they produce exactly the same results is still a \n> useful optimization.\n\nThat is *exactly* the position I do not accept.\n\nI think it's fairly unlikely that a user would trouble to write ORDER BY\nwithin an aggregate call if they didn't need it. So my opinion of this\nproposal is that it's a lot of work to create an optimization effect that\nwill be useless to nearly all users, and might actively break the queries\nof some.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Dec 2022 12:05:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 9:45 AM Ronan Dunklau <ronan.dunklau@aiven.io>\nwrote:\n\n> Le mardi 13 décembre 2022, 16:13:34 CET Tom Lane a écrit :\n> > Accordingly, I find nothing at all attractive in this proposal.\n> > I think the main thing it'd accomplish is to drive users back to\n> > the bad old days of ordering-by-subquery, if they have a requirement\n> > we failed to account for.\n>\n> I think the ability to mark certain aggregates as being able to completely\n> ignore the ordering because they produce exactly the same results is still\n> a\n> useful optimization.\n>\n>\nI seriously doubt that users are adding unnecessary ORDER BY clauses to\ntheir aggregates. The more compelling use case would be existing ORMs that\nproduce such problematic SQL - are there any though?\n\nI'm more keen on the idea of having the system understand when an ORDER BY\nis missing - that seems like what users are more likely to actually do.\nBut it doesn't seem all that useful given the lack of aggregates that would\nactually use it meaningfully.\n\nDavid J.",
"msg_date": "Tue, 13 Dec 2022 10:15:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I'm more keen on the idea of having the system understand when an ORDER BY\n> is missing - that seems like what users are more likely to actually do.\n\nThat side of it could perhaps be useful, but not if it's an unintelligent\nanalysis. If someone has a perfectly safe query written according to\nthe old-school method:\n\n\tSELECT string_agg(...) FROM (SELECT ... ORDER BY ...) ss;\n\nthey are not going to be too pleased with a nanny-ish warning (much\nless an error) saying that the aggregate's input ordering is\nunderspecified.\n\nI also wonder whether we'd accept any ORDER BY whatsoever, or try\nto require one that produces a sufficiently-unique input ordering.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Dec 2022 12:22:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Ordering behavior for aggregates"
},
{
"msg_contents": "On 12/13/22 18:22, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>> I'm more keen on the idea of having the system understand when an ORDER BY\n>> is missing - that seems like what users are more likely to actually do.\n> \n> That side of it could perhaps be useful, but not if it's an unintelligent\n> analysis. If someone has a perfectly safe query written according to\n> the old-school method:\n> \n> \tSELECT string_agg(...) FROM (SELECT ... ORDER BY ...) ss;\n> \n> they are not going to be too pleased with a nanny-ish warning (much\n> less an error) saying that the aggregate's input ordering is\n> underspecified.\n\nThat is a good point\n\n> I also wonder whether we'd accept any ORDER BY whatsoever, or try\n> to require one that produces a sufficiently-unique input ordering.\n\nI would accept anything. agg(x order by y) is a common thing.\n\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 19:02:16 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: Ordering behavior for aggregates"
}
]