[
{
        "msg_contents": "I've noticed that `meson test` logs the complete environment in \nmeson_logs/testlog.txt. That seems unnecessary and probably undesirable \nfor the buildfarm client. Is there any way to suppress that, or at least \nonly print some relevant subset? (The buildfarm client itself only \nreports an approved set of environment variables).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 20 Feb 2023 20:47:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "meson logs environment"
},
{
"msg_contents": "Hi,\n\nOn Tue, 21 Feb 2023 at 04:48, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> I've noticed that `meson test` logs the complete environment in meson_logs/testlog.txt. That seems unnecessary and probably undesirable for the buildfarm client. Is there any way to suppress that, or at least only print some relevant subset? (The buildfarm client itself only reports an approved set of environment variables).\n\nThere is an open issue on the meson:\nhttps://github.com/mesonbuild/meson/issues/5328 and I confirm that\n`env --ignore-environment PATH=\"$PATH\" meson test` prevents\nenvironment variables from being logged.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Tue, 21 Feb 2023 13:27:25 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson logs environment"
},
{
        "msg_contents": "On 2023-02-21 Tu 05:27, Nazir Bilal Yavuz wrote:\n> Hi,\n>\n> On Tue, 21 Feb 2023 at 04:48, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> I've noticed that `meson test` logs the complete environment in meson_logs/testlog.txt. That seems unnecessary and probably undesirable for the buildfarm client. Is there any way to suppress that, or at least only print some relevant subset? (The buildfarm client itself only reports an approved set of environment variables).\n> There is an open issue on the meson:\n> https://github.com/mesonbuild/meson/issues/5328 and I confirm that\n> `env --ignore-environment PATH=\"$PATH\" meson test` prevents\n> environment variables from being logged.\n>\n\nOuch, OK, I'll do something like that. The fact that this issue has been \nopen since 2019 is not encouraging.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 21 Feb 2023 07:47:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: meson logs environment"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-20 20:47:59 -0500, Andrew Dunstan wrote:\n> I've noticed that `meson test` logs the complete environment in\n> meson_logs/testlog.txt. That seems unnecessary and probably undesirable for\n> the buildfarm client.\n\nIt doesn't seem unnecessary to me, at all. It's what you need to rerun the\ntest in a precise way.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 09:59:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson logs environment"
},
{
        "msg_contents": "On 2023-02-26 Su 12:59, Andres Freund wrote:\n> Hi,\n>\n> On 2023-02-20 20:47:59 -0500, Andrew Dunstan wrote:\n>> I've noticed that `meson test` logs the complete environment in\n>> meson_logs/testlog.txt. That seems unnecessary and probably undesirable for\n>> the buildfarm client.\n> It doesn't seem unnecessary to me, at all. It's what you need to rerun the\n> test in a precise way.\n>\n\nWell, clearly I'm not the only person who is concerned about it - see \nthe upstream issue Nazir referred to. In any case, I have got a \nprocedure in my meson buildfarm client for filtering the inherited \nenvironment to accommodate this verbosity, so there's no need to do \nanything else here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 26 Feb 2023 15:50:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: meson logs environment"
}
]
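The workaround confirmed in the thread above can be sketched as a small wrapper script: clear the inherited environment with `env --ignore-environment` and pass through only an approved set of variables, so `meson test` has nothing else to log in meson_logs/testlog.txt. This is a minimal sketch; the variable list here is illustrative and is not the buildfarm client's actual approved set.

```shell
#!/bin/sh
# Hypothetical wrapper around `meson test`: start from an empty environment
# and re-export only explicitly approved variables. Everything else is
# invisible to meson and therefore absent from testlog.txt.
# The approved list below is an illustration, not the buildfarm's real set.
set -eu

approved_env="PATH=$PATH HOME=$HOME LANG=${LANG:-C}"

# Demonstrate the effect: under `env --ignore-environment` the child
# process sees only what we pass explicitly.
filtered=$(env --ignore-environment PATH="$PATH" /usr/bin/env)
echo "child environment: $filtered"

# Real invocation would be:
#   exec env --ignore-environment $approved_env meson test "$@"
```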
[
{
        "msg_contents": "I noticed that \\bind is leaking memory for each option.\n\n=# SELECT $1, $2, $3 \\bind 1 2 3 \\g\n\nThe leaked memory blocks are coming from\npsql_scan_slash_option(). The attached small patch resolves that\nissue. I looked through the function's call sites, but I didn't find\nthe same mistake.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 21 Feb 2023 11:55:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql memory leaks"
},
{
        "msg_contents": "On Mon, Feb 20, 2023 at 9:56 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> I noticed that \\bind is leaking memory for each option.\n>\n> =# SELECT $1, $2, $3 \\bind 1 2 3 \\g\n>\n> The leaked memory blocks are coming from\n> psql_scan_slash_option(). The attached small patch resolves that\n> issue. I looked through the function's call sites, but I didn't find\n> the same mistake.\n>\n> regards.\n>\n>\nGood catch. Patch passes make check-world.",
"msg_date": "Tue, 21 Feb 2023 14:03:43 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql memory leaks"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 02:03:43PM -0500, Corey Huinker wrote:\n> Good catch. Patch passes make check-world.\n\nIndeed. I was reviewing the whole and there could be a point in\nresetting bind_nparams at the end of SendQuery() to keep a correct\ntrack of what's saved in the pset data for the bind parameters, but\nnot doing so is not a big deal either because we'd just reset it once\nthe full allocation of the parameters is done and bind_flag is all\nabout that. The code paths leading to the free seem correct, seen\nfrom here.\n--\nMichael",
"msg_date": "Wed, 22 Feb 2023 14:32:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql memory leaks"
}
]
[
{
        "msg_contents": "Over on [1], Benjamin highlighted that we don't do ordered partition\nscans in some cases where we could.\n\nBasically, what was added in 959d00e9d only works when at least one\nchild path has pathkeys that suit the required query pathkeys. If the\npartitions or partitioned table does not have an index matching the\npartitioning columns or some subset thereof, then we'll not add an\nordered Append path.\n\nI've attached a patch. This is what it does:\n\ncreate table lp (a int) partition by list(a);\ncreate table lp1 partition of lp for values in(1);\ncreate table lp2 partition of lp for values in(2);\n\nexplain (costs off) select * from lp order by a;\n\nmaster:\n\n QUERY PLAN\n----------------------------------\n Sort\n Sort Key: lp.a\n -> Append\n -> Seq Scan on lp1 lp_1\n -> Seq Scan on lp2 lp_2\n(5 rows)\n\npatched:\n\n QUERY PLAN\n----------------------------------\n Append\n -> Sort\n Sort Key: lp_1.a\n -> Seq Scan on lp1 lp_1\n -> Sort\n Sort Key: lp_2.a\n -> Seq Scan on lp2 lp_2\n(7 rows)\n\nThere's still something in there that I'm not happy with which relates\nto the tests I added in inherit.sql. Anyone looking at the new tests\nmight expect that the following query should work too:\n\nexplain (costs off) select * from range_parted order by a,b,c;\n\nbut it *appears* not to. We do build an AppendPath for that, it's\njust that the AppendPath added by the following code seems to win over\nit:\n\n/*\n* If we found unparameterized paths for all children, build an unordered,\n* unparameterized Append path for the rel. (Note: this is correct even\n* if we have zero or one live subpath due to constraint exclusion.)\n*/\nif (subpaths_valid)\nadd_path(rel, (Path *) create_append_path(root, rel, subpaths, NIL,\n NIL, NULL, 0, false,\n -1));\n\nI still need to look to see if there's some small amount of data that\ncan be loaded into the table to help coax the planner into producing\nthe ordered scan for this one. It works fine as-is for ORDER BY a,b\nand ORDER BY a; so I've put tests in for that.\n\nDavid\n\n[1] https://postgr.es/m/CABTcpyuXXY1625-Mns=mPFCVSf4aouGiRVyLPiGQQ0doT0PiLQ@mail.gmail.com",
"msg_date": "Tue, 21 Feb 2023 16:14:02 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Allow ordered partition scans in more cases"
},
{
        "msg_contents": "Thank you for improving this optimization!\n\nOn Tuesday, 21 February 2023 at 04:14:02 CET, David Rowley wrote:\n> I still need to look to see if there's some small amount of data that\n> can be loaded into the table to help coax the planner into producing\n> the ordered scan for this one. It works fine as-is for ORDER BY a,b\n> and ORDER BY a; so I've put tests in for that.\n\nI haven't looked too deeply into it, but it seems reasonable that the whole \nsort would cost cheaper than individual sorts on partitions + incremental \nsorts, except when the whole sort would spill to disk much more than the \nincremental ones. I find it quite difficult to reason about what that threshold \nshould be, but I managed to find a case which could fit in a test:\n\ncreate table range_parted (a int, b int, c int) partition by range(a, b);\ncreate table range_parted1 partition of range_parted for values from (0,0) to \n(10,10);\ncreate table range_parted2 partition of range_parted for values from (10,10) \nto (20,20);\ninsert into range_parted(a, b, c) select i, j, k from generate_series(1, 19) \ni, generate_series(1, 19) j, generate_series(1, 5) k;\nanalyze range_parted;\nset random_page_cost = 10;\nset work_mem = '64kB';\nexplain (costs off) select * from range_parted order by a,b,c;\n\nIt's quite convoluted, because it needs the following:\n - estimate the individual partition sorts to fit into work_mem (even if that's \nnot the case here at runtime)\n - estimate the whole table sort to not fit into work_mem\n - the difference between the two should be big enough to compensate the \nincremental sort penalty (hence raising random_page_cost).\n\nThis is completely tangential to the subject at hand, but maybe we have \nimprovements to do with the way we estimate what type of sort will be \nperformed? It seems to underestimate the memory amount needed. I'm not sure \nit makes a real difference in real use cases though.\n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 14:10:48 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Allow ordered partition scans in more cases"
},
{
        "msg_contents": "On Thu, 23 Feb 2023 at 02:10, Ronan Dunklau <ronan.dunklau@aiven.io> wrote:\n> I haven't looked too deeply into it, but it seems reasonable that the whole\n> sort would cost cheaper than individual sorts on partitions + incremental\n> sorts, except when the whole sort would spill to disk much more than the\n> incremental ones. I find it quite difficult to reason about what that threshold\n> should be, but I managed to find a case which could fit in a test:\n\nThanks for coming up with that test case. It's a little disappointing\nto see that so many rows had to be added to get the plan to change. I\nwonder if it's really worth testing this particular case. ~1800 rows\nis a little more significant than I'd have hoped. The buildfarm has a\nfew dinosaurs that would likely see a noticeable slowdown from that.\n\nWhat's on my mind now is if turning 1 Sort into N Sorts is a\nparticularly good idea from a work_mem standpoint. I see that we don't\ndo tuplesort_end() until executor shutdown, so that would mean that we\ncould end up using 1 x work_mem per Sort node. I idly wondered if we\ncouldn't do tuplesort_end() after spitting out the final tuple when\nEXEC_FLAG_REWIND is not set, but that would still mean we could use N\nwork_mems when EXEC_FLAG_REWIND *is* set. We only really have\nvisibility of that during execution too, so can't really make a\ndecision at plan time based on that.\n\nI'm not quite sure if I'm being overly concerned here or not. All it\nwould take to get a sort per partition today would be to put a\nsuitable index on just 1 of the partitions. So this isn't exactly a\nnew problem, it's just making an old problem perhaps a little more\nlikely. The problem does also exist for things like partition-wise\njoins too for Hash and Merge joins. Partition-wise joins are disabled\nby default, however.\n\nDavid\n\n\n",
"msg_date": "Sat, 4 Mar 2023 00:56:49 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow ordered partition scans in more cases"
},
{
"msg_contents": "On Sat, 4 Mar 2023 at 00:56, David Rowley <dgrowleyml@gmail.com> wrote:\n> What's on my mind now is if turning 1 Sort into N Sorts is a\n> particularly good idea from a work_mem standpoint. I see that we don't\n> do tuplesort_end() until executor shutdown, so that would mean that we\n> could end up using 1 x work_mem per Sort node. I idly wondered if we\n> couldn't do tuplesort_end() after spitting out the final tuple when\n> EXEC_FLAG_REWIND is not set, but that would still mean we could use N\n> work_mems when EXEC_FLAG_REWIND *is* set. We only really have\n> visibility of that during execution too, so can't really make a\n> decision at plan time based on that.\n\nBecause of the above, I'm not planning on pursuing this patch any\nfurther. We can maybe revisit this if we come up with better ways to\nmanage the number of work_mems a plan can have in the future. As of\nnow, I'm a little too worried that this patch will end up consuming\ntoo many work_mems by adding a Sort node per partition.\n\nI'll mark this as withdrawn.\n\nDavid\n\n\n",
"msg_date": "Sun, 25 Jun 2023 00:29:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow ordered partition scans in more cases"
}
]
[
{
"msg_contents": "llvm_release_context() calls llvm_enter_fatal_on_oom(), but it never \ncalls llvm_leave_fatal_on_oom(). Isn't that a clear leak?\n\n(spotted this while investigating \nhttps://www.postgresql.org/message-id/a53cacb0-8835-57d6-31e4-4c5ef196de1a@deepbluecap.com, \nbut it seems unrelated)\n\n- Heikki",
"msg_date": "Tue, 21 Feb 2023 16:50:53 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Missing llvm_leave_fatal_on_oom() call"
},
{
        "msg_contents": "> On 21 Feb 2023, at 15:50, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n> llvm_release_context() calls llvm_enter_fatal_on_oom(), but it never calls llvm_leave_fatal_on_oom(). Isn't that a clear leak?\n\nNot sure how much of a leak it is since IIUC LLVM just stores a function\npointer to our error handler, but I can't see a reason not to clean it up here.\nThe attached fix LGTM and passes make check with jit_above_cost set to zero.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 18:33:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Missing llvm_leave_fatal_on_oom() call"
},
{
        "msg_contents": "On 04/07/2023 19:33, Daniel Gustafsson wrote:\n>> On 21 Feb 2023, at 15:50, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> llvm_release_context() calls llvm_enter_fatal_on_oom(), but it never calls llvm_leave_fatal_on_oom(). Isn't that a clear leak?\n> \n> Not sure how much of a leak it is since IIUC LLVM just stores a function\n> pointer to our error handler, but I can't see a reason not to clean it up here.\n> The attached fix LGTM and passes make check with jit_above_cost set to zero.\n\nPushed to all live branches, thanks for the review!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 13:34:38 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Missing llvm_leave_fatal_on_oom() call"
}
]
[
{
        "msg_contents": "I thought I should be able to do this:\n\n=> create view testv as values (1, 'a'), (2, 'b'), (3, 'c');\nCREATE VIEW\n=> create table testt of testv;\nERROR: type testv is not a composite type\n\nBut as you can see I can’t. pg_type seems to think the type is composite:\n\nijmorlan=> select typtype from pg_type where typname = 'testv';\n typtype\n─────────\n c\n(1 row)\n\nI’m guessing there are good reasons this isn’t supported, so I’m thinking\nto provide a documentation patch for CREATE TABLE and ALTER TABLE to\nspecify that the type given for OF type_name must be a\nnon-relation-row-type composite type.\n\nAm I missing something?",
"msg_date": "Tue, 21 Feb 2023 11:12:42 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": true,
"msg_subject": "Unable to create table of view row type"
}
]
[
{
"msg_contents": "Is anyone else itching to be CF manager for March? If anyone new wants\nto try it out that would be good.\n\nAssuming otherwise I'll volunteer.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 21 Feb 2023 16:17:17 +0000",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest Manager"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Is anyone else itching to be CF manager for March? If anyone new wants\n> to try it out that would be good.\n\n> Assuming otherwise I'll volunteer.\n\nWe've generally thought that the manager for the last CF of a cycle\nneeds to be someone with experience.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Feb 2023 12:29:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager"
},
{
"msg_contents": "On Tue, 21 Feb 2023 at 12:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> > Is anyone else itching to be CF manager for March? If anyone new wants\n> > to try it out that would be good.\n>\n> > Assuming otherwise I'll volunteer.\n>\n> We've generally thought that the manager for the last CF of a cycle\n> needs to be someone with experience.\n\nHm. Was that in response to the first paragraph or the second? :)\n\nI have experience in years but only limited experience as a commitfest\nmanager. I'm still up for it if you think I'm ready :)\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 21 Feb 2023 18:50:31 +0000",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest Manager"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> On Tue, 21 Feb 2023 at 12:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Greg Stark <stark@mit.edu> writes:\n>>> Is anyone else itching to be CF manager for March? If anyone new wants\n>>> to try it out that would be good.\n>>> \n>>> Assuming otherwise I'll volunteer.\n\n>> We've generally thought that the manager for the last CF of a cycle\n>> needs to be someone with experience.\n\n> Hm. Was that in response to the first paragraph or the second? :)\n\nThe first ;-). Sorry if I wasn't clear.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Feb 2023 14:13:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager"
},
{
        "msg_contents": "On Tue, Feb 21, 2023 at 02:13:08PM -0500, Tom Lane wrote:\n> Greg Stark <stark@mit.edu> writes:\n>> On Tue, 21 Feb 2023 at 12:29, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Greg Stark <stark@mit.edu> writes:\n>>> We've generally thought that the manager for the last CF of a cycle\n>>> needs to be someone with experience.\n> \n>> Hm. Was that in response to the first paragraph or the second? :)\n> \n> The first ;-). Sorry if I wasn't clear.\n\nThe last CF is the most sensitive one as the feature freeze would\nhappen just around the beginning of April, so the CFM has a bit more\npressure in taking the correct decisions, if need be.\n\nFYI, I'd prefer to spend cycles at patches that could make the cut,\nrather than doing the classification of the whole.\n--\nMichael",
"msg_date": "Wed, 22 Feb 2023 16:00:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest Manager"
}
]
[
{
"msg_contents": "I have found that the per-column atttypmod tracking in pg_dump isn't \nactually used anywhere. (The values are read but not used for writing \nout any commands.) This is because some time ago we started formatting \nall types through format_type() on the server. So this dead code can be \nremoved.",
"msg_date": "Tue, 21 Feb 2023 22:30:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_dump: Remove some dead code"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I have found that the per-column atttypmod tracking in pg_dump isn't \n> actually used anywhere. (The values are read but not used for writing \n> out any commands.) This is because some time ago we started formatting \n> all types through format_type() on the server. So this dead code can be \n> removed.\n\nGood catch. LGTM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Feb 2023 16:42:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump: Remove some dead code"
}
]
[
{
"msg_contents": "Commit e4602483e95 accidentally introduced a situation where pgindent\ndisagrees with the git whitespace check. The code is\n\n conn = libpqsrv_connect_params(keywords, values,\n /* expand_dbname = */ false,\n PG_WAIT_EXTENSION);\n\nwhere the current source file has 4 spaces before the /*, and the\nwhitespace check says that that should be a tab.\n\nI think it should actually be 3 spaces, so that the \"/*...\" lines up\nwith the \"keywords...\" and \"PG_WAIT...\" above and below.\n\nI suppose this isn't going to be a quick fix in pgindent, but if someone\nis keeping track, maybe this could be added to the to-consider list.\n\nIn the meantime, I suggest we work around this, perhaps by\n\n conn = libpqsrv_connect_params(keywords, values, /* expand_dbname = */ false,\n PG_WAIT_EXTENSION);\n\nwhich appears to be robust for both camps.\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:17:05 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pgindent vs. git whitespace check"
},
{
"msg_contents": "On 2023-Feb-22, Peter Eisentraut wrote:\n\n> In the meantime, I suggest we work around this, perhaps by\n> \n> conn = libpqsrv_connect_params(keywords, values, /* expand_dbname = */ false,\n> PG_WAIT_EXTENSION);\n\nI suggest\n\n conn = libpqsrv_connect_params(keywords, values,\n\t\t \t\t\t\t\t\t\t\tfalse, /* expand_dbname */\n PG_WAIT_EXTENSION);\n\nwhich is what we typically do elsewhere and doesn't go overlength.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Wed, 22 Feb 2023 15:49:48 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Commit e4602483e95 accidentally introduced a situation where pgindent\n> disagrees with the git whitespace check. The code is\n\n> conn = libpqsrv_connect_params(keywords, values,\n> /* expand_dbname = */ false,\n> PG_WAIT_EXTENSION);\n\n> where the current source file has 4 spaces before the /*, and the\n> whitespace check says that that should be a tab.\n\nHmm, I don't think that's per project style in the first place.\nMost places that annotate function arguments do it like\n\n conn = libpqsrv_connect_params(keywords, values,\n false, /* expand_dbname */\n PG_WAIT_EXTENSION);\n\npgindent has never been very kind to non-end-of-line comments, and\nI'm not excited about working on making it do so. As a thought\nexperiment, what would happen if we reversed course and started\nallowing \"//\" comments? Naive conversion of this comment could\nbreak the code altogether. (Plenty of programming languages\ndon't even *have* non-end-of-line comments.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 09:52:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
        "msg_contents": "On 2023-02-22 We 09:52, Tom Lane wrote:\n> pgindent has never been very kind to non-end-of-line comments, and\n> I'm not excited about working on making it do so. As a thought\n> experiment, what would happen if we reversed course and started\n> allowing \"//\" comments? Naive conversion of this comment could\n> break the code altogether. (Plenty of programming languages\n> don't even *have* non-end-of-line comments.)\n>\n\n\nI suspect not allowing // is at least a minor annoyance to any new \ndeveloper we acquire under the age of about 40.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 22 Feb 2023 17:03:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
        "msg_contents": "On Thu, Feb 23, 2023 at 5:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> I suspect not allowing // is at least a minor annoyance to any new\ndeveloper we acquire under the age of about 40.\n\npgindent changes those to our style, so it's not much of an annoyance if\none prefers to type it that way during development.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Feb 2023 11:12:56 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Feb 23, 2023 at 5:03 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I suspect not allowing // is at least a minor annoyance to any new\n>> developer we acquire under the age of about 40.\n\n> pgindent changes those to our style, so it's not much of an annoyance if\n> one prefers to type it that way during development.\n\nRight, it's not like we reject patches for that (or at least, we shouldn't\nreject patches for any formatting issues that pgindent can fix).\n\nFor my own taste, I really don't have any objection to // in isolation --\nthe problem with it is just that we've got megabytes of code in the other\nstyle. I fear it'd look really ugly to have an intermixture of // and /*\ncomment styles. Mass conversion of /* to // style would answer that,\nbut would also create an impossible back-patching problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 23:48:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "> On 23 Feb 2023, at 05:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> For my own taste, I really don't have any objection to // in isolation --\n> the problem with it is just that we've got megabytes of code in the other\n> style. I fear it'd look really ugly to have an intermixture of // and /*\n> comment styles. \n\nWe could use the \"use the style of surrounding code (comments)\" approach - when\nchanging an existing commented function use the style already present; when\nadding a net new function a choice can be made (unless we mandate a style). It\nwill still look ugly, but it will be less bad than mixing within the same\nblock.\n\n> Mass conversion of /* to // style would answer that,\n> but would also create an impossible back-patching problem.\n\nYeah, that sounds incredibly invasive.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 09:36:00 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "On 2023-02-22 We 23:48, Tom Lane wrote:\n> For my own taste, I really don't have any objection to // in isolation --\n> the problem with it is just that we've got megabytes of code in the other\n> style. I fear it'd look really ugly to have an intermixture of // and /*\n> comment styles.\n\n\nMaybe, I've seen some mixing elsewhere and it didn't make me shudder. I \nagree that you probably wouldn't want to mix both styles for end of line \ncomments in a single function, although a rule like that would be hard \nto enforce mechanically.\n\n\n> Mass conversion of /* to // style would answer that,\n> but would also create an impossible back-patching problem.\n>\n> \t\t\t\n\n\nYeah, I agree that's a complete non-starter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-22 We 23:48, Tom Lane wrote:\n\n\nFor my own taste, I really don't have any objection to // in isolation --\nthe problem with it is just that we've got megabytes of code in the other\nstyle. I fear it'd look really ugly to have an intermixture of // and /*\ncomment styles. \n\n\n\nMaybe, I've seen some mixing elsewhere and it didn't make me\n shudder. I agree that you probably wouldn't want to mix both\n styles for end of line comments in a single function, although a\n rule like that would be hard to enforce mechanically.\n\n\n\n\nMass conversion of /* to // style would answer that,\nbut would also create an impossible back-patching problem.\n\n\t\t\t\n\n\n\nYeah, I agree that's a complete non-starter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 23 Feb 2023 06:37:17 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "On 22.02.23 15:49, Alvaro Herrera wrote:\n> On 2023-Feb-22, Peter Eisentraut wrote:\n> \n>> In the meantime, I suggest we work around this, perhaps by\n>>\n>> conn = libpqsrv_connect_params(keywords, values, /* expand_dbname = */ false,\n>> PG_WAIT_EXTENSION);\n> \n> I suggest\n> \n> conn = libpqsrv_connect_params(keywords, values,\n> \t\t \t\t\t\t\t\t\t\tfalse, /* expand_dbname */\n> PG_WAIT_EXTENSION);\n> \n> which is what we typically do elsewhere and doesn't go overlength.\n\nFixed this way.\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 16:03:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 09:36:00AM +0100, Daniel Gustafsson wrote:\n> > On 23 Feb 2023, at 05:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > For my own taste, I really don't have any objection to // in isolation --\n> > the problem with it is just that we've got megabytes of code in the other\n> > style. I fear it'd look really ugly to have an intermixture of // and /*\n> > comment styles. \n> \n> We could use the \"use the style of surrounding code (comments)\" approach - when\n> changing an existing commented function use the style already present; when\n> adding a net new function a choice can be made (unless we mandate a style). It\n> will still look ugly, but it will be less bad than mixing within the same\n> block.\n> \n> > Mass conversion of /* to // style would answer that,\n> > but would also create an impossible back-patching problem.\n> \n> Yeah, that sounds incredibly invasive.\n\nI am replying late here but ...\n\nWe would have to convert all supported branches, and tell all forks to\ndo the same (hopefully at the same time). The new standard would then\nbe for all single-line comments to use // instead of /* ... */.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 13:18:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "> On 29 Mar 2023, at 19:18, Bruce Momjian <bruce@momjian.us> wrote:\n> On Thu, Feb 23, 2023 at 09:36:00AM +0100, Daniel Gustafsson wrote:\n>>> On 23 Feb 2023, at 05:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>>> Mass conversion of /* to // style would answer that,\n>>> but would also create an impossible back-patching problem.\n>> \n>> Yeah, that sounds incredibly invasive.\n> \n> I am replying late here but ...\n> \n> We would have to convert all supported branches, and tell all forks to\n> do the same (hopefully at the same time). The new standard would then\n> be for all single-line comments to use // instead of /* ... */.\n\nThat still leaves every patch which is in flight on -hackers, and conflicts in\nlocal development trees etc. It's doable (apart from forks, but that cannot be\nour core concern), but I personally can't see the price paid justify the result.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 20:26:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 08:26:23PM +0200, Daniel Gustafsson wrote:\n> > On 29 Mar 2023, at 19:18, Bruce Momjian <bruce@momjian.us> wrote:\n> > We would have to convert all supported branches, and tell all forks to\n> > do the same (hopefully at the same time). The new standard would then\n> > be for all single-line comments to use // instead of /* ... */.\n> \n> That still leaves every patch which is in flight on -hackers, and conflicts in\n> local development trees etc. It's doable (apart from forks, but that cannot be\n> our core concern), but I personally can't see the price paid justify the result.\n\nYes, this would have to be done at the start of a new release cycle.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Embrace your flaws. They make you human, rather than perfect,\n which you will never be.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 10:48:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: pgindent vs. git whitespace check"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile finalizing some fixes in BRIN, I decided to stress-test the\nrelevant part of the code to check if I missed something. Imagine a\nsimple script that builds BRIN indexes on random data, does random\nchanges and cross-checks the results with/without the index.\n\nBut instead of I almost immediately ran into a LWLock deadlock :-(\n\nI've managed to reproduce this on PG13+, but I believe it's there since\nthe brinRevmapDesummarizeRange was introduced in PG10. I just haven't\ntried on pre-13 releases.\n\nThe stress-test-2.sh script (attached .tgz) builds a table, fills it\nwith random data and then runs a mix of updates and (de)summarization\nDDL of random fraction of the index. The lockup is usually triggered\nwithin a couple minutes, but might take longer (I guess it depends on\nparameters used to generate the random data, so it may take a couple\nruns to hit the right combination).\n\n\nThe root cause is that brin_doupdate and brinRevmapDesummarizeRange end\nup locking buffers in different orders. 
Attached is also a .patch that\nadds a bunch of LOG messages for buffer locking in BRIN code (it's for\nPG13, but should work on newer releases too).\n\nHere's a fairly typical example of the interaction between brin_doupdate\nand brinRevmapDesummarizeRange:\n\nbrin_doupdate (from UPDATE query):\n\n LOG: brin_doupdate: samepage 0\n LOG: number of LWLocks held: 0\n LOG: brin_getinsertbuffer: locking 898 lock 0x7f9a99a5af64\n LOG: brin_getinsertbuffer: buffer locked\n LOG: brin_getinsertbuffer B: locking 899 lock 0x7f9a99a5afa4\n LOG: brin_getinsertbuffer B: buffer locked\n LOG: number of LWLocks held: 2\n LOG: lock 0 => 0x7f9a99a5af64\n LOG: lock 1 => 0x7f9a99a5afa4\n LOG: brin_doupdate: locking buffer for update\n LOG: brinLockRevmapPageForUpdate: locking 158 lock 0x7f9a99a4f664\n\nbrinRevmapDesummarizeRange (from brin_desummarize_range):\n\n LOG: starting brinRevmapDesummarizeRange\n LOG: number of LWLocks held: 0\n LOG: brinLockRevmapPageForUpdate: locking 158 lock 0x7f9a99a4f664\n LOG: brinLockRevmapPageForUpdate: buffer locked\n LOG: number of LWLocks held: 1\n LOG: lock 0 => 0x7f9a99a4f664\n LOG: brinRevmapDesummarizeRange: locking 898 lock 0x7f9a99a5af64\n\nSo, brin_doupdate starts with no LWLocks, and locks buffers 898, 899\n(through getinsertbuffer). And then tries to lock 158.\n\nMeanwhile brinRevmapDesummarizeRange locks 158 first, and then tries to\nlock 898.\n\nSo, a LWLock deadlock :-(\n\nI've now seen a bunch of these traces, with only minor differences. For\nexample brinRevmapDesummarizeRange might gets stuck on the second buffer\nlocked by getinsertbuffer (not the first one like here).\n\n\nI don't have a great idea how to fix this - I guess we need to ensure\nthe buffers are locked in the same order, but that seems tricky.\n\nObviously, people don't call brin_desummarize_range() very often, which\nlikely explains the lack of reports.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 22 Feb 2023 11:48:15 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "LWLock deadlock in brinRevmapDesummarizeRange"
},
{
"msg_contents": "On 2023-Feb-22, Tomas Vondra wrote:\n\n> But instead of I almost immediately ran into a LWLock deadlock :-(\n\nOuch.\n\n> I've managed to reproduce this on PG13+, but I believe it's there since\n> the brinRevmapDesummarizeRange was introduced in PG10. I just haven't\n> tried on pre-13 releases.\n\nHmm, I think that might just be an \"easy\" way to hit it, but the problem\nis actually older than that, since AFAICS brin_doupdate is careless\nregarding locking order of revmap page vs. regular page.\n\nSadly, the README doesn't cover locking considerations. I had that in a\nfile called 'minmax-proposal' in version 16 of the patch here\nhttps://postgr.es/m/20140820225133.GB6343@eldon.alvh.no-ip.org\nbut by version 18 (when 'minmax' became BRIN) I seem to have removed\nthat file and replaced it with the README and apparently I didn't copy\nthis material over.\n\n... and in there, I wrote that we would first write the brin tuple in\nthe regular page, unlock that, and then lock the revmap for the update,\nwithout holding lock on the data page. I don't remember why we do it\ndifferently now, but maybe the fix is just to release the regular page\nlock before locking the revmap page? One very important change is that\nin previous versions the revmap used a separate fork, and we had to\nintroduce an \"evacuation protocol\" when we integrated the revmap into\nthe main fork, which may have changed the locking considerations.\n\nAnother point: to desummarize a range, just unlinking the entry from\nrevmap should suffice, from the POV of other index scanners. 
Maybe we\ncan simplify the whole procedure to: lock revmap, remove entry, remember\npage number, unlock revmap; lock regular page, delete entry, unlock.\nThen there are no two locks held at the same time during desummarize.\n\nThis comes from v16:\n\n+ Locking considerations\n+ ----------------------\n+ \n+ To read the TID during an index scan, we follow this protocol:\n+ \n+ * read revmap page\n+ * obtain share lock on the revmap buffer\n+ * read the TID\n+ * obtain share lock on buffer of main fork\n+ * LockTuple the TID (using the index as relation). A shared lock is\n+ sufficient. We need the LockTuple to prevent VACUUM from recycling\n+ the index tuple; see below.\n+ * release revmap buffer lock\n+ * read the index tuple\n+ * release the tuple lock\n+ * release main fork buffer lock\n+ \n+ \n+ To update the summary tuple for a page range, we use this protocol:\n+ \n+ * insert a new index tuple somewhere in the main fork; note its TID\n+ * read revmap page\n+ * obtain exclusive lock on revmap buffer\n+ * write the TID\n+ * release lock\n+ \n+ This ensures no concurrent reader can obtain a partially-written TID.\n+ Note we don't need a tuple lock here. Concurrent scans don't have to\n+ worry about whether they got the old or new index tuple: if they get the\n+ old one, the tighter values are okay from a correctness standpoint because\n+ due to MVCC they can't possibly see the just-inserted heap tuples anyway.\n+\n+ [vacuum stuff elided]\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Escucha y olvidarás; ve y recordarás; haz y entenderás\" (Confucio)\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:35:32 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock deadlock in brinRevmapDesummarizeRange"
},
{
"msg_contents": "\n\nOn 2/22/23 12:35, Alvaro Herrera wrote:\n> On 2023-Feb-22, Tomas Vondra wrote:\n> \n>> But instead of I almost immediately ran into a LWLock deadlock :-(\n> \n> Ouch.\n> \n>> I've managed to reproduce this on PG13+, but I believe it's there since\n>> the brinRevmapDesummarizeRange was introduced in PG10. I just haven't\n>> tried on pre-13 releases.\n> \n> Hmm, I think that might just be an \"easy\" way to hit it, but the problem\n> is actually older than that, since AFAICS brin_doupdate is careless\n> regarding locking order of revmap page vs. regular page.\n> \n\nThat's certainly possible, although I ran a lot of BRIN stress tests and\nit only started failing after I added the desummarization. Although, the\ntests are \"randomized\" like this:\n\n UPDATE t SET a = '...' WHERE random() < 0.05;\n\nwhich is fairly sequential. Maybe reordering the CTIDs a bit would hit\nadditional deadlocks, I'll probably give it a try. OTOH that'd probably\nbe much more likely to be hit by users, and I don't recall any such reports.\n\n> Sadly, the README doesn't cover locking considerations. I had that in a\n> file called 'minmax-proposal' in version 16 of the patch here\n> https://postgr.es/m/20140820225133.GB6343@eldon.alvh.no-ip.org\n> but by version 18 (when 'minmax' became BRIN) I seem to have removed\n> that file and replaced it with the README and apparently I didn't copy\n> this material over.\n> \n\nYeah :-( There's a couple more things that are missing in the README,\nlike what oi_regular_nulls mean.\n\n> ... and in there, I wrote that we would first write the brin tuple in\n> the regular page, unlock that, and then lock the revmap for the update,\n> without holding lock on the data page. I don't remember why we do it\n> differently now, but maybe the fix is just to release the regular page\n> lock before locking the revmap page? 
One very important change is that\n> in previous versions the revmap used a separate fork, and we had to\n> introduce an \"evacuation protocol\" when we integrated the revmap into\n> the main fork, which may have changed the locking considerations.\n> \n\nWhat would happen if two processes built the summary concurrently? How\nwould they find the other tuple, so that we don't end up with two BRIN\ntuples for the same range?\n\n> Another point: to desummarize a range, just unlinking the entry from\n> revmap should suffice, from the POV of other index scanners. Maybe we\n> can simplify the whole procedure to: lock revmap, remove entry, remember\n> page number, unlock revmap; lock regular page, delete entry, unlock.\n> Then there are no two locks held at the same time during desummarize.\n> \n\nPerhaps, as long as it doesn't confuse anything else.\n\n> This comes from v16:\n> \n\nI don't follow - what do you mean by v16? I don't see anything like that\nanywhere in the repository.\n\n> + Locking considerations\n> + ----------------------\n> + \n> + To read the TID during an index scan, we follow this protocol:\n> + \n> + * read revmap page\n> + * obtain share lock on the revmap buffer\n> + * read the TID\n> + * obtain share lock on buffer of main fork\n> + * LockTuple the TID (using the index as relation). A shared lock is\n> + sufficient. We need the LockTuple to prevent VACUUM from recycling\n> + the index tuple; see below.\n> + * release revmap buffer lock\n> + * read the index tuple\n> + * release the tuple lock\n> + * release main fork buffer lock\n> + \n> + \n> + To update the summary tuple for a page range, we use this protocol:\n> + \n> + * insert a new index tuple somewhere in the main fork; note its TID\n> + * read revmap page\n> + * obtain exclusive lock on revmap buffer\n> + * write the TID\n> + * release lock\n> + \n> + This ensures no concurrent reader can obtain a partially-written TID.\n> + Note we don't need a tuple lock here. 
Concurrent scans don't have to\n> + worry about whether they got the old or new index tuple: if they get the\n> + old one, the tighter values are okay from a correctness standpoint because\n> + due to MVCC they can't possibly see the just-inserted heap tuples anyway.\n> +\n> + [vacuum stuff elided]\n> \n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Feb 2023 13:04:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: LWLock deadlock in brinRevmapDesummarizeRange"
},
{
"msg_contents": "On 2023-Feb-22, Tomas Vondra wrote:\n\n> > ... and in there, I wrote that we would first write the brin tuple in\n> > the regular page, unlock that, and then lock the revmap for the update,\n> > without holding lock on the data page. I don't remember why we do it\n> > differently now, but maybe the fix is just to release the regular page\n> > lock before locking the revmap page? One very important change is that\n> > in previous versions the revmap used a separate fork, and we had to\n> > introduce an \"evacuation protocol\" when we integrated the revmap into\n> > the main fork, which may have changed the locking considerations.\n> \n> What would happen if two processes built the summary concurrently? How\n> would they find the other tuple, so that we don't end up with two BRIN\n> tuples for the same range?\n\nWell, the revmap can only keep track of one tuple per range; if two\nprocesses build summary tuples, and each tries to insert its tuple in a\nregular page, that part may succeed; but then only one of them is going\nto successfully register the summary tuple in the revmap: when the other\ngoes to do the same, it would find that a CTID is already present.\n\n... Looking at the code (brinSetHeapBlockItemptr), I think what happens\nhere is that the second process would overwrite the TID with its own.\nNot sure if it would work to see whether the item is empty and bail out\nif it's not.\n\nBut in any case, it seems to me that the update of the regular page is\npretty much independent of the update of the revmap.\n\n> > Another point: to desummarize a range, just unlinking the entry from\n> > revmap should suffice, from the POV of other index scanners. 
Maybe we\n> > can simplify the whole procedure to: lock revmap, remove entry, remember\n> > page number, unlock revmap; lock regular page, delete entry, unlock.\n> > Then there are no two locks held at the same time during desummarize.\n> \n> Perhaps, as long as it doesn't confuse anything else.\n\nWell, I don't have the details fresh in mind, but I think it shouldn't,\nbecause the only way to reach a regular tuple is coming from the revmap;\nand we reuse \"items\" (lines) in a regular page only when they are\nempty (so vacuuming should also be OK).\n\n> > This comes from v16:\n> \n> I don't follow - what do you mean by v16? I don't see anything like that\n> anywhere in the repository.\n\nI meant the minmax-proposal file in patch v16, the one that I linked to.\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Wed, 22 Feb 2023 15:41:12 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: LWLock deadlock in brinRevmapDesummarizeRange"
}
] |
[
{
"msg_contents": "Given its nature and purpose as a module we don't want to run against an \ninstalled instance, shouldn't src/test/modules/unsafe_tests have \nNO_INSTALLCHECK=1 in its Makefile and runningcheck:false in its meson.build?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nGiven its nature and purpose as a module\n we don't want to run against an installed instance, shouldn't\n src/test/modules/unsafe_tests have NO_INSTALLCHECK=1 in its\n Makefile and runningcheck:false in its meson.build?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 22 Feb 2023 06:47:34 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "unsafe_tests module"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 06:47:34 -0500, Andrew Dunstan wrote:\n> Given its nature and purpose as a module we don't want to run against an\n> installed instance, shouldn't src/test/modules/unsafe_tests have\n> NO_INSTALLCHECK=1 in its Makefile and runningcheck:false in its meson.build?\n\nSeems like a good idea to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Mar 2023 16:12:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: unsafe_tests module"
},
{
"msg_contents": "On 2023-03-11 Sa 19:12, Andres Freund wrote:\n> Hi,\n>\n> On 2023-02-22 06:47:34 -0500, Andrew Dunstan wrote:\n>> Given its nature and purpose as a module we don't want to run against an\n>> installed instance, shouldn't src/test/modules/unsafe_tests have\n>> NO_INSTALLCHECK=1 in its Makefile and runningcheck:false in its meson.build?\n> Seems like a good idea to me.\n>\n\nThanks, done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-11 Sa 19:12, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-02-22 06:47:34 -0500, Andrew Dunstan wrote:\n\n\nGiven its nature and purpose as a module we don't want to run against an\ninstalled instance, shouldn't src/test/modules/unsafe_tests have\nNO_INSTALLCHECK=1 in its Makefile and runningcheck:false in its meson.build?\n\n\n\nSeems like a good idea to me.\n\n\n\n\n\nThanks, done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 12 Mar 2023 09:06:46 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: unsafe_tests module"
}
] |
[
{
"msg_contents": "The configure option --disable-rpath currently has no equivalent in \nmeson. This option is used by packagers, so I think it would be good to \nhave it in meson as well. I came up with the attached patch.",
"msg_date": "Wed, 22 Feb 2023 13:56:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "meson: Add equivalent of configure --disable-rpath option"
},
{
"msg_contents": "On 22.02.23 13:56, Peter Eisentraut wrote:\n> The configure option --disable-rpath currently has no equivalent in \n> meson. This option is used by packagers, so I think it would be good to \n> have it in meson as well. I came up with the attached patch.\n\ncommitted\n\n\n",
"msg_date": "Wed, 1 Mar 2023 08:17:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: meson: Add equivalent of configure --disable-rpath option"
}
] |
[
{
"msg_contents": "Hi hackers,\nI met a coredump when backend has no enough memory at dlopen which want to allocate memory for libLLVM-10.so.1.\n\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f10bde19859 in __GI_abort () at abort.c:79\n#2 0x00007f109c24cc33 in llvm::report_bad_alloc_error(char const*, bool) () from /lib/x86_64-linux-gnu/libLLVM-10.so.1\n#3 0x00007f109c23dc32 in ?? () from /lib/x86_64-linux-gnu/libLLVM-10.so.1\n#4 0x00007f109c23dd6c in ?? () from /lib/x86_64-linux-gnu/libLLVM-10.so.1\n#5 0x00007f109c2312db in llvm::cl::Option::addArgument() () from /lib/x86_64-linux-gnu/libLLVM-10.so.1\n#6 0x00007f109c18c08e in ?? () from /lib/x86_64-linux-gnu/libLLVM-10.so.1\n#7 0x00007f10c179fb9a in call_init (l=<optimized out>, argc=argc@entry=12, argv=argv@entry=0x7ffd8f53cc38,\n env=env@entry=0x557459ba1ce0) at dl-init.c:72\n#8 0x00007f10c179fca1 in call_init (env=0x557459ba1ce0, argv=0x7ffd8f53cc38, argc=12, l=<optimized out>) at dl-init.c:30\n#9 _dl_init (main_map=0x557459e6e9d0, argc=12, argv=0x7ffd8f53cc38, env=0x557459ba1ce0) at dl-init.c:119\n#10 0x00007f10bdf57985 in __GI__dl_catch_exception (exception=exception@entry=0x0,\n operate=operate@entry=0x7f10c17a32d0 <call_dl_init>, args=args@entry=0x7ffd8f53ac70) at dl-error-skeleton.c:182\n#11 0x00007f10c17a443d in dl_open_worker (a=a@entry=0x7ffd8f53ae20) at dl-open.c:758\n#12 0x00007f10bdf57928 in __GI__dl_catch_exception (exception=exception@entry=0x7ffd8f53ae00,\n operate=operate@entry=0x7f10c17a3c20 <dl_open_worker>, args=args@entry=0x7ffd8f53ae20) at dl-error-skeleton.c:208\n#13 0x00007f10c17a360a in _dl_open (file=0x557459df2820 \" /lib/postgresql/llvmjit.so\", mode=-2147483390,\n caller_dlopen=<optimized out>, nsid=-2, argc=12, argv=0x7ffd8f53cc38, env=0x557459ba1ce0) at dl-open.c:837\n#14 0x00007f10bfa5134c in dlopen_doit (a=a@entry=0x7ffd8f53b040) at dlopen.c:66\n#15 0x00007f10bdf57928 in __GI__dl_catch_exception (exception=exception@entry=0x7ffd8f53afe0,\n 
operate=operate@entry=0x7f10bfa512f0 <dlopen_doit>, args=args@entry=0x7ffd8f53b040) at dl-error-skeleton.c:208\n#16 0x00007f10bdf579f3 in __GI__dl_catch_error (objname=objname@entry=0x557459c9e450, errstring=errstring@entry=0x557459c9e458,\n mallocedp=mallocedp@entry=0x557459c9e448, operate=operate@entry=0x7f10bfa512f0 <dlopen_doit>, args=args@entry=0x7ffd8f53b040)\n at dl-error-skeleton.c:227\n#17 0x00007f10bfa51b59 in _dlerror_run (operate=operate@entry=0x7f10bfa512f0 <dlopen_doit>, args=args@entry=0x7ffd8f53b040)\n at dlerror.c:170\n#18 0x00007f10bfa513da in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87\n#19 0x0000557458502d07 in ?? ()\n#20 0x0000557458503526 in load_external_function ()\n#21 0x0000557458569c3d in ?? ()\n#22 0x0000557458569e0e in jit_compile_expr ()\n#23 0x000055745810d6c6 in ExecBuildProjectionInfoExt ()\n#24 0x000055745812921b in ExecConditionalAssignProjectionInfo ()\n#25 0x000055745814f12d in ExecInitSeqScanForPartition ()\n#26 0x000055745812205c in ExecInitNode ()\n#27 0x00005574581638ef in ExecInitMotion ()\n#28 0x0000557458121ebc in ExecInitNode ()\n#29 0x000055745811a544 in standard_ExecutorStart ()\n#30 0x000055745838c059 in PortalStart ()\n…\n\nPlatform : Ubuntu 20.04. 
x86_64 Linux 5.15.0-52-generic\n\nOur llvmjit.h implemented the function llvm_enter_fatal_on_oom to FATAL out when llvm meet OOM, but when we load libLLVM.so, we may met some oom situation like upper stack that loaded there lib and want to init some options with memory allocation.\nI didn’t figure out a better way to set an error_handler for this situation when load libLLVM.so.\n\n\n- Hugo.",
"msg_date": "Wed, 22 Feb 2023 13:56:43 +0000",
"msg_from": "Hugo Zhang <hugo.zhang@openpie.com>",
"msg_from_op": true,
"msg_subject": "Unexpected abort at llvm::report_bad_alloc_error when load JIT\n library"
}
] |
[
{
"msg_contents": "Proposal: Simply add the %T (PROMPT variable) to output the current time\n(HH24:MI:SS) into the prompt. This has been in sqlplus since I can\nremember, and I find it really useful when I forgot to time something, or\nto review for Time spent on a problem, or for how old my session is...\n\nI am recommending no formatting options, just keep it simple. No, I don't\ncare about adding the date. If I don't know the date of some line in my\nhistory, it's already a problem! (And date would logically be some other\nvariable)\n\nYes, I've found ways around it using the shell backquote. This is hacky,\nand it's also really ugly in windows. I also found it impossible to share\nmy plpgsqlrc file because between linux and windows.\n\nThis would be current time on the local machine. Keeping it simple.\n\nIt feels like a small change. The simplest test would be to capture the\nprompt, select sleep(1.1); and make sure the prompt change. This code\nshould be trivially stable.\n\nIf it seems useful, I believe I can work with others to get it implemented,\nand the documentation changed, and a patch generated. (I need to develop\nthese skills)\n\nWhat does the community say? Is there support for this?\n\nRegards, Kirk\n\nProposal: Simply add the %T (PROMPT variable) to output the current time (HH24:MI:SS) into the prompt. This has been in sqlplus since I can remember, and I find it really useful when I forgot to time something, or to review for Time spent on a problem, or for how old my session is...I am recommending no formatting options, just keep it simple. No, I don't care about adding the date. If I don't know the date of some line in my history, it's already a problem! (And date would logically be some other variable)Yes, I've found ways around it using the shell backquote. This is hacky, and it's also really ugly in windows. I also found it impossible to share my plpgsqlrc file because between linux and windows.This would be current time on the local machine. 
Keeping it simple.It feels like a small change. The simplest test would be to capture the prompt, select sleep(1.1); and make sure the prompt change. This code should be trivially stable.If it seems useful, I believe I can work with others to get it implemented, and the documentation changed, and a patch generated. (I need to develop these skills)What does the community say? Is there support for this?Regards, Kirk",
"msg_date": "Wed, 22 Feb 2023 12:17:46 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal: %T Prompt parameter for psql for current time (like Oracle\n has)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 9:18 AM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> Proposal: Simply add the %T (PROMPT variable) to output the current time\n> (HH24:MI:SS) into the prompt. This has been in sqlplus since I can\n> remember, and I find it really useful when I forgot to time something, or\n> to review for Time spent on a problem, or for how old my session is...\n>\n\nThis is a great idea, in my opinion. I usually do something involving ts to\ntrack timestamps when executing something non-trivial via psql in\ninteractive (see below) or non-interactive mode.\n\nBut this is a not well-known thing to use (and ts is not installed by\ndefault on Ubuntu, etc.) – having timestamps in prompt would be convenient.\n\ntest=> \\o | ts\ntest=> select 1;\ntest=> Feb 22 09:49:49 ?column?\nFeb 22 09:49:49 ----------\nFeb 22 09:49:49 1\nFeb 22 09:49:49 (1 row)\nFeb 22 09:49:49\n\nOn Wed, Feb 22, 2023 at 9:18 AM Kirk Wolak <wolakk@gmail.com> wrote:Proposal: Simply add the %T (PROMPT variable) to output the current time (HH24:MI:SS) into the prompt. This has been in sqlplus since I can remember, and I find it really useful when I forgot to time something, or to review for Time spent on a problem, or for how old my session is...This is a great idea, in my opinion. I usually do something involving ts to track timestamps when executing something non-trivial via psql in interactive (see below) or non-interactive mode. But this is a not well-known thing to use (and ts is not installed by default on Ubuntu, etc.) – having timestamps in prompt would be convenient.test=> \\o | tstest=> select 1;test=> Feb 22 09:49:49 ?column?Feb 22 09:49:49 ----------Feb 22 09:49:49 1Feb 22 09:49:49 (1 row)Feb 22 09:49:49",
"msg_date": "Wed, 22 Feb 2023 09:52:28 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> Proposal: Simply add the %T (PROMPT variable) to output the current time\n> (HH24:MI:SS) into the prompt.\n\nI'm not really convinced that %`date` isn't a usable solution for this,\nespecially since it seems like a very niche requirement. The next\nperson who wants it might well have a different desire than you\nfor exactly what gets shown. The output of date can be customized,\nbut a hard-wired prompt.c feature not so much.\n\nOn the whole I'd rather not eat more of the limited namespace for\npsql prompt codes for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:55:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n>\n\nIt depends on personal preferences. When I work on a large screen, I can\nafford to spend some characters in prompts, if it gives convenience – and\nmany do (looking, for example, at modern tmux/zsh prompts showing git\nbranch context, etc).\n\nDefault behavior might remain short – it wouldn't make sense to extend it\nfor everyone.\n\nOn Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nOn the whole I'd rather not eat more of the limited namespace for\npsql prompt codes for this.It depends on personal preferences. When I work on a large screen, I can afford to spend some characters in prompts, if it gives convenience – and many do (looking, for example, at modern tmux/zsh prompts showing git branch context, etc).Default behavior might remain short – it wouldn't make sense to extend it for everyone.",
"msg_date": "Wed, 22 Feb 2023 09:59:24 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On 22/02/2023 19:59, Nikolay Samokhvalov wrote:\n> On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n> \n> \n> It depends on personal preferences. When I work on a large screen, I can \n> afford to spend some characters in prompts, if it gives convenience – \n> and many do (looking, for example, at modern tmux/zsh prompts showing \n> git branch context, etc).\n> \n> Default behavior might remain short – it wouldn't make sense to extend \n> it for everyone.\n\nI have no objections to adding a %T option, although deciding what \nformat to use is a hassle. -1 for changing the default.\n\nBut let's look at the original request:\n\n> This has been in sqlplus since I can remember, and I find it really\n> useful when I forgot to time something, or to review for Time spent\n> on a problem, or for how old my session is...\nI've felt that pain too. You run a query, and it takes longer than I \nexpected. How long did it actually take? Too bad I didn't enable \\timing \nbeforehand..\n\nHow about a new backslash command or psql variable to show how long the \nprevious statement took? Something like:\n\npostgres=# select <unexpectedly slow query>\n ?column?\n----------\n 123\n(1 row)\n\npostgres=# \\time\n\nTime: 14011.975 ms (00:14.012)\n\nThis would solve the \"I forgot to time something\" problem.\n\n- Heikki\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 20:14:54 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "> On 22 Feb 2023, at 19:14, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> How about a new backslash command or psql variable to show how long the previous statement took? Something like:\n> \n> postgres=# select <unexpectedly slow query>\n> ?column?\n> ----------\n> 123\n> (1 row)\n> \n> postgres=# \\time\n> \n> Time: 14011.975 ms (00:14.012)\n> \n> This would solve the \"I forgot to time something\" problem.\n\nI don't have an opinion on adding a prompt option, but I've wanted this\n(without realizing this was the format of it) many times.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 19:17:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "Hi\n\nst 22. 2. 2023 v 18:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Proposal: Simply add the %T (PROMPT variable) to output the current time\n> > (HH24:MI:SS) into the prompt.\n>\n> I'm not really convinced that %`date` isn't a usable solution for this,\n> especially since it seems like a very niche requirement. The next\n> person who wants it might well have a different desire than you\n> for exactly what gets shown. The output of date can be customized,\n> but a hard-wired prompt.c feature not so much.\n>\n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n>\n\nCan we introduce some special syntax that allows using words (and maybe\nsome params)?\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n\nHist 22. 2. 2023 v 18:55 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Kirk Wolak <wolakk@gmail.com> writes:\n> Proposal: Simply add the %T (PROMPT variable) to output the current time\n> (HH24:MI:SS) into the prompt.\n\nI'm not really convinced that %`date` isn't a usable solution for this,\nespecially since it seems like a very niche requirement. The next\nperson who wants it might well have a different desire than you\nfor exactly what gets shown. The output of date can be customized,\nbut a hard-wired prompt.c feature not so much.\n\nOn the whole I'd rather not eat more of the limited namespace for\npsql prompt codes for this.Can we introduce some special syntax that allows using words (and maybe some params)?RegardsPavel \n\n regards, tom lane",
"msg_date": "Wed, 22 Feb 2023 19:19:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "st 22. 2. 2023 v 19:14 odesílatel Heikki Linnakangas <hlinnaka@iki.fi>\nnapsal:\n\n> On 22/02/2023 19:59, Nikolay Samokhvalov wrote:\n> > On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> >\n> > On the whole I'd rather not eat more of the limited namespace for\n> > psql prompt codes for this.\n> >\n> >\n> > It depends on personal preferences. When I work on a large screen, I can\n> > afford to spend some characters in prompts, if it gives convenience –\n> > and many do (looking, for example, at modern tmux/zsh prompts showing\n> > git branch context, etc).\n> >\n> > Default behavior might remain short – it wouldn't make sense to extend\n> > it for everyone.\n>\n> I have no objections to adding a %T option, although deciding what\n> format to use is a hassle. -1 for changing the default.\n>\n> But let's look at the original request:\n>\n> > This has been in sqlplus since I can remember, and I find it really\n> > useful when I forgot to time something, or to review for Time spent\n> > on a problem, or for how old my session is...\n> I've felt that pain too. You run a query, and it takes longer than I\n> expected. How long did it actually take? Too bad I didn't enable \\timing\n> beforehand..\n>\n> How about a new backslash command or psql variable to show how long the\n> previous statement took? Something like:\n>\n> postgres=# select <unexpectedly slow query>\n> ?column?\n> ----------\n> 123\n> (1 row)\n>\n> postgres=# \\time\n>\n> Time: 14011.975 ms (00:14.012)\n>\n> This would solve the \"I forgot to time something\" problem.\n>\n\nIt is a good idea, unfortunately, it doesn't help with more commands. But\nit is a nice idea, and can be implemented.\n\nI am not sure if \\time is best way - maybe we can display another runtime\ndata (when it will be possible, like io profile or queryid)\n\nRegards\n\nPavel\n\n\n\n>\n> - Heikki\n>\n>\n\nst 22. 2. 
2023 v 19:14 odesílatel Heikki Linnakangas <hlinnaka@iki.fi> napsal:On 22/02/2023 19:59, Nikolay Samokhvalov wrote:\n> On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n> \n> \n> It depends on personal preferences. When I work on a large screen, I can \n> afford to spend some characters in prompts, if it gives convenience – \n> and many do (looking, for example, at modern tmux/zsh prompts showing \n> git branch context, etc).\n> \n> Default behavior might remain short – it wouldn't make sense to extend \n> it for everyone.\n\nI have no objections to adding a %T option, although deciding what \nformat to use is a hassle. -1 for changing the default.\n\nBut let's look at the original request:\n\n> This has been in sqlplus since I can remember, and I find it really\n> useful when I forgot to time something, or to review for Time spent\n> on a problem, or for how old my session is...\nI've felt that pain too. You run a query, and it takes longer than I \nexpected. How long did it actually take? Too bad I didn't enable \\timing \nbeforehand..\n\nHow about a new backslash command or psql variable to show how long the \nprevious statement took? Something like:\n\npostgres=# select <unexpectedly slow query>\n ?column?\n----------\n 123\n(1 row)\n\npostgres=# \\time\n\nTime: 14011.975 ms (00:14.012)\n\nThis would solve the \"I forgot to time something\" problem.It is a good idea, unfortunately, it doesn't help with more commands. But it is a nice idea, and can be implemented.I am not sure if \\time is best way - maybe we can display another runtime data (when it will be possible, like io profile or queryid)RegardsPavel \n\n- Heikki",
"msg_date": "Wed, 22 Feb 2023 19:24:26 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > Proposal: Simply add the %T (PROMPT variable) to output the current time\n> > (HH24:MI:SS) into the prompt.\n>\n> I'm not really convinced that %`date` isn't a usable solution for this,\n> especially since it seems like a very niche requirement. The next\n> person who wants it might well have a different desire than you\n> for exactly what gets shown. The output of date can be customized,\n> but a hard-wired prompt.c feature not so much.\n>\n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n>\n> regards, tom lane\n>\nTom,\n I totally respect where you are coming from, and you are rightfully the\nbig dog!\n\nIn reverse order. That limited namespace. I assume you mean the 52 alpha\ncharacters, of which, we are using 7,\nand this change would make it 8. Can we agree that at the current pace of\nconsumption it will be decades before\nwe get to 26, and they appear to be pretty well defended?\n\nI already requested ONLY the HH24 format. 8 characters of output. no\noptions. It's a waste of time.\nAfter all these years, sqlplus still has only one setting (show it, or\nnot). I am asking the same here.\nAnd I will gladly defend not changing it! Ever!\n\nI believe that leaves the real question:\nCan't we just shell out? 
(which is what I do no, with issues as stated, and\na lot harder to do from memory if someplace new)\n\nIt's far easier in linux than windows to get what you want.\nIt's much more complicated if you try to use the same pgsqlrc file for\nmultiple environments and users.\n\nWe are talking about adding this much code, and consuming 1 of the\nremaining 45 namespace items.\n case 'T':\n time_t current_time = time(NULL);\n struct tm *tm_info = localtime(¤t_time);\n sprintf(buf, \"%02d:%02d:%02d\", tm_info->tm_hour,\ntm_info->tm_min, tm_info->tm_sec);\n break;\n\nDoes this help my case at all?\nIf I crossed any lines, it's not my intention. I was tired of dealing with\nthis, and helping others to set it up.\n\nWith Respect,\n\nKirk\n\nOn Wed, Feb 22, 2023 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> Proposal: Simply add the %T (PROMPT variable) to output the current time\n> (HH24:MI:SS) into the prompt.\n\nI'm not really convinced that %`date` isn't a usable solution for this,\nespecially since it seems like a very niche requirement. The next\nperson who wants it might well have a different desire than you\nfor exactly what gets shown. The output of date can be customized,\nbut a hard-wired prompt.c feature not so much.\n\nOn the whole I'd rather not eat more of the limited namespace for\npsql prompt codes for this.\n\n regards, tom laneTom, I totally respect where you are coming from, and you are rightfully the big dog!In reverse order. That limited namespace. I assume you mean the 52 alpha characters, of which, we are using 7,and this change would make it 8. Can we agree that at the current pace of consumption it will be decades beforewe get to 26, and they appear to be pretty well defended?I already requested ONLY the HH24 format. 8 characters of output. no options. It's a waste of time.After all these years, sqlplus still has only one setting (show it, or not). 
I am asking the same here.And I will gladly defend not changing it! Ever!I believe that leaves the real question:Can't we just shell out? (which is what I do no, with issues as stated, and a lot harder to do from memory if someplace new)It's far easier in linux than windows to get what you want.It's much more complicated if you try to use the same pgsqlrc file for multiple environments and users. We are talking about adding this much code, and consuming 1 of the remaining 45 namespace items. case 'T': time_t current_time = time(NULL); struct tm *tm_info = localtime(¤t_time); sprintf(buf, \"%02d:%02d:%02d\", tm_info->tm_hour, tm_info->tm_min, tm_info->tm_sec); break;Does this help my case at all?If I crossed any lines, it's not my intention. I was tired of dealing with this, and helping others to set it up.With Respect,Kirk",
"msg_date": "Wed, 22 Feb 2023 13:37:31 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 1:14 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 22/02/2023 19:59, Nikolay Samokhvalov wrote:\n> > On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us\n> > <mailto:tgl@sss.pgh.pa.us>> wrote:\n> >\n> > On the whole I'd rather not eat more of the limited namespace for\n> > psql prompt codes for this.\n> >\n> >\n> > It depends on personal preferences. When I work on a large screen, I can\n> > afford to spend some characters in prompts, if it gives convenience –\n> > and many do (looking, for example, at modern tmux/zsh prompts showing\n> > git branch context, etc).\n> >\n> > Default behavior might remain short – it wouldn't make sense to extend\n> > it for everyone.\n>\n> I have no objections to adding a %T option, although deciding what\n> format to use is a hassle. -1 for changing the default.\n>\n> But let's look at the original request:\n>\n> > This has been in sqlplus since I can remember, and I find it really\n> > useful when I forgot to time something, or to review for Time spent\n> > on a problem, or for how old my session is...\n> I've felt that pain too. You run a query, and it takes longer than I\n> expected. How long did it actually take? Too bad I didn't enable \\timing\n> beforehand..\n>\n> How about a new backslash command or psql variable to show how long the\n> previous statement took? Something like:\n>\n> postgres=# select <unexpectedly slow query>\n> ?column?\n> ----------\n> 123\n> (1 row)\n>\n> postgres=# \\time\n>\n> Time: 14011.975 ms (00:14.012)\n>\n> This would solve the \"I forgot to time something\" problem.\n>\n> - Heikki\n>\n> TBH, I have that turned on by default. Load a script. Have 300 of those\nlines, and tell me how long it took?\nIn my case, it's much easier. 
The other uses cases, including noticing I\nchanged some configuration and I\nshould reconnect (because I use multiple sessions, and I am in the early\nstages with lots of changes).\n\nOn Wed, Feb 22, 2023 at 1:14 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:On 22/02/2023 19:59, Nikolay Samokhvalov wrote:\n> On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us \n> <mailto:tgl@sss.pgh.pa.us>> wrote:\n> \n> On the whole I'd rather not eat more of the limited namespace for\n> psql prompt codes for this.\n> \n> \n> It depends on personal preferences. When I work on a large screen, I can \n> afford to spend some characters in prompts, if it gives convenience – \n> and many do (looking, for example, at modern tmux/zsh prompts showing \n> git branch context, etc).\n> \n> Default behavior might remain short – it wouldn't make sense to extend \n> it for everyone.\n\nI have no objections to adding a %T option, although deciding what \nformat to use is a hassle. -1 for changing the default.\n\nBut let's look at the original request:\n\n> This has been in sqlplus since I can remember, and I find it really\n> useful when I forgot to time something, or to review for Time spent\n> on a problem, or for how old my session is...\nI've felt that pain too. You run a query, and it takes longer than I \nexpected. How long did it actually take? Too bad I didn't enable \\timing \nbeforehand..\n\nHow about a new backslash command or psql variable to show how long the \nprevious statement took? Something like:\n\npostgres=# select <unexpectedly slow query>\n ?column?\n----------\n 123\n(1 row)\n\npostgres=# \\time\n\nTime: 14011.975 ms (00:14.012)\n\nThis would solve the \"I forgot to time something\" problem.\n\n- Heikki\nTBH, I have that turned on by default. Load a script. Have 300 of those lines, and tell me how long it took?In my case, it's much easier. 
The other uses cases, including noticing I changed some configuration and Ishould reconnect (because I use multiple sessions, and I am in the early stages with lots of changes).",
"msg_date": "Wed, 22 Feb 2023 13:42:22 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "st 22. 2. 2023 v 18:59 odesílatel Nikolay Samokhvalov <samokhvalov@gmail.com>\nnapsal:\n\n> On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> On the whole I'd rather not eat more of the limited namespace for\n>> psql prompt codes for this.\n>>\n>\n> It depends on personal preferences. When I work on a large screen, I can\n> afford to spend some characters in prompts, if it gives convenience – and\n> many do (looking, for example, at modern tmux/zsh prompts showing git\n> branch context, etc).\n>\n> Default behavior might remain short – it wouldn't make sense to extend it\n> for everyone.\n>\n\n+1\n\nst 22. 2. 2023 v 18:59 odesílatel Nikolay Samokhvalov <samokhvalov@gmail.com> napsal:On Wed, Feb 22, 2023 at 9:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nOn the whole I'd rather not eat more of the limited namespace for\npsql prompt codes for this.It depends on personal preferences. When I work on a large screen, I can afford to spend some characters in prompts, if it gives convenience – and many do (looking, for example, at modern tmux/zsh prompts showing git branch context, etc).Default behavior might remain short – it wouldn't make sense to extend it for everyone.+1",
"msg_date": "Wed, 22 Feb 2023 19:55:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 07:17:37PM +0100, Daniel Gustafsson wrote:\n>> On 22 Feb 2023, at 19:14, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n>> How about a new backslash command or psql variable to show how long the previous statement took? Something like:\n>> \n>> postgres=# select <unexpectedly slow query>\n>> ?column?\n>> ----------\n>> 123\n>> (1 row)\n>> \n>> postgres=# \\time\n>> \n>> Time: 14011.975 ms (00:14.012)\n>> \n>> This would solve the \"I forgot to time something\" problem.\n> \n> I don't have an opinion on adding a prompt option, but I've wanted this\n> (without realizing this was the format of it) many times.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:42:15 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On 22.02.23 19:14, Heikki Linnakangas wrote:\n> How about a new backslash command or psql variable to show how long the \n> previous statement took? Something like:\n\nIf you don't have \\timing turned on before the query starts, psql won't \nrecord what the time was before the query, so you can't compute the run \ntime afterwards. This kind of feature would only work if you always \ntake the start time, even if \\timing is turned off.\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 12:20:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On 23/02/2023 13:20, Peter Eisentraut wrote:\n> On 22.02.23 19:14, Heikki Linnakangas wrote:\n>> How about a new backslash command or psql variable to show how long the\n>> previous statement took? Something like:\n> \n> If you don't have \\timing turned on before the query starts, psql won't\n> record what the time was before the query, so you can't compute the run\n> time afterwards. This kind of feature would only work if you always\n> take the start time, even if \\timing is turned off.\n\nCorrect. That seems acceptable though? gettimeofday() can be slow on \nsome platforms, but I doubt it's *that* slow, that we couldn't call it \ntwo times per query.\n\n- Heikki\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 14:09:19 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 23/02/2023 13:20, Peter Eisentraut wrote:\n>> If you don't have \\timing turned on before the query starts, psql won't\n>> record what the time was before the query, so you can't compute the run\n>> time afterwards. This kind of feature would only work if you always\n>> take the start time, even if \\timing is turned off.\n\n> Correct. That seems acceptable though? gettimeofday() can be slow on \n> some platforms, but I doubt it's *that* slow, that we couldn't call it \n> two times per query.\n\nYeah, you'd need to capture both the start and stop times even if\n\\timing isn't on, in case you get asked later. But the backend is\ngoing to call gettimeofday at least once per query, likely more\ndepending on what features you use. And there are inherently\nmultiple kernel calls involved in sending a query and receiving\na response. I tend to agree with Heikki that this overhead would\nbe unnoticeable. (Of course, some investigation proving that\nwouldn't be unwarranted.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Feb 2023 09:52:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "+1 on solving the general problem of \"I forgot to set \\timing--how\nlong did this run?\". I could have used that more than once in the\npast, and I'm sure it will come up again.\n\nI think Heikki's solution is probably more practical since (1) even if\nwe add the prompt parameter originally proposed, I don't see it being\nincluded in the default, so it would require users to change their\nprompt before they can benefit from it and (2) even if we commit to\nnever allowing tweaks to the format, I foresee a slow but endless\ntrickle of requests and patches to do so.\n\nThanks,\nMaciek\n\n\n",
"msg_date": "Thu, 23 Feb 2023 09:04:25 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> I think Heikki's solution is probably more practical since (1) ..\n\n\nNote that these ideas target two *different* problems:\n- what was the duration of the last query\n- when was the last query executed\n\nSo, having both solved would be ideal.\n\nOn Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\nI think Heikki's solution is probably more practical since (1) ..Note that these ideas target two *different* problems:- what was the duration of the last query- when was the last query executedSo, having both solved would be ideal.",
"msg_date": "Thu, 23 Feb 2023 09:55:02 -0800",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com>\nwrote:\n\n> On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com>\n> wrote:\n>\n>> I think Heikki's solution is probably more practical since (1) ..\n>\n>\n> Note that these ideas target two *different* problems:\n> - what was the duration of the last query\n> - when was the last query executed\n>\n> So, having both solved would be ideal.\n>\n\nFair point, but since the duration solution needs to capture two timestamps\nanyway, it could print start time as well as duration.\n\nThe prompt timestamp could still be handy for more intricate session\nforensics, but I don't know if that's a common-enough use case.\n\nThanks,\nMaciek\n\n>\n\nOn Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\nI think Heikki's solution is probably more practical since (1) ..Note that these ideas target two *different* problems:- what was the duration of the last query- when was the last query executedSo, having both solved would be ideal.Fair point, but since the duration solution needs to capture two timestamps anyway, it could print start time as well as duration.The prompt timestamp could still be handy for more intricate session forensics, but I don't know if that's a common-enough use case.Thanks,Maciek",
"msg_date": "Thu, 23 Feb 2023 10:15:54 -0800",
"msg_from": "Maciek Sakrejda <m.sakrejda@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "čt 23. 2. 2023 v 19:16 odesílatel Maciek Sakrejda <m.sakrejda@gmail.com>\nnapsal:\n\n> On Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com>\n> wrote:\n>\n>> On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com>\n>> wrote:\n>>\n>>> I think Heikki's solution is probably more practical since (1) ..\n>>\n>>\n>> Note that these ideas target two *different* problems:\n>> - what was the duration of the last query\n>> - when was the last query executed\n>>\n>> So, having both solved would be ideal.\n>>\n>\n> Fair point, but since the duration solution needs to capture two\n> timestamps anyway, it could print start time as well as duration.\n>\n> The prompt timestamp could still be handy for more intricate session\n> forensics, but I don't know if that's a common-enough use case.\n>\n\n\nIt is hard to say what is a common enough case, but I cannot imagine more\nthings than this.\n\nsmall notice - bash has special support for this\n\nRegards\n\nPavel\n\n\n> Thanks,\n> Maciek\n>\n>>\n\nčt 23. 2. 2023 v 19:16 odesílatel Maciek Sakrejda <m.sakrejda@gmail.com> napsal:On Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\nI think Heikki's solution is probably more practical since (1) ..Note that these ideas target two *different* problems:- what was the duration of the last query- when was the last query executedSo, having both solved would be ideal.Fair point, but since the duration solution needs to capture two timestamps anyway, it could print start time as well as duration.The prompt timestamp could still be handy for more intricate session forensics, but I don't know if that's a common-enough use case.It is hard to say what is a common enough case, but I cannot imagine more things than this. small notice - bash has special support for this RegardsPavelThanks,Maciek",
"msg_date": "Thu, 23 Feb 2023 19:45:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 1:16 PM Maciek Sakrejda <m.sakrejda@gmail.com>\nwrote:\n\n> On Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com>\n> wrote:\n>\n>> On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com>\n>> wrote:\n>>\n>>> I think Heikki's solution is probably more practical since (1) ..\n>>\n>>\n>> Note that these ideas target two *different* problems:\n>> - what was the duration of the last query\n>> - when was the last query executed\n>>\n>> So, having both solved would be ideal.\n>>\n>\n> Fair point, but since the duration solution needs to capture two\n> timestamps anyway, it could print start time as well as duration.\n>\n> The prompt timestamp could still be handy for more intricate session\n> forensics, but I don't know if that's a common-enough use case.\n>\n> Thanks,\n> Maciek\n>\n\nIt's really common during migrations, and forensics. I often do a bunch of\nstuff in 2 systems. Then check the overlap.\nAndrey brought up the value of 2 people separate working on things, being\nable to go back and review when did you change that setting? Which has\nhappened to many of us in support sessions...\n\nThanks!\n\nOn Thu, Feb 23, 2023 at 1:16 PM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:On Thu, Feb 23, 2023, 09:55 Nikolay Samokhvalov <samokhvalov@gmail.com> wrote:On Thu, Feb 23, 2023 at 9:05 AM Maciek Sakrejda <m.sakrejda@gmail.com> wrote:\nI think Heikki's solution is probably more practical since (1) ..Note that these ideas target two *different* problems:- what was the duration of the last query- when was the last query executedSo, having both solved would be ideal.Fair point, but since the duration solution needs to capture two timestamps anyway, it could print start time as well as duration.The prompt timestamp could still be handy for more intricate session forensics, but I don't know if that's a common-enough use case.Thanks,MaciekIt's really common during migrations, and forensics. 
I often do a bunch of stuff in 2 systems. Then check the overlap.Andrey brought up the value of 2 people separate working on things, being able to go back and review when did you change that setting? Which has happened to many of us in support sessions...Thanks!",
"msg_date": "Thu, 23 Feb 2023 13:52:27 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 9:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> > On 23/02/2023 13:20, Peter Eisentraut wrote:\n> >> If you don't have \\timing turned on before the query starts, psql won't\n> >> record what the time was before the query, so you can't compute the run\n> >> time afterwards. This kind of feature would only work if you always\n> >> take the start time, even if \\timing is turned off.\n>\n> > Correct. That seems acceptable though? gettimeofday() can be slow on\n> > some platforms, but I doubt it's *that* slow, that we couldn't call it\n> > two times per query.\n>\n> Yeah, you'd need to capture both the start and stop times even if\n> \\timing isn't on, in case you get asked later. But the backend is\n> going to call gettimeofday at least once per query, likely more\n> depending on what features you use. And there are inherently\n> multiple kernel calls involved in sending a query and receiving\n> a response. I tend to agree with Heikki that this overhead would\n> be unnoticeable. (Of course, some investigation proving that\n> wouldn't be unwarranted.)\n>\n> regards, tom lane\n>\n\nNote, for this above feature, I was thinking we have a ROW_COUNT variable\nI use \\set to see.\nThe simplest way to add this is maybe a set variable: EXEC_TIME\nAnd it's set when ROW_COUNT gets set.\n+1\n\n==\nNow, since this opened a lively discussion, I am officially submitting my\nfirst patch.\nThis includes the small change to prompt.c and the documentation. I had\nhelp from Andrey Borodin,\nand Pavel Stehule, who have supported me in how to propose, and use gitlab,\netc.\n\nWe are programmers... It's literally our job to sharpen our tools. And\nPSQL is one of my most used.\nA small frustration, felt regularly was the motive.\n\nRegards, Kirk\nPS: If I am supposed to edit the subject to say there is a patch here, I\ndid not know\nPPS: I appreciate ANY and ALL feedback... This is how we learn!",
"msg_date": "Thu, 23 Feb 2023 14:05:33 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "Everyone,\n I love that my proposal for %T in the prompt, triggered some great\nconversations.\n\n This is not instead of that. That lets me run a query and come back\nHOURS later, and know it finished before 7PM like it was supposed to!\n\n This feature is simple. We forget to set \\timing on...\nWe run a query, and we WONDER... how long did that take.\n\n This, too, should be a trivial problem (the code will tell).\n\n I am proposing this to get feedback (I don't have a final design in mind,\nbut I will start by reviewing when and how ROW_COUNT gets set, and what\n\\timing reports).\n\n Next up, as I learn (and make mistakes), this toughens me up...\n\n I am not sure the name is right, but I would like to report it in the\nsame (ms) units as \\timing, since there is an implicit relationship in what\nthey are doing.\n\n I think like ROW_COUNT, it should not change because of internal commands.\nSo, you guys +1 this thing, give additional comments. When the feedback\nsettles, I commit to making it happen.\n\nThanks, Kirk\n\nEveryone, I love that my proposal for %T in the prompt, triggered some great conversations. This is not instead of that. That lets me run a query and come back HOURS later, and know it finished before 7PM like it was supposed to! This feature is simple. We forget to set \\timing on...We run a query, and we WONDER... how long did that take. This, too, should be a trivial problem (the code will tell). I am proposing this to get feedback (I don't have a final design in mind, but I will start by reviewing when and how ROW_COUNT gets set, and what \\timing reports). Next up, as I learn (and make mistakes), this toughens me up... I am not sure the name is right, but I would like to report it in the same (ms) units as \\timing, since there is an implicit relationship in what they are doing. I think like ROW_COUNT, it should not change because of internal commands.So, you guys +1 this thing, give additional comments. 
When the feedback settles, I commit to making it happen.Thanks, Kirk",
"msg_date": "Thu, 23 Feb 2023 14:55:33 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n> I love that my proposal for %T in the prompt, triggered some great conversations.\n>\n> This is not instead of that. That lets me run a query and come back HOURS later, and know it finished before 7PM like it was supposed to!\n\nNeat! I have this info embedded in my Bash prompt [1], but many a\ntimes this is not sufficient to reconstruct the time it took to run\nthe shell command.\n\n> This feature is simple. We forget to set \\timing on...\n> We run a query, and we WONDER... how long did that take.\n\nAnd so I empathize with this need. I have set my Bash prompt to show\nme this info [2].This info is very helpful in situations where you\nfire a command, get tired of waiting for it and walk away for a few\nminutes. Upon return it's very useful to see exactly how long did it\ntake for the command to finish.\n\n> I am not sure the name is right, but I would like to report it in the same (ms) units as \\timing, since there is an implicit relationship in what they are doing.\n>\n> I think like ROW_COUNT, it should not change because of internal commands.\n\n+1\n\n> So, you guys +1 this thing, give additional comments. When the feedback settles, I commit to making it happen.\n\nThis is definitely a useful feature. I agree with everything in the\nproposed UI (reporting in milliseconds, don't track internal commands'\ntiming).\n\nI think 'duration' or 'elapsed' would be a better words in this\ncontext. So perhaps the name could be one of :sql_exec_duration (sql\nprefix feels superfluous), :exec_duration, :command_duration, or\n:elapsed_time.\n\nBy using \\timing, the user is explicitly opting into any overhead\ncaused by time-keeping. With this feature, the timing info will be\ncollected all the time. So do consider evaluating the performance\nimpact this can cause on people's workloads. 
They may not care for the\nimpact in interactive mode, but in automated scripts, even a moderate\nperformance overhead would be a deal-breaker.\n\n[1]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n[2]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n",
"msg_date": "Thu, 23 Feb 2023 23:11:38 -0800",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On 23.02.23 20:55, Kirk Wolak wrote:\n> Everyone,\n> I love that my proposal for %T in the prompt, triggered some great \n> conversations.\n>\n> This is not instead of that. That lets me run a query and come back \n> HOURS later, and know it finished before 7PM like it was supposed to!\n>\n> This feature is simple. We forget to set \\timing on...\nI've been there many times!\n> We run a query, and we WONDER... how long did that take.\n>\n> This, too, should be a trivial problem (the code will tell).\n>\n> I am proposing this to get feedback (I don't have a final design in \n> mind, but I will start by reviewing when and how ROW_COUNT gets set, \n> and what \\timing reports).\n>\n> Next up, as I learn (and make mistakes), this toughens me up...\n>\n> I am not sure the name is right, but I would like to report it in \n> the same (ms) units as \\timing, since there is an implicit \n> relationship in what they are doing.\n>\n> I think like ROW_COUNT, it should not change because of internal \n> commands.\n> So, you guys +1 this thing, give additional comments. When the \n> feedback settles, I commit to making it happen.\n>\n> Thanks, Kirk\n>\nI can see it being pretty handy to check if a certain task involving two \ndifferent terminal windows was done in the right order. Basically to see \nwhat went wrong, e.g. \"did I really stop the master database before \npromoting the replica?\"\n\n+1 !\n\nBest, Jim\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 13:09:47 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n> > I love that my proposal for %T in the prompt, triggered some great\n> conversations.\n> >\n> > This is not instead of that. That lets me run a query and come back\n> HOURS later, and know it finished before 7PM like it was supposed to!\n>\n> Neat! I have this info embedded in my Bash prompt [1], but many a\n> times this is not sufficient to reconstruct the time it took to run\n> the shell command.\n> ...\n> > I think like ROW_COUNT, it should not change because of internal\n> commands.\n>\n> +1\n>\n> > So, you guys +1 this thing, give additional comments. When the feedback\n> settles, I commit to making it happen.\n>\n> This is definitely a useful feature. I agree with everything in the\n> proposed UI (reporting in milliseconds, don't track internal commands'\n> timing).\n>\n> I think 'duration' or 'elapsed' would be a better words in this\n> context. So perhaps the name could be one of :sql_exec_duration (sql\n> prefix feels superfluous), :exec_duration, :command_duration, or\n> :elapsed_time.\n>\n\nI chose that prefix because it sorts near ROW_COUNT (LOL) when you do \\SET\n\nI agree that the name wasn't perfect...\nI like SQL_EXEC_ELAPSED\nkeeping the result closer to ROW_COUNT, and it literally ONLY applies to SQL\n\n\n> By using \\timing, the user is explicitly opting into any overhead\n> caused by time-keeping. With this feature, the timing info will be\n> collected all the time. So do consider evaluating the performance\n> impact this can cause on people's workloads. They may not care for the\n> impact in interactive mode, but in automated scripts, even a moderate\n> performance overhead would be a deal-breaker.\n>\nExcellent point. 
I run lots of long scripts, but I usually set \\timing on,\njust because I turn off everything else.\nI tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the\nmost impacted)\nHonestly, it was imperceptible, Maybe approximating 0.01 seconds\nWith timing on: ~ seconds 0.28\nWith timing of: ~ seconds 0.27\n\nThe \\timing incurs no realistic penalty at this point. The ONLY penalty we\ncould face is the time to\nwrite it to the variable, and that cannot be tested until implemented. But\nI will do that. And I will\nreport the results of the impact. But I do not expect a big impact. We\nupdate SQL_COUNT without an issue.\nAnd that might be much more expensive to get.\n\nThanks!\n\n>\n> [1]:\n> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n> [2]:\n> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n>\n\nOn Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n> I love that my proposal for %T in the prompt, triggered some great conversations.\n>\n> This is not instead of that. That lets me run a query and come back HOURS later, and know it finished before 7PM like it was supposed to!\n\nNeat! I have this info embedded in my Bash prompt [1], but many a\ntimes this is not sufficient to reconstruct the time it took to run\nthe shell command.\n...> I think like ROW_COUNT, it should not change because of internal commands.\n\n+1\n\n> So, you guys +1 this thing, give additional comments. When the feedback settles, I commit to making it happen.\n\nThis is definitely a useful feature. I agree with everything in the\nproposed UI (reporting in milliseconds, don't track internal commands'\ntiming).\n\nI think 'duration' or 'elapsed' would be a better words in this\ncontext. 
So perhaps the name could be one of :sql_exec_duration (sql\nprefix feels superfluous), :exec_duration, :command_duration, or\n:elapsed_time.I chose that prefix because it sorts near ROW_COUNT (LOL) when you do \\SETI agree that the name wasn't perfect...I like SQL_EXEC_ELAPSEDkeeping the result closer to ROW_COUNT, and it literally ONLY applies to SQL\nBy using \\timing, the user is explicitly opting into any overhead\ncaused by time-keeping. With this feature, the timing info will be\ncollected all the time. So do consider evaluating the performance\nimpact this can cause on people's workloads. They may not care for the\nimpact in interactive mode, but in automated scripts, even a moderate\nperformance overhead would be a deal-breaker.Excellent point. I run lots of long scripts, but I usually set \\timing on, just because I turn off everything else.I tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the most impacted)Honestly, it was imperceptible, Maybe approximating 0.01 secondsWith timing on: ~ seconds 0.28With timing of: ~ seconds 0.27The \\timing incurs no realistic penalty at this point. The ONLY penalty we could face is the time towrite it to the variable, and that cannot be tested until implemented. But I will do that. And I willreport the results of the impact. But I do not expect a big impact. We update SQL_COUNT without an issue.And that might be much more expensive to get.Thanks!\n[1]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n[2]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Fri, 24 Feb 2023 22:56:16 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 7:09 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> On 23.02.23 20:55, Kirk Wolak wrote:\n> > Everyone,\n> ... SQL_EXEC_TIME\n> > I think like ROW_COUNT, it should not change because of internal\n> > commands.\n> > So, you guys +1 this thing, give additional comments. When the\n> > feedback settles, I commit to making it happen.\n> >\n> > Thanks, Kirk\n> >\n> I can see it being pretty handy to check if a certain task involving two\n> different terminal windows was done in the right order. Basically to see\n> what went wrong, e.g. \"did I really stop the master database before\n> promoting the replica?\"\n>\n> +1 !\n>\n> Best, Jim\n>\n\nJim, thanks, here is that patch for the %T option, but I think you did a +1\nfor the new psql variable :SQL_EXEC_TIME.\nI realized my communication style needs to be cleaner, I caused that with\nthe lead in.\n\nI created this proposal because I felt it was an excellent suggestion, and\nI think it will be trivial to implement, although\nit will involve a lot more testing... MANY times, I have run a query that\ntook a touch too long, and I was wondering how long EXACTLY did that take.\nThis makes it easy \\echo :SQL_EXEC_TIME (Well, I think it will be\nSQL_EXEC_ELAPSED)\n\nregards, kirk",
"msg_date": "Fri, 24 Feb 2023 23:03:22 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n>> On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>>\n> ...\n\n> > I think like ROW_COUNT, it should not change because of internal\n>> commands.\n>> ...\n>\n> By using \\timing, the user is explicitly opting into any overhead\n>> caused by time-keeping. With this feature, the timing info will be\n>> collected all the time. So do consider evaluating the performance\n>> impact this can cause on people's workloads. They may not care for the\n>> impact in interactive mode, but in automated scripts, even a moderate\n>> performance overhead would be a deal-breaker.\n>>\n> Excellent point. I run lots of long scripts, but I usually set \\timing\n> on, just because I turn off everything else.\n> I tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the\n> most impacted)\n> Honestly, it was imperceptible, Maybe approximating 0.01 seconds\n> With timing on: ~ seconds 0.28\n> With timing of: ~ seconds 0.27\n>\n> The \\timing incurs no realistic penalty at this point. The ONLY penalty\n> we could face is the time to\n> write it to the variable, and that cannot be tested until implemented.\n> But I will do that. And I will\n> report the results of the impact. But I do not expect a big impact. We\n> update SQL_COUNT without an issue.\n> And that might be much more expensive to get.\n>\n\nOkay, I've written and tested this using SQL_EXEC_ELAPSED (suggested name\nimprovement).\nFirst, the instant you have ANY output, it swamps the impact. (I settled\non: SELECT 1 as v \\gset xxx) for no output\nSecond, the variability of running even a constant script is mind-blowing.\nThird, I've limited the output... I built this in layers (init.sql\ninitializes the psql variables I use), run_100.sql runs\nanother file (\\i tst_2000.sql) 100 times. 
Resulting in 200k selects.\n\nExecutive Summary: 1,000,000 statements executed, consumes ~2 - 2.5\nseconds of extra time (Total)\n\nSo, the per statement cost is: 2.5s / 1,000,000 = 0.000,0025 s per statement\nRoughly: 2.5us\n\nUnfortunately, my test lines look like this:\nWithout Timing\ndone 0.198215 (500) *total *98.862548 *min* 0.167614 *avg*\n0.19772509600000000000 *max *0.290659\n\nWith Timing\ndone 0.191583 (500) *total* 100.729868 *min *0.163280 *avg\n*0.20145973600000000000\n*max *0.275787\n\nNotice that the With Timing had a lower min, and a lower max. But a higher\naverage.\nThe distance between min - avg AND min - max, is big (those are for 1,000\nselects each)\n\nAre these numbers at the \"So What\" Level?\n\nWhile testing, I got the distinct impression that I am measuring something\nthat changes, or that the\nvariance in the system itself really swamps this on a per statement basis.\nIt's only impact is felt\non millions of PSQL queries, and that's a couple of seconds...\n\nCurious what others think before I take this any further.\n\nregards, Kirk\n\n>\n> Thanks!\n>\n>>\n>> [1]:\n>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n>> [2]:\n>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n>>\n>> Best regards,\n>> Gurjeet\n>> http://Gurje.et\n>>\n>\n\nOn Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:... > I think like ROW_COUNT, it should not change because of internal commands.\n...\nBy using \\timing, the user is explicitly opting into any overhead\ncaused by time-keeping. With this feature, the timing info will be\ncollected all the time. So do consider evaluating the performance\nimpact this can cause on people's workloads. 
They may not care for the\nimpact in interactive mode, but in automated scripts, even a moderate\nperformance overhead would be a deal-breaker.Excellent point. I run lots of long scripts, but I usually set \\timing on, just because I turn off everything else.I tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the most impacted)Honestly, it was imperceptible, Maybe approximating 0.01 secondsWith timing on: ~ seconds 0.28With timing of: ~ seconds 0.27The \\timing incurs no realistic penalty at this point. The ONLY penalty we could face is the time towrite it to the variable, and that cannot be tested until implemented. But I will do that. And I willreport the results of the impact. But I do not expect a big impact. We update SQL_COUNT without an issue.And that might be much more expensive to get.Okay, I've written and tested this using SQL_EXEC_ELAPSED (suggested name improvement).First, the instant you have ANY output, it swamps the impact. (I settled on: SELECT 1 as v \\gset xxx) for no outputSecond, the variability of running even a constant script is mind-blowing.Third, I've limited the output... I built this in layers (init.sql initializes the psql variables I use), run_100.sql runsanother file (\\i tst_2000.sql) 100 times. Resulting in 200k selects.Executive Summary: 1,000,000 statements executed, consumes ~2 - 2.5 seconds of extra time (Total)So, the per statement cost is: 2.5s / 1,000,000 = 0.000,0025 s per statementRoughly: 2.5usUnfortunately, my test lines look like this:Without Timingdone 0.198215 (500) total 98.862548 min 0.167614 avg 0.19772509600000000000 max 0.290659With Timingdone 0.191583 (500) total 100.729868 min 0.163280 avg 0.20145973600000000000 max 0.275787Notice that the With Timing had a lower min, and a lower max. But a higher average.The distance between min - avg AND min - max, is big (those are for 1,000 selects each)Are these numbers at the \"So What\" Level? 
While testing, I got the distinct impression that I am measuring something that changes, or that thevariance in the system itself really swamps this on a per statement basis. It's only impact is felton millions of PSQL queries, and that's a couple of seconds...Curious what others think before I take this any further.regards, Kirk Thanks!\n[1]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n[2]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Sun, 26 Feb 2023 23:07:45 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "po 27. 2. 2023 v 5:08 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n\n> On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>\n>> On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>>\n>>> On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>>>\n>> ...\n>\n>> > I think like ROW_COUNT, it should not change because of internal\n>>> commands.\n>>> ...\n>>\n>> By using \\timing, the user is explicitly opting into any overhead\n>>> caused by time-keeping. With this feature, the timing info will be\n>>> collected all the time. So do consider evaluating the performance\n>>> impact this can cause on people's workloads. They may not care for the\n>>> impact in interactive mode, but in automated scripts, even a moderate\n>>> performance overhead would be a deal-breaker.\n>>>\n>> Excellent point. I run lots of long scripts, but I usually set \\timing\n>> on, just because I turn off everything else.\n>> I tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the\n>> most impacted)\n>> Honestly, it was imperceptible, Maybe approximating 0.01 seconds\n>> With timing on: ~ seconds 0.28\n>> With timing of: ~ seconds 0.27\n>>\n>> The \\timing incurs no realistic penalty at this point. The ONLY penalty\n>> we could face is the time to\n>> write it to the variable, and that cannot be tested until implemented.\n>> But I will do that. And I will\n>> report the results of the impact. But I do not expect a big impact. We\n>> update SQL_COUNT without an issue.\n>> And that might be much more expensive to get.\n>>\n>\n> Okay, I've written and tested this using SQL_EXEC_ELAPSED (suggested name\n> improvement).\n> First, the instant you have ANY output, it swamps the impact. (I settled\n> on: SELECT 1 as v \\gset xxx) for no output\n> Second, the variability of running even a constant script is mind-blowing.\n> Third, I've limited the output... 
I built this in layers (init.sql\n> initializes the psql variables I use), run_100.sql runs\n> another file (\\i tst_2000.sql) 100 times. Resulting in 200k selects.\n>\n\nThis is the very worst case.\n\nBut nobody will run from psql 200K selects - can you try little bit more\nreal but still synthetic test case?\n\ncreate table foo(a int);\nbegin\n insert into foo values(1);\n ...\n insert into foo values(200000);\ncommit;\n\nRegards\n\nPavel\n\n\n>\n> Executive Summary: 1,000,000 statements executed, consumes ~2 - 2.5\n> seconds of extra time (Total)\n>\n> So, the per statement cost is: 2.5s / 1,000,000 = 0.000,0025 s per\n> statement\n> Roughly: 2.5us\n>\n> Unfortunately, my test lines look like this:\n> Without Timing\n> done 0.198215 (500) *total *98.862548 *min* 0.167614 *avg*\n> 0.19772509600000000000 *max *0.290659\n>\n> With Timing\n> done 0.191583 (500) *total* 100.729868 *min *0.163280 *avg *0.20145973600000000000\n> *max *0.275787\n>\n> Notice that the With Timing had a lower min, and a lower max. But a\n> higher average.\n> The distance between min - avg AND min - max, is big (those are for 1,000\n> selects each)\n>\n> Are these numbers at the \"So What\" Level?\n>\n> While testing, I got the distinct impression that I am measuring something\n> that changes, or that the\n> variance in the system itself really swamps this on a per statement\n> basis. It's only impact is felt\n> on millions of PSQL queries, and that's a couple of seconds...\n>\n> Curious what others think before I take this any further.\n>\n> regards, Kirk\n>\n>>\n>> Thanks!\n>>\n>>>\n>>> [1]:\n>>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n>>> [2]:\n>>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n>>>\n>>> Best regards,\n>>> Gurjeet\n>>> http://Gurje.et\n>>>\n>>\n\npo 27. 2. 
2023 v 5:08 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:... > I think like ROW_COUNT, it should not change because of internal commands.\n...\nBy using \\timing, the user is explicitly opting into any overhead\ncaused by time-keeping. With this feature, the timing info will be\ncollected all the time. So do consider evaluating the performance\nimpact this can cause on people's workloads. They may not care for the\nimpact in interactive mode, but in automated scripts, even a moderate\nperformance overhead would be a deal-breaker.Excellent point. I run lots of long scripts, but I usually set \\timing on, just because I turn off everything else.I tested 2,000+ lines of select 1; (Fast sql shouldn't matter, it's the most impacted)Honestly, it was imperceptible, Maybe approximating 0.01 secondsWith timing on: ~ seconds 0.28With timing of: ~ seconds 0.27The \\timing incurs no realistic penalty at this point. The ONLY penalty we could face is the time towrite it to the variable, and that cannot be tested until implemented. But I will do that. And I willreport the results of the impact. But I do not expect a big impact. We update SQL_COUNT without an issue.And that might be much more expensive to get.Okay, I've written and tested this using SQL_EXEC_ELAPSED (suggested name improvement).First, the instant you have ANY output, it swamps the impact. (I settled on: SELECT 1 as v \\gset xxx) for no outputSecond, the variability of running even a constant script is mind-blowing.Third, I've limited the output... I built this in layers (init.sql initializes the psql variables I use), run_100.sql runsanother file (\\i tst_2000.sql) 100 times. 
Resulting in 200k selects.This is the very worst case.But nobody will run from psql 200K selects - can you try little bit more real but still synthetic test case?create table foo(a int);begin insert into foo values(1); ... insert into foo values(200000);commit;RegardsPavel Executive Summary: 1,000,000 statements executed, consumes ~2 - 2.5 seconds of extra time (Total)So, the per statement cost is: 2.5s / 1,000,000 = 0.000,0025 s per statementRoughly: 2.5usUnfortunately, my test lines look like this:Without Timingdone 0.198215 (500) total 98.862548 min 0.167614 avg 0.19772509600000000000 max 0.290659With Timingdone 0.191583 (500) total 100.729868 min 0.163280 avg 0.20145973600000000000 max 0.275787Notice that the With Timing had a lower min, and a lower max. But a higher average.The distance between min - avg AND min - max, is big (those are for 1,000 selects each)Are these numbers at the \"So What\" Level? While testing, I got the distinct impression that I am measuring something that changes, or that thevariance in the system itself really swamps this on a per statement basis. It's only impact is felton millions of PSQL queries, and that's a couple of seconds...Curious what others think before I take this any further.regards, Kirk Thanks!\n[1]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n[2]: https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Mon, 27 Feb 2023 05:45:04 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 11:45 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> po 27. 2. 2023 v 5:08 odesílatel Kirk Wolak <wolakk@gmail.com> napsal:\n>\n>> On Fri, Feb 24, 2023 at 10:56 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>>\n>>> On Fri, Feb 24, 2023 at 2:11 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>>>\n>>>> On Thu, Feb 23, 2023 at 8:42 PM Kirk Wolak <wolakk@gmail.com> wrote:\n>>>>\n>>> ...\n>>\n>>> ...\n>>>\n>>> Okay, I've written and tested this using SQL_EXEC_ELAPSED (suggested\n>> name improvement).\n>> First, the instant you have ANY output, it swamps the impact. (I settled\n>> on: SELECT 1 as v \\gset xxx) for no output\n>> Second, the variability of running even a constant script is mind-blowing.\n>> Third, I've limited the output... I built this in layers (init.sql\n>> initializes the psql variables I use), run_100.sql runs\n>> another file (\\i tst_2000.sql) 100 times. Resulting in 200k selects.\n>>\n>\n> This is the very worst case.\n>\n> But nobody will run from psql 200K selects - can you try little bit more\n> real but still synthetic test case?\n>\n> create table foo(a int);\n> begin\n> insert into foo values(1);\n> ...\n> insert into foo values(200000);\n> commit;\n>\n\n*Without timing:*\npostgres=# \\i ins.sql\nElapsed Time: 29.518647 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 24.973943 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 21.916432 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 25.440978 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 24.848986 (seconds)\n\n-- Because that was slower than expected, I exited, and tried again...\nGetting really different results\npostgres=# \\i ins.sql\nElapsed Time: 17.763167 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 19.210436 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 19.903553 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 21.687750 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 19.046642 (seconds)\n\n\n\n*With timing:*\n\\i ins.sql\nElapsed 
Time: 20.479442 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 21.493303 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 22.732409 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 20.246637 (seconds)\npostgres=# \\i ins.sql\nElapsed Time: 20.493607 (seconds)\n\nAgain, it's really hard to measure the difference as the impact, again, is\na bit below the variance.\nIn this case, I could see about a 1s - 2s (max) difference in total time.\nfor 200k statements.\nRun 5 times (for 1 million).\n\nIt's a little worse than noise. But if I used the first run, the timing\nversion would have seemed faster.\n\nI think this is sufficiently fast, and the patch simplifies the code. We\nend up only checking \"if (timing)\"\nin the few places that we print the timing...\n\nAnything else to provide?\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>>\n>> Executive Summary: 1,000,000 statements executed, consumes ~2 - 2.5\n>> seconds of extra time (Total)\n>>\n>> So, the per statement cost is: 2.5s / 1,000,000 = 0.000,0025 s per\n>> statement\n>> Roughly: 2.5us\n>>\n>> Unfortunately, my test lines look like this:\n>> Without Timing\n>> done 0.198215 (500) *total *98.862548 *min* 0.167614 *avg*\n>> 0.19772509600000000000 *max *0.290659\n>>\n>> With Timing\n>> done 0.191583 (500) *total* 100.729868 *min *0.163280 *avg *0.20145973600000000000\n>> *max *0.275787\n>>\n>> Notice that the With Timing had a lower min, and a lower max. But a\n>> higher average.\n>> The distance between min - avg AND min - max, is big (those are for\n>> 1,000 selects each)\n>>\n>> Are these numbers at the \"So What\" Level?\n>>\n>> While testing, I got the distinct impression that I am measuring\n>> something that changes, or that the\n>> variance in the system itself really swamps this on a per statement\n>> basis. 
It's only impact is felt\n>> on millions of PSQL queries, and that's a couple of seconds...\n>>\n>> Curious what others think before I take this any further.\n>>\n>> regards, Kirk\n>>\n>>>\n>>> Thanks!\n>>>\n>>>>\n>>>> [1]:\n>>>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L278\n>>>> [2]:\n>>>> https://github.com/gurjeet/home/blob/08f1051fb854f4fc8fbc4f1326f393ed507a55ce/.bashrc#L262\n>>>>\n>>>> Best regards,\n>>>> Gurjeet\n>>>> http://Gurje.et\n>>>>\n>>>",
"msg_date": "Mon, 27 Feb 2023 17:26:01 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: :SQL_EXEC_TIME (like :ROW_COUNT) Variable (psql)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 2:05 PM Kirk Wolak <wolakk@gmail.com> wrote:\n\n> On Thu, Feb 23, 2023 at 9:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> > On 23/02/2023 13:20, Peter Eisentraut wrote:\n>> >> If you don't have \\timing turned on before the query starts, psql won't\n>> >> record what the time was before the query, so you can't compute the run\n>> >> time afterwards. This kind of feature would only work if you always\n>> >> take the start time, even if \\timing is turned off.\n>>\n>> > Correct. That seems acceptable though? gettimeofday() can be slow on\n>> > some platforms, but I doubt it's *that* slow, that we couldn't call it\n>> > two times per query.\n>>\n>> Yeah, you'd need to capture both the start and stop times even if\n>> \\timing isn't on, in case you get asked later. But the backend is\n>> going to call gettimeofday at least once per query, likely more\n>> depending on what features you use. And there are inherently\n>> multiple kernel calls involved in sending a query and receiving\n>> a response. I tend to agree with Heikki that this overhead would\n>> be unnoticeable. (Of course, some investigation proving that\n>> wouldn't be unwarranted.)\n>>\n>> regards, tom lane\n>>\n>\n> Note, for this above feature, I was thinking we have a ROW_COUNT variable\n> I use \\set to see.\n> The simplest way to add this is maybe a set variable: EXEC_TIME\n> And it's set when ROW_COUNT gets set.\n> +1\n>\n> ==\n> Now, since this opened a lively discussion, I am officially submitting my\n> first patch.\n> This includes the small change to prompt.c and the documentation. I had\n> help from Andrey Borodin,\n> and Pavel Stehule, who have supported me in how to propose, and use\n> gitlab, etc.\n>\n> We are programmers... It's literally our job to sharpen our tools. 
And\n> PSQL is one of my most used.\n> A small frustration, felt regularly was the motive.\n>\n> Regards, Kirk\n> PS: If I am supposed to edit the subject to say there is a patch here, I\n> did not know\n> PPS: I appreciate ANY and ALL feedback... This is how we learn!\n>\n\nPatch Posted with one edit, for line editings (Thanks Jim!)",
"msg_date": "Tue, 28 Feb 2023 19:47:15 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On 01.03.23 01:47, Kirk Wolak wrote:\n> Patch Posted with one edit, for line editings (Thanks Jim!)\n\nThe patch didn't pass the SanityCheck:\n\nhttps://cirrus-ci.com/task/5445242183221248?logs=build#L1337\n\nmissing a header perhaps?\n\n#include \"time.h\"\n\nBest, Jim",
"msg_date": "Wed, 1 Mar 2023 10:29:06 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, 22 Feb 2023 at 13:38, Kirk Wolak <wolakk@gmail.com> wrote:\n>\n> I already requested ONLY the HH24 format. 8 characters of output. no options. It's a waste of time.\n> After all these years, sqlplus still has only one setting (show it, or not). I am asking the same here.\n> And I will gladly defend not changing it! Ever!\n\nYeah, well, it's kind of beside the point that you're satisfied with\nthis one format. We tend to think about what all users would expect\nand what a complete feature would look like.\n\nI actually tend to think this would be a nice feature. It's telling\nthat log files and other tracing tools tend to produce exactly this\ntype of output with every line prefixed with either a relative or\nabsolute timestamp.\n\nI'm not sure if the *prompt* is a sensible place for it though. The\nplace it seems like it would be most useful is reading the output of\nscript executions where there would be no prompts. Perhaps it's the\ncommand tags and \\echo statements that should be timestamped.\n\nAnd I think experience shows that there are three reasonable formats\nfor dates, the default LC_TIME format, ISO8601, and a relative\n\"seconds (with milliseconds) since starting\". I think having a feature\nthat doesn't support those three would feel incomplete and eventually\nneed to be finished.\n\n-- \ngreg\n\n\n",
"msg_date": "Sat, 8 Apr 2023 21:24:46 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> I'm not sure if the *prompt* is a sensible place for it though. The\n> place it seems like it would be most useful is reading the output of\n> script executions where there would be no prompts. Perhaps it's the\n> command tags and \\echo statements that should be timestamped.\n\nHmm, that is an interesting idea. I kind of like it, not least because\nit eliminates most of the tension between wanting a complete timestamp\nand wanting a short prompt. Command tags are short enough that there's\nplenty of room.\n\n> And I think experience shows that there are three reasonable formats\n> for dates, the default LC_TIME format, ISO8601, and a relative\n> \"seconds (with milliseconds) since starting\". I think having a feature\n> that doesn't support those three would feel incomplete and eventually\n> need to be finished.\n\nYeah, I don't believe that one timestamp format is going to satisfy\neveryone. But that was especially true when trying to wedge it\ninto the prompt, where the need for brevity adds more constraints.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 21:54:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "ne 9. 4. 2023 v 3:54 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Greg Stark <stark@mit.edu> writes:\n> > I'm not sure if the *prompt* is a sensible place for it though. The\n> > place it seems like it would be most useful is reading the output of\n> > script executions where there would be no prompts. Perhaps it's the\n> > command tags and \\echo statements that should be timestamped.\n>\n> Hmm, that is an interesting idea. I kind of like it, not least because\n> it eliminates most of the tension between wanting a complete timestamp\n> and wanting a short prompt. Command tags are short enough that there's\n> plenty of room.\n>\n\nI don't agree so there is a common request for a short prompt. Usually I\nuse four terminals on screen, and still my terminal has a width of 124\ncharacters (and I use relatively small display of my Lenovo T520). Last\nyears I use prompt like:\n\n(2023-04-09 06:08:30) postgres=# select 1;\n┌──────────┐\n│ ?column? │\n╞══════════╡\n│        1 │\n└──────────┘\n(1 row)\n\nand it is working. Nice thing when I paste the timestamp in examples. I\nhave not any problems with prompt width\n\nRegards\n\nPavel",
"msg_date": "Sun, 9 Apr 2023 06:16:58 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "> On 1 Mar 2023, at 10:29, Jim Jones <jim.jones@uni-muenster.de> wrote:\n> \n> On 01.03.23 01:47, Kirk Wolak wrote:\n>> Patch Posted with one edit, for line editings (Thanks Jim!)\n> The patch didn't pass the SanityCheck:\n> \n> https://cirrus-ci.com/task/5445242183221248?logs=build#L1337\n> \n> missing a header perhaps?\n> \n> #include \"time.h\"\n\nThis patch spent the March commitfest not building and still doesn't build, so\nI'm marking this returned with feedback. Please feel free to resubmit to the\nnext commitfest if there is renewed interest in the patch.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 18:18:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
}
] |
[
{
"msg_contents": "I attached a simple patch that allows meson to find ICU in a non-\nstandard location if if you specify -Dextra_lib_dirs and\n-Dextra_include_dirs.\n\nI'm not sure it's the right thing to do though. One downside is that it\ndoesn't output the version that it finds, it only outputs \"YES\".",
"msg_date": "Wed, 22 Feb 2023 10:26:23 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "Hi,\n\nThanks for the patch.\n\nOn Wed, 22 Feb 2023 at 21:26, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> I'm not sure it's the right thing to do though. One downside is that it\n> doesn't output the version that it finds, it only outputs \"YES\".\n\n- icu = dependency('icu-uc', required: icuopt.enabled())\n- icu_i18n = dependency('icu-i18n', required: icuopt.enabled())\n\nI think you can do dependency checks with 'required: false' first and\nif they weren't found by dependency checks; then you can do\ncc.find_library() checks. This also solves only the outputting \"YES\"\nproblem if they were found by dependency checks.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Fri, 24 Feb 2023 13:43:26 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 10:26:23 -0800, Jeff Davis wrote:\n> I attached a simple patch that allows meson to find ICU in a non-\n> standard location if if you specify -Dextra_lib_dirs and\n> -Dextra_include_dirs.\n\nIf you tell meson where to find the pkg-config file in those directories it'd\nalso work. -Dpkg_config_path=...\n\nDoes that suffice?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 09:57:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "On Sun, 2023-02-26 at 09:57 -0800, Andres Freund wrote:\n> If you tell meson where to find the pkg-config file in those\n> directories it'd\n> also work. -Dpkg_config_path=...\n\nSetup is able to find it, which is good, but it seems like it's not\nadding it to RPATH so it's not working.\n\nI think we need some doc updates to clarify which features are affected\nby -Dextra_lib_dirs/-Dpkg_config_path.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 26 Feb 2023 19:36:17 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 7:36 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sun, 2023-02-26 at 09:57 -0800, Andres Freund wrote:\n> > If you tell meson where to find the pkg-config file in those\n> > directories it'd\n> > also work. -Dpkg_config_path=...\n>\n> Setup is able to find it, which is good, but it seems like it's not\n> adding it to RPATH so it's not working.\n\nFor my custom OpenSSL setups using -Dpkg_config_path, meson initially\nadds the correct RPATH during build, then accidentally(?) strips it\nduring the `ninja install` step. This has been complained about [1],\nand it seems like maybe they intended to fix it back in 0.55, but I'm\nnot convinced they did. :)\n\nI work around it by manually setting -Dextra_lib_dirs. I just tried\ndoing that with ICU 72, and it worked without a patch. Hopefully that\nhelps some?\n\n--Jacob\n\n[1] https://github.com/mesonbuild/meson/issues/6541\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:43:09 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "On Wed, 2023-03-01 at 11:43 -0800, Jacob Champion wrote:\n> I work around it by manually setting -Dextra_lib_dirs. I just tried\n> doing that with ICU 72, and it worked without a patch. Hopefully that\n> helps some?\n\nYes, works, thank you.\n\nObviously we'd like a little better solution so that others don't get\nconfused, but it's not really a problem for me any more.\n\nAlso there's the issue that libxml2.so pulls in the system's ICU\nregardless. I don't think that causes a major problem, but I thought\nI'd mention it.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 01 Mar 2023 12:30:40 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "On 01.03.23 21:30, Jeff Davis wrote:\n> On Wed, 2023-03-01 at 11:43 -0800, Jacob Champion wrote:\n>> I work around it by manually setting -Dextra_lib_dirs. I just tried\n>> doing that with ICU 72, and it worked without a patch. Hopefully that\n>> helps some?\n> \n> Yes, works, thank you.\n> \n> Obviously we'd like a little better solution so that others don't get\n> confused, but it's not really a problem for me any more.\n\nSo should we withdraw the patch from the commit fest?\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:30:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
},
{
"msg_contents": "On Wed, 2023-03-08 at 17:30 +0100, Peter Eisentraut wrote:\n> So should we withdraw the patch from the commit fest?\n\nWithdrawn. If someone else is interested we can still pursue some\nimprovements.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 08 Mar 2023 09:45:55 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: allow meson to find ICU in non-standard localtion"
}
] |
[
{
"msg_contents": "Here's a progress report on adapting the buildfarm client to meson\n\nThere is a development branch where I'm working on the changes. They can \nbe seen here:\n\n<https://github.com/PGBuildFarm/client-code/compare/main...dev/meson>\n\nOn my Linux box (Fedora 37, where crake runs) I can get a complete run. \nThere is work to do to make sure we pick up the right log files, and \nmaybe adjust a module or two. I have adopted a design where instead of \ntrying to know a lot about the testing regime the client needs to know a \nlot less. Instead, it gets meson to tell it the set of tests. I will \nprobably work on enabling some sort of filter, but I think this makes \nthings more future-proof. I have stuck with the design of making testing \nfairly fine-grained, so each suite runs separately.\n\nOn a Windows instance, fairly similar to what's running drongo, I can \nget a successful build with meson+VS2019, but I'm getting an error in \nthe regression tests, which don't like setting lc_time to 'de_DE'. Not \nsure what's going on there.\n\nmeson apparently wants touch and cp installed, although I can't see why \nat first glance. For Windows I just copied them into the path from an \nmsys2 installation.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 22 Feb 2023 18:23:44 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "buildfarm + meson"
},
{
"msg_contents": "On Wed, Feb 22, 2023 at 06:23:44PM -0500, Andrew Dunstan wrote:\n> On my Linux box (Fedora 37, where crake runs) I can get a complete run.\n> There is work to do to make sure we pick up the right log files, and maybe\n> adjust a module or two. I have adopted a design where instead of trying to\n> know a lot about the testing regime the client needs to know a lot less.\n> Instead, it gets meson to tell it the set of tests. I will probably work on\n> enabling some sort of filter, but I think this makes things more\n> future-proof. I have stuck with the design of making testing fairly\n> fine-grained, so each suite runs separately.\n\nNice!\n\n> On a Windows instance, fairly similar to what's running drongo, I can get a\n> successful build with meson+VS2019, but I'm getting an error in the\n> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n> what's going on there.\n\nWhat's the regression issue? Some text-field ordering that ought to\nbe enforced with a C collation?\n--\nMichael",
"msg_date": "Thu, 23 Feb 2023 09:23:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 18:23:44 -0500, Andrew Dunstan wrote:\n> Here's a progress report on adapting the buildfarm client to meson\n> \n> There is a development branch where I'm working on the changes. They can be\n> seen here:\n> \n> <https://github.com/PGBuildFarm/client-code/compare/main...dev/meson>\n> \n> On my Linux box (Fedora 37, where crake runs) I can get a complete run.\n\nNice!\n\n\n> There is work to do to make sure we pick up the right log files, and maybe\n> adjust a module or two. I have adopted a design where instead of trying to\n> know a lot about the testing regime the client needs to know a lot less.\n> Instead, it gets meson to tell it the set of tests. I will probably work on\n> enabling some sort of filter, but I think this makes things more\n> future-proof. I have stuck with the design of making testing fairly\n> fine-grained, so each suite runs separately.\n\nI don't understand why you'd want to run each suite separately. Serially\nexecuting the test takes way longer than doing so in parallel. Why would we\nwant to enforce that?\n\nParticularly because with meson the tests log files and the failed tests can\ndirectly be correlated? And it should be easy to figure out which log files\nneed to be kept, you can just skip the directories in testrun/ that contain\ntest.success.\n\n\n> On a Windows instance, fairly similar to what's running drongo, I can get a\n> successful build with meson+VS2019, but I'm getting an error in the\n> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n> what's going on there.\n\nHuh, that's odd.\n\n\n> meson apparently wants touch and cp installed, although I can't see why at\n> first glance. For Windows I just copied them into the path from an msys2\n> installation.\n\nThose should probably be fixed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 17:20:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-22 We 19:23, Michael Paquier wrote:\n>\n>> On a Windows instance, fairly similar to what's running drongo, I can get a\n>> successful build with meson+VS2019, but I'm getting an error in the\n>> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n>> what's going on there.\n> What's the regression issue?  Some text-field ordering that ought to\n> be enforced with a C collation?\n\n\nHere's the diff\n\n\ndiff -w -U3 C:/prog/bf/buildroot/HEAD/pgsql/src/test/regress/expected/collate.windows.win1252.out C:/prog/bf/buildroot/HEAD/pgsql.build/testrun/regress/regress/results/collate.windows.win1252.out\n--- C:/prog/bf/buildroot/HEAD/pgsql/src/test/regress/expected/collate.windows.win1252.out 2023-02-22 16:32:03.762370300 +0000\n+++ C:/prog/bf/buildroot/HEAD/pgsql.build/testrun/regress/regress/results/collate.windows.win1252.out 2023-02-22 22:54:59.281395200 +0000\n@@ -363,16 +363,17 @@\n \n -- to_char\n SET lc_time TO 'de_DE';\n+ERROR: invalid value for parameter \"lc_time\": \"de_DE\"\n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY');\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n SELECT to_char(date '2010-03-01', 'DD TMMON YYYY' COLLATE \"de_DE\");\n to_char\n -------------\n- 01 MRZ 2010\n+ 01 MAR 2010\n (1 row)\n \n -- to_date\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Thu, 23 Feb 2023 05:37:58 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-22 We 20:20, Andres Freund wrote:\n>\n>> There is work to do to make sure we pick up the right log files, and maybe\n>> adjust a module or two. I have adopted a design where instead of trying to\n>> know a lot about the testing regime the client needs to know a lot less.\n>> Instead, it gets meson to tell it the set of tests. I will probably work on\n>> enabling some sort of filter, but I think this makes things more\n>> future-proof. I have stuck with the design of making testing fairly\n>> fine-grained, so each suite runs separately.\n> I don't understand why you'd want to run each suite separately. Serially\n> executing the test takes way longer than doing so in parallel. Why would we\n> want to enforce that?\n>\n> Particularly because with meson the tests log files and the failed tests can\n> directly be correlated? And it should be easy to figure out which log files\n> need to be kept, you can just skip the directories in testrun/ that contain\n> test.success.\n>\n\nWe can revisit that later. For now I'm more concerned with getting a \nworking setup. The requirements of the buildfarm are a bit different \nfrom those of a developer, though. Running things in parallel can make \nthings faster, but that can also increase the compute load. Also, \nrunning things serially makes it easier to report a failure stage that \npinpoints the test that encountered an issue. But like I say we can come \nback to this.\n\n\n>> On a Windows instance, fairly similar to what's running drongo, I can get a\n>> successful build with meson+VS2019, but I'm getting an error in the\n>> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n>> what's going on there.\n> Huh, that's odd.\n\n\nSee my reply to Michael for details\n\n\n>\n>\n>> meson apparently wants touch and cp installed, although I can't see why at\n>> first glance. For Windows I just copied them into the path from an msys2\n>> installation.\n> Those should probably be fixed.\n>\n\nYeah. For touch I think we can probably just get rid of this line in the \nroot meson.build:\n\ntouch = find_program('touch', native: true)\n\nFor cp there doesn't seem to be a formal requirement, but there is a \nrecipe in src/common/unicode/meson.build that uses it, maybe that's what \ncaused the failure. On Windows/msvc we could just use copy instead, I think.\n\nI haven't experimented with any of this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 23 Feb 2023 06:27:23 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-23 06:27:23 -0500, Andrew Dunstan wrote:\n> \n> On 2023-02-22 We 20:20, Andres Freund wrote:\n> > \n> > > There is work to do to make sure we pick up the right log files, and maybe\n> > > adjust a module or two. I have adopted a design where instead of trying to\n> > > know a lot about the testing regime the client needs to know a lot less.\n> > > Instead, it gets meson to tell it the set of tests. I will probably work on\n> > > enabling some sort of filter, but I think this makes things more\n> > > future-proof. I have stuck with the design of making testing fairly\n> > > fine-grained, so each suite runs separately.\n> > I don't understand why you'd want to run each suite separately. Serially\n> > executing the test takes way longer than doing so in parallel. Why would we\n> > want to enforce that?\n> > \n> > Particularly because with meson the tests log files and the failed tests can\n> > directly be correlated? And it should be easy to figure out which log files\n> > need to be kept, you can just skip the directories in testrun/ that contain\n> > test.success.\n> > \n> \n> We can revisit that later. For now I'm more concerned with getting a working\n> setup.\n\nMy fear is that this ends up being entrenched in the design and hard to change\nlater.\n\n\n> The requirements of the buildfarm are a bit different from those of a\n> developer, though. Running things in parallel can make things faster, but\n> that can also increase the compute load.\n\nSure, I'm not advocating to using a [high] concurrency by default.\n\n\n> Also, running things serially makes it easier to report a failure stage that\n> pinpoints the test that encountered an issue.\n\nYou're relying on running tests in a specific order. 
Instead you can also just\nrun tests in parallel and check test status in order and report the first\nfailed test in that order.\n\n\n> But like I say we can come\n> back to this.\n\n> \n> > > On a Windows instance, fairly similar to what's running drongo, I can get a\n> > > successful build with meson+VS2019, but I'm getting an error in the\n> > > regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n> > > what's going on there.\n> > Huh, that's odd.\n> \n> \n> See my reply to Michael for details\n\nI suspect the issue might be related to this:\n\n+ local %ENV = (PATH => $ENV{PATH}, PGUSER => $ENV{PGUSER});\n+ @makeout=run_log(\"meson test --logbase checklog --print-errorlogs --no-rebuild -C $pgsql --suite setup --suite regress\");\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Feb 2023 07:58:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-23 Th 10:58, Andres Freund wrote:\n>\n>>>> On a Windows instance, fairly similar to what's running drongo, I can get a\n>>>> successful build with meson+VS2019, but I'm getting an error in the\n>>>> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n>>>> what's going on there.\n>>> Huh, that's odd.\n>>\n>> See my reply to Michael for details\n> I suspect the issue might be related to this:\n>\n> + local %ENV = (PATH => $ENV{PATH}, PGUSER => $ENV{PGUSER});\n> + @makeout=run_log(\"meson test --logbase checklog --print-errorlogs --no-rebuild -C $pgsql --suite setup --suite regress\");\n>\n\nI commented out the 'local %ENV' line and still got the error. I also \ngot the same error running by hand.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 23 Feb 2023 16:12:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-23 Th 16:12, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-23 Th 10:58, Andres Freund wrote:\n>>\n>>>>> On a Windows instance, fairly similar to what's running drongo, I can get a\n>>>>> successful build with meson+VS2019, but I'm getting an error in the\n>>>>> regression tests, which don't like setting lc_time to 'de_DE'. Not sure\n>>>>> what's going on there.\n>>>> Huh, that's odd.\n>>> See my reply to Michael for details\n>> I suspect the issue might be related to this:\n>>\n>> + local %ENV = (PATH => $ENV{PATH}, PGUSER => $ENV{PGUSER});\n>> + @makeout=run_log(\"meson test --logbase checklog --print-errorlogs --no-rebuild -C $pgsql --suite setup --suite regress\");\n>>\n>\n> I commented out the 'local %ENV' line and still got the error. I also \n> got the same error running by hand.\n>\n\n\nOn drongo, this test isn't failing, and I think the reason is that it \nruns \"make NO_LOCALE=1 check\" so it never gets a database with win1252 \nencoding.\n\nI'm going to try adding a win1252 test to drongo's locales.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 24 Feb 2023 08:22:43 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 2:22 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On drongo, this test isn't failing, and I think the reason is that it runs\n> \"make NO_LOCALE=1 check\" so it never gets a database with win1252 encoding.\n>\n> I'm going to try adding a win1252 test to drongo's locales.\n>\n\nWhat seems to be failing is the setlocale() for 'de_DE'. I haven't been\nable to reproduce it locally, but I've seen something similar reported for\npython [1].\n\nAs a workaround, can you please test \"SET lc_time TO 'de-DE';\"?\n\n[1] https://bugs.python.org/issue36792\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 27 Feb 2023 14:11:34 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-02-23 Th 10:58, Andres Freund wrote:\n> On 2023-02-23 06:27:23 -0500, Andrew Dunstan wrote:\n>> On 2023-02-22 We 20:20, Andres Freund wrote:\n>>>> There is work to do to make sure we pick up the right log files, and maybe\n>>>> adjust a module or two. I have adopted a design where instead of trying to\n>>>> know a lot about the testing regime the client needs to know a lot less.\n>>>> Instead, it gets meson to tell it the set of tests. I will probably work on\n>>>> enabling some sort of filter, but I think this makes things more\n>>>> future-proof. I have stuck with the design of making testing fairly\n>>>> fine-grained, so each suite runs separately.\n>>> I don't understand why you'd want to run each suite separately. Serially\n>>> executing the test takes way longer than doing so in parallel. Why would we\n>>> want to enforce that?\n>>>\n>>> Particularly because with meson the tests log files and the failed tests can\n>>> directly be correlated? And it should be easy to figure out which log files\n>>> need to be kept, you can just skip the directories in testrun/ that contain\n>>> test.success.\n>>>\n>> We can revisit that later. For now I'm more concerned with getting a working\n>> setup.\n> My fear is that this ends up being entrenched in the design and hard to change\n> later.\n>\n>\n>> The requirements of the buildfarm are a bit different from those of a\n>> developer, though. Running things in parallel can make things faster, but\n>> that can also increase the compute load.\n> Sure, I'm not advocating to using a [high] concurrency by default.\n\n\nPerhaps the latest version will be more to your taste. This is now \nworking on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including \nTAP tests. I do get a whole lot of annoying messages like this:\n\nUnknown TAP version. The first line MUST be `TAP version <int>`. \nAssuming version 12.\n\nAnyway, I think this is ready for any brave soul who wants to take it \nfor a test run, not on a reporting animal just yet, though. To activate \nit you need the config to have 'using_meson => 1' and a meson_opts \nsection - see the sample file. You can get the dev/meson version at \n<https://github.com/PGBuildFarm/client-code/archive/refs/heads/dev/meson.zip>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 1 Mar 2023 16:21:32 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 16:21:32 -0500, Andrew Dunstan wrote:\n> Perhaps the latest version will be more to your taste.\n\nI'll check it out.\n\n\n> This is now working\n> on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n> I do get a whole lot of annoying messages like this:\n> \n> Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n> version 12.\n\nThe newest minor version has fixed that, it was a misunderstanding about /\nimprecision in the tap 14 specification.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 13:32:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-01 We 16:32, Andres Freund wrote:\n>> This is now working\n>> on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n>> I do get a whole lot of annoying messages like this:\n>>\n>> Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n>> version 12.\n> The newest minor version has fixed that, it was a misunderstanding about /\n> imprecision in the tap 14 specification.\n>\n\nUnfortunately, meson v 1.0.1 appears to be broken on Windows, I had to \ndowngrade back to 1.0.0.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 2 Mar 2023 17:00:47 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi\n\nOn 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n> \n> On 2023-03-01 We 16:32, Andres Freund wrote:\n> > > This is now working\n> > > on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n> > > I do get a whole lot of annoying messages like this:\n> > > \n> > > Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n> > > version 12.\n> > The newest minor version has fixed that, it was a misunderstanding about /\n> > imprecision in the tap 14 specification.\n> > \n> \n> Unfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\n> downgrade back to 1.0.0.\n\nIs it possible that you're using a PG checkout from a few days ago? A\nhack I used was invalidated by 1.0.1, but I fixed that already.\n\nCI is running with 1.0.1:\nhttps://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Mar 2023 14:06:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-02 Th 17:06, Andres Freund wrote:\n> Hi\n>\n> On 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n>> On 2023-03-01 We 16:32, Andres Freund wrote:\n>>>> This is now working\n>>>> on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n>>>> I do get a whole lot of annoying messages like this:\n>>>>\n>>>> Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n>>>> version 12.\n>>> The newest minor version has fixed that, it was a misunderstanding about /\n>>> imprecision in the tap 14 specification.\n>>>\n>> Unfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\n>> downgrade back to 1.0.0.\n> Is it possible that you're using a PG checkout from a few days ago? A\n> hack I used was invalidated by 1.0.1, but I fixed that already.\n>\n> CI is running with 1.0.1:\n> https://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n>\n\nNo, running against PG master tip. I'll get some details - it's not too \nhard to switch back and forth.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 2 Mar 2023 17:35:26 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 13:32:58 -0800, Andres Freund wrote:\n> On 2023-03-01 16:21:32 -0500, Andrew Dunstan wrote:\n> > Perhaps the latest version will be more to your taste.\n> \n> I'll check it out.\n\nA simple conversion from an existing config failed with:\nCan't use an undefined value as an ARRAY reference at /home/bf/src/pgbuildfarm-client-meson/PGBuild/Modules/TestICU.pm line 37.\n\nI disabled TestICU and was able to progress past that.\n\n...\npiculet-meson:HEAD [19:12:48] setting up db cluster (C)...\npiculet-meson:HEAD [19:12:48] starting db (C)...\npiculet-meson:HEAD [19:12:48] running installcheck (C)...\npiculet-meson:HEAD [19:12:57] restarting db (C)...\npiculet-meson:HEAD [19:12:59] running meson misc installchecks (C) ...\nBranch: HEAD\nStage delay_executionInstallCheck-C failed with status 1\n\n\nThe failures are like this:\n\n+ERROR: extension \"dummy_index_am\" is not available\n+DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n+HINT: The extension must first be installed on the system where PostgreSQL is running.\n\nI assume this is in an interaction with b6a0d469cae.\n\n\nI think we need a install-test-modules or such that installs into the normal\ndirectory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 11:37:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-07 Tu 14:37, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-01 13:32:58 -0800, Andres Freund wrote:\n>> On 2023-03-01 16:21:32 -0500, Andrew Dunstan wrote:\n>>> Perhaps the latest version will be more to your taste.\n>> I'll check it out.\n> A simple conversion from an existing config failed with:\n> Can't use an undefined value as an ARRAY reference at /home/bf/src/pgbuildfarm-client-meson/PGBuild/Modules/TestICU.pm line 37.\n>\n> I disabled TestICU and was able to progress past that.\n\n\nPushed a fix for that.\n\n\n>\n> ...\n> piculet-meson:HEAD [19:12:48] setting up db cluster (C)...\n> piculet-meson:HEAD [19:12:48] starting db (C)...\n> piculet-meson:HEAD [19:12:48] running installcheck (C)...\n> piculet-meson:HEAD [19:12:57] restarting db (C)...\n> piculet-meson:HEAD [19:12:59] running meson misc installchecks (C) ...\n> Branch: HEAD\n> Stage delay_executionInstallCheck-C failed with status 1\n>\n>\n> The failures are like this:\n>\n> +ERROR: extension \"dummy_index_am\" is not available\n> +DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n> +HINT: The extension must first be installed on the system where PostgreSQL is running.\n>\n> I assume this is in an interaction with b6a0d469cae.\n>\n>\n> I think we need a install-test-modules or such that installs into the normal\n> directory.\n>\n\nExactly.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 7 Mar 2023 15:47:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n> On 2023-03-07 Tu 14:37, Andres Freund wrote:\n> > The failures are like this:\n> > \n> > +ERROR: extension \"dummy_index_am\" is not available\n> > +DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n> > +HINT: The extension must first be installed on the system where PostgreSQL is running.\n> > \n> > I assume this is in an interaction with b6a0d469cae.\n> > \n> > \n> > I think we need a install-test-modules or such that installs into the normal\n> > directory.\n> > \n> \n> Exactly.\n\nHere's a prototype for that.\n\nIt adds an install-test-files target. Because we want to install into a normal\ndirectory, I removed the necessary munging of the target paths from\nmeson.build and moved it into install-test-files. I also added DESTDIR\nsupport, so that installing can redirect the directory if desired. That's used\nfor the tmp_install/ installation now.\n\nI didn't like the number of arguments necessary for install_test_files, so I\nchanged it to use\n\n--install target list of files\n\nwhich makes it easier to use for further directories, if/when we need them.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 7 Mar 2023 17:29:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-23 06:27:23 -0500, Andrew Dunstan wrote:\n> Yeah. For touch I think we can probably just get rid of this line in the\n> root meson.build:\n> \n> touch = find_program('touch', native: true)\n\nYep.\n\n> For cp there doesn't seem to be a formal requirement, but there is a recipe\n> in src/common/unicode/meson.build that uses it, maybe that's what caused the\n> failure. On Windows/msvc we could just use copy instead, I think.\n\nI don't know about using copy, it's very easy to get into trouble due to\ninterpreting forward slashes as options etc. I propose that for now we just\ndon't support update-unicode if cp isn't available - just as already not\navailable when wget isn't available.\n\nPlanning to apply something like the attached soon, unless somebody opposes\nthat plan.\n\n\nOther unix tools we have a hard requirement on right now:\n- sed - would be pretty easy to replace with something else\n- tar, gzip - just for tests\n\nI'm not sure it's worth working on not requiring those.\n\n\nThere's also flex, bison, perl, but those will stay a hard requirement for a\nwhile longer... :)\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 7 Mar 2023 18:26:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-07 18:26:21 -0800, Andres Freund wrote:\n> On 2023-02-23 06:27:23 -0500, Andrew Dunstan wrote:\n> > Yeah. For touch I think we can probably just get rid of this line in the\n> > root meson.build:\n> > \n> > touch = find_program('touch', native: true)\n> \n> Yep.\n> \n> > For cp there doesn't seem to be a formal requirement, but there is a recipe\n> > in src/common/unicode/meson.build that uses it, maybe that's what caused the\n> > failure. On Windows/msvc we could just use copy instead, I think.\n> \n> I don't know about using copy, it's very easy to get into trouble due to\n> interpreting forward slashes as options etc. I propose that for now we just\n> don't support update-unicode if cp isn't available - just as already not\n> available when wget isn't available.\n> \n> Planning to apply something like the attached soon, unless somebody opposes\n> that plan.\n\nDone.\n\n\n",
"msg_date": "Tue, 7 Mar 2023 19:56:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-07 Tu 20:29, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n>> On 2023-03-07 Tu 14:37, Andres Freund wrote:\n>>> The failures are like this:\n>>>\n>>> +ERROR: extension \"dummy_index_am\" is not available\n>>> +DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n>>> +HINT: The extension must first be installed on the system where PostgreSQL is running.\n>>>\n>>> I assume this is in an interaction with b6a0d469cae.\n>>>\n>>>\n>>> I think we need a install-test-modules or such that installs into the normal\n>>> directory.\n>>>\n>> Exactly.\n> Here's a prototype for that.\n>\n> It adds an install-test-files target, Because we want to install into a normal\n> directory, I removed the necessary munging of the target paths from\n> meson.build and moved it into install-test-files. I also added DESTDIR\n> support, so that installing can redirect the directory if desired. That's used\n> for the tmp_install/ installation now.\n>\n> I didn't like the number of arguments necessary for install_test_files, so I\n> changed it to use\n>\n> --install target list of files\n>\n> which makes it easier to use for further directories, if/when we need them.\n>\n\nSo if I understand this right, the way to use this would be something like:\n\n\n local $ENV{DESTDIR} = $installdir;\n\n run_log(\"meson compile -C $pgsql install-test-files\");\n\n\nIs that right? 
I did that but it didn't work :-(\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-07 Tu 20:29, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-07 Tu 14:37, Andres Freund wrote:\n\n\nThe failures are like this:\n\n+ERROR: extension \"dummy_index_am\" is not available\n+DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n+HINT: The extension must first be installed on the system where PostgreSQL is running.\n\nI assume this is in an interaction with b6a0d469cae.\n\n\nI think we need a install-test-modules or such that installs into the normal\ndirectory.\n\n\n\n\nExactly.\n\n\n\nHere's a prototype for that.\n\nIt adds an install-test-files target, Because we want to install into a normal\ndirectory, I removed the necessary munging of the target paths from\nmeson.build and moved it into install-test-files. I also added DESTDIR\nsupport, so that installing can redirect the directory if desired. That's used\nfor the tmp_install/ installation now.\n\nI didn't like the number of arguments necessary for install_test_files, so I\nchanged it to use\n\n--install target list of files\n\nwhich makes it easier to use for further directories, if/when we need them.\n\n\n\n\n\nSo if I understand this right, the way to use this would be\n something like:\n\n\n\n local $ENV{DESTDIR} = $installdir;\n run_log(\"meson compile -C $pgsql install-test-files\");\n\n\n\nIs that right? I did that but it didn't work :-(\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 8 Mar 2023 08:57:44 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn Wed, 8 Mar 2023 at 16:57, Andrew Dunstan <andrew@dunslane.net> wrote:\n> So if I understand this right, the way to use this would be something like:\n>\n>\n> local $ENV{DESTDIR} = $installdir;\n>\n> run_log(\"meson compile -C $pgsql install-test-files\");\n>\n>\n> Is that right? I did that but it didn't work :-(\n\nI think you shouldn't set DESTDIR to the $installdir. If DESTDIR is\nset, it joins $DESTDIR and $install_dir(-Dprefix). So, when you run\n\nlocal $ENV{DESTDIR} = $installdir;\nrun_log(\"meson compile -C $pgsql install-test-files\");\n\nit installs these files to the '$install_dir/$install_dir'.\n\nCould you try only running 'run_log(\"meson compile -C $pgsql\ninstall-test-files\");' without setting DESTDIR, this could work.\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:40:25 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-08 We 08:57, Andrew Dunstan wrote:\n>\n>\n> On 2023-03-07 Tu 20:29, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n>>> On 2023-03-07 Tu 14:37, Andres Freund wrote:\n>>>> The failures are like this:\n>>>>\n>>>> +ERROR: extension \"dummy_index_am\" is not available\n>>>> +DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n>>>> +HINT: The extension must first be installed on the system where PostgreSQL is running.\n>>>>\n>>>> I assume this is in an interaction with b6a0d469cae.\n>>>>\n>>>>\n>>>> I think we need a install-test-modules or such that installs into the normal\n>>>> directory.\n>>>>\n>>> Exactly.\n>> Here's a prototype for that.\n>>\n>> It adds an install-test-files target, Because we want to install into a normal\n>> directory, I removed the necessary munging of the target paths from\n>> meson.build and moved it into install-test-files. I also added DESTDIR\n>> support, so that installing can redirect the directory if desired. That's used\n>> for the tmp_install/ installation now.\n>>\n>> I didn't like the number of arguments necessary for install_test_files, so I\n>> changed it to use\n>>\n>> --install target list of files\n>>\n>> which makes it easier to use for further directories, if/when we need them.\n>>\n>\n> So if I understand this right, the way to use this would be something \n> like:\n>\n>\n> local $ENV{DESTDIR} = $installdir;\n>\n> run_log(\"meson compile -C $pgsql install-test-files\");\n>\n>\n> Is that right? 
I did that but it didn't work :-(\n>\n>\n>\n\nOK, tried without the `local` line and it worked, so let's push this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-08 We 08:57, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-03-07 Tu 20:29, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-07 Tu 14:37, Andres Freund wrote:\n\n\nThe failures are like this:\n\n+ERROR: extension \"dummy_index_am\" is not available\n+DETAIL: Could not open extension control file \"/home/bf/bf-build/piculet-meson/HEAD/inst/share/postgresql/extension/dummy_index_am.control\": No such file or directory.\n+HINT: The extension must first be installed on the system where PostgreSQL is running.\n\nI assume this is in an interaction with b6a0d469cae.\n\n\nI think we need a install-test-modules or such that installs into the normal\ndirectory.\n\n\n\nExactly.\n\n\nHere's a prototype for that.\n\nIt adds an install-test-files target, Because we want to install into a normal\ndirectory, I removed the necessary munging of the target paths from\nmeson.build and moved it into install-test-files. I also added DESTDIR\nsupport, so that installing can redirect the directory if desired. That's used\nfor the tmp_install/ installation now.\n\nI didn't like the number of arguments necessary for install_test_files, so I\nchanged it to use\n\n--install target list of files\n\nwhich makes it easier to use for further directories, if/when we need them.\n\n\n\n\n\nSo if I understand this right, the way to use this would be\n something like:\n\n\n\n local $ENV{DESTDIR} = $installdir;\n run_log(\"meson compile -C $pgsql install-test-files\");\n\n\n\nIs that right? I did that but it didn't work :-(\n\n\n\n\n\n\nOK, tried without the `local` line and it worked, so let's push\n this.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 8 Mar 2023 09:41:57 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-08 09:41:57 -0500, Andrew Dunstan wrote:\n> On 2023-03-08 We 08:57, Andrew Dunstan wrote:\n> > On 2023-03-07 Tu 20:29, Andres Freund wrote:\n> > > On 2023-03-07 15:47:54 -0500, Andrew Dunstan wrote:\n> > > Here's a prototype for that.\n> > > \n> > > It adds an install-test-files target, Because we want to install into a normal\n> > > directory, I removed the necessary munging of the target paths from\n> > > meson.build and moved it into install-test-files. I also added DESTDIR\n> > > support, so that installing can redirect the directory if desired. That's used\n> > > for the tmp_install/ installation now.\n> > > \n> > > I didn't like the number of arguments necessary for install_test_files, so I\n> > > changed it to use\n> > > \n> > > --install target list of files\n> > > \n> > > which makes it easier to use for further directories, if/when we need them.\n> > > \n> > \n> > So if I understand this right, the way to use this would be something\n> > like:\n> > \n> > \n> >     local $ENV{DESTDIR} = $installdir;\n> > \n> >     run_log(\"meson compile -C $pgsql install-test-files\");\n> > \n> > \n> > Is that right? I did that but it didn't work :-(\n\nBilal's explanation of why that doesn't work was right. You'd only want to use\nDESTDIR to install into somewhere other than the real install path.\n\n\n> OK, tried without the `local` line and it worked, so let's push this.\n\nDone. It's possible that we might need some more refinement here, but I thought\nit important to unblock the buildfarm work...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 8 Mar 2023 11:21:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 17:35:26 -0500, Andrew Dunstan wrote:\n> On 2023-03-02 Th 17:06, Andres Freund wrote:\n> > Hi\n> > \n> > On 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n> > > On 2023-03-01 We 16:32, Andres Freund wrote:\n> > > > > This is now working\n> > > > > on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n> > > > > I do get a whole lot of annoying messages like this:\n> > > > > \n> > > > > Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n> > > > > version 12.\n> > > > The newest minor version has fixed that, it was a misunderstanding about /\n> > > > imprecision in the tap 14 specification.\n> > > > \n> > > Unfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\n> > > downgrade back to 1.0.0.\n> > Is it possible that you're using a PG checkout from a few days ago? A\n> > hack I used was invalidated by 1.0.1, but I fixed that already.\n> > \n> > CI is running with 1.0.1:\n> > https://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n> > \n> \n> No, running against PG master tip. I'll get some details - it's not too hard\n> to switch back and forth.\n\nAny more details?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Mar 2023 11:22:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-08 We 14:22, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-02 17:35:26 -0500, Andrew Dunstan wrote:\n>> On 2023-03-02 Th 17:06, Andres Freund wrote:\n>>> Hi\n>>>\n>>> On 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n>>>> On 2023-03-01 We 16:32, Andres Freund wrote:\n>>>>>> This is now working\n>>>>>> on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n>>>>>> I do get a whole lot of annoying messages like this:\n>>>>>>\n>>>>>> Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n>>>>>> version 12.\n>>>>> The newest minor version has fixed that, it was a misunderstanding about /\n>>>>> imprecision in the tap 14 specification.\n>>>>>\n>>>> Unfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\n>>>> downgrade back to 1.0.0.\n>>> Is it possible that you're using a PG checkout from a few days ago? A\n>>> hack I used was invalidated by 1.0.1, but I fixed that already.\n>>>\n>>> CI is running with 1.0.1:\n>>> https://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n>>>\n>> No, running against PG master tip. I'll get some details - it's not too hard\n>> to switch back and forth.\n> Any more details?\n\n\nI was held up by difficulties even with meson 1.0.0 (the test modules \nstuff). Now I again have a clean build with meson 1.0.0 on Windows as a \nbaseline I will get back to trying meson 1.0.1.\n\n\ncheers\n\n\nandrew\n\n\n--Andrew Dunstan\n\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-08 We 14:22, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-02 17:35:26 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-02 Th 17:06, Andres Freund wrote:\n\n\nHi\n\nOn 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-01 We 16:32, Andres Freund wrote:\n\n\n\nThis is now working\non my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\nI do get a whole lot of annoying messages like this:\n\nUnknown TAP version. 
The first line MUST be `TAP version <int>`. Assuming\nversion 12.\n\n\nThe newest minor version has fixed that, it was a misunderstanding about /\nimprecision in the tap 14 specification.\n\n\n\nUnfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\ndowngrade back to 1.0.0.\n\n\nIs it possible that you're using a PG checkout from a few days ago? A\nhack I used was invalidated by 1.0.1, but I fixed that already.\n\nCI is running with 1.0.1:\nhttps://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n\n\n\n\nNo, running against PG master tip. I'll get some details - it's not too hard\nto switch back and forth.\n\n\n\nAny more details?\n\n\n\n\nI was held up by difficulties even with meson 1.0.0 (the test\n modules stuff). Now I again have a clean build with meson 1.0.0 on\n Windows as a baseline I will get back to trying meson 1.0.1.\n\n\ncheers\n\n\nandrew\n\n\n\n --Andrew Dunstan\n EDB: https://www.enterprisedb.com",
"msg_date": "Wed, 8 Mar 2023 17:23:53 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-08 We 17:23, Andrew Dunstan wrote:\n>\n>\n> On 2023-03-08 We 14:22, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-03-02 17:35:26 -0500, Andrew Dunstan wrote:\n>>> On 2023-03-02 Th 17:06, Andres Freund wrote:\n>>>> Hi\n>>>>\n>>>> On 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n>>>>> On 2023-03-01 We 16:32, Andres Freund wrote:\n>>>>>>> This is now working\n>>>>>>> on my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\n>>>>>>> I do get a whole lot of annoying messages like this:\n>>>>>>>\n>>>>>>> Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming\n>>>>>>> version 12.\n>>>>>> The newest minor version has fixed that, it was a misunderstanding about /\n>>>>>> imprecision in the tap 14 specification.\n>>>>>>\n>>>>> Unfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\n>>>>> downgrade back to 1.0.0.\n>>>> Is it possible that you're using a PG checkout from a few days ago? A\n>>>> hack I used was invalidated by 1.0.1, but I fixed that already.\n>>>>\n>>>> CI is running with 1.0.1:\n>>>> https://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n>>>>\n>>> No, running against PG master tip. I'll get some details - it's not too hard\n>>> to switch back and forth.\n>> Any more details?\n>\n>\n> I was held up by difficulties even with meson 1.0.0 (the test modules \n> stuff). Now I again have a clean build with meson 1.0.0 on Windows as \n> a baseline I will get back to trying meson 1.0.1.\n>\n>\n\nOK, I have now got a clean run using meson 1.0.1 / MSVC. Not sure what \nmade the difference. One change I did make was to stop using \"--backend \nvs\" and thus use the ninja backend even for MSVC. That proved necessary \nto run the new install-test-files target which failed miserably with \n\"--backend vs\". 
Not sure if we have it documented, but if not it should \nbe that you need to use the ninja backend on all platforms.\n\nAt this stage I think I'm prepared to turn this loose on a couple of my \nbuildfarm animals, and if nothing goes awry for the remainder of the \nmonth merge the dev/meson branch and push a new release.\n\nThere is still probably a little polishing to do, especially w.r.t. log \nfile artefacts.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-08 We 17:23, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-03-08 We 14:22, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-02 17:35:26 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-02 Th 17:06, Andres Freund wrote:\n\n\nHi\n\nOn 2023-03-02 17:00:47 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-01 We 16:32, Andres Freund wrote:\n\n\n\nThis is now working\non my MSVC test rig (WS2019, VS2019, Strawberry Perl), including TAP tests.\nI do get a whole lot of annoying messages like this:\n\nUnknown TAP version. The first line MUST be `TAP version <int>`. Assuming\nversion 12.\n\n\nThe newest minor version has fixed that, it was a misunderstanding about /\nimprecision in the tap 14 specification.\n\n\n\nUnfortunately, meson v 1.0.1 appears to be broken on Windows, I had to\ndowngrade back to 1.0.0.\n\n\nIs it possible that you're using a PG checkout from a few days ago? A\nhack I used was invalidated by 1.0.1, but I fixed that already.\n\nCI is running with 1.0.1:\nhttps://cirrus-ci.com/task/5806561726038016?logs=configure#L8\n\n\n\nNo, running against PG master tip. I'll get some details - it's not too hard\nto switch back and forth.\n\n\nAny more details?\n\n\n\n\nI was held up by difficulties even with meson 1.0.0 (the test\n modules stuff). Now I again have a clean build with meson 1.0.0\n on Windows as a baseline I will get back to trying meson 1.0.1.\n\n\n\n\n\nOK, I have now got a clean run using meson 1.0.1 / MSVC. Not sure\n what made the difference. 
One change I did make was to stop using\n \"--backend vs\" and thus use the ninja backend even for MSVC. That\n proved necessary to run the new install-test-files target which\n failed miserably with \"--backend vs\". Not sure if we have it\n documented, but if not it should be that you need to use the ninja\n backend on all platforms.\nAt this stage I think I'm prepared to turn this loose on a couple\n of my buildfarm animals, and if nothing goes awry for the\n remainder of the month merge the dev/meson branch and push a new\n release.\nThere is still probably a little polishing to do, especially\n w.r.t. log file artefacts.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 08:28:42 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n>\n>\n>\n> At this stage I think I'm prepared to turn this loose on a couple of \n> my buildfarm animals, and if nothing goes awry for the remainder of \n> the month merge the dev/meson branch and push a new release.\n>\n> There is still probably a little polishing to do, especially w.r.t. \n> log file artefacts.\n>\n>\n>\n\n\nA few things I've found:\n\n. We don't appear to have an equivalent of the headerscheck and \ncpluspluscheck GNUmakefile targets\n\n. I don't know how to build other docs targets (e.g. postgres-US.pdf)\n\n. There appears to be some mismatch in database names (e.g. \nregression_dblink vs contrib_regression_dblink). That's going to cause \nsome issues with the module that adjusts things for cross version upgrade.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-09 Th 08:28, Andrew Dunstan\n wrote:\n\n\n\n\n\n\nAt this stage I think I'm prepared to turn this loose on a\n couple of my buildfarm animals, and if nothing goes awry for the\n remainder of the month merge the dev/meson branch and push a new\n release.\nThere is still probably a little polishing to do, especially\n w.r.t. log file artefacts.\n\n\n\n\n\n\n\n\n\nA few things I've found:\n. We don't appear to have an equivalent of the headerscheck and\n cpluspluscheck GNUmakefile targets\n. I don't know how to build other docs targets (e.g.\n postgres-US.pdf)\n. There appears to be some mismatch in database names (e.g.\n regression_dblink vs contrib_regression_dblink). That's going to\n cause some issues with the module that adjusts things for cross\n version upgrade.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 14:47:36 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n> On 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n> > At this stage I think I'm prepared to turn this loose on a couple of my\n> > buildfarm animals, and if nothing goes awry for the remainder of the\n> > month merge the dev/meson branch and push a new release.\n\nCool!\n\n\n> > There is still probably a little polishing to do, especially w.r.t. log\n> > file artefacts.\n\n> A few things I've found:\n> \n> . We don't appear to have an equivalent of the headerscheck and\n> cpluspluscheck GNUmakefile targets\n\nYes. I have a pending patch for it, but haven't yet cleaned it up\nsufficiently. The way headerscheck/cpluspluscheck query information from\nMakefile.global is somewhat nasty.\n\n\n> . I don't know how to build other docs targets (e.g. postgres-US.pdf)\n\nThere's an 'alldocs' target, or you can do ninja doc/src/sgml/postgres-US.pdf\n\n\n> . There appears to be some mismatch in database names (e.g.\n> regression_dblink vs contrib_regression_dblink). That's going to cause some\n> issues with the module that adjusts things for cross version upgrade.\n\nI guess we can try to do something about that, but the make situation is\noverly complicated. I don't really want to emulate having randomly differing\ndatabase names just because a test is in contrib/ rather than src/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Mar 2023 11:55:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n>> . There appears to be some mismatch in database names (e.g.\n>> regression_dblink vs contrib_regression_dblink). That's going to cause some\n>> issues with the module that adjusts things for cross version upgrade.\n\n> I guess we can try to do something about that, but the make situation is\n> overly complicated. I don't really want to emulate having randomly differing\n> database names just because a test is in contrib/ rather than src/.\n\nWe could talk about adjusting the behavior on the make side instead,\nperhaps, but something needs to be done there eventually.\n\nHaving said that, I'm not sure that the first meson-capable buildfarm\nversion needs to support cross-version-upgrade testing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 15:25:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-09 Th 15:25, Tom Lane wrote:\n> Andres Freund<andres@anarazel.de> writes:\n>> On 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n>>> . There appears to be some mismatch in database names (e.g.\n>>> regression_dblink vs contrib_regression_dblink). That's going to cause some\n>>> issues with the module that adjusts things for cross version upgrade.\n>> I guess we can try to do something about that, but the make situation is\n>> overly complicated. I don't really want to emulate having randomly differing\n>> database names just because a test is in contrib/ rather than src/.\n> We could talk about adjusting the behavior on the make side instead,\n> perhaps, but something needs to be done there eventually.\n>\n> Having said that, I'm not sure that the first meson-capable buildfarm\n> version needs to support cross-version-upgrade testing.\n>\n> \t\t\t\n\n\nWell, I want to store up as little future work as possible. This \nparticular issue won't be much of a problem for several months until we \nbranch the code, as we don't do database adjustments for a same version \nupgrade. At that stage I think a small modification to AdjustUpgrade.pm \nwill do the trick. We just need to remember to do it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-09 Th 15:25, Tom Lane wrote:\n\n\nAndres Freund <andres@anarazel.de> writes:\n\n\nOn 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n\n\n. There appears to be some mismatch in database names (e.g.\nregression_dblink vs contrib_regression_dblink). That's going to cause some\nissues with the module that adjusts things for cross version upgrade.\n\n\n\n\n\n\nI guess we can try to do something about that, but the make situation is\noverly complicated. 
I don't really want to emulate having randomly differing\ndatabase names just because a test is in contrib/ rather than src/.\n\n\n\nWe could talk about adjusting the behavior on the make side instead,\nperhaps, but something needs to be done there eventually.\n\nHaving said that, I'm not sure that the first meson-capable buildfarm\nversion needs to support cross-version-upgrade testing.\n\n\t\t\t\n\n\n\nWell, I want to store up as little future work as possible. This\n particular issue won't be much of a problem for several months\n until we branch the code, as we don't do database adjustments for\n a same version upgrade. At that stage I think a small modification\n to AdjustUpgrade.pm will do the trick. We just need to remember to\n do it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 17:53:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-09 Th 14:47, Andrew Dunstan wrote:\n>\n>\n> On 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n>>\n>>\n>>\n>> At this stage I think I'm prepared to turn this loose on a couple of \n>> my buildfarm animals, and if nothing goes awry for the remainder of \n>> the month merge the dev/meson branch and push a new release.\n>>\n>> There is still probably a little polishing to do, especially w.r.t. \n>> log file artefacts.\n>>\n>>\n>>\n>\n>\n> A few things I've found:\n>\n> . We don't appear to have an equivalent of the headerscheck and \n> cpluspluscheck GNUmakefile targets\n>\n> . I don't know how to build other docs targets (e.g. postgres-US.pdf)\n>\n> . There appears to be some mismatch in database names (e.g. \n> regression_dblink vs contrib_regression_dblink). That's going to cause \n> some issues with the module that adjusts things for cross version upgrade.\n>\n>\n>\n\nAnother thing: the test for uuid.h is too strict. On Fedora 36 the OSSP \nheader is in /usr/include, not /usr/include/ossp (I got around that for \nnow by symlinking it, but obviously that's a nasty hack we can't ask \npeople to do)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-09 Th 14:47, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-03-09 Th 08:28, Andrew\n Dunstan wrote:\n\n\n\n\n\n\nAt this stage I think I'm prepared to turn this loose on a\n couple of my buildfarm animals, and if nothing goes awry for\n the remainder of the month merge the dev/meson branch and push\n a new release.\nThere is still probably a little polishing to do, especially\n w.r.t. log file artefacts.\n\n\n\n\n\n\n\n\n\nA few things I've found:\n. We don't appear to have an equivalent of the headerscheck and\n cpluspluscheck GNUmakefile targets\n. I don't know how to build other docs targets (e.g.\n postgres-US.pdf)\n. There appears to be some mismatch in database names (e.g.\n regression_dblink vs contrib_regression_dblink). 
That's going to\n cause some issues with the module that adjusts things for cross\n version upgrade.\n\n\n\n\n\n\nAnother thing: the test for uuid.h is too strict. On Fedora 36\n the OSSP header is in /usr/include, not /usr/include/ossp (I got\n around that for now by symlinking it, but obviously that's a nasty\n hack we can't ask people to do)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 18:31:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 11:55:57 -0800, Andres Freund wrote:\n> On 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n> > On 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n> > > At this stage I think I'm prepared to turn this loose on a couple of my\n> > > buildfarm animals, and if nothing goes awry for the remainder of the\n> > > month merge the dev/meson branch and push a new release.\n> \n> Cool!\n\nI moved a few of my animals to it too, so far no problems.\n\nThe only other thing I noticed so far is that the status page doesn't yet know\nhow to generate the right \"flags\", but that's fairly minor...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:05:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n> Another thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\n> header is in /usr/include, not /usr/include/ossp (I got around that for now\n> by symlinking it, but obviously that's a nasty hack we can't ask people to\n> do)\n\nYea, that was just wrong. It happened to work on debian and a few other OSs,\nbut ossp's .pc puts whatever the right directory is into the include\npath. Pushed the fairly obvious fix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 11 Mar 2023 13:25:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-10 Fr 18:05, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-09 11:55:57 -0800, Andres Freund wrote:\n>> On 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n>>> On 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n>>>> At this stage I think I'm prepared to turn this loose on a couple of my\n>>>> buildfarm animals, and if nothing goes awry for the remainder of the\n>>>> month merge the dev/meson branch and push a new release.\n>> Cool!\n> I moved a few of my animals to it to, so far no problems.\n>\n> The only other thing I noticed so far is that the status page doesn't yet know\n> how to generate the right \"flags\", but that's fairly minor...\n>\n\nThe status page should be fixed now. Still a bit of work to do for the \nfailures page.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-10 Fr 18:05, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-09 11:55:57 -0800, Andres Freund wrote:\n\n\nOn 2023-03-09 14:47:36 -0500, Andrew Dunstan wrote:\n\n\nOn 2023-03-09 Th 08:28, Andrew Dunstan wrote:\n\n\nAt this stage I think I'm prepared to turn this loose on a couple of my\nbuildfarm animals, and if nothing goes awry for the remainder of the\nmonth merge the dev/meson branch and push a new release.\n\n\n\n\nCool!\n\n\n\nI moved a few of my animals to it to, so far no problems.\n\nThe only other thing I noticed so far is that the status page doesn't yet know\nhow to generate the right \"flags\", but that's fairly minor...\n\n\n\n\n\nThe status page should be fixed now. Still a bit of work to do\n for the failures page.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 13 Mar 2023 10:19:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-11 Sa 16:25, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n>> Another thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\n>> header is in /usr/include, not /usr/include/ossp (I got around that for now\n>> by symlinking it, but obviously that's a nasty hack we can't ask people to\n>> do)\n> Yea, that was just wrong. It happened to work on debian and a few other OSs,\n> but ossp's .pc puts whatever the right directory is into the include\n> path. Pushed the fairly obvious fix.\n\n\nAnother issue: building plpython appears impossible on Windows because \nit's finding meson's own python:\n\n\nProgram python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\nCould not find Python3 library 'C:\\\\Program \nFiles\\\\Meson\\\\libs\\\\python311.lib'\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-11 Sa 16:25, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n\n\nAnother thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\nheader is in /usr/include, not /usr/include/ossp (I got around that for now\nby symlinking it, but obviously that's a nasty hack we can't ask people to\ndo)\n\n\n\nYea, that was just wrong. It happened to work on debian and a few other OSs,\nbut ossp's .pc puts whatever the right directory is into the include\npath. Pushed the fairly obvious fix.\n\n\n\nAnother issue: building plpython appears impossible on Windows\n because it's finding meson's own python:\n\n\nProgram python3 found: YES (C:\\Program Files\\Meson\\meson.exe\n runpython)\n Could not find Python3 library 'C:\\\\Program\n Files\\\\Meson\\\\libs\\\\python311.lib'\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 18 Mar 2023 17:53:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-18 17:53:38 -0400, Andrew Dunstan wrote:\n> On 2023-03-11 Sa 16:25, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n> > > Another thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\n> > > header is in /usr/include, not /usr/include/ossp (I got around that for now\n> > > by symlinking it, but obviously that's a nasty hack we can't ask people to\n> > > do)\n> > Yea, that was just wrong. It happened to work on debian and a few other OSs,\n> > but ossp's .pc puts whatever the right directory is into the include\n> > path. Pushed the fairly obvious fix.\n> \n> \n> Another issue: building plpython appears impossible on Windows because it's\n> finding meson's own python:\n> \n> \n> Program python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\n> Could not find Python3 library 'C:\\\\Program\n> Files\\\\Meson\\\\libs\\\\python311.lib'\n\nAny more details - windows CI builds with python. What python do you want to\nuse and where is it installed?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 18 Mar 2023 16:00:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-18 Sa 19:00, Andres Freund wrote:\n> Hi,\n>\n> On 2023-03-18 17:53:38 -0400, Andrew Dunstan wrote:\n>> On 2023-03-11 Sa 16:25, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n>>>> Another thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\n>>>> header is in /usr/include, not /usr/include/ossp (I got around that for now\n>>>> by symlinking it, but obviously that's a nasty hack we can't ask people to\n>>>> do)\n>>> Yea, that was just wrong. It happened to work on debian and a few other OSs,\n>>> but ossp's .pc puts whatever the right directory is into the include\n>>> path. Pushed the fairly obvious fix.\n>>\n>> Another issue: building plpython appears impossible on Windows because it's\n>> finding meson's own python:\n>>\n>>\n>> Program python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\n>> Could not find Python3 library 'C:\\\\Program\n>> Files\\\\Meson\\\\libs\\\\python311.lib'\n> Any more details - windows CI builds with python. What python do you want to\n> use and where is it installed?\n>\n\nIt's in c:/python37, which is at the front of the PATH. It fails as \nabove if I add -Dplpython=enabled to the config.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-18 Sa 19:00, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-18 17:53:38 -0400, Andrew Dunstan wrote:\n\n\nOn 2023-03-11 Sa 16:25, Andres Freund wrote:\n\n\nHi,\n\nOn 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n\n\nAnother thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\nheader is in /usr/include, not /usr/include/ossp (I got around that for now\nby symlinking it, but obviously that's a nasty hack we can't ask people to\ndo)\n\n\nYea, that was just wrong. It happened to work on debian and a few other OSs,\nbut ossp's .pc puts whatever the right directory is into the include\npath. 
Pushed the fairly obvious fix.\n\n\n\n\nAnother issue: building plpython appears impossible on Windows because it's\nfinding meson's own python:\n\n\nProgram python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\nCould not find Python3 library 'C:\\\\Program\nFiles\\\\Meson\\\\libs\\\\python311.lib'\n\n\n\nAny more details - windows CI builds with python. What python do you want to\nuse and where is it installed?\n\n\n\n\n\nIt's in c:/python37, which is at the front of the PATH. It fails\n as above if I add -Dplpython=enabled to the config.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 18 Mar 2023 21:32:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
},
{
"msg_contents": "On 2023-03-18 Sa 21:32, Andrew Dunstan wrote:\n>\n>\n> On 2023-03-18 Sa 19:00, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2023-03-18 17:53:38 -0400, Andrew Dunstan wrote:\n>>> On 2023-03-11 Sa 16:25, Andres Freund wrote:\n>>>> Hi,\n>>>>\n>>>> On 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n>>>>> Another thing: the test for uuid.h is too strict. On Fedora 36 the OSSP\n>>>>> header is in /usr/include, not /usr/include/ossp (I got around that for now\n>>>>> by symlinking it, but obviously that's a nasty hack we can't ask people to\n>>>>> do)\n>>>> Yea, that was just wrong. It happened to work on debian and a few other OSs,\n>>>> but ossp's .pc puts whatever the right directory is into the include\n>>>> path. Pushed the fairly obvious fix.\n>>> Another issue: building plpython appears impossible on Windows because it's\n>>> finding meson's own python:\n>>>\n>>>\n>>> Program python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\n>>> Could not find Python3 library 'C:\\\\Program\n>>> Files\\\\Meson\\\\libs\\\\python311.lib'\n>> Any more details - windows CI builds with python. What python do you want to\n>> use and where is it installed?\n>>\n>\n> It's in c:/python37, which is at the front of the PATH. It fails as \n> above if I add -Dplpython=enabled to the config.\n>\n\nLooks like the answer is not to install using the MSI installer, which \nprovides its own Python, but to install meson and ninja into an existing \npython installation via pip. That's a bit sad, but manageable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-18 Sa 21:32, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-03-18 Sa 19:00, Andres Freund\n wrote:\n\n\nHi,\n\nOn 2023-03-18 17:53:38 -0400, Andrew Dunstan wrote:\n\n\nOn 2023-03-11 Sa 16:25, Andres Freund wrote:\n\n\nHi,\n\nOn 2023-03-09 18:31:10 -0500, Andrew Dunstan wrote:\n\n\nAnother thing: the test for uuid.h is too strict. 
On Fedora 36 the OSSP\nheader is in /usr/include, not /usr/include/ossp (I got around that for now\nby symlinking it, but obviously that's a nasty hack we can't ask people to\ndo)\n\n\nYea, that was just wrong. It happened to work on debian and a few other OSs,\nbut ossp's .pc puts whatever the right directory is into the include\npath. Pushed the fairly obvious fix.\n\n\nAnother issue: building plpython appears impossible on Windows because it's\nfinding meson's own python:\n\n\nProgram python3 found: YES (C:\\Program Files\\Meson\\meson.exe runpython)\nCould not find Python3 library 'C:\\\\Program\nFiles\\\\Meson\\\\libs\\\\python311.lib'\n\n\nAny more details - windows CI builds with python. What python do you want to\nuse and where is it installed?\n\n\n\n\n\nIt's in c:/python37, which is at the front of the PATH. It\n fails as above if I add -Dplpython=enabled to the config.\n\n\n\n\nLooks like the answer is not to install using the MSI installer,\n which provides its own Python, but to install meson and ninja into\n an existing python installation via pip. That's a bit sad, but\n manageable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 19 Mar 2023 12:48:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: buildfarm + meson"
}
] |
[
{
"msg_contents": "Hi,\n\nI was trying to implement ExtendRelationBufferedTo(), responding to a review\ncomment by Heikki, in\nhttps://www.postgresql.org/message-id/20230222203152.rh4s75aedj65hyjn@awork3.anarazel.de\n\nWhich lead me to stare at the P_NEW do while loop in\nXLogReadBufferExtended(). I first started to reply on that thread, but it\nseems like a big enough issue that it seemed worth starting a separate thread.\n\nThe relevant logic was added in 6f2aead1ffec, the relevant discussion is at\nhttps://www.postgresql.org/message-id/32313.1392231107%40sss.pgh.pa.us\n\nMy understanding of what happend there is that we tried to extend a relation,\nsized one block below a segment boundary, and after that the relation was much\nlarger, because the next segment file existed, and had a non-zero size. And\nbecause we extended blkno-lastblock times, we'd potentially blow up the\nrelation size much more than intended.\n\nThe actual cause of that in the reported case appears to have been a bug in\nwal-e. But I suspect it's possible to hit something like that without such\nproblems, just due to crashes on the replica, or \"skew\" while taking a base\nbackup.\n\n\nI find it pretty sketchy that we just leave the contents of the previously\n\"disconnected\" segment contents around, without using log_invalid_page() for\nthe range, or warning, or ...\n\nMost of the time this issue would be fixed due to later WAL replay\ninitializing the later segment. But I don't think there's a guarantee for\nthat (see below).\n\nIt'd be one thing if we accidentally used data in such a segment, if the\nsituation is only caused by a bug in base backup tooling, or filesystem\ncorruption, or ...\n\nBut I think we can encounter the issue without anything like that being\ninvolved. 
Imagine this scenario:\n\n1) filenode A gets extended to segment 3\n2) basebackup starts, including performing a checkpoint\n3) basebackup ends up copying A's segment 3 first, while in progress\n4) filenode A is dropped\n5) checkpoint happens, allowing smgrrel 10 to be used again\n6) filenode 10 is created newly\n7) basebackup ends\n\nAt that point A will have segment 0, segment 3. The WAL replay for 4) won't\ndrop segment 3, because an smgrnblocks() won't even see it, because segment 2\ndoesn't exist.\n\nIf a replica starts from this base backup, we'll be fine until A again grows\nfar enough to fill segment 2. At that point, we'll suddenly have completely\nbogus contents in 3. Obviously accesses to those contents could trivially\ncrash at that point.\n\n\nI suspect there's an easier to hit version of this: Consider this path in\nExecuteTruncateGuts():\n\n\t\t/*\n\t\t * Normally, we need a transaction-safe truncation here. However, if\n\t\t * the table was either created in the current (sub)transaction or has\n\t\t * a new relfilenumber in the current (sub)transaction, then we can\n\t\t * just truncate it in-place, because a rollback would cause the whole\n\t\t * table or the current physical file to be thrown away anyway.\n\t\t */\n\t\tif (rel->rd_createSubid == mySubid ||\n\t\t\trel->rd_newRelfilelocatorSubid == mySubid)\n\t\t{\n\t\t\t/* Immediate, non-rollbackable truncation is OK */\n\t\t\theap_truncate_one_rel(rel);\n\n\n\nAfaict that could easily lead to a version of the above that doesn't even\nrequire relfilenodes getting recycled.\n\n\nOne way to defend against this would be to make mdextend(), whenever it\nextends into the last block of a segment, unlink the next segment - it can't\ncontain valid contents. But it seems scary to just unlink entire\nsegments.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 17:01:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "XLogReadBufferExtended() vs disconnected segments"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-22 17:01:47 -0800, Andres Freund wrote:\n> One way to to defend against this would be to make mdextend(), whenever it\n> extends into the last block of a segment, unlink the next segment - it can't\n> be a validly existing contents. But it seems scary to just unlink entire\n> segments.\n\nAnother way might be for XLOG_SMGR_TRUNCATE record, as well as smgr unlinks in\ncommit/abort records, to include not just the \"target size\", as we do today,\nbut to also include the current size.\n\nI'm not sure that'd fix all potential issues, but it seems like it'd fix a lot\nof the more obvious issues, because it'd prevent scenarios like a base backup\ncopying segment N, without copying N - 1, due to a concurrent truncate/drop,\nfrom causing harm. Due to the range being included in the WAL record, replay\nwould know that N needs to be unlinked, even if smgrnblocks() thinks the\nrelation is much smaller.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Feb 2023 17:12:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: XLogReadBufferExtended() vs disconnected segments"
}
] |
[
{
"msg_contents": "Here is a small patch to make some invalid-record error messages in \nxlogreader a bit more accurate (IMO).\n\nMy starting point was that when you have some invalid WAL, you often get \na message like \"wanted 24, got 0\". This is a bit incorrect, since it \nreally wanted *at least* 24, not exactly 24. So I have updated the \nmessages to that effect, and also added that detail to one message where \nit was available but not printed.\n\nGoing through the remaining report_invalid_record() calls I then \nadjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message \n\"record with invalid length\" makes it sound like the length was \nsomething like -5, but really we know what the length should be and what \nwe got wasn't it, so \"incorrect\" sounded better and is also used in \nother error messages in that file.",
"msg_date": "Thu, 23 Feb 2023 08:35:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Make some xlogreader messages more accurate"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 1:06 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Here is a small patch to make some invalid-record error messages in\n> xlogreader a bit more accurate (IMO).\n\n+1 for these changes.\n\n> My starting point was that when you have some invalid WAL, you often get\n> a message like \"wanted 24, got 0\". This is a bit incorrect, since it\n> really wanted *at least* 24, not exactly 24. So I have updated the\n> messages to that effect, and\n\nYes, it's not exactly \"wanted\", but \"wanted at least\" because\nxl_tot_len is the total length of the entire record including header\nand payload.\n\n> also added that detail to one message where\n> it was available but not printed.\n\nLooks okay.\n\n> Going through the remaining report_invalid_record() calls I then\n> adjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message\n> \"record with invalid length\" makes it sound like the length was\n> something like -5, but really we know what the length should be and what\n> we got wasn't it, so \"incorrect\" sounded better and is also used in\n> other error messages in that file.\n\nI have no strong opinion about this change. We seem to be using\n\"invalid length\" and \"incorrect length\" interchangeably [1] without\ndistinguishing between \"invalid\" if length is < 0 and \"incorrect\" if\nlength >= 0 and not something we're expecting.\n\nAnother comment on the patch:\n1. Why is \"wanted >=%u\" any better than \"wanted at least %u\"? 
IMO, the\nwording as opposed to >= symbol in the user-facing messages works\nbetter.\n+ report_invalid_record(state, \"invalid record offset at %X/%X:\nwanted >=%u, got %u\",\n+ \"invalid record length at %X/%X:\nwanted >=%u, got %u\",\n+ \"invalid record length at %X/%X: wanted\n>=%u, got %u\",\n\n[1]\nelog(ERROR, \"incorrect length %d in streaming transaction's changes\nfile \\\"%s\\\"\",\n\"record with invalid length at %X/%X\",\n(errmsg(\"invalid length of checkpoint record\")));\nerrmsg(\"invalid length of startup packet\")));\nerrmsg(\"invalid length of startup packet\")));\nelog(ERROR, \"invalid zero-length dimension array in MCVList\");\nelog(ERROR, \"invalid length (%d) dimension array in MCVList\",\nerrmsg(\"invalid length in external \\\"%s\\\" value\",\nerrmsg(\"invalid length in external bit string\")));\nlibpq_append_conn_error(conn, \"certificate contains IP address with\ninvalid length %zu\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 11:45:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make some xlogreader messages more accurate"
},
{
"msg_contents": "+1 for the changes.\n\n>1. Why is \"wanted >=%u\" any better than \"wanted at least %u\"? IMO, the\n>wording as opposed to >= symbol in the user-facing messages works\n>better.\n\nI think I agree with Bharath on this: \"wanted at least %u\" sounds better\nfor user error than \"wanted >=%u\".\n\nRegards,\nJeevan Ladhe\n\nOn Tue, 28 Feb 2023 at 11:46, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Feb 23, 2023 at 1:06 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > Here is a small patch to make some invalid-record error messages in\n> > xlogreader a bit more accurate (IMO).\n>\n> +1 for these changes.\n>\n> > My starting point was that when you have some invalid WAL, you often get\n> > a message like \"wanted 24, got 0\". This is a bit incorrect, since it\n> > really wanted *at least* 24, not exactly 24. So I have updated the\n> > messages to that effect, and\n>\n> Yes, it's not exactly \"wanted\", but \"wanted at least\" because\n> xl_tot_len is the total length of the entire record including header\n> and payload.\n>\n> > also added that detail to one message where\n> > it was available but not printed.\n>\n> Looks okay.\n>\n> > Going through the remaining report_invalid_record() calls I then\n> > adjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message\n> > \"record with invalid length\" makes it sound like the length was\n> > something like -5, but really we know what the length should be and what\n> > we got wasn't it, so \"incorrect\" sounded better and is also used in\n> > other error messages in that file.\n>\n> I have no strong opinion about this change. We seem to be using\n> \"invalid length\" and \"incorrect length\" interchangeably [1] without\n> distinguishing between \"invalid\" if length is < 0 and \"incorrect\" if\n> length >= 0 and not something we're expecting.\n>\n> Another comment on the patch:\n> 1. 
Why is \"wanted >=%u\" any better than \"wanted at least %u\"? IMO, the\n> wording as opposed to >= symbol in the user-facing messages works\n> better.\n> + report_invalid_record(state, \"invalid record offset at %X/%X:\n> wanted >=%u, got %u\",\n> + \"invalid record length at %X/%X:\n> wanted >=%u, got %u\",\n> + \"invalid record length at %X/%X: wanted\n> >=%u, got %u\",\n>\n> [1]\n> elog(ERROR, \"incorrect length %d in streaming transaction's changes\n> file \\\"%s\\\"\",\n> \"record with invalid length at %X/%X\",\n> (errmsg(\"invalid length of checkpoint record\")));\n> errmsg(\"invalid length of startup packet\")));\n> errmsg(\"invalid length of startup packet\")));\n> elog(ERROR, \"invalid zero-length dimension array in MCVList\");\n> elog(ERROR, \"invalid length (%d) dimension array in MCVList\",\n> errmsg(\"invalid length in external \\\"%s\\\" value\",\n> errmsg(\"invalid length in external bit string\")));\n> libpq_append_conn_error(conn, \"certificate contains IP address with\n> invalid length %zu\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n>\n\n+1 for the changes.>1. Why is \"wanted >=%u\" any better than \"wanted at least %u\"? IMO, the>wording as opposed to >= symbol in the user-facing messages works>better.I think I agree with Bharath on this: \"wanted at least %u\" sounds betterfor user error than \"wanted >=%u\".Regards,Jeevan LadheOn Tue, 28 Feb 2023 at 11:46, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Thu, Feb 23, 2023 at 1:06 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Here is a small patch to make some invalid-record error messages in\n> xlogreader a bit more accurate (IMO).\n\n+1 for these changes.\n\n> My starting point was that when you have some invalid WAL, you often get\n> a message like \"wanted 24, got 0\". 
This is a bit incorrect, since it\n> really wanted *at least* 24, not exactly 24. So I have updated the\n> messages to that effect, and\n\nYes, it's not exactly \"wanted\", but \"wanted at least\" because\nxl_tot_len is the total length of the entire record including header\nand payload.\n\n> also added that detail to one message where\n> it was available but not printed.\n\nLooks okay.\n\n> Going through the remaining report_invalid_record() calls I then\n> adjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message\n> \"record with invalid length\" makes it sound like the length was\n> something like -5, but really we know what the length should be and what\n> we got wasn't it, so \"incorrect\" sounded better and is also used in\n> other error messages in that file.\n\nI have no strong opinion about this change. We seem to be using\n\"invalid length\" and \"incorrect length\" interchangeably [1] without\ndistinguishing between \"invalid\" if length is < 0 and \"incorrect\" if\nlength >= 0 and not something we're expecting.\n\nAnother comment on the patch:\n1. Why is \"wanted >=%u\" any better than \"wanted at least %u\"? 
IMO, the\nwording as opposed to >= symbol in the user-facing messages works\nbetter.\n+ report_invalid_record(state, \"invalid record offset at %X/%X:\nwanted >=%u, got %u\",\n+ \"invalid record length at %X/%X:\nwanted >=%u, got %u\",\n+ \"invalid record length at %X/%X: wanted\n>=%u, got %u\",\n\n[1]\nelog(ERROR, \"incorrect length %d in streaming transaction's changes\nfile \\\"%s\\\"\",\n\"record with invalid length at %X/%X\",\n(errmsg(\"invalid length of checkpoint record\")));\nerrmsg(\"invalid length of startup packet\")));\nerrmsg(\"invalid length of startup packet\")));\nelog(ERROR, \"invalid zero-length dimension array in MCVList\");\nelog(ERROR, \"invalid length (%d) dimension array in MCVList\",\nerrmsg(\"invalid length in external \\\"%s\\\" value\",\nerrmsg(\"invalid length in external bit string\")));\nlibpq_append_conn_error(conn, \"certificate contains IP address with\ninvalid length %zu\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Feb 2023 15:49:18 +0530",
"msg_from": "Jeevan Ladhe <jeevanladhe.os@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make some xlogreader messages more accurate"
},
{
"msg_contents": "On 28.02.23 11:19, Jeevan Ladhe wrote:\n> +1 for the changes.\n> \n> >1. Why is \"wanted >=%u\" any better than \"wanted at least %u\"? IMO, the\n> >wording as opposed to >= symbol in the user-facing messages works\n> >better.\n> \n> I think I agree with Bharath on this: \"wanted at least %u\" sounds better\n> for user error than \"wanted >=%u\".\n\nI committed this with \"at least\", as suggested, and also changed \n\"wanted\" to \"expected\", which matches the usual error message style better.\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 09:19:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Make some xlogreader messages more accurate"
},
{
"msg_contents": "On 28.02.23 07:15, Bharath Rupireddy wrote:\n>> Going through the remaining report_invalid_record() calls I then\n>> adjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message\n>> \"record with invalid length\" makes it sound like the length was\n>> something like -5, but really we know what the length should be and what\n>> we got wasn't it, so \"incorrect\" sounded better and is also used in\n>> other error messages in that file.\n> I have no strong opinion about this change. We seem to be using\n> \"invalid length\" and \"incorrect length\" interchangeably [1] without\n> distinguishing between \"invalid\" if length is < 0 and \"incorrect\" if\n> length >= 0 and not something we're expecting.\n\nRight, this isn't handled very consistently. I did a pass across all \n\"{invalid|incorrect|wrong} {length|size}\" messages and tried to make \nthem more precise by adding more detail and using the appropriate word. \nWhat do you think about the attached patch?",
"msg_date": "Thu, 2 Mar 2023 09:21:55 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Make some xlogreader messages more accurate"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 1:51 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 28.02.23 07:15, Bharath Rupireddy wrote:\n> >> Going through the remaining report_invalid_record() calls I then\n> >> adjusted the use of \"invalid\" vs. \"incorrect\" in one case. The message\n> >> \"record with invalid length\" makes it sound like the length was\n> >> something like -5, but really we know what the length should be and what\n> >> we got wasn't it, so \"incorrect\" sounded better and is also used in\n> >> other error messages in that file.\n> > I have no strong opinion about this change. We seem to be using\n> > \"invalid length\" and \"incorrect length\" interchangeably [1] without\n> > distinguishing between \"invalid\" if length is < 0 and \"incorrect\" if\n> > length >= 0 and not something we're expecting.\n>\n> Right, this isn't handled very consistently. I did a pass across all\n> \"{invalid|incorrect|wrong} {length|size}\" messages and tried to make\n> them more precise by adding more detail and using the appropriate word.\n> What do you think about the attached patch?\n\nThanks. IMO, when any of the incorrect/invalid/wrong errors occur, the\nwording may not matter much more than the error itself and why it\noccurred. While the uniformity of this kind helps, I think it's hard\nto enforce the same/similar wording in future. I prefer leaving the\ncode as-is. Therefore, -1 for these changes.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 17:22:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make some xlogreader messages more accurate"
}
] |
[
{
"msg_contents": "Hey,\n\nIt depnends on scenario, but there is many use cases that hack data\nchange from somebody with admin privileges could be disaster.\nThat is the place where data history could come with help. Some basic\nsolution would be trigger which writes previous version of record\nto some other table. Trigger however can be disabled or removed (crazy\nsolution would be to provide pernament\ntriggers and tables which can only be pernamently inserted). \nThen we have also possibility to modify tablespace directly on disk.\n\nBut Postgres has ability to not override records when two concurrent\ntransaction modify data to provide MVCC.\n\nSo what about pernamently not vacuumable tables. Adding some xid log\ntables with hash of record on hash on previous hash.\nI think that would be serious additional advantage for best open source\nrelational databes.\n\nBest regards,\n Marek Mosiewicz\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 12:04:05 +0100",
"msg_from": "marekmosiewicz@gmail.com",
"msg_from_op": true,
"msg_subject": "Disable vacuuming to provide data history"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 6:04 AM <marekmosiewicz@gmail.com> wrote:\n\n> Hey,\n>\n> It depnends on scenario, but there is many use cases that hack data\n> change from somebody with admin privileges could be disaster.\n> That is the place where data history could come with help. Some basic\n> solution would be trigger which writes previous version of record\n> to some other table. Trigger however can be disabled or removed (crazy\n> solution would be to provide pernament\n> triggers and tables which can only be pernamently inserted).\n> Then we have also possibility to modify tablespace directly on disk.\n>\n> But Postgres has ability to not override records when two concurrent\n> transaction modify data to provide MVCC.\n>\n> So what about pernamently not vacuumable tables. Adding some xid log\n> tables with hash of record on hash on previous hash.\n> I think that would be serious additional advantage for best open source\n> relational databes.\n>\n> Best regards,\n> Marek Mosiewicz\n>\n\nWhat you are describing sounds like the \"system versioning\" flavor of\n\"temporal\" tables. It's a part of the SQL Standard, but PostgreSQL has yet\nto implement it in core. Basically, every row has a start_timestamp and\nend_timestamp field. Updating a row sets the end_timestamp of the old\nversion and inserts a new one with a start_timestamp matching the\nend-timestamp of the previous row. Once a record has a non-null [1]\nend_timestamp, it is not possible to update that row via SQL. Regular SQL\nstatements effectively have a \"AND end_timestamp IS NULL\" filter on them,\nso the old rows are not visible without specifically invoking temporal\nfeatures to get point-in-time queries. 
At the implementation level, this\nprobably means a table with 2 partitions, one for live rows all having null\nend_timestamps, and one for archived rows which is effectively append-only.\n\nThis strategy is common practice for chain of custody and auditing\npurposes, either as a feature of the RDBMS or home-rolled. I have also seen\nit used for developing forecasting models (ex \"what would this model have\ntold us to do if we had run it a year ago?\").\n\nA few years ago, I personally thought about implementing a hash-chain\nfeature, but my research at the time concluded that:\n\n* Few customers were interested in going beyond what was required for\nregulatory compliance\n* Once compliant, any divergence from established procedures, even if it\nwas an unambiguous improvement, only invited re-examination of it and\nadjacent procedures, and they would avoid that\n* They could get the same validation by comparing against a secured backup\nand out-of-band audit \"logs\" (most would call them \"reports\")\n* They were of the opinion that if a bad actor got admin access, it was\n\"game over\" anyway\n\nThe world may have changed since then, but even if there is now interest, I\nwonder if that isn't better implemented at the OS level rather than the\nRDBMS level.\n\n [1] some implementations don't use null, they use an end-timestamp set to\na date implausibly far in the future ( 3999-12-31 for example ), but the\nconcept remains that once the column is set to a real timestamp, the row\nisn't visible to update statements.",
"msg_date": "Fri, 24 Feb 2023 16:06:31 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disable vacuuming to provide data history"
},
{
"msg_contents": "On 2/24/23 22:06, Corey Huinker wrote:\n> On Thu, Feb 23, 2023 at 6:04 AM <marekmosiewicz@gmail.com> wrote:\n> \n> [1] some implementations don't use null, they use an end-timestamp set to\n> a date implausibly far in the future ( 3999-12-31 for example ),\n\nThe specification is, \"At any point in time, all rows that have their \nsystem-time period end column set to the highest value supported by the \ndata type of that column are known as current system rows; all other \nrows are known as historical system rows.\"\n\nI would like to see us use 'infinity' for this.\n\nThe main design blocker for me is how to handle dump/restore. The \nstandard does not bother thinking about that.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sat, 25 Feb 2023 03:11:03 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Disable vacuuming to provide data history"
},
{
"msg_contents": "W dniu sob, 25.02.2023 o godzinie 03∶11 +0100, użytkownik Vik Fearing\nnapisał:\n> On 2/24/23 22:06, Corey Huinker wrote:\n> \n> The main design blocker for me is how to handle dump/restore. The \n> standard does not bother thinking about that.\n\nThat would be a little difficult. Most probably you would need to\noperate on history view to dump/restore\n\nBest regards,\n Marek Mosiewicz\n\n\n\n",
"msg_date": "Sat, 25 Mar 2023 12:22:36 +0100",
"msg_from": "marekmosiewicz@gmail.com",
"msg_from_op": true,
"msg_subject": "Re: Disable vacuuming to provide data history"
},
{
"msg_contents": "There is also another blocker - our timestamp resolution is 1\nmicrosecond and we are dangerously close to speeds where one could\nupdate a row twice in the same microsecond .\n\nI have been thinking about this, and what is needed is\n\n1. a nanosecond-resolution \"abstime\" type - not absolutely necessary,\nbut would help with corner cases.\n2. VACUUM should be able to \"freeze\" by replacing xmin/xmax values\nwith commit timestamps, or adding tmin/tmax where necessary.\n3. Optionally VACUUM could move historic rows to archive tables with\nexplicit tmin/tmax columns (this also solves the pg_dump problem)\n\nMost of the above design - apart from the timestamp resolution and\nvacuum being the one doing stamping in commit timestamps - is not\nreally new - up to version 6.2 PostgreSQL had tmin/tmax instead of\nxmin/xmax and you could specify the timestamp you want to query any\ntable at.\n\nAnd the original Postgres design was Full History Database where you\ncould say \" SELECT name, population FROM cities['epoch' .. 'now'] \" to\nget all historic population values.\n\nAnd historic data was meant to be moved to the WORM optical drives\nwhich had just arrived to the market\n\n\n---\nHannu\n\n\nOn Sat, Feb 25, 2023 at 3:11 AM Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 2/24/23 22:06, Corey Huinker wrote:\n> > On Thu, Feb 23, 2023 at 6:04 AM <marekmosiewicz@gmail.com> wrote:\n> >\n> > [1] some implementations don't use null, they use an end-timestamp set to\n> > a date implausibly far in the future ( 3999-12-31 for example ),\n>\n> The specification is, \"At any point in time, all rows that have their\n> system-time period end column set to the highest value supported by the\n> data type of that column are known as current system rows; all other\n> rows are known as historical system rows.\"\n>\n> I would like to see us use 'infinity' for this.\n>\n> The main design blocker for me is how to handle dump/restore. 
The\n> standard does not bother thinking about that.\n> --\n> Vik Fearing\n>\n>\n>\n\n\n",
"msg_date": "Sun, 26 Mar 2023 17:19:18 +0200",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: Disable vacuuming to provide data history"
}
] |
[
{
"msg_contents": "pg_rewind: Fix determining TLI when server was just promoted.\n\nIf the source server was just promoted, and it hasn't written the\ncheckpoint record yet, pg_rewind considered the server to be still on\nthe old timeline. Because of that, it would claim incorrectly that no\nrewind is required. Fix that by looking at minRecoveryPointTLI in the\ncontrol file in addition to the ThisTimeLineID on the checkpoint.\n\nThis has been a known issue since forever, and we had worked around it\nin the regression tests by issuing a checkpoint after each promotion,\nbefore running pg_rewind. But that was always quite hacky, so better\nto fix this properly. This doesn't add any new tests for this, but\nremoves the previously-added workarounds from the existing tests, so\nthat they should occasionally hit this codepath again.\n\nThis is arguably a bug fix, but don't backpatch because we haven't\nreally treated it as a bug so far. Also, the patch didn't apply\ncleanly to v13 and below. I'm sure sure it could be made to work on\nv13, but doesn't seem worth the risk and effort.\n\nReviewed-by: Kyotaro Horiguchi, Ibrar Ahmed, Aleksander Alekseev\nDiscussion: https://www.postgresql.org/message-id/9f568c97-87fe-a716-bd39-65299b8a60f4%40iki.fi\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/009eeee746825090ec7194321a3db4b298d6571e\n\nModified Files\n--------------\nsrc/bin/pg_rewind/pg_rewind.c | 105 ++++++++++++++++----------\nsrc/bin/pg_rewind/t/007_standby_source.pl | 1 -\nsrc/bin/pg_rewind/t/008_min_recovery_point.pl | 9 ---\nsrc/bin/pg_rewind/t/RewindTest.pm | 8 --\n4 files changed, 64 insertions(+), 59 deletions(-)",
"msg_date": "Thu, 23 Feb 2023 13:40:38 +0000",
"msg_from": "Heikki Linnakangas <heikki.linnakangas@iki.fi>",
"msg_from_op": true,
"msg_subject": "pgsql: pg_rewind: Fix determining TLI when server was just promoted."
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 8:40 AM Heikki Linnakangas\n<heikki.linnakangas@iki.fi> wrote:\n> This is arguably a bug fix, but don't backpatch because we haven't\n> really treated it as a bug so far.\n\nI guess I'm having trouble understanding why this is only arguably a\nbug fix. Seems like flat-out wrong behavior.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:57:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: pg_rewind: Fix determining TLI when server was just\n promoted."
}
] |
[
{
"msg_contents": "Refactor to add pg_strcoll(), pg_strxfrm(), and variants.\n\nOffers a generally better separation of responsibilities for collation\ncode. Also, a step towards multi-lib ICU, which should be based on a\nclean separation of the routines required for collation providers.\n\nCallers with NUL-terminated strings should call pg_strcoll() or\npg_strxfrm(); callers with strings and their length should call the\nvariants pg_strncoll() or pg_strnxfrm().\n\nReviewed-by: Peter Eisentraut, Peter Geoghegan\nDiscussion: https://postgr.es/m/a581136455c940d7bd0ff482d3a2bd51af25a94f.camel%40j-davis.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/d87d548cd0304477413a73e9c1d148fb2d40b50d\n\nModified Files\n--------------\nsrc/backend/access/hash/hashfunc.c | 61 +--\nsrc/backend/utils/adt/pg_locale.c | 769 ++++++++++++++++++++++++++++++++++++-\nsrc/backend/utils/adt/varchar.c | 51 +--\nsrc/backend/utils/adt/varlena.c | 368 +++---------------\nsrc/include/utils/pg_locale.h | 13 +\n5 files changed, 871 insertions(+), 391 deletions(-)",
"msg_date": "Thu, 23 Feb 2023 19:09:10 +0000",
"msg_from": "Jeff Davis <jdavis@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Refactor to add pg_strcoll(), pg_strxfrm(), and variants."
},
{
"msg_contents": "These patches cause warnings under MSVC.\n\nOf course my patch to improve CI by warning about compiler warnings is\nthe only one to notice.\n\nhttps://cirrus-ci.com/task/6199582053367808\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 23 Feb 2023 18:20:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Refactor to add pg_strcoll(), pg_strxfrm(), and variants."
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 1:20 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> These patches cause warnings under MSVC.\n>\n> Of course my patch to improve CI by warning about compiler warnings is\n> the only one to notice.\n>\n> https://cirrus-ci.com/task/6199582053367808\n\nIt's a shame that it fails the whole Windows task, whereas for the\nUnixen we don't do -Werror so you can still see if everything else is\nOK, but then we check for errors in a separate task. I don't have any\nideas on how to achieve that, though.\n\nFWIW my CI log scanner also noticed this problem\nhttp://cfbot.cputube.org/highlights/compiler.html. Been wondering how\nto bring that to the right people's attention. Perhaps by adding a\nclickable ⚠ to the main page next to the item if any of these\n\"highlights\" were detected; perhaps it should take you to a\nper-submission history page with the highlights from each version.\n\n\n",
"msg_date": "Fri, 24 Feb 2023 13:56:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Refactor to add pg_strcoll(), pg_strxfrm(), and variants."
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 01:56:05PM +1300, Thomas Munro wrote:\n> On Fri, Feb 24, 2023 at 1:20 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > These patches cause warnings under MSVC.\n> >\n> > Of course my patch to improve CI by warning about compiler warnings is\n> > the only one to notice.\n> >\n> > https://cirrus-ci.com/task/6199582053367808\n> \n> It's a shame that it fails the whole Windows task, whereas for the\n> Unixen we don't do -Werror so you can still see if everything else is\n> OK, but then we check for errors in a separate task. I don't have any\n> ideas on how to achieve that, though.\n\nMy patch isn't very pretty, but you can see that runs all the tests\nbefore grepping for warnings, rather than failing during compilation as\nyou said.\n\nIMO the compiler warnings task is separate not only \"to avoid failing\nthe whole task during compilation\", but because it's compiled with\noptimization. Which is 1) needed to allow some warnings to be warned\nabout; and, 2) harmful to enable during the \"check-world\" tests, since\nit makes backtraces less accurate.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 23 Feb 2023 19:07:16 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Refactor to add pg_strcoll(), pg_strxfrm(), and variants."
}
] |
[
{
"msg_contents": "Add LZ4 compression to pg_dump\n\nExpand pg_dump's compression streaming and file APIs to support the lz4\nalgorithm. The newly added compress_lz4.{c,h} files cover all the\nfunctionality of the aforementioned APIs. Minor changes were necessary\nin various pg_backup_* files, where code for the 'lz4' file suffix has\nbeen added, as well as pg_dump's compression option parsing.\n\nAuthor: Georgios Kokolatos\nReviewed-by: Michael Paquier, Rachel Heaton, Justin Pryzby, Shi Yu, Tomas Vondra\nDiscussion: https://postgr.es/m/faUNEOpts9vunEaLnmxmG-DldLSg_ql137OC3JYDmgrOMHm1RvvWY2IdBkv_CRxm5spCCb_OmKNk2T03TMm0fBEWveFF9wA1WizPuAgB7Ss%3D%40protonmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/0da243fed0875932f781aff08df782b56af58d02\n\nModified Files\n--------------\ndoc/src/sgml/ref/pg_dump.sgml | 13 +-\nsrc/bin/pg_dump/Makefile | 2 +\nsrc/bin/pg_dump/compress_io.c | 26 +-\nsrc/bin/pg_dump/compress_lz4.c | 626 ++++++++++++++++++++++++++++++++++\nsrc/bin/pg_dump/compress_lz4.h | 24 ++\nsrc/bin/pg_dump/meson.build | 8 +-\nsrc/bin/pg_dump/pg_backup_archiver.c | 6 +-\nsrc/bin/pg_dump/pg_backup_directory.c | 9 +-\nsrc/bin/pg_dump/pg_dump.c | 5 +-\nsrc/bin/pg_dump/t/002_pg_dump.pl | 82 ++++-\nsrc/tools/pginclude/cpluspluscheck | 1 +\nsrc/tools/pgindent/typedefs.list | 2 +\n12 files changed, 782 insertions(+), 22 deletions(-)",
"msg_date": "Thu, 23 Feb 2023 20:21:39 +0000",
"msg_from": "Tomas Vondra <tomas.vondra@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add LZ4 compression to pg_dump"
},
{
"msg_contents": "Re: Tomas Vondra\n> Add LZ4 compression to pg_dump\n\nThis broke the TAP tests on Ubuntu 18.04 (bionic):\n\n[17:06:45.513](0.000s) ok 1927 - compression_lz4_custom: should not dump test_table with 4-row INSERTs\n# Running: pg_dump --jobs=2 --format=directory --compress=lz4:1 --file=/home/myon/projects/postgresql/pg/master/build/src/bin/pg_dump/tmp_check/tmp_test__aAO/compression_lz4_dir postgres\n[17:06:46.651](1.137s) ok 1928 - compression_lz4_dir: pg_dump runs\n# Running: /usr/bin/lz4 -z -f --rm /home/myon/projects/postgresql/pg/master/build/src/bin/pg_dump/tmp_check/tmp_test__aAO/compression_lz4_dir/blobs.toc /home/myon/projects/postgresql/pg/master/build/src/bin/pg_dump/tmp_check/tmp_test__aAO/compression_lz4_dir/blobs.toc.lz4\nIncorrect parameters\nUsage :\n /usr/bin/lz4 [arg] [input] [output]\n\ninput : a filename\n with no FILE, or when FILE is - or stdin, read standard input\nArguments :\n -1 : Fast compression (default) \n -9 : High compression \n -d : decompression (default for .lz4 extension)\n -z : force compression\n -f : overwrite output without prompting \n -h/-H : display help/long help and exit\n[17:06:46.667](0.016s) not ok 1929 - compression_lz4_dir: compression commands\n[17:06:46.668](0.001s) \n[17:06:46.668](0.001s) # Failed test 'compression_lz4_dir: compression commands'\n[17:06:46.669](0.000s) # at t/002_pg_dump.pl line 4274.\n[17:06:46.670](0.001s) ok 1930 - compression_lz4_dir: glob check for /home/myon/projects/postgresql/pg/master/build/src/bin/pg_dump/tmp_check/tmp_test__aAO/compression_lz4_dir/toc.dat\n\nThe lz4 binary there doesn't have the --rm option yet.\n\nliblz4-tool 0.0~r131-2ubuntu3\n\n--rm appears in a single place only:\n\n # Give coverage for manually compressed blob.toc files during\n # restore.\n compress_cmd => {\n program => $ENV{'LZ4'},\n args => [\n '-z', '-f', '--rm',\n \"$tempdir/compression_lz4_dir/blobs.toc\",\n \"$tempdir/compression_lz4_dir/blobs.toc.lz4\",\n ],\n },\n\n18.04 will be EOL in a 
few weeks so it might be ok to just say it's\nnot supported, but removing the input file manually after calling lz4\nwould be an easy fix.\n\nChristoph\n\n\n",
"msg_date": "Wed, 8 Mar 2023 18:55:03 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "> On 8 Mar 2023, at 18:55, Christoph Berg <myon@debian.org> wrote:\n\n> 18.04 will be EOL in a few weeks so it might be ok to just say it's\n> not supported, but removing the input file manually after calling lz4\n> would be an easy fix.\n\nIs it reasonable to expect that this version of LZ4 can/will appear on any\nother platform outside of archeology? Removing the file manually would be a\ntrivial way to stabilize but if it's only expected to happen on platforms which\nare long since EOL by the time 16 ships then the added complication could be\nhard to justify.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 20:20:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "On 3/8/23 20:20, Daniel Gustafsson wrote:\n>> On 8 Mar 2023, at 18:55, Christoph Berg <myon@debian.org> wrote:\n> \n>> 18.04 will be EOL in a few weeks so it might be ok to just say it's\n>> not supported, but removing the input file manually after calling lz4\n>> would be an easy fix.\n> \n> Is it reasonable to expect that this version of LZ4 can/will appear on any\n> other platform outside of archeology? Removing the file manually would be a\n> trivial way to stabilize but if it's only expected to happen on platforms which\n> are long since EOL by the time 16 ships then the added complication could be\n> hard to justify.\n> \n\nIMO we should fix that. We have a bunch of buildfarm members running on\nUbuntu 18.04 (or older) - it's true none of them seems to be running TAP\ntests. But considering how trivial the fix is ...\n\nBarring objections, I'll push a fix early next week.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Mar 2023 00:39:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 12:39:08AM +0100, Tomas Vondra wrote:\n> IMO we should fix that. We have a bunch of buildfarm members running on\n> Ubuntu 18.04 (or older) - it's true none of them seems to be running TAP\n> tests. But considering how trivial the fix is ...\n> \n> Barring objections, I'll push a fix early next week.\n\n+1, better to change that, thanks. Actually, would --rm be OK even on\nWindows? As far as I can see, the CI detects a LZ4 command for the\nVS2019 environment but not the liblz4 libraries that would be needed\nto trigger the set of tests.\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 09:30:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "\n\nOn 3/9/23 01:30, Michael Paquier wrote:\n> On Thu, Mar 09, 2023 at 12:39:08AM +0100, Tomas Vondra wrote:\n>> IMO we should fix that. We have a bunch of buildfarm members running on\n>> Ubuntu 18.04 (or older) - it's true none of them seems to be running TAP\n>> tests. But considering how trivial the fix is ...\n>>\n>> Barring objections, I'll push a fix early next week.\n> \n> +1, better to change that, thanks. Actually, would --rm be OK even on\n> Windows? As far as I can see, the CI detects a LZ4 command for the\n> VS2019 environment but not the liblz4 libraries that would be needed\n> to trigger the set of tests.\n\nThanks for noticing that. I'll investigate next week.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Mar 2023 19:00:35 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "On 3/9/23 19:00, Tomas Vondra wrote:\n> \n> \n> On 3/9/23 01:30, Michael Paquier wrote:\n>> On Thu, Mar 09, 2023 at 12:39:08AM +0100, Tomas Vondra wrote:\n>>> IMO we should fix that. We have a bunch of buildfarm members running on\n>>> Ubuntu 18.04 (or older) - it's true none of them seems to be running TAP\n>>> tests. But considering how trivial the fix is ...\n>>>\n>>> Barring objections, I'll push a fix early next week.\n>>\n>> +1, better to change that, thanks. Actually, would --rm be OK even on\n>> Windows? As far as I can see, the CI detects a LZ4 command for the\n>> VS2019 environment but not the liblz4 libraries that would be needed\n>> to trigger the set of tests.\n> \n> Thanks for noticing that. I'll investigate next week.\n> \n\nSo, here's a fix that should (I think) replace the 'lz4 --rm' with a\nsimple 'rm'. I have two doubts about this, though:\n\n\n1) I haven't found a simple way to inject additional command into the\ntest. The pg_dump runs have a predefined list of \"steps\" to run:\n\n -- compress_cmd\n -- glob_patterns\n -- command_like\n -- restore_cmd\n\nand I don't think there's a good place to inject the 'rm' so I ended up\nadding a 'cleanup_cmd' right after 'compress_cmd'. But it seems a bit\nstrange / hacky. Maybe there's a better way?\n\n\n2) I wonder if Windows will know what 'rm' means. I haven't found any\nTAP test doing 'rm' and don't see 'rm' in any $ENV either.\n\n\nThat being said, I have no idea how to make this work on our Windows CI.\nAs mentioned, the environment is missing the lz4 library - there's a\n\n setup_additional_packages_script: |\n REM choco install -y --no-progress ...\n\nin the .yml file, but AFAICS the chocolatey does not have lz4 :-/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 14 Mar 2023 00:16:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "Re: Tomas Vondra\n> and I don't think there's a good place to inject the 'rm' so I ended up\n> adding a 'cleanup_cmd' right after 'compress_cmd'. But it seems a bit\n> strange / hacky. Maybe there's a better way?\n\nDoes the file need to be removed at all? Could we leave it there and\nhave \"make clean\" remove it?\n\nChristoph\n\n\n",
"msg_date": "Tue, 14 Mar 2023 11:34:08 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "On 3/14/23 11:34, Christoph Berg wrote:\n> Re: Tomas Vondra\n>> and I don't think there's a good place to inject the 'rm' so I ended up\n>> adding a 'cleanup_cmd' right after 'compress_cmd'. But it seems a bit\n>> strange / hacky. Maybe there's a better way?\n> \n> Does the file need to be removed at all? Could we leave it there and\n> have \"make clean\" remove it?\n> \n\nI don't think that'd work, because of the automatic \"discovery\" where we\ncheck if a file exists, and if not we try to append .gz and .lz4. So if\nyou leave the .toc, we'd not find the .lz4, making the test useless ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 14 Mar 2023 15:24:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 12:16:16AM +0100, Tomas Vondra wrote:\n> On 3/9/23 19:00, Tomas Vondra wrote:\n> > On 3/9/23 01:30, Michael Paquier wrote:\n> >> On Thu, Mar 09, 2023 at 12:39:08AM +0100, Tomas Vondra wrote:\n> >>> IMO we should fix that. We have a bunch of buildfarm members running on\n> >>> Ubuntu 18.04 (or older) - it's true none of them seems to be running TAP\n> >>> tests. But considering how trivial the fix is ...\n> >>>\n> >>> Barring objections, I'll push a fix early next week.\n> >>\n> >> +1, better to change that, thanks. Actually, would --rm be OK even on\n> >> Windows? As far as I can see, the CI detects a LZ4 command for the\n> >> VS2019 environment but not the liblz4 libraries that would be needed\n> >> to trigger the set of tests.\n> > \n> > Thanks for noticing that. I'll investigate next week.\n> \n> So, here's a fix that should (I think) replace the 'lz4 --rm' with a\n> simple 'rm'. I have two doubts about this, though:\n> \n> 1) I haven't found a simple way to inject additional command into the\n> test. The pg_dump runs have a predefined list of \"steps\" to run:\n> \n> -- compress_cmd\n> -- glob_patterns\n> -- command_like\n> -- restore_cmd\n> \n> and I don't think there's a good place to inject the 'rm' so I ended up\n> adding a 'cleanup_cmd' right after 'compress_cmd'. But it seems a bit\n> strange / hacky. Maybe there's a better way?\n\nI don't know if there's a better way, and I don't think it's worth\ncomplicating the tests by more than about 2 lines to handle this.\n\n> 2) I wonder if Windows will know what 'rm' means. I haven't found any\n> TAP test doing 'rm' and don't see 'rm' in any $ENV either.\n\nCI probably will, since it's Andres' image built with git-tools and\nother helpful stuff installed. 
But in general I think it won't; perl is\nbeing used for all the portable stuff.\n\n*If* you wanted to do something to fix this, you could create a key\ncalled files_to_remove_after_loading, and run unlink on those files\nrather than running a shell command. Or maybe just remove the file\nunconditionally at the start of the script ?\n\n> That being said, I have no idea how to make this work on our Windows CI.\n> As mentioned, the environment is missing the lz4 library - there's a\n> \n> setup_additional_packages_script: |\n> REM choco install -y --no-progress ...\n> \n> in the .yml file, but AFAICS the chocolatey does not have lz4 :-/\n\nI updated what I'd done in the zstd patch to also run with LZ4.\nThis won't apply directly due to other patches, but you get the idea...\n\nMaybe it'd be good to have a commented-out \"wraps\" hint like there is\nfor choco. The downloaded files could be cached, too.\n\ndiff --git a/.cirrus.yml b/.cirrus.yml\nindex a3977a4036e..b4387a739f3 100644\n--- a/.cirrus.yml\n+++ b/.cirrus.yml\n@@ -644,9 +644,11 @@ task:\n vcvarsall x64\n mkdir subprojects\n meson wrap install zstd\n- meson configure -D zstd:multithread=enabled --force-fallback-for=zstd\n+ meson wrap install lz4\n+ meson subprojects download\n+ meson configure -D zstd:multithread=enabled --force-fallback-for=zstd --force-fallback-for=lz4\n set CC=c:\\ProgramData\\chocolatey\\lib\\ccache\\tools\\ccache-4.8-windows-x86_64\\ccache.exe cl.exe\n- meson setup --backend ninja --buildtype debug -Dc_link_args=/DEBUG:FASTLINK -Dcassert=true -Db_pch=false -Dextra_lib_dirs=c:\\openssl\\1.1\\lib -Dextra_include_dirs=c:\\openssl\\1.1\\include -DTAR=%TAR% -DPG_TEST_EXTRA=\"%PG_TEST_EXTRA%\" -D zstd=enabled -Dc_args=\"/Z7 /MDd\" build\n+ meson setup --backend ninja --buildtype debug -Dc_link_args=/DEBUG:FASTLINK -Dcassert=true -Db_pch=false -Dextra_lib_dirs=c:\\openssl\\1.1\\lib -Dextra_include_dirs=c:\\openssl\\1.1\\include -DTAR=%TAR% -DPG_TEST_EXTRA=\"%PG_TEST_EXTRA%\" -D zstd=enabled -D 
lz4=enabled -Dc_args=\"/Z7 /MDd\" build\n \n build_script: |\n vcvarsall x64\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 Apr 2023 11:39:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
},
{
"msg_contents": "> On 6 Apr 2023, at 18:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> *If* you wanted to do something to fix this, you could create a key\n> called files_to_remove_after_loading, and run unlink on those files\n> rather than running a shell command. Or maybe just remove the file\n> unconditionally at the start of the script ?\n\nSince the test is written in Perl, and Perl has a function for deleting files\nwhich abstracts the platform differences, using it seems like a logical choice?\n{cleanup_cmd} can be replaced with {cleanup_files} with an unlink called on\nthat?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 20:40:55 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: lz4 --rm on Ubuntu 18.04 (Add LZ4 compression to pg_dump)"
}
] |
[
{
"msg_contents": "Hi,\n\nUsers may wish to speed up long-running vacuum of a large table by\ndecreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\nconfig file is only reloaded between tables (for autovacuum) or after\nthe statement (for explicit vacuum). This has been brought up for\nautovacuum in [1].\n\nAndres suggested that it might be possible to check ConfigReloadPending\nin vacuum_delay_point(), so I thought I would draft a rough patch and\nstart a discussion.\n\nSince vacuum_delay_point() is also called by analyze and we do not want\nto reload the configuration file if we are in a user transaction, I\nwidened the scope of the in_outer_xact variable in vacuum() and allowed\nanalyze in a user transaction to default to the current configuration\nfile reload cadence in PostgresMain().\n\nI don't think I can set and leave vac_in_outer_xact the way I am doing\nit in this patch, since I use vac_in_outer_xact in vacuum_delay_point(),\nwhich I believe is reachable from codepaths that would not have called\nvacuum(). It seems that if a backend sets it, the outer transaction\ncommits, and then the backend ends up calling vacuum_delay_point() in a\ndifferent way later, it wouldn't be quite right.\n\nApart from this, one higher level question I have is if there are other\ngucs whose modification would make reloading the configuration file\nduring vacuum/analyze unsafe.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/22CA91B4-D341-4075-BD3C-4BAB52AF1E80%40amazon.com#37f05e33d2ce43680f96332fa1c0f3d4",
"msg_date": "Thu, 23 Feb 2023 17:08:16 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi, Melanie!\n\nOn Fri, 24 Feb 2023 at 02:08, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Hi,\n>\n> Users may wish to speed up long-running vacuum of a large table by\n> decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> config file is only reloaded between tables (for autovacuum) or after\n> the statement (for explicit vacuum). This has been brought up for\n> autovacuum in [1].\n>\n> Andres suggested that it might be possible to check ConfigReloadPending\n> in vacuum_delay_point(), so I thought I would draft a rough patch and\n> start a discussion.\n>\n> Since vacuum_delay_point() is also called by analyze and we do not want\n> to reload the configuration file if we are in a user transaction, I\n> widened the scope of the in_outer_xact variable in vacuum() and allowed\n> analyze in a user transaction to default to the current configuration\n> file reload cadence in PostgresMain().\n>\n> I don't think I can set and leave vac_in_outer_xact the way I am doing\n> it in this patch, since I use vac_in_outer_xact in vacuum_delay_point(),\n> which I believe is reachable from codepaths that would not have called\n> vacuum(). It seems that if a backend sets it, the outer transaction\n> commits, and then the backend ends up calling vacuum_delay_point() in a\n> different way later, it wouldn't be quite right.\n>\n> Apart from this, one higher level question I have is if there are other\n> gucs whose modification would make reloading the configuration file\n> during vacuum/analyze unsafe.\n\nI have a couple of small questions:\nCan this patch also read the current GUC value if it's modified by the\nSET command, without editing config file?\nWhat will be if we modify config file with mistakes? (When we try to\nstart the cluster with an erroneous config file it will fail to start,\nnot sure about re-read config)\n\nOverall the proposal seems legit and useful.\n\nKind regards,\nPavel Borisov\n\n\n",
"msg_date": "Fri, 24 Feb 2023 12:42:45 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nOn Fri, Feb 24, 2023 at 7:08 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Hi,\n>\n> Users may wish to speed up long-running vacuum of a large table by\n> decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> config file is only reloaded between tables (for autovacuum) or after\n> the statement (for explicit vacuum). This has been brought up for\n> autovacuum in [1].\n>\n> Andres suggested that it might be possible to check ConfigReloadPending\n> in vacuum_delay_point(), so I thought I would draft a rough patch and\n> start a discussion.\n\nIn vacuum_delay_point(), we need to update VacuumCostActive too if necessary.\n\n> Apart from this, one higher level question I have is if there are other\n> gucs whose modification would make reloading the configuration file\n> during vacuum/analyze unsafe.\n\nAs far as I know there are not such GUC parameters in the core but\nthere might be in third-party table AM and index AM extensions. Also,\nI'm concerned that allowing to change any GUC parameters during\nvacuum/analyze could be a foot-gun in the future. When modifying\nvacuum/analyze-related codes, we have to consider the case where any\nGUC parameters could be changed during vacuum/analyze. I guess it\nwould be better to apply the parameter changes for only vacuum delay\nrelated parameters. For example, autovacuum launcher advertises the\nvalues of the vacuum delay parameters on the shared memory not only\nfor autovacuum processes but also for manual vacuum/analyze processes.\nBoth processes can update them accordingly in vacuum_delay_point().\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 23:11:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> As far as I know there are not such GUC parameters in the core but\n> there might be in third-party table AM and index AM extensions.\n\nWe already reload in a pretty broad range of situations, so I'm not sure\nthere's a lot that could be unsafe that isn't already.\n\n\n> Also, I'm concerned that allowing to change any GUC parameters during\n> vacuum/analyze could be a foot-gun in the future. When modifying\n> vacuum/analyze-related codes, we have to consider the case where any GUC\n> parameters could be changed during vacuum/analyze.\n\nWhat kind of scenario are you thinking of?\n\n\n> I guess it would be better to apply the parameter changes for only vacuum\n> delay related parameters. For example, autovacuum launcher advertises the\n> values of the vacuum delay parameters on the shared memory not only for\n> autovacuum processes but also for manual vacuum/analyze processes. Both\n> processes can update them accordingly in vacuum_delay_point().\n\nI don't think this is a good idea. It'd introduce a fair amount of complexity\nwithout, as far as I can tell, a benefit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 27 Feb 2023 17:21:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 10:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> > As far as I know there are not such GUC parameters in the core but\n> > there might be in third-party table AM and index AM extensions.\n>\n> We already reload in a pretty broad range of situations, so I'm not sure\n> there's a lot that could be unsafe that isn't already.\n>\n>\n> > Also, I'm concerned that allowing to change any GUC parameters during\n> > vacuum/analyze could be a foot-gun in the future. When modifying\n> > vacuum/analyze-related codes, we have to consider the case where any GUC\n> > parameters could be changed during vacuum/analyze.\n>\n> What kind of scenario are you thinking of?\n\nFor example, I guess we will need to take care of changes of\nmaintenance_work_mem. Currently we initialize the dead tuple space at\nthe beginning of lazy vacuum, but perhaps we would need to\nenlarge/shrink it based on the new value?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 11:16:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Thanks for the feedback and questions, Pavel!\n\nOn Fri, Feb 24, 2023 at 3:43 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> I have a couple of small questions:\n> Can this patch also read the current GUC value if it's modified by the\n> SET command, without editing config file?\n\nIf a user sets a guc like vacuum_cost_limit with SET, this only modifies\nthe value for that session. That wouldn't affect the in-progress vacuum\nyou initiated from that session because you would have to wait for the\nvacuum to complete before issuing the SET command.\n\n> What will be if we modify config file with mistakes? (When we try to\n> start the cluster with an erroneous config file it will fail to start,\n> not sure about re-read config)\n\nIf you manually add an invalid valid to your postgresql.conf, when it is\nreloaded, the existing value will remain unchanged and an error will be\nlogged. If you attempt to change the guc value to an invalid value with\nALTER SYSTEM, the ALTER SYSTEM command will fail and the existing value\nwill remain unchanged.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:54:05 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-28 11:16:45 +0900, Masahiko Sawada wrote:\n> On Tue, Feb 28, 2023 at 10:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> > > As far as I know there are not such GUC parameters in the core but\n> > > there might be in third-party table AM and index AM extensions.\n> >\n> > We already reload in a pretty broad range of situations, so I'm not sure\n> > there's a lot that could be unsafe that isn't already.\n> >\n> >\n> > > Also, I'm concerned that allowing to change any GUC parameters during\n> > > vacuum/analyze could be a foot-gun in the future. When modifying\n> > > vacuum/analyze-related codes, we have to consider the case where any GUC\n> > > parameters could be changed during vacuum/analyze.\n> >\n> > What kind of scenario are you thinking of?\n> \n> For example, I guess we will need to take care of changes of\n> maintenance_work_mem. Currently we initialize the dead tuple space at\n> the beginning of lazy vacuum, but perhaps we would need to\n> enlarge/shrink it based on the new value?\n\nI don't think we need to do anything about that initially, just because the\nconfig can be changed in a more granular way, doesn't mean we have to react to\nevery change for the current operation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:15:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 9:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Fri, Feb 24, 2023 at 7:08 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Users may wish to speed up long-running vacuum of a large table by\n> > decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> > config file is only reloaded between tables (for autovacuum) or after\n> > the statement (for explicit vacuum). This has been brought up for\n> > autovacuum in [1].\n> >\n> > Andres suggested that it might be possible to check ConfigReloadPending\n> > in vacuum_delay_point(), so I thought I would draft a rough patch and\n> > start a discussion.\n>\n> In vacuum_delay_point(), we need to update VacuumCostActive too if necessary.\n\nYes, good point. Thank you!\n\nOn Thu, Feb 23, 2023 at 5:08 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I don't think I can set and leave vac_in_outer_xact the way I am doing\n> it in this patch, since I use vac_in_outer_xact in vacuum_delay_point(),\n> which I believe is reachable from codepaths that would not have called\n> vacuum(). It seems that if a backend sets it, the outer transaction\n> commits, and then the backend ends up calling vacuum_delay_point() in a\n> different way later, it wouldn't be quite right.\n\nPerhaps I could just set in_outer_xact to false in the PG_FINALLY()\nsection in vacuum() to avoid this problem.\n\nOn Wed, Mar 1, 2023 at 7:15 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-28 11:16:45 +0900, Masahiko Sawada wrote:\n> > On Tue, Feb 28, 2023 at 10:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> > > > Also, I'm concerned that allowing to change any GUC parameters during\n> > > > vacuum/analyze could be a foot-gun in the future. 
When modifying\n> > > > vacuum/analyze-related codes, we have to consider the case where any GUC\n> > > > parameters could be changed during vacuum/analyze.\n> > >\n> > > What kind of scenario are you thinking of?\n> >\n> > For example, I guess we will need to take care of changes of\n> > maintenance_work_mem. Currently we initialize the dead tuple space at\n> > the beginning of lazy vacuum, but perhaps we would need to\n> > enlarge/shrink it based on the new value?\n>\n> I don't think we need to do anything about that initially, just because the\n> config can be changed in a more granular way, doesn't mean we have to react to\n> every change for the current operation.\n\nPerhaps we can mention in the docs that a change to maintenance_work_mem\nwill not take effect in the middle of vacuuming a table. But, Ithink it probably\nisn't needed.\n\nOn another topic, I've just realized that when autovacuuming we only\nupdate tab->at_vacuum_cost_delay/limit from\nautovacuum_vacuum_cost_delay/limit for each table (in\ntable_recheck_autovac()) and then use that to update\nMyWorkerInfo->wi_cost_delay/limit. MyWorkerInfo->wi_cost_delay/limit is\nwhat is used to update VacuumCostDelay/Limit in AutoVacuumUpdateDelay().\nSo, even if we reload the config file in vacuum_delay_point(), if we\ndon't use the new value of autovacuum_vacuum_cost_delay/limit it will\nhave no effect for autovacuum.\n\nI started writing a little helper that could be used to update these\nworkerinfo->wi_cost_delay/limit in vacuum_delay_point(), but I notice\nwhen they are first set, we consider the autovacuum table options. So,\nI suppose I would need to consider these when updating\nwi_cost_delay/limit later as well? 
(during vacuum_delay_point() or\nin AutoVacuumUpdateDelay())\n\nI wasn't quite sure because I found these chained ternaries rather\ndifficult to interpret, but I think table_recheck_autovac() is saying\nthat the autovacuum table options override all other values for\nvac_cost_delay?\n\n vac_cost_delay = (avopts && avopts->vacuum_cost_delay >= 0)\n ? avopts->vacuum_cost_delay\n : (autovacuum_vac_cost_delay >= 0)\n ? autovacuum_vac_cost_delay\n : VacuumCostDelay;\n\ni.e. this?\n\n if (avopts && avopts->vacuum_cost_delay >= 0)\n vac_cost_delay = avopts->vacuum_cost_delay;\n else if (autovacuum_vac_cost_delay >= 0)\n vac_cost_delay = autovacuum_vacuum_cost_delay;\n else\n vac_cost_delay = VacuumCostDelay\n\n- Melanie\n\n\n",
"msg_date": "Wed, 1 Mar 2023 20:41:14 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 5:45 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-28 11:16:45 +0900, Masahiko Sawada wrote:\n> > On Tue, Feb 28, 2023 at 10:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> > > > As far as I know there are not such GUC parameters in the core but\n> > > > there might be in third-party table AM and index AM extensions.\n> > >\n> > > We already reload in a pretty broad range of situations, so I'm not sure\n> > > there's a lot that could be unsafe that isn't already.\n> > >\n> > >\n> > > > Also, I'm concerned that allowing to change any GUC parameters during\n> > > > vacuum/analyze could be a foot-gun in the future. When modifying\n> > > > vacuum/analyze-related codes, we have to consider the case where any GUC\n> > > > parameters could be changed during vacuum/analyze.\n> > >\n> > > What kind of scenario are you thinking of?\n> >\n> > For example, I guess we will need to take care of changes of\n> > maintenance_work_mem. Currently we initialize the dead tuple space at\n> > the beginning of lazy vacuum, but perhaps we would need to\n> > enlarge/shrink it based on the new value?\n>\n> I don't think we need to do anything about that initially, just because the\n> config can be changed in a more granular way, doesn't mean we have to react to\n> every change for the current operation.\n>\n\n+1. I also don't see the need to do anything for this case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:47:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 10:41 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Mon, Feb 27, 2023 at 9:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Fri, Feb 24, 2023 at 7:08 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > Users may wish to speed up long-running vacuum of a large table by\n> > > decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> > > config file is only reloaded between tables (for autovacuum) or after\n> > > the statement (for explicit vacuum). This has been brought up for\n> > > autovacuum in [1].\n> > >\n> > > Andres suggested that it might be possible to check ConfigReloadPending\n> > > in vacuum_delay_point(), so I thought I would draft a rough patch and\n> > > start a discussion.\n> >\n> > In vacuum_delay_point(), we need to update VacuumCostActive too if necessary.\n>\n> Yes, good point. Thank you!\n>\n> On Thu, Feb 23, 2023 at 5:08 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I don't think I can set and leave vac_in_outer_xact the way I am doing\n> > it in this patch, since I use vac_in_outer_xact in vacuum_delay_point(),\n> > which I believe is reachable from codepaths that would not have called\n> > vacuum(). It seems that if a backend sets it, the outer transaction\n> > commits, and then the backend ends up calling vacuum_delay_point() in a\n> > different way later, it wouldn't be quite right.\n>\n> Perhaps I could just set in_outer_xact to false in the PG_FINALLY()\n> section in vacuum() to avoid this problem.\n>\n> On Wed, Mar 1, 2023 at 7:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-02-28 11:16:45 +0900, Masahiko Sawada wrote:\n> > > On Tue, Feb 28, 2023 at 10:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2023-02-27 23:11:53 +0900, Masahiko Sawada wrote:\n> > > > > Also, I'm concerned that allowing to change any GUC parameters during\n> > > > > vacuum/analyze could be a foot-gun in the future. 
When modifying\n> > > > > vacuum/analyze-related codes, we have to consider the case where any GUC\n> > > > > parameters could be changed during vacuum/analyze.\n> > > >\n> > > > What kind of scenario are you thinking of?\n> > >\n> > > For example, I guess we will need to take care of changes of\n> > > maintenance_work_mem. Currently we initialize the dead tuple space at\n> > > the beginning of lazy vacuum, but perhaps we would need to\n> > > enlarge/shrink it based on the new value?\n> >\n> > I don't think we need to do anything about that initially, just because the\n> > config can be changed in a more granular way, doesn't mean we have to react to\n> > every change for the current operation.\n>\n> Perhaps we can mention in the docs that a change to maintenance_work_mem\n> will not take effect in the middle of vacuuming a table. But, Ithink it probably\n> isn't needed.\n\nAgreed.\n\n>\n> On another topic, I've just realized that when autovacuuming we only\n> update tab->at_vacuum_cost_delay/limit from\n> autovacuum_vacuum_cost_delay/limit for each table (in\n> table_recheck_autovac()) and then use that to update\n> MyWorkerInfo->wi_cost_delay/limit. MyWorkerInfo->wi_cost_delay/limit is\n> what is used to update VacuumCostDelay/Limit in AutoVacuumUpdateDelay().\n> So, even if we reload the config file in vacuum_delay_point(), if we\n> don't use the new value of autovacuum_vacuum_cost_delay/limit it will\n> have no effect for autovacuum.\n\nRight, but IIUC wi_cost_limit (and VacuumCostDelayLimit) might be\nupdated. After the autovacuum launcher reloads the config file, it\ncalls autovac_balance_cost() that updates that value of active\nworkers. 
I'm not sure why we don't update workers' wi_cost_delay,\nthough.\n\n> I started writing a little helper that could be used to update these\n> workerinfo->wi_cost_delay/limit in vacuum_delay_point(),\n\nSince we set vacuum delay parameters for autovacuum workers so that we\nration out I/O equally, I think we should keep the current mechanism\nthat the autovacuum launcher sets workers' delay parameters and they\nupdate accordingly.\n\n> but I notice\n> when they are first set, we consider the autovacuum table options. So,\n> I suppose I would need to consider these when updating\n> wi_cost_delay/limit later as well? (during vacuum_delay_point() or\n> in AutoVacuumUpdateDelay())\n>\n> I wasn't quite sure because I found these chained ternaries rather\n> difficult to interpret, but I think table_recheck_autovac() is saying\n> that the autovacuum table options override all other values for\n> vac_cost_delay?\n>\n> vac_cost_delay = (avopts && avopts->vacuum_cost_delay >= 0)\n> ? avopts->vacuum_cost_delay\n> : (autovacuum_vac_cost_delay >= 0)\n> ? autovacuum_vac_cost_delay\n> : VacuumCostDelay;\n>\n> i.e. this?\n>\n> if (avopts && avopts->vacuum_cost_delay >= 0)\n> vac_cost_delay = avopts->vacuum_cost_delay;\n> else if (autovacuum_vac_cost_delay >= 0)\n> vac_cost_delay = autovacuum_vacuum_cost_delay;\n> else\n> vac_cost_delay = VacuumCostDelay\n\nYes, if the table has autovacuum table options, we use these values\nand the table is excluded from the balancing algorithm I mentioned\nabove. 
See the code from table_recheck_autovac(),\n\n /*\n * If any of the cost delay parameters has been set individually for\n * this table, disable the balancing algorithm.\n */\n tab->at_dobalance =\n !(avopts && (avopts->vacuum_cost_limit > 0 ||\n avopts->vacuum_cost_delay > 0));\n\nSo if the table has autovacuum table options, the vacuum delay\nparameters probably should be updated by ALTER TABLE, not by reloading\nthe config file.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 16:36:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 2:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 10:41 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On another topic, I've just realized that when autovacuuming we only\n> > update tab->at_vacuum_cost_delay/limit from\n> > autovacuum_vacuum_cost_delay/limit for each table (in\n> > table_recheck_autovac()) and then use that to update\n> > MyWorkerInfo->wi_cost_delay/limit. MyWorkerInfo->wi_cost_delay/limit is\n> > what is used to update VacuumCostDelay/Limit in AutoVacuumUpdateDelay().\n> > So, even if we reload the config file in vacuum_delay_point(), if we\n> > don't use the new value of autovacuum_vacuum_cost_delay/limit it will\n> > have no effect for autovacuum.\n>\n> Right, but IIUC wi_cost_limit (and VacuumCostDelayLimit) might be\n> updated. After the autovacuum launcher reloads the config file, it\n> calls autovac_balance_cost() that updates that value of active\n> workers. I'm not sure why we don't update workers' wi_cost_delay,\n> though.\n\nAh yes, I didn't realize this. Thanks. I went back and did more code\nreading/analysis, and I see no reason why we shouldn't update\nworker->wi_cost_delay to the new value of autovacuum_vac_cost_delay in\nautovac_balance_cost(). Then, as you said, the autovac launcher will\ncall autovac_balance_cost() when it reloads the configuration file.\nThen, the next time the autovac worker calls AutoVacuumUpdateDelay(), it\nwill update VacuumCostDelay.\n\n> > I started writing a little helper that could be used to update these\n> > workerinfo->wi_cost_delay/limit in vacuum_delay_point(),\n>\n> Since we set vacuum delay parameters for autovacuum workers so that we\n> ration out I/O equally, I think we should keep the current mechanism\n> that the autovacuum launcher sets workers' delay parameters and they\n> update accordingly.\n\nYes, agreed, it should go in the same place as where we update\nwi_cost_limit (autovac_balance_cost()). 
I think we should potentially\nrename autovac_balance_cost() because its name and all its comments\npoint to its only purpose being to balance the total of the workers\nwi_cost_limits to no more than autovacuum_vacuum_cost_limit. And the\nautovacuum_vacuum_cost_delay doesn't need to be balanced in this way.\n\nThough, since this change on its own would make autovacuum pick up new\nvalues of autovacuum_vacuum_cost_limit (without having the worker reload\nthe config file), I wonder if it makes sense to try and have\nvacuum_delay_point() only reload the config file if it is an explicit\nvacuum or an analyze not being run in an outer transaction (to avoid\noverhead of reloading config file)?\n\nThe lifecycle of this different vacuum delay-related gucs and how it\ndiffers between autovacuum workers and explicit vacuum is quite tangled\nalready, though.\n\n> > but I notice\n> > when they are first set, we consider the autovacuum table options. So,\n> > I suppose I would need to consider these when updating\n> > wi_cost_delay/limit later as well? (during vacuum_delay_point() or\n> > in AutoVacuumUpdateDelay())\n> >\n> > I wasn't quite sure because I found these chained ternaries rather\n> > difficult to interpret, but I think table_recheck_autovac() is saying\n> > that the autovacuum table options override all other values for\n> > vac_cost_delay?\n> >\n> > vac_cost_delay = (avopts && avopts->vacuum_cost_delay >= 0)\n> > ? avopts->vacuum_cost_delay\n> > : (autovacuum_vac_cost_delay >= 0)\n> > ? autovacuum_vac_cost_delay\n> > : VacuumCostDelay;\n> >\n> > i.e. this?\n> >\n> > if (avopts && avopts->vacuum_cost_delay >= 0)\n> > vac_cost_delay = avopts->vacuum_cost_delay;\n> > else if (autovacuum_vac_cost_delay >= 0)\n> > vac_cost_delay = autovacuum_vacuum_cost_delay;\n> > else\n> > vac_cost_delay = VacuumCostDelay\n>\n> Yes, if the table has autovacuum table options, we use these values\n> and the table is excluded from the balancing algorithm I mentioned\n> above. 
See the code from table_recheck_autovac(),\n>\n> /*\n> * If any of the cost delay parameters has been set individually for\n> * this table, disable the balancing algorithm.\n> */\n> tab->at_dobalance =\n> !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> avopts->vacuum_cost_delay > 0));\n>\n> So if the table has autovacuum table options, the vacuum delay\n> parameters probably should be updated by ALTER TABLE, not by reloading\n> the config file.\n\nYes, if the table has autovacuum table options, I think the user is\nout-of-luck until the relation is done being vacuumed because the ALTER\nTABLE will need to get a lock.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 2 Mar 2023 18:37:43 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 2:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Mar 2, 2023 at 10:41 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > On another topic, I've just realized that when autovacuuming we only\n> > > update tab->at_vacuum_cost_delay/limit from\n> > > autovacuum_vacuum_cost_delay/limit for each table (in\n> > > table_recheck_autovac()) and then use that to update\n> > > MyWorkerInfo->wi_cost_delay/limit. MyWorkerInfo->wi_cost_delay/limit is\n> > > what is used to update VacuumCostDelay/Limit in AutoVacuumUpdateDelay().\n> > > So, even if we reload the config file in vacuum_delay_point(), if we\n> > > don't use the new value of autovacuum_vacuum_cost_delay/limit it will\n> > > have no effect for autovacuum.\n> >\n> > Right, but IIUC wi_cost_limit (and VacuumCostDelayLimit) might be\n> > updated. After the autovacuum launcher reloads the config file, it\n> > calls autovac_balance_cost() that updates that value of active\n> > workers. I'm not sure why we don't update workers' wi_cost_delay,\n> > though.\n>\n> Ah yes, I didn't realize this. Thanks. I went back and did more code\n> reading/analysis, and I see no reason why we shouldn't update\n> worker->wi_cost_delay to the new value of autovacuum_vac_cost_delay in\n> autovac_balance_cost(). 
Then, as you said, the autovac launcher will\n> call autovac_balance_cost() when it reloads the configuration file.\n> Then, the next time the autovac worker calls AutoVacuumUpdateDelay(), it\n> will update VacuumCostDelay.\n>\n> > > I started writing a little helper that could be used to update these\n> > > workerinfo->wi_cost_delay/limit in vacuum_delay_point(),\n> >\n> > Since we set vacuum delay parameters for autovacuum workers so that we\n> > ration out I/O equally, I think we should keep the current mechanism\n> > that the autovacuum launcher sets workers' delay parameters and they\n> > update accordingly.\n>\n> Yes, agreed, it should go in the same place as where we update\n> wi_cost_limit (autovac_balance_cost()). I think we should potentially\n> rename autovac_balance_cost() because its name and all its comments\n> point to its only purpose being to balance the total of the workers\n> wi_cost_limits to no more than autovacuum_vacuum_cost_limit. And the\n> autovacuum_vacuum_cost_delay doesn't need to be balanced in this way.\n>\n> Though, since this change on its own would make autovacuum pick up new\n> values of autovacuum_vacuum_cost_limit (without having the worker reload\n> the config file), I wonder if it makes sense to try and have\n> vacuum_delay_point() only reload the config file if it is an explicit\n> vacuum or an analyze not being run in an outer transaction (to avoid\n> overhead of reloading config file)?\n>\n> The lifecycle of this different vacuum delay-related gucs and how it\n> differs between autovacuum workers and explicit vacuum is quite tangled\n> already, though.\n\nSo, I've attached a new version of the patch which is quite different\nfrom the previous versions.\n\nIn this version I've removed wi_cost_delay from WorkerInfoData. 
There is\nno synchronization of cost_delay amongst workers, so there is no reason\nto keep it in shared memory.\n\nOne consequence of not updating VacuumCostDelay from wi_cost_delay is\nthat we have to have a way to keep track of whether or not autovacuum\ntable options are in use.\n\nThis patch does this in a cringeworthy way. I added two global\nvariables, one to track whether or not cost delay table options are in\nuse and the other to store the value of the table option cost delay. I\ndidn't want to use a single variable with a special value to indicate\nthat table option cost delay is in use because\nautovacuum_vacuum_cost_delay already has special values that mean\ncertain things. My code needs a better solution.\n\nIt is worth mentioning that I think that in master,\nAutoVacuumUpdateDelay() was incorrectly reading wi_cost_limit and\nwi_cost_delay from shared memory without holding a lock.\n\nI've added in a shared lock for reading from wi_cost_limit in this\npatch. However, AutoVacuumUpdateLimit() is called unconditionally in\nvacuum_delay_point(), which is called quite often (per block-ish), so I\nwas trying to think if there is a way we could avoid having to check\nthis shared memory variable on every call to vacuum_delay_point().\nRebalances shouldn't happen very often (done by the launcher when a new\nworker is launched and by workers between vacuuming tables). Maybe we\ncan read from it less frequently?\n\nAlso not sure how the patch interacts with failsafe autovac and parallel\nvacuum.\n\n- Melanie",
"msg_date": "Sun, 5 Mar 2023 15:26:12 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Mar 2, 2023 at 2:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Mar 2, 2023 at 10:41 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > > On another topic, I've just realized that when autovacuuming we only\n> > > > update tab->at_vacuum_cost_delay/limit from\n> > > > autovacuum_vacuum_cost_delay/limit for each table (in\n> > > > table_recheck_autovac()) and then use that to update\n> > > > MyWorkerInfo->wi_cost_delay/limit. MyWorkerInfo->wi_cost_delay/limit is\n> > > > what is used to update VacuumCostDelay/Limit in AutoVacuumUpdateDelay().\n> > > > So, even if we reload the config file in vacuum_delay_point(), if we\n> > > > don't use the new value of autovacuum_vacuum_cost_delay/limit it will\n> > > > have no effect for autovacuum.\n> > >\n> > > Right, but IIUC wi_cost_limit (and VacuumCostDelayLimit) might be\n> > > updated. After the autovacuum launcher reloads the config file, it\n> > > calls autovac_balance_cost() that updates that value of active\n> > > workers. I'm not sure why we don't update workers' wi_cost_delay,\n> > > though.\n> >\n> > Ah yes, I didn't realize this. Thanks. I went back and did more code\n> > reading/analysis, and I see no reason why we shouldn't update\n> > worker->wi_cost_delay to the new value of autovacuum_vac_cost_delay in\n> > autovac_balance_cost(). 
Then, as you said, the autovac launcher will\n> > call autovac_balance_cost() when it reloads the configuration file.\n> > Then, the next time the autovac worker calls AutoVacuumUpdateDelay(), it\n> > will update VacuumCostDelay.\n> >\n> > > > I started writing a little helper that could be used to update these\n> > > > workerinfo->wi_cost_delay/limit in vacuum_delay_point(),\n> > >\n> > > Since we set vacuum delay parameters for autovacuum workers so that we\n> > > ration out I/O equally, I think we should keep the current mechanism\n> > > that the autovacuum launcher sets workers' delay parameters and they\n> > > update accordingly.\n> >\n> > Yes, agreed, it should go in the same place as where we update\n> > wi_cost_limit (autovac_balance_cost()). I think we should potentially\n> > rename autovac_balance_cost() because its name and all its comments\n> > point to its only purpose being to balance the total of the workers\n> > wi_cost_limits to no more than autovacuum_vacuum_cost_limit. And the\n> > autovacuum_vacuum_cost_delay doesn't need to be balanced in this way.\n> >\n> > Though, since this change on its own would make autovacuum pick up new\n> > values of autovacuum_vacuum_cost_limit (without having the worker reload\n> > the config file), I wonder if it makes sense to try and have\n> > vacuum_delay_point() only reload the config file if it is an explicit\n> > vacuum or an analyze not being run in an outer transaction (to avoid\n> > overhead of reloading config file)?\n> >\n> > The lifecycle of this different vacuum delay-related gucs and how it\n> > differs between autovacuum workers and explicit vacuum is quite tangled\n> > already, though.\n>\n> So, I've attached a new version of the patch which is quite different\n> from the previous versions.\n\nThank you for updating the patch!\n\n>\n> In this version I've removed wi_cost_delay from WorkerInfoData. 
There is\n> no synchronization of cost_delay amongst workers, so there is no reason\n> to keep it in shared memory.\n>\n> One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> that we have to have a way to keep track of whether or not autovacuum\n> table options are in use.\n>\n> This patch does this in a cringeworthy way. I added two global\n> variables, one to track whether or not cost delay table options are in\n> use and the other to store the value of the table option cost delay. I\n> didn't want to use a single variable with a special value to indicate\n> that table option cost delay is in use because\n> autovacuum_vacuum_cost_delay already has special values that mean\n> certain things. My code needs a better solution.\n\nWhile it's true that wi_cost_delay doesn't need to be shared, it seems\nto make the logic somewhat complex. We need to handle cost_delay in a\ndifferent way from other vacuum-related parameters and we need to make\nsure av[_use]_table_option_cost_delay are set properly. Removing\nwi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\nautovacuum worker but it might be worth considering to keep\nwi_cost_delay for simplicity.\n\n---\n void\n AutoVacuumUpdateDelay(void)\n {\n- if (MyWorkerInfo)\n+ /*\n+ * We are using autovacuum-related GUCs to update\nVacuumCostDelay, so we\n+ * only want autovacuum workers and autovacuum launcher to do this.\n+ */\n+ if (!(am_autovacuum_worker || am_autovacuum_launcher))\n+ return;\n\nIs there any case where the autovacuum launcher calls\nAutoVacuumUpdateDelay() function?\n\n---\nIn at autovac_balance_cost(), we have,\n\n int vac_cost_limit = (autovacuum_vac_cost_limit > 0 ?\n autovacuum_vac_cost_limit : VacuumCostLimit);\n double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n autovacuum_vac_cost_delay : VacuumCostDelay);\n :\n /* not set? 
nothing to do */\n if (vac_cost_limit <= 0 || vac_cost_delay <= 0)\n return;\n\nIIUC if autovacuum_vac_cost_delay is changed to 0 during autovacuums\nrunning, their vacuum delay parameters are not changed. It's not a bug\nof the patch but I think we can fix it in this patch.\n\n>\n> It is worth mentioning that I think that in master,\n> AutoVacuumUpdateDelay() was incorrectly reading wi_cost_limit and\n> wi_cost_delay from shared memory without holding a lock.\n\nIndeed.\n\n> I've added in a shared lock for reading from wi_cost_limit in this\n> patch. However, AutoVacuumUpdateLimit() is called unconditionally in\n> vacuum_delay_point(), which is called quite often (per block-ish), so I\n> was trying to think if there is a way we could avoid having to check\n> this shared memory variable on every call to vacuum_delay_point().\n> Rebalances shouldn't happen very often (done by the launcher when a new\n> worker is launched and by workers between vacuuming tables). Maybe we\n> can read from it less frequently?\n\nYeah, acquiring the lwlock for every call to vacuum_delay_point()\nseems to be harmful. One idea would be to have one sig_atomic_t\nvariable in WorkerInfoData and autovac_balance_cost() set it to true\nafter rebalancing the worker's cost-limit. The worker can check it\nwithout locking and update its delay parameters if the flag is true.\n\n>\n> Also not sure how the patch interacts with failsafe autovac and parallel\n> vacuum.\n\nGood point.\n\nWhen entering the failsafe mode, we disable the vacuum delays (see\nlazy_check_wraparound_failsafe()). We need to keep disabling the\nvacuum delays even after reloading the config file. One idea is to\nhave another global variable indicating we're in the failsafe mode.\nvacuum_delay_point() doesn't update VacuumCostActive if the flag is\ntrue.\n\nAs far as I can see we don't need special treatments for parallel\nvacuum cases since it works only in manual vacuum. 
It calculates the\nsleep time based on the shared cost balance and how much the worker\ndid I/O but the basic mechanism is the same as non-parallel case.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 14:09:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On 3/2/23 1:36 AM, Masahiko Sawada wrote:\n\n>>>> For example, I guess we will need to take care of changes of\n>>>> maintenance_work_mem. Currently we initialize the dead tuple space at\n>>>> the beginning of lazy vacuum, but perhaps we would need to\n>>>> enlarge/shrink it based on the new value?\nDoesn't the dead tuple space grow as needed? Last I looked we don't \nallocate up to 1GB right off the bat.\n>>> I don't think we need to do anything about that initially, just because the\n>>> config can be changed in a more granular way, doesn't mean we have to react to\n>>> every change for the current operation.\n>> Perhaps we can mention in the docs that a change to maintenance_work_mem\n>> will not take effect in the middle of vacuuming a table. But, I think it probably\n>> isn't needed.\n> Agreed.\n\nI disagree that there's no need for this. Sure, if \nmaintenance_work_mem is 10MB then it's no big deal to just abandon \nyour current vacuum and start a new one, but the index vacuuming phase \nwith maintenance_work_mem set to say 500MB can take quite a while. \nForcing a user to either suck it up or throw everything in the phase \naway isn't terribly good.\n\nOf course, if the patch that eliminates the 1GB vacuum limit gets \ncommitted the situation will be even worse.\n\nWhile it'd be nice to also honor maintenance_work_mem getting set lower, \nI don't see any need to go through heroics to accomplish that. Simply \nrecording the change and honoring it for future attempts to grow the \nmemory and on future passes through the heap would be plenty.\n\nAll that said, don't let these suggestions get in the way of committing \nthis. Just having the ability to tweak cost parameters would be a win.",
"msg_date": "Wed, 8 Mar 2023 11:42:31 -0600",
"msg_from": "Jim Nasby <nasbyj@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-08 11:42:31 -0600, Jim Nasby wrote:\n> On 3/2/23 1:36 AM, Masahiko Sawada wrote:\n> \n> > > > > For example, I guess we will need to take care of changes of\n> > > > > maintenance_work_mem. Currently we initialize the dead tuple space at\n> > > > > the beginning of lazy vacuum, but perhaps we would need to\n> > > > > enlarge/shrink it based on the new value?\n> Doesn't the dead tuple space grow as needed? Last I looked we don't allocate\n> up to 1GB right off the bat.\n> > > > I don't think we need to do anything about that initially, just because the\n> > > > config can be changed in a more granular way, doesn't mean we have to react to\n> > > > every change for the current operation.\n> > > Perhaps we can mention in the docs that a change to maintenance_work_mem\n> > > will not take effect in the middle of vacuuming a table. But, I think it probably\n> > > isn't needed.\n> > Agreed.\n> \n> I disagree that there's no need for this. Sure, if maintenance_work_mem\n> is 10MB then it's no big deal to just abandon your current vacuum and start\n> a new one, but the index vacuuming phase with maintenance_work_mem set to\n> say 500MB can take quite a while. Forcing a user to either suck it up or\n> throw everything in the phase away isn't terribly good.\n> \n> Of course, if the patch that eliminates the 1GB vacuum limit gets committed\n> the situation will be even worse.\n> \n> While it'd be nice to also honor maintenance_work_mem getting set lower, I\n> don't see any need to go through heroics to accomplish that. Simply\n> recording the change and honoring it for future attempts to grow the memory\n> and on future passes through the heap would be plenty.\n> \n> All that said, don't let these suggestions get in the way of committing\n> this. Just having the ability to tweak cost parameters would be a win.\n\nNobody said anything about it not being useful to react to m_w_m changes, just\nthat it's not required to make some progress. 
So I really don't understand\nwhat the point of your comment is.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:27:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 12:42 AM Jim Nasby <nasbyj@amazon.com> wrote:\n>\n> Doesn't the dead tuple space grow as needed? Last I looked we don't\nallocate up to 1GB right off the bat.\n\nIncorrect.\n\n> Of course, if the patch that eliminates the 1GB vacuum limit gets\ncommitted the situation will be even worse.\n\nIf you're referring to the proposed tid store, I'd be interested in seeing\na reproducible test case with a m_w_m over 1GB where it makes things worse\nthan the current state of affairs.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 14:47:19 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 4:47 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Thu, Mar 9, 2023 at 12:42 AM Jim Nasby <nasbyj@amazon.com> wrote:\n> >\n> > Doesn't the dead tuple space grow as needed? Last I looked we don't allocate up to 1GB right off the bat.\n>\n> Incorrect.\n>\n> > Of course, if the patch that eliminates the 1GB vacuum limit gets committed the situation will be even worse.\n>\n> If you're referring to the proposed tid store, I'd be interested in seeing a reproducible test case with a m_w_m over 1GB where it makes things worse than the current state of affairs.\n\nAnd I think that the tidstore makes it easy to react to\nmaintenance_work_mem changes. We don't need to enlarge it and just\nupdate its memory limit at an appropriate time.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 22:19:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> > In this version I've removed wi_cost_delay from WorkerInfoData. There is\n> > no synchronization of cost_delay amongst workers, so there is no reason\n> > to keep it in shared memory.\n> >\n> > One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> > that we have to have a way to keep track of whether or not autovacuum\n> > table options are in use.\n> >\n> > This patch does this in a cringeworthy way. I added two global\n> > variables, one to track whether or not cost delay table options are in\n> > use and the other to store the value of the table option cost delay. I\n> > didn't want to use a single variable with a special value to indicate\n> > that table option cost delay is in use because\n> > autovacuum_vacuum_cost_delay already has special values that mean\n> > certain things. My code needs a better solution.\n>\n> While it's true that wi_cost_delay doesn't need to be shared, it seems\n> to make the logic somewhat complex. We need to handle cost_delay in a\n> different way from other vacuum-related parameters and we need to make\n> sure av[_use]_table_option_cost_delay are set properly. 
Removing\n> wi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\n> autovacuum worker but it might be worth considering to keep\n> wi_cost_delay for simplicity.\n\nAh, it turns out we can't really remove wi_cost_delay from WorkerInfo\nanyway because the launcher doesn't know anything about table options\nand so the workers have to keep an updated wi_cost_delay that the\nlauncher or other autovac workers who are not vacuuming that table can\nread from when calculating the new limit in autovac_balance_cost().\n\nHowever, wi_cost_delay is a double, so if we start updating it on config\nreload in vacuum_delay_point(), we definitely need some protection\nagainst torn reads.\n\nThe table options can only change when workers start vacuuming a new\ntable, so maybe there is some way to use this to solve this problem?\n\n> > It is worth mentioning that I think that in master,\n> > AutoVacuumUpdateDelay() was incorrectly reading wi_cost_limit and\n> > wi_cost_delay from shared memory without holding a lock.\n>\n> Indeed.\n>\n> > I've added in a shared lock for reading from wi_cost_limit in this\n> > patch. However, AutoVacuumUpdateLimit() is called unconditionally in\n> > vacuum_delay_point(), which is called quite often (per block-ish), so I\n> > was trying to think if there is a way we could avoid having to check\n> > this shared memory variable on every call to vacuum_delay_point().\n> > Rebalances shouldn't happen very often (done by the launcher when a new\n> > worker is launched and by workers between vacuuming tables). Maybe we\n> > can read from it less frequently?\n>\n> Yeah, acquiring the lwlock for every call to vacuum_delay_point()\n> seems to be harmful. One idea would be to have one sig_atomic_t\n> variable in WorkerInfoData and autovac_balance_cost() set it to true\n> after rebalancing the worker's cost-limit. 
The worker can check it\n> without locking and update its delay parameters if the flag is true.\n\nMaybe we can do something like this with the table options values?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 9 Mar 2023 21:22:53 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 11:23 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Mar 7, 2023 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> > > In this version I've removed wi_cost_delay from WorkerInfoData. There is\n> > > no synchronization of cost_delay amongst workers, so there is no reason\n> > > to keep it in shared memory.\n> > >\n> > > One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> > > that we have to have a way to keep track of whether or not autovacuum\n> > > table options are in use.\n> > >\n> > > This patch does this in a cringeworthy way. I added two global\n> > > variables, one to track whether or not cost delay table options are in\n> > > use and the other to store the value of the table option cost delay. I\n> > > didn't want to use a single variable with a special value to indicate\n> > > that table option cost delay is in use because\n> > > autovacuum_vacuum_cost_delay already has special values that mean\n> > > certain things. My code needs a better solution.\n> >\n> > While it's true that wi_cost_delay doesn't need to be shared, it seems\n> > to make the logic somewhat complex. We need to handle cost_delay in a\n> > different way from other vacuum-related parameters and we need to make\n> > sure av[_use]_table_option_cost_delay are set properly. 
Removing\n> > wi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\n> > autovacuum worker but it might be worth considering to keep\n> > wi_cost_delay for simplicity.\n>\n> Ah, it turns out we can't really remove wi_cost_delay from WorkerInfo\n> anyway because the launcher doesn't know anything about table options\n> and so the workers have to keep an updated wi_cost_delay that the\n> launcher or other autovac workers who are not vacuuming that table can\n> read from when calculating the new limit in autovac_balance_cost().\n\nIIUC if any of the cost delay parameters has been set individually,\nthe autovacuum worker is excluded from the balance algorithm.\n\n>\n> However, wi_cost_delay is a double, so if we start updating it on config\n> reload in vacuum_delay_point(), we definitely need some protection\n> against torn reads.\n>\n> The table options can only change when workers start vacuuming a new\n> table, so maybe there is some way to use this to solve this problem?\n>\n> > > It is worth mentioning that I think that in master,\n> > > AutoVacuumUpdateDelay() was incorrectly reading wi_cost_limit and\n> > > wi_cost_delay from shared memory without holding a lock.\n> >\n> > Indeed.\n> >\n> > > I've added in a shared lock for reading from wi_cost_limit in this\n> > > patch. However, AutoVacuumUpdateLimit() is called unconditionally in\n> > > vacuum_delay_point(), which is called quite often (per block-ish), so I\n> > > was trying to think if there is a way we could avoid having to check\n> > > this shared memory variable on every call to vacuum_delay_point().\n> > > Rebalances shouldn't happen very often (done by the launcher when a new\n> > > worker is launched and by workers between vacuuming tables). Maybe we\n> > > can read from it less frequently?\n> >\n> > Yeah, acquiring the lwlock for every call to vacuum_delay_point()\n> > seems to be harmful. 
One idea would be to have one sig_atomic_t\n> > variable in WorkerInfoData and autovac_balance_cost() set it to true\n> > after rebalancing the worker's cost-limit. The worker can check it\n> > without locking and update its delay parameters if the flag is true.\n>\n> Maybe we can do something like this with the table options values?\n\nSince an autovacuum that uses any of table option cost delay\nparameters is excluded from the balancing algorithm, the launcher\ndoesn't need to notify such workers of changes of the cost-limit, no?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:26:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Quotes below are combined from two of Sawada-san's emails.\n\nI've also attached a patch with my suggested current version.\n\nOn Thu, Mar 9, 2023 at 10:27 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Mar 10, 2023 at 11:23 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Tue, Mar 7, 2023 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> > > > In this version I've removed wi_cost_delay from WorkerInfoData. There is\n> > > > no synchronization of cost_delay amongst workers, so there is no reason\n> > > > to keep it in shared memory.\n> > > >\n> > > > One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> > > > that we have to have a way to keep track of whether or not autovacuum\n> > > > table options are in use.\n> > > >\n> > > > This patch does this in a cringeworthy way. I added two global\n> > > > variables, one to track whether or not cost delay table options are in\n> > > > use and the other to store the value of the table option cost delay. I\n> > > > didn't want to use a single variable with a special value to indicate\n> > > > that table option cost delay is in use because\n> > > > autovacuum_vacuum_cost_delay already has special values that mean\n> > > > certain things. My code needs a better solution.\n> > >\n> > > While it's true that wi_cost_delay doesn't need to be shared, it seems\n> > > to make the logic somewhat complex. We need to handle cost_delay in a\n> > > different way from other vacuum-related parameters and we need to make\n> > > sure av[_use]_table_option_cost_delay are set properly. 
Removing\n> > > wi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\n> > > autovacuum worker but it might be worth considering to keep\n> > > wi_cost_delay for simplicity.\n> >\n> > Ah, it turns out we can't really remove wi_cost_delay from WorkerInfo\n> > anyway because the launcher doesn't know anything about table options\n> > and so the workers have to keep an updated wi_cost_delay that the\n> > launcher or other autovac workers who are not vacuuming that table can\n> > read from when calculating the new limit in autovac_balance_cost().\n>\n> IIUC if any of the cost delay parameters has been set individually,\n> the autovacuum worker is excluded from the balance algorithm.\n\nAh, yes! That's right. So it is not a problem. Then I still think\nremoving wi_cost_delay from the worker info makes sense. wi_cost_delay\nis a double and can't easily be accessed atomically the way\nwi_cost_limit can be.\n\nKeeping the cost delay local to the backends also makes it clear that\ncost delay is not something that should be written to by other backends\nor that can differ from worker to worker. Without table options in the\npicture, the cost delay should be the same for any worker who has\nreloaded the config file.\n\nAs for the cost limit safe access issue, maybe we can avoid a LWLock\nacquisition for reading wi_cost_limit by using an atomic similar to what\nyou suggested here for \"did_rebalance\".\n\n> > I've added in a shared lock for reading from wi_cost_limit in this\n> > patch. However, AutoVacuumUpdateLimit() is called unconditionally in\n> > vacuum_delay_point(), which is called quite often (per block-ish), so I\n> > was trying to think if there is a way we could avoid having to check\n> > this shared memory variable on every call to vacuum_delay_point().\n> > Rebalances shouldn't happen very often (done by the launcher when a new\n> > worker is launched and by workers between vacuuming tables). 
Maybe we\n> > can read from it less frequently?\n>\n> Yeah, acquiring the lwlock for every call to vacuum_delay_point()\n> seems to be harmful. One idea would be to have one sig_atomic_t\n> variable in WorkerInfoData and autovac_balance_cost() set it to true\n> after rebalancing the worker's cost-limit. The worker can check it\n> without locking and update its delay parameters if the flag is true.\n\nInstead of having the atomic indicate whether or not someone (launcher\nor another worker) did a rebalance, it would simply store the current\ncost limit. Then the worker can normally access it with a simple read.\n\nMy rationale is that if we used an atomic to indicate whether or not we\ndid a rebalance (\"did_rebalance\"), we would have the same cache\ncoherency guarantees as if we just used the atomic for the cost limit.\nIf we read from the \"did_rebalance\" variable and missed someone having\nwritten to it on another core, we still wouldn't get around to checking\nthe wi_cost_limit variable in shared memory, so it doesn't matter that\nwe bothered to keep it in shared memory and use a lock to access it.\n\nI noticed we don't allow wi_cost_limit to ever be less than 0, so we\ncould store wi_cost_limit in an atomic uint32.\n\nI'm not sure if it is okay to do pg_atomic_read_u32() and\npg_atomic_unlocked_write_u32() or if we need pg_atomic_write_u32() in\nmost cases.\n\nI've implemented the atomic cost limit in the attached patch. 
Though,\nI'm pretty unsure about how I initialized the atomics in\nAutoVacuumShmemInit()...\n\nIf the consensus is that it is simply too confusing to take\nwi_cost_delay out of WorkerInfo, we might be able to afford using a\nshared lock to access it because we won't call AutoVacuumUpdateDelay()\non every invocation of vacuum_delay_point() -- only when we've reloaded\nthe config file.\n\nOne potential option to avoid taking a shared lock on every call to\nAutoVacuumUpdateDelay() is to set a global variable to indicate that we\ndid update it (since we are the only ones updating it) and then only\ntake the shared LWLock in AutoVacuumUpdateDelay() if that flag is true.\n\n> ---\n> void\n> AutoVacuumUpdateDelay(void)\n> {\n> - if (MyWorkerInfo)\n> + /*\n> + * We are using autovacuum-related GUCs to update\n> VacuumCostDelay, so we\n> + * only want autovacuum workers and autovacuum launcher to do this.\n> + */\n> + if (!(am_autovacuum_worker || am_autovacuum_launcher))\n> + return;\n>\n> Is there any case where the autovacuum launcher calls\n> AutoVacuumUpdateDelay() function?\n\nI had meant to add it to HandleAutoVacLauncherInterrupts() after\nreloading the config file (done in attached patch). When using the\nglobal variables for cost delay (instead of wi_cost_delay in worker\ninfo), the autovac launcher also has to do the check in the else branch\nof AutoVacuumUpdateDelay()\n\n VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n autovacuum_vac_cost_delay : VacuumCostDelay;\n\nto make sure VacuumCostDelay is correct for when it calls\nautovac_balance_cost().\n\nThis also made me think about whether or not we still need cost_limit_base.\nIt is used to ensure that autovac_balance_cost() never ends up setting\nworkers' wi_cost_limits above the current autovacuum_vacuum_cost_limit\n(or VacuumCostLimit). 
However, the launcher and all the workers should\nknow what the value is without cost_limit_base, no?\n\n> ---\n> In at autovac_balance_cost(), we have,\n>\n> int vac_cost_limit = (autovacuum_vac_cost_limit > 0 ?\n> autovacuum_vac_cost_limit : VacuumCostLimit);\n> double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n> autovacuum_vac_cost_delay : VacuumCostDelay);\n> :\n> /* not set? nothing to do */\n> if (vac_cost_limit <= 0 || vac_cost_delay <= 0)\n> return;\n>\n> IIUC if autovacuum_vac_cost_delay is changed to 0 during autovacuums\n> running, their vacuum delay parameters are not changed. It's not a bug\n> of the patch but I think we can fix it in this patch.\n\nYes, currently (in master) wi_cost_delay does not get updated anywhere.\nIn my patch, the global variable we are using for delay is updated but\nit is not done in autovac_balance_cost().\n\n> > Also not sure how the patch interacts with failsafe autovac and parallel\n> > vacuum.\n>\n> Good point.\n>\n> When entering the failsafe mode, we disable the vacuum delays (see\n> lazy_check_wraparound_failsafe()). We need to keep disabling the\n> vacuum delays even after reloading the config file. One idea is to\n> have another global variable indicating we're in the failsafe mode.\n> vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> true.\n\nI think we might not need to do this. Other than in\nlazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\ntwo places:\n\n1) in vacuum() which autovacuum will call per table. And failsafe is\nreset per table as well.\n\n2) in vacuum_delay_point(), but, since VacuumCostActive will already be\nfalse when we enter vacuum_delay_point() the next time after\nlazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n\nThanks again for the detailed feedback!\n\n- Melanie",
"msg_date": "Fri, 10 Mar 2023 18:11:23 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 6:11 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Quotes below are combined from two of Sawada-san's emails.\n>\n> I've also attached a patch with my suggested current version.\n>\n> On Thu, Mar 9, 2023 at 10:27 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Mar 10, 2023 at 11:23 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 7, 2023 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> > > > > In this version I've removed wi_cost_delay from WorkerInfoData. There is\n> > > > > no synchronization of cost_delay amongst workers, so there is no reason\n> > > > > to keep it in shared memory.\n> > > > >\n> > > > > One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> > > > > that we have to have a way to keep track of whether or not autovacuum\n> > > > > table options are in use.\n> > > > >\n> > > > > This patch does this in a cringeworthy way. I added two global\n> > > > > variables, one to track whether or not cost delay table options are in\n> > > > > use and the other to store the value of the table option cost delay. I\n> > > > > didn't want to use a single variable with a special value to indicate\n> > > > > that table option cost delay is in use because\n> > > > > autovacuum_vacuum_cost_delay already has special values that mean\n> > > > > certain things. My code needs a better solution.\n> > > >\n> > > > While it's true that wi_cost_delay doesn't need to be shared, it seems\n> > > > to make the logic somewhat complex. We need to handle cost_delay in a\n> > > > different way from other vacuum-related parameters and we need to make\n> > > > sure av[_use]_table_option_cost_delay are set properly. 
Removing\n> > > > wi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\n> > > > autovacuum worker but it might be worth considering to keep\n> > > > wi_cost_delay for simplicity.\n> > >\n> > > Ah, it turns out we can't really remove wi_cost_delay from WorkerInfo\n> > > anyway because the launcher doesn't know anything about table options\n> > > and so the workers have to keep an updated wi_cost_delay that the\n> > > launcher or other autovac workers who are not vacuuming that table can\n> > > read from when calculating the new limit in autovac_balance_cost().\n> >\n> > IIUC if any of the cost delay parameters has been set individually,\n> > the autovacuum worker is excluded from the balance algorithm.\n>\n> Ah, yes! That's right. So it is not a problem. Then I still think\n> removing wi_cost_delay from the worker info makes sense. wi_cost_delay\n> is a double and can't easily be accessed atomically the way\n> wi_cost_limit can be.\n>\n> Keeping the cost delay local to the backends also makes it clear that\n> cost delay is not something that should be written to by other backends\n> or that can differ from worker to worker. Without table options in the\n> picture, the cost delay should be the same for any worker who has\n> reloaded the config file.\n>\n> As for the cost limit safe access issue, maybe we can avoid a LWLock\n> acquisition for reading wi_cost_limit by using an atomic similar to what\n> you suggested here for \"did_rebalance\".\n>\n> > > I've added in a shared lock for reading from wi_cost_limit in this\n> > > patch. 
However, AutoVacuumUpdateLimit() is called unconditionally in\n> > > vacuum_delay_point(), which is called quite often (per block-ish), so I\n> > > was trying to think if there is a way we could avoid having to check\n> > > this shared memory variable on every call to vacuum_delay_point().\n> > > Rebalances shouldn't happen very often (done by the launcher when a new\n> > > worker is launched and by workers between vacuuming tables). Maybe we\n> > > can read from it less frequently?\n> >\n> > Yeah, acquiring the lwlock for every call to vacuum_delay_point()\n> > seems to be harmful. One idea would be to have one sig_atomic_t\n> > variable in WorkerInfoData and autovac_balance_cost() set it to true\n> > after rebalancing the worker's cost-limit. The worker can check it\n> > without locking and update its delay parameters if the flag is true.\n>\n> Instead of having the atomic indicate whether or not someone (launcher\n> or another worker) did a rebalance, it would simply store the current\n> cost limit. Then the worker can normally access it with a simple read.\n>\n> My rationale is that if we used an atomic to indicate whether or not we\n> did a rebalance (\"did_rebalance\"), we would have the same cache\n> coherency guarantees as if we just used the atomic for the cost limit.\n> If we read from the \"did_rebalance\" variable and missed someone having\n> written to it on another core, we still wouldn't get around to checking\n> the wi_cost_limit variable in shared memory, so it doesn't matter that\n> we bothered to keep it in shared memory and use a lock to access it.\n>\n> I noticed we don't allow wi_cost_limit to ever be less than 0, so we\n> could store wi_cost_limit in an atomic uint32.\n>\n> I'm not sure if it is okay to do pg_atomic_read_u32() and\n> pg_atomic_unlocked_write_u32() or if we need pg_atomic_write_u32() in\n> most cases.\n>\n> I've implemented the atomic cost limit in the attached patch. 
Though,\n> I'm pretty unsure about how I initialized the atomics in\n> AutoVacuumShmemInit()...\n>\n> If the consensus is that it is simply too confusing to take\n> wi_cost_delay out of WorkerInfo, we might be able to afford using a\n> shared lock to access it because we won't call AutoVacuumUpdateDelay()\n> on every invocation of vacuum_delay_point() -- only when we've reloaded\n> the config file.\n\nOne such implementation is attached.\n\n- Melanie",
"msg_date": "Fri, 10 Mar 2023 19:34:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sat, Mar 11, 2023 at 8:11 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Quotes below are combined from two of Sawada-san's emails.\n>\n> I've also attached a patch with my suggested current version.\n>\n> On Thu, Mar 9, 2023 at 10:27 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Mar 10, 2023 at 11:23 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 7, 2023 at 12:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Mar 6, 2023 at 5:26 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Mar 2, 2023 at 6:37 PM Melanie Plageman\n> > > > > In this version I've removed wi_cost_delay from WorkerInfoData. There is\n> > > > > no synchronization of cost_delay amongst workers, so there is no reason\n> > > > > to keep it in shared memory.\n> > > > >\n> > > > > One consequence of not updating VacuumCostDelay from wi_cost_delay is\n> > > > > that we have to have a way to keep track of whether or not autovacuum\n> > > > > table options are in use.\n> > > > >\n> > > > > This patch does this in a cringeworthy way. I added two global\n> > > > > variables, one to track whether or not cost delay table options are in\n> > > > > use and the other to store the value of the table option cost delay. I\n> > > > > didn't want to use a single variable with a special value to indicate\n> > > > > that table option cost delay is in use because\n> > > > > autovacuum_vacuum_cost_delay already has special values that mean\n> > > > > certain things. My code needs a better solution.\n> > > >\n> > > > While it's true that wi_cost_delay doesn't need to be shared, it seems\n> > > > to make the logic somewhat complex. We need to handle cost_delay in a\n> > > > different way from other vacuum-related parameters and we need to make\n> > > > sure av[_use]_table_option_cost_delay are set properly. 
Removing\n> > > > wi_cost_delay from WorkerInfoData saves 8 bytes shared memory per\n> > > > autovacuum worker but it might be worth considering to keep\n> > > > wi_cost_delay for simplicity.\n> > >\n> > > Ah, it turns out we can't really remove wi_cost_delay from WorkerInfo\n> > > anyway because the launcher doesn't know anything about table options\n> > > and so the workers have to keep an updated wi_cost_delay that the\n> > > launcher or other autovac workers who are not vacuuming that table can\n> > > read from when calculating the new limit in autovac_balance_cost().\n> >\n> > IIUC if any of the cost delay parameters has been set individually,\n> > the autovacuum worker is excluded from the balance algorithm.\n>\n> Ah, yes! That's right. So it is not a problem. Then I still think\n> removing wi_cost_delay from the worker info makes sense. wi_cost_delay\n> is a double and can't easily be accessed atomically the way\n> wi_cost_limit can be.\n>\n> Keeping the cost delay local to the backends also makes it clear that\n> cost delay is not something that should be written to by other backends\n> or that can differ from worker to worker. Without table options in the\n> picture, the cost delay should be the same for any worker who has\n> reloaded the config file.\n\nAgreed.\n\n>\n> As for the cost limit safe access issue, maybe we can avoid a LWLock\n> acquisition for reading wi_cost_limit by using an atomic similar to what\n> you suggested here for \"did_rebalance\".\n>\n> > > I've added in a shared lock for reading from wi_cost_limit in this\n> > > patch. 
However, AutoVacuumUpdateLimit() is called unconditionally in\n> > > vacuum_delay_point(), which is called quite often (per block-ish), so I\n> > > was trying to think if there is a way we could avoid having to check\n> > > this shared memory variable on every call to vacuum_delay_point().\n> > > Rebalances shouldn't happen very often (done by the launcher when a new\n> > > worker is launched and by workers between vacuuming tables). Maybe we\n> > > can read from it less frequently?\n> >\n> > Yeah, acquiring the lwlock for every call to vacuum_delay_point()\n> > seems to be harmful. One idea would be to have one sig_atomic_t\n> > variable in WorkerInfoData and autovac_balance_cost() set it to true\n> > after rebalancing the worker's cost-limit. The worker can check it\n> > without locking and update its delay parameters if the flag is true.\n>\n> Instead of having the atomic indicate whether or not someone (launcher\n> or another worker) did a rebalance, it would simply store the current\n> cost limit. Then the worker can normally access it with a simple read.\n>\n> My rationale is that if we used an atomic to indicate whether or not we\n> did a rebalance (\"did_rebalance\"), we would have the same cache\n> coherency guarantees as if we just used the atomic for the cost limit.\n> If we read from the \"did_rebalance\" variable and missed someone having\n> written to it on another core, we still wouldn't get around to checking\n> the wi_cost_limit variable in shared memory, so it doesn't matter that\n> we bothered to keep it in shared memory and use a lock to access it.\n>\n> I noticed we don't allow wi_cost_limit to ever be less than 0, so we\n> could store wi_cost_limit in an atomic uint32.\n>\n> I'm not sure if it is okay to do pg_atomic_read_u32() and\n> pg_atomic_unlocked_write_u32() or if we need pg_atomic_write_u32() in\n> most cases.\n\nI agree to use pg_atomic_uint32.
Given that the comment of\npg_atomic_unlocked_write_u32() says:\n\n * pg_atomic_compare_exchange_u32. This should only be used in cases where\n * minor performance regressions due to atomics emulation are unacceptable.\n\nI think pg_atomic_write_u32() is enough for our use case.\n\n>\n> I've implemented the atomic cost limit in the attached patch. Though,\n> I'm pretty unsure about how I initialized the atomics in\n> AutoVacuumShmemInit()...\n\n+\n /* initialize the WorkerInfo free list */\n for (i = 0; i < autovacuum_max_workers; i++)\n dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n &worker[i].wi_links);\n+\n+ dlist_foreach(iter, &AutoVacuumShmem->av_freeWorkers)\n+ pg_atomic_init_u32(\n+\n&(dlist_container(WorkerInfoData, wi_links, iter.cur))->wi_cost_limit,\n+ 0);\n+\n\nI think we can do like:\n\n /* initialize the WorkerInfo free list */\n for (i = 0; i < autovacuum_max_workers; i++)\n {\n dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n &worker[i].wi_links);\n pg_atomic_init_u32(&(worker[i].wi_cost_limit));\n }\n\n>\n> If the consensus is that it is simply too confusing to take\n> wi_cost_delay out of WorkerInfo, we might be able to afford using a\n> shared lock to access it because we won't call AutoVacuumUpdateDelay()\n> on every invocation of vacuum_delay_point() -- only when we've reloaded\n> the config file.\n>\n> One potential option to avoid taking a shared lock on every call to\n> AutoVacuumUpdateDelay() is to set a global variable to indicate that we\n> did update it (since we are the only ones updating it) and then only\n> take the shared LWLock in AutoVacuumUpdateDelay() if that flag is true.\n>\n\nIf we remove wi_cost_delay from WorkerInfo, probably we don't need to\nacquire the lwlock in AutoVacuumUpdateDelay()? 
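To make the lock-free idea concrete, here is a minimal standalone C11 sketch of the publish/read pattern under discussion, using <stdatomic.h> as a stand-in for the pg_atomic_* API (the struct and function names below are invented for illustration, not taken from the patch):

```c
#include <stdatomic.h>

/*
 * Stand-in for WorkerInfoData: the cost limit is published atomically,
 * so readers need no LWLock; the cost delay stays backend-local.
 */
typedef struct WorkerInfoSketch
{
    atomic_uint wi_cost_limit;  /* shared, read lock-free by the worker */
    double      cost_delay;     /* local to the owning backend */
} WorkerInfoSketch;

/* Balancer side: publish a new limit with a plain atomic store. */
static void
balance_publish(WorkerInfoSketch *w, unsigned int limit)
{
    atomic_store(&w->wi_cost_limit, limit);
}

/* Worker side: lock-free read on the hot vacuum_delay_point() path. */
static unsigned int
worker_read_limit(WorkerInfoSketch *w)
{
    return atomic_load(&w->wi_cost_limit);
}
```

Under this shape a momentarily stale read is harmless: the worker just applies the previous limit until its next read, which is the same guarantee a "did_rebalance" flag would have given.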
The shared field we\naccess in that function will be only wi_dobalance, but this field is\nupdated only by its owner autovacuum worker.\n\n\n> > ---\n> > void\n> > AutoVacuumUpdateDelay(void)\n> > {\n> > - if (MyWorkerInfo)\n> > + /*\n> > + * We are using autovacuum-related GUCs to update\n> > VacuumCostDelay, so we\n> > + * only want autovacuum workers and autovacuum launcher to do this.\n> > + */\n> > + if (!(am_autovacuum_worker || am_autovacuum_launcher))\n> > + return;\n> >\n> > Is there any case where the autovacuum launcher calls\n> > AutoVacuumUpdateDelay() function?\n>\n> I had meant to add it to HandleAutoVacLauncherInterrupts() after\n> reloading the config file (done in attached patch). When using the\n> global variables for cost delay (instead of wi_cost_delay in worker\n> info), the autovac launcher also has to do the check in the else branch\n> of AutoVacuumUpdateDelay()\n>\n> VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n> autovacuum_vac_cost_delay : VacuumCostDelay;\n>\n> to make sure VacuumCostDelay is correct for when it calls\n> autovac_balance_cost().\n\nBut doesn't the launcher do a similar thing at the beginning of\nautovac_balance_cost()?\n\n double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n autovacuum_vac_cost_delay : VacuumCostDelay);\n\nRelated to this point, I think autovac_balance_cost() should use\nglobally-set cost_limit and cost_delay values to calculate worker's\nvacuum-delay parameters. 
IOW, vac_cost_limit and vac_cost_delay should\ncome from the config file setting, not table option etc:\n\n int vac_cost_limit = (autovacuum_vac_cost_limit > 0 ?\n autovacuum_vac_cost_limit : VacuumCostLimit);\n double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n autovacuum_vac_cost_delay : VacuumCostDelay);\n\nIf my understanding is right, the following change is not right;\nAutoVacUpdateLimit() updates the VacuumCostLimit based on the value in\nMyWorkerInfo:\n\n MyWorkerInfo->wi_cost_limit_base = tab->at_vacuum_cost_limit;\n+ AutoVacuumUpdateLimit();\n\n /* do a balance */\n autovac_balance_cost();\n\n- /* set the active cost parameters from the result of that */\n- AutoVacuumUpdateDelay();\n\nAlso, even when using the global variables for cost delay, the\nlauncher doesn't need to check the global variable. It should always\nbe able to use either autovacuum_vac_cost_delay/limit or\nVacuumCostDelay/Limit.\n\n>\n> This also made me think about whether or not we still need cost_limit_base.\n> It is used to ensure that autovac_balance_cost() never ends up setting\n> workers' wi_cost_limits above the current autovacuum_vacuum_cost_limit\n> (or VacuumCostLimit). However, the launcher and all the workers should\n> know what the value is without cost_limit_base, no?\n\nYeah, the current balancing algorithm looks to respect the cost_limit\nvalue set when starting to vacuum the table. The proportion of the\namount of I/O that a worker can consume is calculated based on the\nbase value and the new worker's cost_limit value cannot exceed the\nbase value. Given that we're trying to dynamically tune worker's cost\nparameters (delay and limit), this concept seems to need to be\nupdated.\n\n>\n> > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > vacuum.\n> >\n> > Good point.\n> >\n> > When entering the failsafe mode, we disable the vacuum delays (see\n> > lazy_check_wraparound_failsafe()). 
We need to keep disabling the\n> > vacuum delays even after reloading the config file. One idea is to\n> > have another global variable indicating we're in the failsafe mode.\n> > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > true.\n>\n> I think we might not need to do this. Other than in\n> lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> two places:\n>\n> 1) in vacuum() which autovacuum will call per table. And failsafe is\n> reset per table as well.\n>\n> 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> false when we enter vacuum_delay_point() the next time after\n> lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n\nIndeed. But does it mean that there is no code path to turn\nvacuum-delay on, even when vacuum_cost_delay is updated from 0 to\nnon-0?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Mar 2023 14:13:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 1:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Sat, Mar 11, 2023 at 8:11 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I've implemented the atomic cost limit in the attached patch. Though,\n> > I'm pretty unsure about how I initialized the atomics in\n> > AutoVacuumShmemInit()...\n>\n> +\n> /* initialize the WorkerInfo free list */\n> for (i = 0; i < autovacuum_max_workers; i++)\n> dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n> &worker[i].wi_links);\n> +\n> + dlist_foreach(iter, &AutoVacuumShmem->av_freeWorkers)\n> + pg_atomic_init_u32(\n> +\n> &(dlist_container(WorkerInfoData, wi_links, iter.cur))->wi_cost_limit,\n> + 0);\n> +\n>\n> I think we can do like:\n>\n> /* initialize the WorkerInfo free list */\n> for (i = 0; i < autovacuum_max_workers; i++)\n> {\n> dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n> &worker[i].wi_links);\n> pg_atomic_init_u32(&(worker[i].wi_cost_limit));\n> }\n\nAh, yes, I was distracted by the variable name \"worker\" (as opposed to\n\"workers\").\n\n> > If the consensus is that it is simply too confusing to take\n> > wi_cost_delay out of WorkerInfo, we might be able to afford using a\n> > shared lock to access it because we won't call AutoVacuumUpdateDelay()\n> > on every invocation of vacuum_delay_point() -- only when we've reloaded\n> > the config file.\n> >\n> > One potential option to avoid taking a shared lock on every call to\n> > AutoVacuumUpdateDelay() is to set a global variable to indicate that we\n> > did update it (since we are the only ones updating it) and then only\n> > take the shared LWLock in AutoVacuumUpdateDelay() if that flag is true.\n> >\n>\n> If we remove wi_cost_delay from WorkerInfo, probably we don't need to\n> acquire the lwlock in AutoVacuumUpdateDelay()? 
The shared field we\n> access in that function will be only wi_dobalance, but this field is\n> updated only by its owner autovacuum worker.\n\nI realized that we cannot use dobalance to decide whether or not to\nupdate wi_cost_delay because dobalance could be false because of table\noption cost limit being set (with no table option cost delay) and we\nwould still need to update VacuumCostDelay and wi_cost_delay with the\nnew value of autovacuum_vacuum_cost_delay.\n\nBut v5 skirts around this issue altogether.\n\n> > > ---\n> > > void\n> > > AutoVacuumUpdateDelay(void)\n> > > {\n> > > - if (MyWorkerInfo)\n> > > + /*\n> > > + * We are using autovacuum-related GUCs to update\n> > > VacuumCostDelay, so we\n> > > + * only want autovacuum workers and autovacuum launcher to do this.\n> > > + */\n> > > + if (!(am_autovacuum_worker || am_autovacuum_launcher))\n> > > + return;\n> > >\n> > > Is there any case where the autovacuum launcher calls\n> > > AutoVacuumUpdateDelay() function?\n> >\n> > I had meant to add it to HandleAutoVacLauncherInterrupts() after\n> > reloading the config file (done in attached patch). When using the\n> > global variables for cost delay (instead of wi_cost_delay in worker\n> > info), the autovac launcher also has to do the check in the else branch\n> > of AutoVacuumUpdateDelay()\n> >\n> > VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n> > autovacuum_vac_cost_delay : VacuumCostDelay;\n> >\n> > to make sure VacuumCostDelay is correct for when it calls\n> > autovac_balance_cost().\n>\n> But doesn't the launcher do a similar thing at the beginning of\n> autovac_balance_cost()?\n>\n> double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n> autovacuum_vac_cost_delay : VacuumCostDelay);\n\nAh, yes. You are right.\n\n> Related to this point, I think autovac_balance_cost() should use\n> globally-set cost_limit and cost_delay values to calculate worker's\n> vacuum-delay parameters. 
IOW, vac_cost_limit and vac_cost_delay should\n> come from the config file setting, not table option etc:\n>\n> int vac_cost_limit = (autovacuum_vac_cost_limit > 0 ?\n> autovacuum_vac_cost_limit : VacuumCostLimit);\n> double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n> autovacuum_vac_cost_delay : VacuumCostDelay);\n>\n> If my understanding is right, the following change is not right;\n> AutoVacUpdateLimit() updates the VacuumCostLimit based on the value in\n> MyWorkerInfo:\n>\n> MyWorkerInfo->wi_cost_limit_base = tab->at_vacuum_cost_limit;\n> + AutoVacuumUpdateLimit();\n>\n> /* do a balance */\n> autovac_balance_cost();\n>\n> - /* set the active cost parameters from the result of that */\n> - AutoVacuumUpdateDelay();\n>\n> Also, even when using the global variables for cost delay, the\n> launcher doesn't need to check the global variable. It should always\n> be able to use either autovacuum_vac_cost_delay/limit or\n> VacuumCostDelay/Limit.\n\nYes, that is true. But, I actually think we can do something more\nradical, which relates to this point as well as the issue with\ncost_limit_base below.\n\n> > This also made me think about whether or not we still need cost_limit_base.\n> > It is used to ensure that autovac_balance_cost() never ends up setting\n> > workers' wi_cost_limits above the current autovacuum_vacuum_cost_limit\n> > (or VacuumCostLimit). However, the launcher and all the workers should\n> > know what the value is without cost_limit_base, no?\n>\n> Yeah, the current balancing algorithm looks to respect the cost_limit\n> value set when starting to vacuum the table. The proportion of the\n> amount of I/O that a worker can consume is calculated based on the\n> base value and the new worker's cost_limit value cannot exceed the\n> base value. 
Given that we're trying to dynamically tune worker's cost\n> parameters (delay and limit), this concept seems to need to be\n> updated.\n\nIn master, autovacuum workers reload the config file at most once per\ntable vacuumed. And that is the same time that they update their\nwi_cost_limit_base and wi_cost_delay. Thus, when autovac_balance_cost()\nis called, there is a good chance that different workers will have\ndifferent values for wi_cost_limit_base and wi_cost_delay (and we are\nonly talking about workers not vacuuming a table with table option\ncost-related gucs). So, it made sense that the balancing algorithm tried\nto use a ratio to determine what to set the cost limit of each worker\nto. It is clamped to the base value, as you say, but it also gives\nworkers a proportion of the new limit equal to what proportion their base\ncost represents of the total cost.\n\nI think all of this doesn't matter anymore now that everyone can reload\nthe config file often and dynamically change these values.\n\nThus, in the attached v5, I have removed both wi_cost_limit and wi_cost_delay\nfrom WorkerInfo. I've added a new variable to AutoVacuumShmem called\nnworkers_for_balance. Now, autovac_balance_cost() only recalculates this\nnumber and updates it if it has changed. 
Then, in\nAutoVacuumUpdateLimit() workers read from this atomic value and divide\nthe value of the cost limit gucs by that number to get their own cost limit.\n\nI keep the table option value of cost limit and cost delay in\nbackend-local memory to reference when updating the worker cost limit.\n\nOne nice thing is autovac_balance_cost() only requires an access shared\nlock now (though most callers are updating other members before calling\nit and still take an exclusive lock).\n\nWhat do you think?\n\n> > > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > > vacuum.\n> > >\n> > > Good point.\n> > >\n> > > When entering the failsafe mode, we disable the vacuum delays (see\n> > > lazy_check_wraparound_failsafe()). We need to keep disabling the\n> > > vacuum delays even after reloading the config file. One idea is to\n> > > have another global variable indicating we're in the failsafe mode.\n> > > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > > true.\n> >\n> > I think we might not need to do this. Other than in\n> > lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> > two places:\n> >\n> > 1) in vacuum() which autovacuum will call per table. And failsafe is\n> > reset per table as well.\n> >\n> > 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> > false when we enter vacuum_delay_point() the next time after\n> > lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n>\n> Indeed. But does it mean that there is no code path to turn\n> vacuum-delay on, even when vacuum_cost_delay is updated from 0 to\n> non-0?\n\nAh yes! Good point. This is true.\nI'm not sure how to cheaply allow for re-enabling delays after disabling\nthem in the middle of a table vacuum.\n\nI don't see a way around checking if we need to reload the config file\non every call to vacuum_delay_point() (currently, we are only doing this\nwhen we have to wait anyway). 
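Going back to the nworkers_for_balance change above, the per-worker limit computation could be sketched in standalone C roughly like this (the names, the exclusion rule for table-option limits, and the clamp to 1 are illustrative assumptions, not code from the patch):

```c
#include <stdatomic.h>

/* Stand-in for the proposed AutoVacuumShmem->av_nworkers_for_balance. */
static atomic_uint av_nworkers_for_balance;

/*
 * Sketch of the proposed AutoVacuumUpdateLimit(): each worker derives its
 * own limit from the GUC (or its table option, kept in local memory) and
 * the shared worker count. Workers running with a table-option cost limit
 * do not participate in balancing.
 */
static int
sketch_update_limit(int guc_cost_limit, int table_option_limit, int dobalance)
{
    int limit = (table_option_limit > 0) ? table_option_limit : guc_cost_limit;

    if (dobalance)
    {
        unsigned int n = atomic_load(&av_nworkers_for_balance);

        if (n > 1)
            limit = limit / (int) n;
        if (limit < 1)
            limit = 1;          /* never balance a worker down to nothing */
    }
    return limit;
}
```

With this shape autovac_balance_cost() only has to maintain the single shared counter, and no per-worker base value needs to live in shared memory at all.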
It seems expensive to do this check every\ntime. If we do do this, we would update VacuumCostActive when updating\nVacuumCostDelay, and we would need a global variable keeping the\nfailsafe status, as you mentioned.\n\nIt could be okay to say that you can only disable cost-based delays in\nthe middle of vacuuming a table (i.e. you cannot enable them if they are\nalready disabled until you start vacuuming the next table). Though maybe\nit is weird that you can increase the delay but not re-enable it...\n\nOn an unrelated note, I was wondering if there were any docs anywhere\nthat should be updated to go along with this.\n\nAnd, I was wondering if it was worth trying to split up the part that\nreloads the config file and all of the autovacuum stuff. The reloading\nof the config file by itself won't actually result in autovacuum workers\nhaving updated cost delays because of them overwriting it with\nwi_cost_delay, but it will allow VACUUM to have those updated values.\n\n- Melanie",
"msg_date": "Sat, 18 Mar 2023 18:47:07 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 6:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Mar 15, 2023 at 1:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Sat, Mar 11, 2023 at 8:11 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > > > vacuum.\n> > > >\n> > > > Good point.\n> > > >\n> > > > When entering the failsafe mode, we disable the vacuum delays (see\n> > > > lazy_check_wraparound_failsafe()). We need to keep disabling the\n> > > > vacuum delays even after reloading the config file. One idea is to\n> > > > have another global variable indicating we're in the failsafe mode.\n> > > > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > > > true.\n> > >\n> > > I think we might not need to do this. Other than in\n> > > lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> > > two places:\n> > >\n> > > 1) in vacuum() which autovacuum will call per table. And failsafe is\n> > > reset per table as well.\n> > >\n> > > 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> > > false when we enter vacuum_delay_point() the next time after\n> > > lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n> >\n> > Indeed. But does it mean that there is no code path to turn\n> > vacuum-delay on, even when vacuum_cost_delay is updated from 0 to\n> > non-0?\n>\n> Ah yes! Good point. This is true.\n> I'm not sure how to cheaply allow for re-enabling delays after disabling\n> them in the middle of a table vacuum.\n>\n> I don't see a way around checking if we need to reload the config file\n> on every call to vacuum_delay_point() (currently, we are only doing this\n> when we have to wait anyway). It seems expensive to do this check every\n> time. 
If we do do this, we would update VacuumCostActive when updating\n> VacuumCostDelay, and we would need a global variable keeping the\n> failsafe status, as you mentioned.\n>\n> It could be okay to say that you can only disable cost-based delays in\n> the middle of vacuuming a table (i.e. you cannot enable them if they are\n> already disabled until you start vacuuming the next table). Though maybe\n> it is weird that you can increase the delay but not re-enable it...\n\nSo, I thought about it some more, and I think it is a bit odd that you\ncan increase the delay and limit but not re-enable them if they were\ndisabled. And, perhaps it would be okay to check ConfigReloadPending at\nthe top of vacuum_delay_point() instead of only after sleeping. It is\njust one more branch. We can check if VacuumCostActive is false after\nchecking if we should reload and doing so if needed and return early.\nI've implemented that in attached v6.\n\nI added in the global we discussed for VacuumFailsafeActive. If we keep\nit, we can probably remove the one in LVRelState -- as it seems\nredundant. Let me know what you think.\n\n- Melanie",
"msg_date": "Sun, 19 Mar 2023 12:48:38 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 7:47 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 1:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Sat, Mar 11, 2023 at 8:11 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > I've implemented the atomic cost limit in the attached patch. Though,\n> > > I'm pretty unsure about how I initialized the atomics in\n> > > AutoVacuumShmemInit()...\n> >\n> > +\n> > /* initialize the WorkerInfo free list */\n> > for (i = 0; i < autovacuum_max_workers; i++)\n> > dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n> > &worker[i].wi_links);\n> > +\n> > + dlist_foreach(iter, &AutoVacuumShmem->av_freeWorkers)\n> > + pg_atomic_init_u32(\n> > +\n> > &(dlist_container(WorkerInfoData, wi_links, iter.cur))->wi_cost_limit,\n> > + 0);\n> > +\n> >\n> > I think we can do like:\n> >\n> > /* initialize the WorkerInfo free list */\n> > for (i = 0; i < autovacuum_max_workers; i++)\n> > {\n> > dlist_push_head(&AutoVacuumShmem->av_freeWorkers,\n> > &worker[i].wi_links);\n> > pg_atomic_init_u32(&(worker[i].wi_cost_limit));\n> > }\n>\n> Ah, yes, I was distracted by the variable name \"worker\" (as opposed to\n> \"workers\").\n>\n> > > If the consensus is that it is simply too confusing to take\n> > > wi_cost_delay out of WorkerInfo, we might be able to afford using a\n> > > shared lock to access it because we won't call AutoVacuumUpdateDelay()\n> > > on every invocation of vacuum_delay_point() -- only when we've reloaded\n> > > the config file.\n> > >\n> > > One potential option to avoid taking a shared lock on every call to\n> > > AutoVacuumUpdateDelay() is to set a global variable to indicate that we\n> > > did update it (since we are the only ones updating it) and then only\n> > > take the shared LWLock in AutoVacuumUpdateDelay() if that flag is true.\n> > >\n> >\n> > If we remove wi_cost_delay from WorkerInfo, probably we don't need to\n> > acquire the lwlock in 
AutoVacuumUpdateDelay()? The shared field we\n> > access in that function will be only wi_dobalance, but this field is\n> > updated only by its owner autovacuum worker.\n>\n> I realized that we cannot use dobalance to decide whether or not to\n> update wi_cost_delay because dobalance could be false because of table\n> option cost limit being set (with no table option cost delay) and we\n> would still need to update VacuumCostDelay and wi_cost_delay with the\n> new value of autovacuum_vacuum_cost_delay.\n>\n> But v5 skirts around this issue altogether.\n>\n> > > > ---\n> > > > void\n> > > > AutoVacuumUpdateDelay(void)\n> > > > {\n> > > > - if (MyWorkerInfo)\n> > > > + /*\n> > > > + * We are using autovacuum-related GUCs to update\n> > > > VacuumCostDelay, so we\n> > > > + * only want autovacuum workers and autovacuum launcher to do this.\n> > > > + */\n> > > > + if (!(am_autovacuum_worker || am_autovacuum_launcher))\n> > > > + return;\n> > > >\n> > > > Is there any case where the autovacuum launcher calls\n> > > > AutoVacuumUpdateDelay() function?\n> > >\n> > > I had meant to add it to HandleAutoVacLauncherInterrupts() after\n> > > reloading the config file (done in attached patch). When using the\n> > > global variables for cost delay (instead of wi_cost_delay in worker\n> > > info), the autovac launcher also has to do the check in the else branch\n> > > of AutoVacuumUpdateDelay()\n> > >\n> > > VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n> > > autovacuum_vac_cost_delay : VacuumCostDelay;\n> > >\n> > > to make sure VacuumCostDelay is correct for when it calls\n> > > autovac_balance_cost().\n> >\n> > But doesn't the launcher do a similar thing at the beginning of\n> > autovac_balance_cost()?\n> >\n> > double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n> > autovacuum_vac_cost_delay : VacuumCostDelay);\n>\n> Ah, yes. 
You are right.\n>\n> > Related to this point, I think autovac_balance_cost() should use\n> > globally-set cost_limit and cost_delay values to calculate worker's\n> > vacuum-delay parameters. IOW, vac_cost_limit and vac_cost_delay should\n> > come from the config file setting, not table option etc:\n> >\n> > int vac_cost_limit = (autovacuum_vac_cost_limit > 0 ?\n> > autovacuum_vac_cost_limit : VacuumCostLimit);\n> > double vac_cost_delay = (autovacuum_vac_cost_delay >= 0 ?\n> > autovacuum_vac_cost_delay : VacuumCostDelay);\n> >\n> > If my understanding is right, the following change is not right;\n> > AutoVacUpdateLimit() updates the VacuumCostLimit based on the value in\n> > MyWorkerInfo:\n> >\n> > MyWorkerInfo->wi_cost_limit_base = tab->at_vacuum_cost_limit;\n> > + AutoVacuumUpdateLimit();\n> >\n> > /* do a balance */\n> > autovac_balance_cost();\n> >\n> > - /* set the active cost parameters from the result of that */\n> > - AutoVacuumUpdateDelay();\n> >\n> > Also, even when using the global variables for cost delay, the\n> > launcher doesn't need to check the global variable. It should always\n> > be able to use either autovacuum_vac_cost_delay/limit or\n> > VacuumCostDelay/Limit.\n>\n> Yes, that is true. But, I actually think we can do something more\n> radical, which relates to this point as well as the issue with\n> cost_limit_base below.\n>\n> > > This also made me think about whether or not we still need cost_limit_base.\n> > > It is used to ensure that autovac_balance_cost() never ends up setting\n> > > workers' wi_cost_limits above the current autovacuum_vacuum_cost_limit\n> > > (or VacuumCostLimit). However, the launcher and all the workers should\n> > > know what the value is without cost_limit_base, no?\n> >\n> > Yeah, the current balancing algorithm looks to respect the cost_limit\n> > value set when starting to vacuum the table. 
The proportion of the\n> > amount of I/O that a worker can consume is calculated based on the\n> > base value and the new worker's cost_limit value cannot exceed the\n> > base value. Given that we're trying to dynamically tune worker's cost\n> > parameters (delay and limit), this concept seems to need to be\n> > updated.\n>\n> In master, autovacuum workers reload the config file at most once per\n> table vacuumed. And that is the same time that they update their\n> wi_cost_limit_base and wi_cost_delay. Thus, when autovac_balance_cost()\n> is called, there is a good chance that different workers will have\n> different values for wi_cost_limit_base and wi_cost_delay (and we are\n> only talking about workers not vacuuming a table with table option\n> cost-related gucs). So, it made sense that the balancing algorithm tried\n> to use a ratio to determine what to set the cost limit of each worker\n> to. It is clamped to the base value, as you say, but it also gives\n> workers a proportion of the new limit equal to what proportion their base\n> cost represents of the total cost.\n>\n> I think all of this doesn't matter anymore now that everyone can reload\n> the config file often and dynamically change these values.\n>\n> Thus, in the attached v5, I have removed both wi_cost_limit and wi_cost_delay\n> from WorkerInfo. I've added a new variable to AutoVacuumShmem called\n> nworkers_for_balance. Now, autovac_balance_cost() only recalculates this\n> number and updates it if it has changed. 
Then, in\n> AutoVacuumUpdateLimit() workers read from this atomic value and divide\n> the value of the cost limit gucs by that number to get their own cost limit.\n>\n> I keep the table option value of cost limit and cost delay in\n> backend-local memory to reference when updating the worker cost limit.\n>\n> One nice thing is autovac_balance_cost() only requires an access shared\n> lock now (though most callers are updating other members before calling\n> it and still take an exclusive lock).\n>\n> What do you think?\n\nI think this is a good idea.\n\nDo we need to calculate the number of workers running with\nnworkers_for_balance by iterating over the running worker list? I\nguess autovacuum workers can increment/decrement it at the beginning\nand end of vacuum.\n\n>\n> > > > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > > > vacuum.\n> > > >\n> > > > Good point.\n> > > >\n> > > > When entering the failsafe mode, we disable the vacuum delays (see\n> > > > lazy_check_wraparound_failsafe()). We need to keep disabling the\n> > > > vacuum delays even after reloading the config file. One idea is to\n> > > > have another global variable indicating we're in the failsafe mode.\n> > > > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > > > true.\n> > >\n> > > I think we might not need to do this. Other than in\n> > > lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> > > two places:\n> > >\n> > > 1) in vacuum() which autovacuum will call per table. And failsafe is\n> > > reset per table as well.\n> > >\n> > > 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> > > false when we enter vacuum_delay_point() the next time after\n> > > lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n> >\n> > Indeed. But does it mean that there is no code path to turn\n> > vacuum-delay on, even when vacuum_cost_delay is updated from 0 to\n> > non-0?\n>\n> Ah yes! 
Good point. This is true.\n> I'm not sure how to cheaply allow for re-enabling delays after disabling\n> them in the middle of a table vacuum.\n>\n> I don't see a way around checking if we need to reload the config file\n> on every call to vacuum_delay_point() (currently, we are only doing this\n> when we have to wait anyway). It seems expensive to do this check every\n> time. If we do do this, we would update VacuumCostActive when updating\n> VacuumCostDelay, and we would need a global variable keeping the\n> failsafe status, as you mentioned.\n>\n> It could be okay to say that you can only disable cost-based delays in\n> the middle of vacuuming a table (i.e. you cannot enable them if they are\n> already disabled until you start vacuuming the next table). Though maybe\n> it is weird that you can increase the delay but not re-enable it...\n\nOn Mon, Mar 20, 2023 at 1:48 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> So, I thought about it some more, and I think it is a bit odd that you\n> can increase the delay and limit but not re-enable them if they were\n> disabled. And, perhaps it would be okay to check ConfigReloadPending at\n> the top of vacuum_delay_point() instead of only after sleeping. It is\n> just one more branch. We can check if VacuumCostActive is false after\n> checking if we should reload and doing so if needed and return early.\n> I've implemented that in attached v6.\n>\n> I added in the global we discussed for VacuumFailsafeActive. If we keep\n> it, we can probably remove the one in LVRelState -- as it seems\n> redundant. Let me know what you think.\n\nI think the following change is related:\n\n- if (!VacuumCostActive || InterruptPending)\n+ if (InterruptPending || VacuumFailsafeActive ||\n+ (!VacuumCostActive && !ConfigReloadPending))\n return;\n\n+ /*\n+ * Reload the configuration file if requested. 
This allows changes to\n+ * [autovacuum_]vacuum_cost_limit and [autovacuum_]vacuum_cost_delay to\n+ * take effect while a table is being vacuumed or analyzed.\n+ */\n+ if (ConfigReloadPending && !analyze_in_outer_xact)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ AutoVacuumUpdateDelay();\n+ AutoVacuumUpdateLimit();\n+ }\n\nIt makes sense to me that we need to reload the config file even when\nvacuum-delay is disabled. But I think it's not convenient for users\nthat we don't reload the configuration file once the failsafe is\ntriggered. I think users might want to change some GUCs such as\nlog_autovacuum_min_duration.\n\n>\n> On an unrelated note, I was wondering if there were any docs anywhere\n> that should be updated to go along with this.\n\nThe current patch improves the internal mechanism of (re)balancing\nvacuum-cost but doesn't change user-visible behavior. I don't have any\nidea so far that we should update somewhere in the doc.\n\n>\n> And, I was wondering if it was worth trying to split up the part that\n> reloads the config file and all of the autovacuum stuff. The reloading\n> of the config file by itself won't actually result in autovacuum workers\n> having updated cost delays because of them overwriting it with\n> wi_cost_delay, but it will allow VACUUM to have those updated values.\n\nIt makes sense to me to have changes for overhauling the rebalance\nmechanism in a separate patch.\n\nLooking back at the original concern you mentioned[1]:\n\nspeed up long-running vacuum of a large table by\ndecreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\nconfig file is only reloaded between tables (for autovacuum) or after\nthe statement (for explicit vacuum).\n\ndoes it make sense to have autovac_balance_cost() update workers'\nwi_cost_delay too? Autovacuum launcher already reloads the config file\nand does the rebalance. 
So I thought autovac_balance_cost() can update\nthe cost_delay as well, and this might be a minimal change to deal\nwith your concern. This doesn't have the effect for manual VACUUM but\nsince vacuum delay is disabled by default it won't be a big problem.\nAs for manual VACUUMs, we would need to reload the config file in\nvacuum_delay_point() as the part of your patch does. Overhauling the\nrebalance mechanism would be another patch to improve it further.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAAKRu_ZngzqnEODc7LmS1NH04Kt6Y9huSjz5pp7%2BDXhrjDA0gw%40mail.gmail.com\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 15:08:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
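The control flow debated above — reloading the config file at the top of `vacuum_delay_point()` so that a reload can re-enable delays mid-table — can be modeled as a standalone sketch. All globals and `process_config_file()` here are stand-ins for the real PostgreSQL symbols, and the function only counts reloads rather than re-reading GUCs:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the globals discussed in the thread -- not the real symbols. */
static bool ConfigReloadPending = false;
static bool VacuumCostActive = false;
static bool InterruptPending = false;
static int  reload_count = 0;

static void
process_config_file(void)
{
    /* In the real patch this re-reads the GUCs and may flip
     * VacuumCostActive; here we only count invocations. */
    reload_count++;
}

/*
 * Sketch of the proposed early exit: bail out only when cost accounting
 * is off AND no reload is pending, so that a pending reload can turn
 * delays back on in the middle of vacuuming a table.  Returns whether
 * the caller should go on to do cost accounting.
 */
static bool
vacuum_delay_point_sketch(bool analyze_in_outer_xact)
{
    if (InterruptPending ||
        (!VacuumCostActive && !ConfigReloadPending))
        return false;

    if (ConfigReloadPending && !analyze_in_outer_xact)
    {
        ConfigReloadPending = false;
        process_config_file();
    }
    return VacuumCostActive;
}
```

Note how the `(!VacuumCostActive && !ConfigReloadPending)` condition, unlike the original `!VacuumCostActive` check, lets a reload through even when delays are currently disabled — which is exactly the "re-enable after disabling" gap discussed above.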
{
"msg_contents": "> On 23 Mar 2023, at 07:08, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Sun, Mar 19, 2023 at 7:47 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n\n> It makes sense to me that we need to reload the config file even when\n> vacuum-delay is disabled. But I think it's not convenient for users\n> that we don't reload the configuration file once the failsafe is\n> triggered. I think users might want to change some GUCs such as\n> log_autovacuum_min_duration.\n\nI agree with this.\n\n>> On an unrelated note, I was wondering if there were any docs anywhere\n>> that should be updated to go along with this.\n> \n> The current patch improves the internal mechanism of (re)balancing\n> vacuum-cost but doesn't change user-visible behavior. I don't have any\n> idea so far that we should update somewhere in the doc.\n\nI had a look as well and can't really spot anywhere where the current behavior\nis detailed, so there is little to update. On top of that, I also don't think\nit's worth adding this to the docs.\n\n>> And, I was wondering if it was worth trying to split up the part that\n>> reloads the config file and all of the autovacuum stuff. 
The reloading\n>> of the config file by itself won't actually result in autovacuum workers\n>> having updated cost delays because of them overwriting it with\n>> wi_cost_delay, but it will allow VACUUM to have those updated values.\n> \n> It makes sense to me to have changes for overhauling the rebalance\n> mechanism in a separate patch.\n\nIt would for sure be worth considering, \n\n+bool VacuumFailsafeActive = false;\nThis needs documentation, how it's used and how it relates to failsafe_active\nin LVRelState (which it might replace(?), but until then).\n\n+ pg_atomic_uint32 nworkers_for_balance;\nThis needs a short oneline documentation update to the struct comment.\n\n\n- double wi_cost_delay;\n- int wi_cost_limit;\n- int wi_cost_limit_base;\nThis change makes the below comment in do_autovacuum in need of an update:\n /*\n * Remove my info from shared memory. We could, but intentionally.\n * don't, clear wi_cost_limit and friends --- this is on the\n * assumption that we probably have more to do with similar cost\n * settings, so we don't want to give up our share of I/O for a very\n * short interval and thereby thrash the global balance.\n */\n\n\n+ if (av_table_option_cost_delay >= 0)\n+ VacuumCostDelay = av_table_option_cost_delay;\n+ else\n+ VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n+ autovacuum_vac_cost_delay : VacuumCostDelay;\nWhile it's a matter of personal preference, I for one would like if we reduced\nthe number of ternary operators in the vacuum code, especially those mixed into\nif statements. 
The vacuum code is full of this already though so this isn't\nless of an objection (as it follows style) than an observation.\n\n\n+ * note: in cost_limit, zero also means use value from elsewhere, because\n+ * zero is not a valid value.\n...\n+ int vac_cost_limit = autovacuum_vac_cost_limit > 0 ?\n+ autovacuum_vac_cost_limit : VacuumCostLimit;\nNot mentioning the fact that a magic value in a GUC means it's using the value\nfrom another GUC (which is not great IMHO), it seems we are using zero as well\nas -1 as that magic value here? (not introduced in this patch.) The docs does\nAFAICT only specify -1 as that value though. Am I missing something or is the\ncode and documentation slightly out of sync?\n\nI need another few readthroughs to figure out of VacuumFailsafeActive does what\nI think it does, and should be doing, but in general I think this is a good\nidea and a patch in good condition close to being committable.\n\n--\nDaniel Gustafsson\n\n",
"msg_date": "Thu, 23 Mar 2023 17:24:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
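The -1/0 fallback semantics Daniel asks about can be captured in two tiny helpers. These are hypothetical names sketching the behavior the thread describes: for the cost *limit*, both 0 and -1 mean "use the plain VACUUM GUC" (zero is not a valid limit), while for the cost *delay* only -1 falls back — 0 is a legitimate setting that disables the delay outright:

```c
#include <assert.h>

/* Hypothetical helper: 0 and -1 both fall back, because zero is not a
 * valid cost limit. */
static int
effective_cost_limit(int autovacuum_limit, int vacuum_limit)
{
    return (autovacuum_limit > 0) ? autovacuum_limit : vacuum_limit;
}

/* Hypothetical helper: only -1 falls back; 0 means "delay disabled". */
static double
effective_cost_delay(double autovacuum_delay, double vacuum_delay)
{
    return (autovacuum_delay >= 0) ? autovacuum_delay : vacuum_delay;
}
```

The asymmetry between `> 0` and `>= 0` is the whole story behind the code/docs mismatch noted above: the documentation only advertises -1 as the limit's magic value, but the code treats 0 the same way.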
{
"msg_contents": "On Thu, Mar 23, 2023 at 2:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Sun, Mar 19, 2023 at 7:47 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> Do we need to calculate the number of workers running with\n> nworkers_for_balance by iterating over the running worker list? I\n> guess autovacuum workers can increment/decrement it at the beginning\n> and end of vacuum.\n\nI don't think we can do that because if a worker crashes, we have no way\nof knowing if it had incremented or decremented the number, so we can't\nadjust for it.\n\n> > > > > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > > > > vacuum.\n> > > > >\n> > > > > Good point.\n> > > > >\n> > > > > When entering the failsafe mode, we disable the vacuum delays (see\n> > > > > lazy_check_wraparound_failsafe()). We need to keep disabling the\n> > > > > vacuum delays even after reloading the config file. One idea is to\n> > > > > have another global variable indicating we're in the failsafe mode.\n> > > > > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > > > > true.\n> > > >\n> > > > I think we might not need to do this. Other than in\n> > > > lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> > > > two places:\n> > > >\n> > > > 1) in vacuum() which autovacuum will call per table. And failsafe is\n> > > > reset per table as well.\n> > > >\n> > > > 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> > > > false when we enter vacuum_delay_point() the next time after\n> > > > lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n> > >\n> > > Indeed. But does it mean that there is no code path to turn\n> > > vacuum-delay on, even when vacuum_cost_delay is updated from 0 to\n> > > non-0?\n> >\n> > Ah yes! Good point. 
This is true.\n> > I'm not sure how to cheaply allow for re-enabling delays after disabling\n> > them in the middle of a table vacuum.\n> >\n> > I don't see a way around checking if we need to reload the config file\n> > on every call to vacuum_delay_point() (currently, we are only doing this\n> > when we have to wait anyway). It seems expensive to do this check every\n> > time. If we do do this, we would update VacuumCostActive when updating\n> > VacuumCostDelay, and we would need a global variable keeping the\n> > failsafe status, as you mentioned.\n> >\n> > It could be okay to say that you can only disable cost-based delays in\n> > the middle of vacuuming a table (i.e. you cannot enable them if they are\n> > already disabled until you start vacuuming the next table). Though maybe\n> > it is weird that you can increase the delay but not re-enable it...\n>\n> On Mon, Mar 20, 2023 at 1:48 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > So, I thought about it some more, and I think it is a bit odd that you\n> > can increase the delay and limit but not re-enable them if they were\n> > disabled. And, perhaps it would be okay to check ConfigReloadPending at\n> > the top of vacuum_delay_point() instead of only after sleeping. It is\n> > just one more branch. We can check if VacuumCostActive is false after\n> > checking if we should reload and doing so if needed and return early.\n> > I've implemented that in attached v6.\n> >\n> > I added in the global we discussed for VacuumFailsafeActive. If we keep\n> > it, we can probably remove the one in LVRelState -- as it seems\n> > redundant. Let me know what you think.\n>\n> I think the following change is related:\n>\n> - if (!VacuumCostActive || InterruptPending)\n> + if (InterruptPending || VacuumFailsafeActive ||\n> + (!VacuumCostActive && !ConfigReloadPending))\n> return;\n>\n> + /*\n> + * Reload the configuration file if requested. 
This allows changes to\n> + * [autovacuum_]vacuum_cost_limit and [autovacuum_]vacuum_cost_delay to\n> + * take effect while a table is being vacuumed or analyzed.\n> + */\n> + if (ConfigReloadPending && !analyze_in_outer_xact)\n> + {\n> + ConfigReloadPending = false;\n> + ProcessConfigFile(PGC_SIGHUP);\n> + AutoVacuumUpdateDelay();\n> + AutoVacuumUpdateLimit();\n> + }\n>\n> It makes sense to me that we need to reload the config file even when\n> vacuum-delay is disabled. But I think it's not convenient for users\n> that we don't reload the configuration file once the failsafe is\n> triggered. I think users might want to change some GUCs such as\n> log_autovacuum_min_duration.\n\nAh, okay. Attached v7 has this change (it reloads even if failsafe is\nactive).\n\n> > And, I was wondering if it was worth trying to split up the part that\n> > reloads the config file and all of the autovacuum stuff. The reloading\n> > of the config file by itself won't actually result in autovacuum workers\n> > having updated cost delays because of them overwriting it with\n> > wi_cost_delay, but it will allow VACUUM to have those updated values.\n>\n> It makes sense to me to have changes for overhauling the rebalance\n> mechanism in a separate patch.\n>\n> Looking back at the original concern you mentioned[1]:\n>\n> speed up long-running vacuum of a large table by\n> decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> config file is only reloaded between tables (for autovacuum) or after\n> the statement (for explicit vacuum).\n>\n> does it make sense to have autovac_balance_cost() update workers'\n> wi_cost_delay too? Autovacuum launcher already reloads the config file\n> and does the rebalance. So I thought autovac_balance_cost() can update\n> the cost_delay as well, and this might be a minimal change to deal\n> with your concern. 
This doesn't have the effect for manual VACUUM but\n> since vacuum delay is disabled by default it won't be a big problem.\n> As for manual VACUUMs, we would need to reload the config file in\n> vacuum_delay_point() as the part of your patch does. Overhauling the\n> rebalance mechanism would be another patch to improve it further.\n\nSo, we can't do this without acquiring an access shared lock on every\ncall to vacuum_delay_point() because cost delay is a double.\n\nI will work on a patchset with separate commits for reloading the config\nfile, though (with autovac not benefitting in the first commit).\n\nOn Thu, Mar 23, 2023 at 12:24 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> +bool VacuumFailsafeActive = false;\n> This needs documentation, how it's used and how it relates to failsafe_active\n> in LVRelState (which it might replace(?), but until then).\n\nThanks! I've removed LVRelState->failsafe_active.\n\nI've also separated the VacuumFailsafeActive change into its own commit.\nI will say that that commit message needs some work.\n\n> + pg_atomic_uint32 nworkers_for_balance;\n> This needs a short oneline documentation update to the struct comment.\n\nDone. I also prefixed with av to match the other members. I am thinking\nthat this variable name could be better. I want to convey that it is the\nnumber of workers sharing a cost limit, so I considered\nav_limit_sharers or something like that. I am looking to convey that\nit is the number of workers amongst whom we must split the cost limit.\n\n>\n> - double wi_cost_delay;\n> - int wi_cost_limit;\n> - int wi_cost_limit_base;\n> This change makes the below comment in do_autovacuum in need of an update:\n> /*\n> * Remove my info from shared memory. 
We could, but intentionally.\n> * don't, clear wi_cost_limit and friends --- this is on the\n> * assumption that we probably have more to do with similar cost\n> * settings, so we don't want to give up our share of I/O for a very\n> * short interval and thereby thrash the global balance.\n> */\n\nUpdated to mention wi_dobalance instead.\nOn the topic of wi_dobalance, should we bother making it an atomic flag\ninstead? We would avoid taking a lock a few times, though probably not\nfrequently enough to matter. I was wondering if making it atomically\naccessible would be less confusing than acquiring a lock only to set\none member in do_autovacuum() (and otherwise it is only read). I think\nif I had to make it an atomic flag, I would reverse the logic and make\nit wi_skip_balance or something like that.\n\n> + if (av_table_option_cost_delay >= 0)\n> + VacuumCostDelay = av_table_option_cost_delay;\n> + else\n> + VacuumCostDelay = autovacuum_vac_cost_delay >= 0 ?\n> + autovacuum_vac_cost_delay : VacuumCostDelay;\n> While it's a matter of personal preference, I for one would like if we reduced\n> the number of ternary operators in the vacuum code, especially those mixed into\n> if statements. The vacuum code is full of this already though so this isn't\n> less of an objection (as it follows style) than an observation.\n\nI agree. This one was better served as an \"else if\" anyway -- updated!\n\n>\n> + * note: in cost_limit, zero also means use value from elsewhere, because\n> + * zero is not a valid value.\n> ...\n> + int vac_cost_limit = autovacuum_vac_cost_limit > 0 ?\n> + autovacuum_vac_cost_limit : VacuumCostLimit;\n> Not mentioning the fact that a magic value in a GUC means it's using the value\n> from another GUC (which is not great IMHO), it seems we are using zero as well\n> as -1 as that magic value here? (not introduced in this patch.) The docs does\n> AFAICT only specify -1 as that value though. 
Am I missing something or is the\n> code and documentation slightly out of sync?\n\nI copied that comment from elsewhere, but, yes it is a weird situation.\nSo, you can set autovacuum_vacuum_cost_limit to 0, -1 or a\npositive number. You can only set vacuum_cost_limit to a positive value.\nThe documentation mentions that setting autovacuum_vacuum_cost_limit to\n-1, the default, will have it use vacuum_cost_limit. However, it says\nnothing about what setting it to 0 does. In the code, everywhere assumes\nif autovacuum_vacuum_cost_limit is 0 OR -1, use vacuum_cost_limit.\n\nThis is in contrast to autovacuum_vacuum_cost_delay, for which 0 means\nto disable it -- so setting autovacuum_vacuum_cost_delay to 0 will\nspecifically not fall back to vacuum_cost_limit.\n\nI think the problem is that 0 is not a valid cost limit (i.e. it has no\nmeaning like infinity/no limit), so we basically don't want to allow the\ncost limit to be set to 0, but GUC values have to be a range with a max\nand a min, so we can't just exclude 0 if we want to allow -1 (as far as\nI know). I think it would be nice to be able to specify multiple valid\nranges for GUCs to the GUC machinery.\n\nSo, to answer your question, yes, the code and docs are a bit\nout-of-sync.\n\n> I need another few readthroughs to figure out of VacuumFailsafeActive does what\n> I think it does, and should be doing, but in general I think this is a good\n> idea and a patch in good condition close to being committable.\n\nI will take a pass at splitting up the main commit into two. However, I\nhave attached a new version with the other specific updates discussed in\nthis thread. Feel free to provide review on this version in the meantime.\n\n- Melanie",
"msg_date": "Thu, 23 Mar 2023 20:27:17 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
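The `nworkers_for_balance` scheme described above — each worker dividing the configured limit by a shared atomic count, flooring the count at 1 since the reader is itself a worker — can be sketched with a C11 atomic standing in for `pg_atomic_uint32`. All names here are made up for illustration:

```c
#include <assert.h>
#include <stdatomic.h>

/* Stand-in for the shared AutoVacuumShmem counter. */
static atomic_uint av_nworkers_for_balance;

/*
 * Worker-side sketch: divide the configured cost limit by the shared
 * worker count.  The count is floored at 1 (the calling worker exists
 * even if the shared value is still 0), and the result is floored at 1
 * so no worker ends up with a zero I/O budget.
 */
static int
worker_cost_limit(int base_limit)
{
    unsigned int n = atomic_load(&av_nworkers_for_balance);
    int          limit;

    if (n < 1)
        n = 1;
    limit = base_limit / (int) n;
    return (limit < 1) ? 1 : limit;
}
```

Because the worker only *reads* one atomic integer, this is the part that lets `autovac_balance_cost()` get away with an access shared lock; contrast that with sharing a `double` cost delay, which — as noted above — would need a lock on every read.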
{
"msg_contents": "On Fri, Mar 24, 2023 at 9:27 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 2:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Sun, Mar 19, 2023 at 7:47 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > Do we need to calculate the number of workers running with\n> > nworkers_for_balance by iterating over the running worker list? I\n> > guess autovacuum workers can increment/decrement it at the beginning\n> > and end of vacuum.\n>\n> I don't think we can do that because if a worker crashes, we have no way\n> of knowing if it had incremented or decremented the number, so we can't\n> adjust for it.\n\nWhat kind of crash are you concerned about? If a worker raises an\nERROR, we can catch it in PG_CATCH() block. If it's a FATAL, we can do\nthat in FreeWorkerInfo(). A PANIC error ends up crashing the entire\nserver.\n\n>\n> > > > > > > Also not sure how the patch interacts with failsafe autovac and parallel\n> > > > > > > vacuum.\n> > > > > >\n> > > > > > Good point.\n> > > > > >\n> > > > > > When entering the failsafe mode, we disable the vacuum delays (see\n> > > > > > lazy_check_wraparound_failsafe()). We need to keep disabling the\n> > > > > > vacuum delays even after reloading the config file. One idea is to\n> > > > > > have another global variable indicating we're in the failsafe mode.\n> > > > > > vacuum_delay_point() doesn't update VacuumCostActive if the flag is\n> > > > > > true.\n> > > > >\n> > > > > I think we might not need to do this. Other than in\n> > > > > lazy_check_wraparound_failsafe(), VacuumCostActive is only updated in\n> > > > > two places:\n> > > > >\n> > > > > 1) in vacuum() which autovacuum will call per table. 
And failsafe is\n> > > > > reset per table as well.\n> > > > >\n> > > > > 2) in vacuum_delay_point(), but, since VacuumCostActive will already be\n> > > > > false when we enter vacuum_delay_point() the next time after\n> > > > > lazy_check_wraparound_failsafe(), we won't set VacuumCostActive there.\n> > > >\n> > > > Indeed. But does it mean that there is no code path to turn\n> > > > vacuum-delay on, even when vacuum_cost_delay is updated from 0 to\n> > > > non-0?\n> > >\n> > > Ah yes! Good point. This is true.\n> > > I'm not sure how to cheaply allow for re-enabling delays after disabling\n> > > them in the middle of a table vacuum.\n> > >\n> > > I don't see a way around checking if we need to reload the config file\n> > > on every call to vacuum_delay_point() (currently, we are only doing this\n> > > when we have to wait anyway). It seems expensive to do this check every\n> > > time. If we do do this, we would update VacuumCostActive when updating\n> > > VacuumCostDelay, and we would need a global variable keeping the\n> > > failsafe status, as you mentioned.\n> > >\n> > > It could be okay to say that you can only disable cost-based delays in\n> > > the middle of vacuuming a table (i.e. you cannot enable them if they are\n> > > already disabled until you start vacuuming the next table). Though maybe\n> > > it is weird that you can increase the delay but not re-enable it...\n> >\n> > On Mon, Mar 20, 2023 at 1:48 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > So, I thought about it some more, and I think it is a bit odd that you\n> > > can increase the delay and limit but not re-enable them if they were\n> > > disabled. And, perhaps it would be okay to check ConfigReloadPending at\n> > > the top of vacuum_delay_point() instead of only after sleeping. It is\n> > > just one more branch. 
We can check if VacuumCostActive is false after\n> > > checking if we should reload and doing so if needed and return early.\n> > > I've implemented that in attached v6.\n> > >\n> > > I added in the global we discussed for VacuumFailsafeActive. If we keep\n> > > it, we can probably remove the one in LVRelState -- as it seems\n> > > redundant. Let me know what you think.\n> >\n> > I think the following change is related:\n> >\n> > - if (!VacuumCostActive || InterruptPending)\n> > + if (InterruptPending || VacuumFailsafeActive ||\n> > + (!VacuumCostActive && !ConfigReloadPending))\n> > return;\n> >\n> > + /*\n> > + * Reload the configuration file if requested. This allows changes to\n> > + * [autovacuum_]vacuum_cost_limit and [autovacuum_]vacuum_cost_delay to\n> > + * take effect while a table is being vacuumed or analyzed.\n> > + */\n> > + if (ConfigReloadPending && !analyze_in_outer_xact)\n> > + {\n> > + ConfigReloadPending = false;\n> > + ProcessConfigFile(PGC_SIGHUP);\n> > + AutoVacuumUpdateDelay();\n> > + AutoVacuumUpdateLimit();\n> > + }\n> >\n> > It makes sense to me that we need to reload the config file even when\n> > vacuum-delay is disabled. But I think it's not convenient for users\n> > that we don't reload the configuration file once the failsafe is\n> > triggered. I think users might want to change some GUCs such as\n> > log_autovacuum_min_duration.\n>\n> Ah, okay. Attached v7 has this change (it reloads even if failsafe is\n> active).\n>\n> > > And, I was wondering if it was worth trying to split up the part that\n> > > reloads the config file and all of the autovacuum stuff. 
The reloading\n> > > of the config file by itself won't actually result in autovacuum workers\n> > > having updated cost delays because of them overwriting it with\n> > > wi_cost_delay, but it will allow VACUUM to have those updated values.\n> >\n> > It makes sense to me to have changes for overhauling the rebalance\n> > mechanism in a separate patch.\n> >\n> > Looking back at the original concern you mentioned[1]:\n> >\n> > speed up long-running vacuum of a large table by\n> > decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> > config file is only reloaded between tables (for autovacuum) or after\n> > the statement (for explicit vacuum).\n> >\n> > does it make sense to have autovac_balance_cost() update workers'\n> > wi_cost_delay too? Autovacuum launcher already reloads the config file\n> > and does the rebalance. So I thought autovac_balance_cost() can update\n> > the cost_delay as well, and this might be a minimal change to deal\n> > with your concern. This doesn't have the effect for manual VACUUM but\n> > since vacuum delay is disabled by default it won't be a big problem.\n> > As for manual VACUUMs, we would need to reload the config file in\n> > vacuum_delay_point() as the part of your patch does. Overhauling the\n> > rebalance mechanism would be another patch to improve it further.\n>\n> So, we can't do this without acquiring an access shared lock on every\n> call to vacuum_delay_point() because cost delay is a double.\n>\n> I will work on a patchset with separate commits for reloading the config\n> file, though (with autovac not benefitting in the first commit).\n>\n> On Thu, Mar 23, 2023 at 12:24 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > +bool VacuumFailsafeActive = false;\n> > This needs documentation, how it's used and how it relates to failsafe_active\n> > in LVRelState (which it might replace(?), but until then).\n>\n> Thanks! 
I've removed LVRelState->failsafe_active.\n>\n> I've also separated the VacuumFailsafeActive change into its own commit.\n\n@@ -492,6 +493,7 @@ vacuum(List *relations, VacuumParams *params,\n\n in_vacuum = true;\n VacuumCostActive = (VacuumCostDelay > 0);\n+ VacuumFailsafeActive = false;\n VacuumCostBalance = 0;\n VacuumPageHit = 0;\n VacuumPageMiss = 0;\n\nI think we need to reset VacuumFailsafeActive also in PG_FINALLY()\nblock in vacuum().\n\nOne comment on 0002 patch:\n\n+ /*\n+ * Reload the configuration file if requested. This allows changes to\n+ * [autovacuum_]vacuum_cost_limit and [autovacuum_]vacuum_cost_delay to\n+ * take effect while a table is being vacuumed or analyzed.\n+ */\n+ if (ConfigReloadPending && !analyze_in_outer_xact)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ AutoVacuumUpdateDelay();\n+ AutoVacuumUpdateLimit();\n+ }\n\nI think we need comments on why we don't reload the config file if\nwe're analyzing a table in a user transaction.\n\n>\n> > I need another few readthroughs to figure out of VacuumFailsafeActive does what\n> > I think it does, and should be doing, but in general I think this is a good\n> > idea and a patch in good condition close to being committable.\n\nAnother approach would be to make VacuumCostActive a ternary value:\non, off, and never. When we trigger the failsafe mode we switch it to\nnever, meaning that it never becomes active even after reloading the\nconfig file. A good point is that we don't need to add a new global\nvariable, but I'm not sure it's better than the current approach.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:21:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Mar 24, 2023 at 9:27 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Mar 23, 2023 at 2:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > On Sun, Mar 19, 2023 at 7:47 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > Do we need to calculate the number of workers running with\n> > > nworkers_for_balance by iterating over the running worker list? I\n> > > guess autovacuum workers can increment/decrement it at the beginning\n> > > and end of vacuum.\n> >\n> > I don't think we can do that because if a worker crashes, we have no way\n> > of knowing if it had incremented or decremented the number, so we can't\n> > adjust for it.\n>\n> What kind of crash are you concerned about? If a worker raises an\n> ERROR, we can catch it in PG_CATCH() block. If it's a FATAL, we can do\n> that in FreeWorkerInfo(). A PANIC error ends up crashing the entire\n> server.\n\nYes, but what about a worker that segfaults? Since table AMs can define\nrelation_vacuum(), this seems like a real possibility.\n\nI'll address your other code feedback in the next version.\n\nI realized nworkers_for_balance should be initialized to 0 and not 1 --\n1 is misleading since there are often 0 autovac workers. We just never\nwant to use nworkers_for_balance when it is 0. But, workers put a floor\nof 1 on the number when they divide limit/nworkers_for_balance (since\nthey know there must be at least one worker right now since they are a\nworker). 
I thought about whether or not they should call\nautovac_balance_cost() if they find that nworkers_for_balance is 0 when\nupdating their own limit, but I'm not sure.\n\n> > > I need another few readthroughs to figure out of VacuumFailsafeActive does what\n> > > I think it does, and should be doing, but in general I think this is a good\n> > > idea and a patch in good condition close to being committable.\n>\n> Another approach would be to make VacuumCostActive a ternary value:\n> on, off, and never. When we trigger the failsafe mode we switch it to\n> never, meaning that it never becomes active even after reloading the\n> config file. A good point is that we don't need to add a new global\n> variable, but I'm not sure it's better than the current approach.\n\nHmm, this is interesting. I don't love the word \"never\" since it kind of\nimplies a duration longer than the current table being vacuumed. But we\ncould find a different word or just document it well. For clarity, we\nmight want to call it failsafe_mode or something.\n\nI wonder if the primary drawback to converting\nLVRelState->failsafe_active to a global VacuumFailsafeActive is just the\ngeneral rule of limiting scope to the minimum needed.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 24 Mar 2023 13:27:45 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 8:27 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Thu, Mar 23, 2023 at 2:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > And, I was wondering if it was worth trying to split up the part that\n> > > reloads the config file and all of the autovacuum stuff. The reloading\n> > > of the config file by itself won't actually result in autovacuum workers\n> > > having updated cost delays because of them overwriting it with\n> > > wi_cost_delay, but it will allow VACUUM to have those updated values.\n> >\n> > It makes sense to me to have changes for overhauling the rebalance\n> > mechanism in a separate patch.\n> >\n> > Looking back at the original concern you mentioned[1]:\n> >\n> > speed up long-running vacuum of a large table by\n> > decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> > config file is only reloaded between tables (for autovacuum) or after\n> > the statement (for explicit vacuum).\n> >\n> > does it make sense to have autovac_balance_cost() update workers'\n> > wi_cost_delay too? Autovacuum launcher already reloads the config file\n> > and does the rebalance. So I thought autovac_balance_cost() can update\n> > the cost_delay as well, and this might be a minimal change to deal\n> > with your concern. This doesn't have the effect for manual VACUUM but\n> > since vacuum delay is disabled by default it won't be a big problem.\n> > As for manual VACUUMs, we would need to reload the config file in\n> > vacuum_delay_point() as the part of your patch does. 
Overhauling the\n> > rebalance mechanism would be another patch to improve it further.\n>\n> So, we can't do this without acquiring an access shared lock on every\n> call to vacuum_delay_point() because cost delay is a double.\n>\n> I will work on a patchset with separate commits for reloading the config\n> file, though (with autovac not benefitting in the first commit).\n\nSo, I realized we could actually do as you say and have autovac workers\nupdate their wi_cost_delay and keep the balance changes in a separate\ncommit. I've done this in attached v8.\n\nWorkers take the exclusive lock to update their wi_cost_delay and\nwi_cost_limit only when there is a config reload. So, there is one\ncommit that implements this behavior and a separate commit to revise the\nworker rebalancing.\n\nNote that we must have the workers also update wi_cost_limit_base and\nthen call autovac_balance_cost() when they reload the config file\n(instead of waiting for launcher to call autovac_balance_cost()) to\navoid potentially calculating the sleep with a new value of cost delay\nand an old value of cost limit.\n\nIn the commit which revises the worker rebalancing, I'm still wondering\nif wi_dobalance should be an atomic flag -- probably not worth it,\nright?\n\nOn Fri, Mar 24, 2023 at 1:27 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I realized nworkers_for_balance should be initialized to 0 and not 1 --\n> 1 is misleading since there are often 0 autovac workers. We just never\n> want to use nworkers_for_balance when it is 0. But, workers put a floor\n> of 1 on the number when they divide limit/nworkers_for_balance (since\n> they know there must be at least one worker right now since they are a\n> worker). I thought about whether or not they should call\n> autovac_balance_cost() if they find that nworkers_for_balance is 0 when\n> updating their own limit, but I'm not sure.\n\nI've gone ahead and updated this. 
I haven't made the workers call\nautovac_balance_cost() if they find that nworkers_for_balance is 0 when\nthey try and use it when updating their limit because I'm not sure if\nthis can happen. I would be interested in input here.\n\nI'm also still interested in feedback on the variable name\nav_nworkers_for_balance.\n\n> On Fri, Mar 24, 2023 at 1:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > I need another few readthroughs to figure out of VacuumFailsafeActive does what\n> > > > I think it does, and should be doing, but in general I think this is a good\n> > > > idea and a patch in good condition close to being committable.\n> >\n> > Another approach would be to make VacuumCostActive a ternary value:\n> > on, off, and never. When we trigger the failsafe mode we switch it to\n> > never, meaning that it never becomes active even after reloading the\n> > config file. A good point is that we don't need to add a new global\n> > variable, but I'm not sure it's better than the current approach.\n>\n> Hmm, this is interesting. I don't love the word \"never\" since it kind of\n> implies a duration longer than the current table being vacuumed. But we\n> could find a different word or just document it well. For clarity, we\n> might want to call it failsafe_mode or something.\n>\n> I wonder if the primary drawback to converting\n> LVRelState->failsafe_active to a global VacuumFailsafeActive is just the\n> general rule of limiting scope to the minimum needed.\n\nOkay, so I've changed my mind about this. I like having a ternary for\nVacuumCostActive and keeping failsafe_active in LVRelState. What I\ndidn't like was having non-vacuum code have to care about the\ndistinction between failsafe + inactive and just inactive. To handle\nthis, I converted VacuumCostActive to VacuumCostInactive since there are\ntwo inactive cases (inactive and failsafe and plain inactive) and only\none active case. 
Then, I defined VacuumCostInactive as an int but use\nenum values for it in vacuum code to distinguish between failsafe +\ninactive and just inactive (I call it VACUUM_COST_INACTIVE_AND_LOCKED\nand VACUUM_COST_INACTIVE_AND_UNLOCKED). Non-vacuum code only needs to\ncheck if VacuumCostInactive is 0 like if (!VacuumCostInactive). I'm\nhappy with the result, and I think it employs only well-defined C\nbehavior.\n\n- Melanie",
"msg_date": "Sat, 25 Mar 2023 15:03:56 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 3:03 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Mar 23, 2023 at 8:27 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > On Thu, Mar 23, 2023 at 2:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > And, I was wondering if it was worth trying to split up the part that\n> > > > reloads the config file and all of the autovacuum stuff. The reloading\n> > > > of the config file by itself won't actually result in autovacuum workers\n> > > > having updated cost delays because of them overwriting it with\n> > > > wi_cost_delay, but it will allow VACUUM to have those updated values.\n> > >\n> > > It makes sense to me to have changes for overhauling the rebalance\n> > > mechanism in a separate patch.\n> > >\n> > > Looking back at the original concern you mentioned[1]:\n> > >\n> > > speed up long-running vacuum of a large table by\n> > > decreasing autovacuum_vacuum_cost_delay/vacuum_cost_delay, however the\n> > > config file is only reloaded between tables (for autovacuum) or after\n> > > the statement (for explicit vacuum).\n> > >\n> > > does it make sense to have autovac_balance_cost() update workers'\n> > > wi_cost_delay too? Autovacuum launcher already reloads the config file\n> > > and does the rebalance. So I thought autovac_balance_cost() can update\n> > > the cost_delay as well, and this might be a minimal change to deal\n> > > with your concern. This doesn't have the effect for manual VACUUM but\n> > > since vacuum delay is disabled by default it won't be a big problem.\n> > > As for manual VACUUMs, we would need to reload the config file in\n> > > vacuum_delay_point() as the part of your patch does. 
Overhauling the\n> > > rebalance mechanism would be another patch to improve it further.\n> >\n> > So, we can't do this without acquiring an access shared lock on every\n> > call to vacuum_delay_point() because cost delay is a double.\n> >\n> > I will work on a patchset with separate commits for reloading the config\n> > file, though (with autovac not benefitting in the first commit).\n>\n> So, I realized we could actually do as you say and have autovac workers\n> update their wi_cost_delay and keep the balance changes in a separate\n> commit. I've done this in attached v8.\n>\n> Workers take the exclusive lock to update their wi_cost_delay and\n> wi_cost_limit only when there is a config reload. So, there is one\n> commit that implements this behavior and a separate commit to revise the\n> worker rebalancing.\n\nSo, I've attached an alternate version of the patchset which takes the\napproach of having one commit which only enables cost-based delay GUC\nrefresh for VACUUM and another commit which enables it for autovacuum\nand makes the changes to balancing variables.\n\nI still think the commit which has workers updating their own\nwi_cost_delay in vacuum_delay_point() is a bit weird. It relies on no one\nelse emulating our bad behavior and reading from wi_cost_delay without a\nlock and on no one else deciding to ever write to wi_cost_delay (even\nthough it is in shared memory [this is the same as master]). It is only\nsafe because our process is the only one (right now) writing to\nwi_cost_delay, so when we read from it without a lock, we know it isn't\nbeing written to. And everyone else takes a lock when reading from\nwi_cost_delay right now. 
So, it seems...not great.\n\nThis approach also introduces a function that is only around for\none commit until the next commit obsoletes it, which seems a bit silly.\n\nBasically, I think it is probably better to just have one commit\nenabling guc refresh for VACUUM and then another which correctly\nimplements what is needed for autovacuum to do the same.\nAttached v9 does this.\n\nI've provided both complete versions of both approaches (v9 and v8).\n\n- Melanie",
"msg_date": "Mon, 27 Mar 2023 14:12:03 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "At Mon, 27 Mar 2023 14:12:03 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \n> So, I've attached an alternate version of the patchset which takes the\n> approach of having one commit which only enables cost-based delay GUC\n> refresh for VACUUM and another commit which enables it for autovacuum\n> and makes the changes to balancing variables.\n> \n> I still think the commit which has workers updating their own\n> wi_cost_delay in vacuum_delay_point() is a bit weird. It relies on no one\n> else emulating our bad behavior and reading from wi_cost_delay without a\n> lock and on no one else deciding to ever write to wi_cost_delay (even\n> though it is in shared memory [this is the same as master]). It is only\n> safe because our process is the only one (right now) writing to\n> wi_cost_delay, so when we read from it without a lock, we know it isn't\n> being written to. And everyone else takes a lock when reading from\n> wi_cost_delay right now. So, it seems...not great.\n> \n> This approach also introduces a function that is only around for\n> one commit until the next commit obsoletes it, which seems a bit silly.\n\n(I'm not sure what this refers to, though..) I don't think it's silly\nif a later patch removes something that the preceding patches\nintrodcued, as long as that contributes to readability. Untimately,\nthey will be merged together on committing.\n\n> Basically, I think it is probably better to just have one commit\n> enabling guc refresh for VACUUM and then another which correctly\n> implements what is needed for autovacuum to do the same.\n> Attached v9 does this.\n> \n> I've provided both complete versions of both approaches (v9 and v8).\n\nI took a look at v9 and have a few comments.\n\n0001:\n\nI don't believe it is necessary, as mentioned in the commit\nmessage. It apperas that we are resetting it at the appropriate times.\n\n0002:\n\nI felt a bit uneasy on this. 
It seems somewhat complex (and makes the\nsucceeding patches complex), has confusing names, and doesn't seem\nlike self-contained. I think it'd be simpler to add a global boolean\n(maybe VacuumCostActiveForceDisable or such) that forces\nVacuumCostActive to be false and set VacuumCostActive using a setter\nfunction that follows the boolean.\n\n\n0003:\n\n+\t * Reload the configuration file if requested. This allows changes to\n+\t * vacuum_cost_limit and vacuum_cost_delay to take effect while a table is\n+\t * being vacuumed or analyzed. Analyze should not reload configuration\n+\t * file if it is in an outer transaction, as GUC values shouldn't be\n+\t * allowed to refer to some uncommitted state (e.g. database objects\n+\t * created in this transaction).\n\nI'm not sure GUC reload is or should be related to transactions. For\ninstance, work_mem can be changed by a reload during a transaction\nunless it has been set in the current transaction. I don't think we\nneed to deliberately suppress changes in variables caused by reloads\nduring transactions only for analyze. If analyze doesn't like changes\nto certain GUC variables, their values should be snapshotted before\nstarting the process.\n\n\n0004:\n-\tdouble\t\tat_vacuum_cost_delay;\n-\tint\t\t\tat_vacuum_cost_limit;\n+\tdouble\t\tat_table_option_vac_cost_delay;\n+\tint\t\t\tat_table_option_vac_cost_limit;\n\nWe call that options \"relopt(ion)\". I don't think there's any reason\nto use different names.\n\n\n \tdlist_head\tav_runningWorkers;\n \tWorkerInfo\tav_startingWorker;\n \tAutoVacuumWorkItem av_workItems[NUM_WORKITEMS];\n+\tpg_atomic_uint32 av_nworkers_for_balance;\n\nThe name of the new member doesn't seem to follow the surrounding\nconvention. (However, I don't think the member is needed. See below.)\n\n-\t\t/*\n-\t\t * Remember the prevailing values of the vacuum cost GUCs. 
We have to\n-\t\t * restore these at the bottom of the loop, else we'll compute wrong\n-\t\t * values in the next iteration of autovac_balance_cost().\n-\t\t */\n-\t\tstdVacuumCostDelay = VacuumCostDelay;\n-\t\tstdVacuumCostLimit = VacuumCostLimit;\n+\t\tav_table_option_cost_limit = tab->at_table_option_vac_cost_limit;\n+\t\tav_table_option_cost_delay = tab->at_table_option_vac_cost_delay;\n\nI think this requires a comment.\n\n\n+\t\t/* There is at least 1 autovac worker (this worker). */\n+\t\tint\t\t\tnworkers_for_balance = Max(pg_atomic_read_u32(\n+\t\t\t\t\t\t\t\t&AutoVacuumShmem->av_nworkers_for_balance), 1);\n\nI think it *must* be greater than 0. However, to begin with, I don't\nthink we need that variable to be shared. I don't believe it matters\nif we count involved workers every time we calculate the delay.\n\n+/*\n+ * autovac_balance_cost\n+ *\t\tRecalculate the number of workers to consider, given table options and\n+ *\t\tthe current number of active workers.\n+ *\n+ * Caller must hold the AutovacuumLock in at least shared mode.\n\nThe function name doesn't seem to align with what it does. However, I\nmentioned above that it might be unnecessary.\n\n\n\n+AutoVacuumUpdateLimit(void)\n \nIf I'm not missing anything, this function does something quite\ndifferent from the original autovac_balance_cost(). The original\nfunction distributes the total cost based on the GUC variables among\nworkers proportionally according to each worker's cost\nparameters. However, this function distributes the total cost\nequally.\n\n\n+\t\tint\t\t\tvac_cost_limit = autovacuum_vac_cost_limit > 0 ?\n+\t\tautovacuum_vac_cost_limit : VacuumCostLimit;\n...\n+\t\tint\t\t\tbalanced_cost_limit = vac_cost_limit / nworkers_for_balance;\n...\n+\t\tVacuumCostLimit = Max(Min(balanced_cost_limit, vac_cost_limit), 1);\n \t}\n\nThis seems to repeatedly divide VacuumCostLimit by\nnworkers_for_balance. I'm not sure, but this function might only be\ncalled after a reload. 
If that's the case, I don't think it's safe\ncoding, even if it works.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Mar 2023 17:21:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 4:21 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 27 Mar 2023 14:12:03 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > So, I've attached an alternate version of the patchset which takes the\n> > approach of having one commit which only enables cost-based delay GUC\n> > refresh for VACUUM and another commit which enables it for autovacuum\n> > and makes the changes to balancing variables.\n> >\n> > I still think the commit which has workers updating their own\n> > wi_cost_delay in vacuum_delay_point() is a bit weird. It relies on no one\n> > else emulating our bad behavior and reading from wi_cost_delay without a\n> > lock and on no one else deciding to ever write to wi_cost_delay (even\n> > though it is in shared memory [this is the same as master]). It is only\n> > safe because our process is the only one (right now) writing to\n> > wi_cost_delay, so when we read from it without a lock, we know it isn't\n> > being written to. And everyone else takes a lock when reading from\n> > wi_cost_delay right now. So, it seems...not great.\n> >\n> > This approach also introduces a function that is only around for\n> > one commit until the next commit obsoletes it, which seems a bit silly.\n>\n> (I'm not sure what this refers to, though..) I don't think it's silly\n> if a later patch removes something that the preceding patches\n> introdcued, as long as that contributes to readability. Untimately,\n> they will be merged together on committing.\n\nI was under the impression that reviewers thought config reload and\nworker balance changes should be committed in separate commits.\n\nEither way, the ephemeral function is not my primary concern. 
I felt\nmore uncomfortable with increasing how often we update a double in\nshared memory which is read without acquiring a lock.\n\n> > Basically, I think it is probably better to just have one commit\n> > enabling guc refresh for VACUUM and then another which correctly\n> > implements what is needed for autovacuum to do the same.\n> > Attached v9 does this.\n> >\n> > I've provided both complete versions of both approaches (v9 and v8).\n>\n> I took a look at v9 and have a few comments.\n>\n> 0001:\n>\n> I don't believe it is necessary, as mentioned in the commit\n> message. It apperas that we are resetting it at the appropriate times.\n\nVacuumCostBalance must be zeroed out when we disable vacuum cost.\nPreviously, we did not reenable VacuumCostActive once it was disabled,\nbut now that we do, I think it is good practice to always zero out\nVacuumCostBalance when we disable vacuum cost. I will squash this commit\ninto the one introducing VacuumCostInactive, though.\n\n>\n> 0002:\n>\n> I felt a bit uneasy on this. It seems somewhat complex (and makes the\n> succeeding patches complex),\n\nEven if we introduced a second global variable to indicate that failsafe\nmode has been engaged, we would still require the additional checks\nof VacuumCostInactive.\n\n> has confusing names,\n\nI would be happy to rename the values of the enum to make them less\nconfusing. 
Are you thinking \"force\" instead of \"locked\"?\nmaybe:\nVACUUM_COST_FORCE_INACTIVE and\nVACUUM_COST_INACTIVE\n?\n\n> and doesn't seem like self-contained.\n\nBy changing the variable from VacuumCostActive to VacuumCostInactive, I\nhave kept all non-vacuum code from having to distinguish between it\nbeing inactive due to failsafe mode or due to user settings.\n\n> I think it'd be simpler to add a global boolean (maybe\n> VacuumCostActiveForceDisable or such) that forces VacuumCostActive to\n> be false and set VacuumCostActive using a setter function that follows\n> the boolean.\n\nI think maintaining an additional global variable is more brittle than\nincluding the information in a single variable.\n\n> 0003:\n>\n> + * Reload the configuration file if requested. This allows changes to\n> + * vacuum_cost_limit and vacuum_cost_delay to take effect while a table is\n> + * being vacuumed or analyzed. Analyze should not reload configuration\n> + * file if it is in an outer transaction, as GUC values shouldn't be\n> + * allowed to refer to some uncommitted state (e.g. database objects\n> + * created in this transaction).\n>\n> I'm not sure GUC reload is or should be related to transactions. For\n> instance, work_mem can be changed by a reload during a transaction\n> unless it has been set in the current transaction. I don't think we\n> need to deliberately suppress changes in variables caused by realods\n> during transactions only for analzye. If analyze doesn't like changes\n> to certain GUC variables, their values should be snapshotted before\n> starting the process.\n\nCurrently, we only reload the config file in top-level statements. We\ndon't reload the configuration file from within a nested transaction\ncommand. BEGIN starts a transaction but not a transaction command. So\nBEGIN; ANALYZE; probably wouldn't violate this rule. 
But it is simpler\nto just forbid reloading when it is not a top-level transaction command.\nI have updated the comment to reflect this.\n\n> 0004:\n> - double at_vacuum_cost_delay;\n> - int at_vacuum_cost_limit;\n> + double at_table_option_vac_cost_delay;\n> + int at_table_option_vac_cost_limit;\n>\n> We call that options \"relopt(ion)\". I don't think there's any reason\n> to use different names.\n\nI've updated the names.\n\n> dlist_head av_runningWorkers;\n> WorkerInfo av_startingWorker;\n> AutoVacuumWorkItem av_workItems[NUM_WORKITEMS];\n> + pg_atomic_uint32 av_nworkers_for_balance;\n>\n> The name of the new member doesn't seem to follow the surrounding\n> convention. (However, I don't think the member is needed. See below.)\n\nI've updated the name to fit the convention better.\n\n> - /*\n> - * Remember the prevailing values of the vacuum cost GUCs. We have to\n> - * restore these at the bottom of the loop, else we'll compute wrong\n> - * values in the next iteration of autovac_balance_cost().\n> - */\n> - stdVacuumCostDelay = VacuumCostDelay;\n> - stdVacuumCostLimit = VacuumCostLimit;\n> + av_table_option_cost_limit = tab->at_table_option_vac_cost_limit;\n> + av_table_option_cost_delay = tab->at_table_option_vac_cost_delay;\n>\n> I think this requires a comment.\n\nI've added one.\n\n>\n> + /* There is at least 1 autovac worker (this worker). */\n> + int nworkers_for_balance = Max(pg_atomic_read_u32(\n> + &AutoVacuumShmem->av_nworkers_for_balance), 1);\n>\n> I think it *must* be greater than 0. However, to begin with, I don't\n> think we need that variable to be shared. I don't believe it matters\n> if we count involved workers every time we calculate the delay.\n\nWe are not calculating the delay but the cost limit. 
The cost limit must\nbe balanced across all of the workers currently actively vacuuming\ntables without cost-related table options.\n\nThere shouldn't be a way for this to be zero, since this worker calls\nautovac_balance_cost() before it starts vacuuming the table. I wanted to\nrule out any possibility of a divide by 0 issue. I have changed it to an\nassert instead.\n\n> +/*\n> + * autovac_balance_cost\n> + * Recalculate the number of workers to consider, given table options and\n> + * the current number of active workers.\n> + *\n> + * Caller must hold the AutovacuumLock in at least shared mode.\n>\n> The function name doesn't seem align with what it does. However, I\n> mentioned above that it might be unnecessary.\n\nThis is the same name as the function had previously. However, I think\nit does make sense to rename it. The cost limit must be balanced across\nthe workers. This function calculated how many workers the cost limit\nshould be balanced across. I renamed it to\nautovac_recalculate_workers_for_balance()\n\n> +AutoVacuumUpdateLimit(void)\n>\n> If I'm not missing anything, this function does something quite\n> different from the original autovac_balance_cost(). The original\n> function distributes the total cost based on the GUC variables among\n> workers proportionally according to each worker's cost\n> parameters. 
However, this function distributes the total cost\n> equally.\n\nYes, as I mentioned in the commit message, because all the workers now\nhave no reason to have different cost parameters (due to reloading the\nconfig file on almost every page), there is no reason to use ratios.\nWorkers vacuuming a table with no cost-related table options simply need\nto divide the limit equally amongst themselves because they all will\nhave the same limit and delay values.\n\n>\n> + int vac_cost_limit = autovacuum_vac_cost_limit > 0 ?\n> + autovacuum_vac_cost_limit : VacuumCostLimit;\n> ...\n> + int balanced_cost_limit = vac_cost_limit / nworkers_for_balance;\n> ...\n> + VacuumCostLimit = Max(Min(balanced_cost_limit, vac_cost_limit), 1);\n> }\n>\n> This seems to repeatedly divide VacuumCostLimit by\n> nworkers_for_balance. I'm not sure, but this function might only be\n> called after a reload. If that's the case, I don't think it's safe\n> coding, even if it works.\n\nGood point about repeatedly dividing VacuumCostLimit by\nnworkers_for_balance. 
I've added a variable to keep track of the base\ncost limit and separated the functionality of updating the limit into\ntwo parts -- one AutoVacuumUpdateLimit() which is only meant to be\ncalled after reload and references VacuumCostLimit to set the\nav_base_cost_limit and another, AutoVacuumBalanceLimit(), which only\noverrides VacuumCostLimit but uses av_base_cost_limit.\n\nI've noted in the comments that AutoVacuumBalanceLimit() should be\ncalled to adjust to a potential change in nworkers_for_balance\n(currently every time after we sleep in vacuum_delay_point()) and\nAutoVacuumUpdateLimit() should only be called once after a config\nreload, as it references VacuumCostLimit.\n\nI will note that this problem also exists in master, as\nautovac_balance_cost references VacuumCostLimit in order to set worker\ncost limits and then AutoVacuumUpdateDelay() overrides VacuumCostLimit\nwith the value calculated in autovac_balance_cost() from\nVacuumCostLimit.\n\nv10 attached with mentioned updates.\n\n- Melanie",
"msg_date": "Tue, 28 Mar 2023 20:35:28 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "At Tue, 28 Mar 2023 20:35:28 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in \r\n> On Tue, Mar 28, 2023 at 4:21 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Mon, 27 Mar 2023 14:12:03 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\r\n> > > So, I've attached an alternate version of the patchset which takes the\r\n> > > approach of having one commit which only enables cost-based delay GUC\r\n> > > refresh for VACUUM and another commit which enables it for autovacuum\r\n> > > and makes the changes to balancing variables.\r\n> > >\r\n> > > I still think the commit which has workers updating their own\r\n> > > wi_cost_delay in vacuum_delay_point() is a bit weird. It relies on no one\r\n> > > else emulating our bad behavior and reading from wi_cost_delay without a\r\n> > > lock and on no one else deciding to ever write to wi_cost_delay (even\r\n> > > though it is in shared memory [this is the same as master]). It is only\r\n> > > safe because our process is the only one (right now) writing to\r\n> > > wi_cost_delay, so when we read from it without a lock, we know it isn't\r\n> > > being written to. And everyone else takes a lock when reading from\r\n> > > wi_cost_delay right now. So, it seems...not great.\r\n> > >\r\n> > > This approach also introduces a function that is only around for\r\n> > > one commit until the next commit obsoletes it, which seems a bit silly.\r\n> >\r\n> > (I'm not sure what this refers to, though..) I don't think it's silly\r\n> > if a later patch removes something that the preceding patches\r\n> > introdcued, as long as that contributes to readability. Untimately,\r\n> > they will be merged together on committing.\r\n> \r\n> I was under the impression that reviewers thought config reload and\r\n> worker balance changes should be committed in separate commits.\r\n> \r\n> Either way, the ephemeral function is not my primary concern. 
I felt\r\n> more uncomfortable with increasing how often we update a double in\r\n> shared memory which is read without acquiring a lock.\r\n> \r\n> > > Basically, I think it is probably better to just have one commit\r\n> > > enabling guc refresh for VACUUM and then another which correctly\r\n> > > implements what is needed for autovacuum to do the same.\r\n> > > Attached v9 does this.\r\n> > >\r\n> > > I've provided both complete versions of both approaches (v9 and v8).\r\n> >\r\n> > I took a look at v9 and have a few comments.\r\n> >\r\n> > 0001:\r\n> >\r\n> > I don't believe it is necessary, as mentioned in the commit\r\n> > message. It apperas that we are resetting it at the appropriate times.\r\n> \r\n> VacuumCostBalance must be zeroed out when we disable vacuum cost.\r\n> Previously, we did not reenable VacuumCostActive once it was disabled,\r\n> but now that we do, I think it is good practice to always zero out\r\n> VacuumCostBalance when we disable vacuum cost. I will squash this commit\r\n> into the one introducing VacuumCostInactive, though.\r\n\r\n> >\r\n> > 0002:\r\n> >\r\n> > I felt a bit uneasy on this. It seems somewhat complex (and makes the\r\n> > succeeding patches complex),\r\n> \r\n> Even if we introduced a second global variable to indicate that failsafe\r\n> mode has been engaged, we would still require the additional checks\r\n> of VacuumCostInactive.\r\n>\r\n> > has confusing names,\r\n> \r\n> I would be happy to rename the values of the enum to make them less\r\n> confusing. 
Are you thinking \"force\" instead of \"locked\"?\r\n> maybe:\r\n> VACUUM_COST_FORCE_INACTIVE and\r\n> VACUUM_COST_INACTIVE\r\n> ?\r\n> \r\n> > and doesn't seem like self-contained.\r\n> \r\n> By changing the variable from VacuumCostActive to VacuumCostInactive, I\r\n> have kept all non-vacuum code from having to distinguish between it\r\n> being inactive due to failsafe mode or due to user settings.\r\n\r\nMy concern is that VacuumCostActive is logic-inverted and turned into\r\na ternary variable in a subtle way. The expression\r\n\"!VacuumCostInactive\" is quite confusing. (I sometimes feel the same\r\nway about \"!XLogRecPtrIsInvalid(lsn)\", and I believe most people write\r\nit with another macro like \"lsn != InvalidXLogrecPtr\"). Additionally,\r\nthe constraint in this patch will be implemented as open code. So I\r\nwanted to suggest something like the attached. The main idea is to use\r\na wrapper function to enforce the restriction, and by doing so, we\r\neliminated the need to make the variable into a ternary without a good\r\nreason.\r\n\r\n> > I think it'd be simpler to add a global boolean (maybe\r\n> > VacuumCostActiveForceDisable or such) that forces VacuumCostActive to\r\n> > be false and set VacuumCostActive using a setter function that follows\r\n> > the boolean.\r\n> \r\n> I think maintaining an additional global variable is more brittle than\r\n> including the information in a single variable.\r\n> \r\n> > 0003:\r\n> >\r\n> > + * Reload the configuration file if requested. This allows changes to\r\n> > + * vacuum_cost_limit and vacuum_cost_delay to take effect while a table is\r\n> > + * being vacuumed or analyzed. Analyze should not reload configuration\r\n> > + * file if it is in an outer transaction, as GUC values shouldn't be\r\n> > + * allowed to refer to some uncommitted state (e.g. database objects\r\n> > + * created in this transaction).\r\n> >\r\n> > I'm not sure GUC reload is or should be related to transactions. 
For\r\n> > instance, work_mem can be changed by a reload during a transaction\r\n> > unless it has been set in the current transaction. I don't think we\r\n> > need to deliberately suppress changes in variables caused by realods\r\n> > during transactions only for analzye. If analyze doesn't like changes\r\n> > to certain GUC variables, their values should be snapshotted before\r\n> > starting the process.\r\n> \r\n> Currently, we only reload the config file in top-level statements. We\r\n> don't reload the configuration file from within a nested transaction\r\n> command. BEGIN starts a transaction but not a transaction command. So\r\n> BEGIN; ANALYZE; probably wouldn't violate this rule. But it is simpler\r\n> to just forbid reloading when it is not a top-level transaction command.\r\n> I have updated the comment to reflect this.\r\n\r\nI feel it's a bit fragile. We may not be able to manage the reload\r\ntimeing perfectly. I think we might accidentally add a reload\r\ntiming. In that case, the assumption could break. In most cases, I\r\nthink we use snapshotting in various ways to avoid unintended variable\r\nchanges. (And I beilieve the analyze code also does that.)\r\n\r\n> > + /* There is at least 1 autovac worker (this worker). */\r\n> > + int nworkers_for_balance = Max(pg_atomic_read_u32(\r\n> > + &AutoVacuumShmem->av_nworkers_for_balance), 1);\r\n> >\r\n> > I think it *must* be greater than 0. However, to begin with, I don't\r\n> > think we need that variable to be shared. I don't believe it matters\r\n> > if we count involved workers every time we calculate the delay.\r\n> \r\n> We are not calculating the delay but the cost limit. 
The cost limit must\r\n\r\nAh, right, it's limit, but my main point still stands.\r\n\r\n> be balanced across all of the workers currently actively vacuuming\r\n> tables without cost-related table options.\r\n\r\nThe purpose of the old autovac_balance_cost() is to distribute the\r\ncost among all involved tables, proportionally based on each worker's\r\ncost specification. Adjusting the limit just for tables affected by\r\nreloads disrupts the cost balance.\r\n\r\n> > If I'm not missing anything, this function does something quite\r\n> > different from the original autovac_balance_cost(). The original\r\n> > function distributes the total cost based on the GUC variables among\r\n> > workers proportionally according to each worker's cost\r\n> > parameters. Howevwer, this function distributes the total cost\r\n> > equally.\r\n> \r\n> Yes, as I mentioned in the commit message, because all the workers now\r\n> have no reason to have different cost parameters (due to reloading the\r\n> config file on almost every page), there is no reason to use ratios.\r\n> Workers vacuuming a table with no cost-related table options simply need\r\n> to divide the limit equally amongst themselves because they all will\r\n> have the same limit and delay values.\r\n\r\nI'm not sure about the assumption in the commit message. For instance,\r\nif the total cost limit drops significantly, it's possible that the\r\nworkers left out of this calculation might end up using all the\r\nreduced cost. Wouldn't this imply that all workers should recompute\r\ntheir individual limits?\r\n\r\n> >\r\n> > + int vac_cost_limit = autovacuum_vac_cost_limit > 0 ?\r\n> > + autovacuum_vac_cost_limit : VacuumCostLimit;\r\n> > ...\r\n> > + int balanced_cost_limit = vac_cost_limit / nworkers_for_balance;\r\n> > ...\r\n> > + VacuumCostLimit = Max(Min(balanced_cost_limit, vac_cost_limit), 1);\r\n> > }\r\n> >\r\n> > This seems to repeatedly divide VacuumCostLimit by\r\n> > nworkers_for_balance. 
I'm not sure, but this function might only be\r\n> > called after a reload. If that's the case, I don't think it's safe\r\n> > coding, even if it works.\r\n> \r\n> Good point about repeatedly dividing VacuumCostLimit by\r\n> nworkers_for_balance. I've added a variable to keep track of the base\r\n> cost limit and separated the functionality of updating the limit into\r\n> two parts -- one AutoVacuumUpdateLimit() which is only meant to be\r\n> called after reload and references VacuumCostLimit to set the\r\n> av_base_cost_limit and another, AutoVacuumBalanceLimit(), which only\r\n> overrides VacuumCostLimit but uses av_base_cost_limit.\r\n\r\nSorry, but will check this later.\r\n\r\n> I've noted in the comments that AutoVacuumBalanceLimit() should be\r\n> called to adjust to a potential change in nworkers_for_balance\r\n> (currently every time after we sleep in vacuum_delay_point()) and\r\n> AutoVacuumUpdateLimit() should only be called once after a config\r\n> reload, as it references VacuumCostLimit.\r\n> \r\n> I will note that this problem also exists in master, as\r\n> autovac_balance_cost references VacuumCostLimit in order to set worker\r\n> cost limits and then AutoVacuumUpdateDelay() overrides VacuumCostLimit\r\n> with the value calculated in autovac_balance_cost() from\r\n> VacuumCostLimit.\r\n> \r\n> v10 attached with mentioned updates.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center",
"msg_date": "Wed, 29 Mar 2023 12:09:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "At Wed, 29 Mar 2023 12:09:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> timeing perfectly. I think we might accidentally add a reload\n> timing. In that case, the assumption could break. In most cases, I\n> think we use snapshotting in various ways to avoid unintended variable\n> changes. (And I beilieve the analyze code also does that.)\n\nOkay, I was missing the following code.\n\nautovacuum.c:2893\n\t\t/*\n\t\t * If any of the cost delay parameters has been set individually for\n\t\t * this table, disable the balancing algorithm.\n\t\t */\n\t\ttab->at_dobalance =\n\t\t\t!(avopts && (avopts->vacuum_cost_limit > 0 ||\n\t\t\t\t\t\t avopts->vacuum_cost_delay > 0));\n\nSo, sorry for the noise. I'll review it while this into cnosideration.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Mar 2023 13:21:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "At Wed, 29 Mar 2023 13:21:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> autovacuum.c:2893\n> \t\t/*\n> \t\t * If any of the cost delay parameters has been set individually for\n> \t\t * this table, disable the balancing algorithm.\n> \t\t */\n> \t\ttab->at_dobalance =\n> \t\t\t!(avopts && (avopts->vacuum_cost_limit > 0 ||\n> \t\t\t\t\t\t avopts->vacuum_cost_delay > 0));\n> \n> So, sorry for the noise. I'll review it while this into cnosideration.\n\nThen I found that the code is quite confusing as it is.\n\nFor the tables that don't have cost_delay and cost_limit specified\nindificually, at_vacuum_cost_limit and _delay store the system global\nvalues detemined by GUCs. wi_cost_delay, _limit and _limit_base stores\nthe same values with them. As the result I concluded tha\nautovac_balance_cost() does exactly what Melanie's patch does, except\nthat nworkers_for_balance is not stored in shared memory.\n\nI discovered that commit 1021bd6a89 brought in do_balance.\n\n> Since the mechanism is already complicated, just disable it for those\n> cases rather than trying to make it cope. There are undesirable\n\nAfter reading this, I get why the code is so complex. It is a remnant\nof when balancing was done with tables that had individually specified\ncost parameters. And I found the following description in the doc.\n\nhttps://www.postgresql.org/docs/devel/routine-vacuuming.html\n> When multiple workers are running, the autovacuum cost delay\n> parameters (see Section 20.4.4) are “balanced” among all the running\n> workers, so that the total I/O impact on the system is the same\n> regardless of the number of workers actually running. However, any\n> workers processing tables whose per-table\n> autovacuum_vacuum_cost_delay or autovacuum_vacuum_cost_limit storage\n> parameters have been set are not considered in the balancing\n> algorithm.\n\nThe initial balancing mechanism was brought in by e2a186b03c back in\n2007. 
The balancing code has had that unnecessary complexity ever\nsince.\n\nSince I can't think of a better idea than Melanie's proposal for\nhandling this code, I'll keep reviewing it with that approach in mind.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Mar 2023 15:00:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "At Wed, 29 Mar 2023 13:21:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> So, sorry for the noise. I'll review it while this into cnosideration.\n\n0003:\n\nIt's not this patche's fault, but I don't like the fact that the\nvariables used for GUC, VacuumCostDelay and VacuumCostLimit, are\nupdated outside the GUC mechanism. Also I don't like the incorrect\nsorting of variables, where some working variables are referred to as\nGUC parameters or vise versa.\n\nAlthough it's somewhat unrelated to the goal of this patch, I think we\nshould clean up the code tidy before proceeding. Shouldn't we separate\nthe actual parameters from the GUC base variables, and sort out the\nall related variaghble? (something like the attached, on top of your\npatch.)\n\n\nI have some comments on 0003 as-is.\n\n+\t\ttab->at_relopt_vac_cost_limit = avopts ?\n+\t\t\tavopts->vacuum_cost_limit : 0;\n+\t\ttab->at_relopt_vac_cost_delay = avopts ?\n+\t\t\tavopts->vacuum_cost_delay : -1;\n\nThe value is not used when do_balance is false, so I don't see a\nspecific reason for these variables to be different when avopts is\nnull.\n\n+autovac_recalculate_workers_for_balance(void)\n+{\n+\tdlist_iter\titer;\n+\tint\t\t\torig_nworkers_for_balance;\n+\tint\t\t\tnworkers_for_balance = 0;\n+\n+\tif (autovacuum_vac_cost_delay == 0 ||\n+\t\t(autovacuum_vac_cost_delay == -1 && VacuumCostDelay == 0))\n \t\treturn;\n+\tif (autovacuum_vac_cost_limit <= 0 && VacuumCostLimit <= 0)\n+\t\treturn;\n+\n\nI'm not quite sure how these conditions relate to the need to count\nworkers that shares the global I/O cost. 
(Though I still believe this\nfunction might not be necessary.)\n\n+\tif (av_relopt_cost_limit > 0)\n+\t\tVacuumCostLimit = av_relopt_cost_limit;\n+\telse\n+\t{\n+\t\tav_base_cost_limit = autovacuum_vac_cost_limit > 0 ?\n+\t\t\tautovacuum_vac_cost_limit : VacuumCostLimit;\n+\n+\t\tAutoVacuumBalanceLimit();\n\nI think each worker should use MyWorkerInfo->wi_dobalance to identify\nwhether the worker needs to use balanced cost values.\n\n\n+void\n+AutoVacuumBalanceLimit(void)\n\nI'm not sure this function needs to be a separate function.\n\n(Sorry, timed out..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 29 Mar 2023 17:34:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Thanks for the detailed review!\n\nOn Tue, Mar 28, 2023 at 11:09 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 28 Mar 2023 20:35:28 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > On Tue, Mar 28, 2023 at 4:21 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 27 Mar 2023 14:12:03 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > >\n> > > 0002:\n> > >\n> > > I felt a bit uneasy on this. It seems somewhat complex (and makes the\n> > > succeeding patches complex),\n> >\n> > Even if we introduced a second global variable to indicate that failsafe\n> > mode has been engaged, we would still require the additional checks\n> > of VacuumCostInactive.\n> >\n> > > has confusing names,\n> >\n> > I would be happy to rename the values of the enum to make them less\n> > confusing. Are you thinking \"force\" instead of \"locked\"?\n> > maybe:\n> > VACUUM_COST_FORCE_INACTIVE and\n> > VACUUM_COST_INACTIVE\n> > ?\n> >\n> > > and doesn't seem like self-contained.\n> >\n> > By changing the variable from VacuumCostActive to VacuumCostInactive, I\n> > have kept all non-vacuum code from having to distinguish between it\n> > being inactive due to failsafe mode or due to user settings.\n>\n> My concern is that VacuumCostActive is logic-inverted and turned into\n> a ternary variable in a subtle way. The expression\n> \"!VacuumCostInactive\" is quite confusing. (I sometimes feel the same\n> way about \"!XLogRecPtrIsInvalid(lsn)\", and I believe most people write\n> it with another macro like \"lsn != InvalidXLogrecPtr\"). Additionally,\n> the constraint in this patch will be implemented as open code. So I\n> wanted to suggest something like the attached. 
The main idea is to use\n> a wrapper function to enforce the restriction, and by doing so, we\n> eliminated the need to make the variable into a ternary without a good\n> reason.\n\nSo, the rationale for making it a ternary is that the variable is the\ncombination of two pieces of information which has only has 3 valid\nstates:\nfailsafe inactive + cost active = cost active\nfailsafe inactive + cost inactive = cost inactive\nfailsafe active + cost inactive = cost inactive and locked\nthe fourth is invalid\nfailsafe active + cost active = invalid\nThat is harder to enforce with two variables.\nAlso, the two pieces of information are not meaningful individually.\nSo, I thought it made sense to make a single variable.\n\nYour suggested patch introduces an additional variable which shadows\nLVRelState->failsafe_active but doesn't actually get set/reset at all of\nthe correct places. If we did introduce a second global variable, I\ndon't think we should also keep LVRelState->failsafe_active, as keeping\nthem in sync will be difficult.\n\nAs for the double negative (!VacuumCostInactive), I agree that it is not\nideal, however, if we use a ternary and keep VacuumCostActive, there is\nno way for non-vacuum code to treat it as a boolean.\nWith the ternary VacuumCostInactive, only vacuum code has to know about\nthe distinction between inactive+failsafe active and inactive+failsafe\ninactive.\n\nAs for the setter function, I think that having a function to set\nVacuumCostActive based on failsafe_active is actually doing more harm\nthan good. Only vacuum code has to know about the distinction as it is,\nso we aren't really saving any trouble (there would really only be two\ncallers of the suggested function). 
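To make the three-state constraint concrete, here is a minimal illustrative sketch (the enum values and helper names here are invented for this illustration, not taken from the actual patch):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative sketch only: the pair (failsafe active, cost active) has
 * exactly three valid combinations, which is why a single ternary
 * variable can enforce a constraint that two independent booleans cannot.
 */
typedef enum VacuumCostStateSketch
{
	SKETCH_COST_ACTIVE,			/* failsafe inactive + cost active */
	SKETCH_COST_INACTIVE,		/* failsafe inactive + cost inactive */
	SKETCH_COST_INACTIVE_LOCKED	/* failsafe active + cost inactive */
} VacuumCostStateSketch;

/*
 * Fold the two flags into one state; the fourth combination
 * (failsafe active + cost active) is invalid by construction.
 */
static VacuumCostStateSketch
fold_cost_state(bool failsafe_active, bool cost_active)
{
	assert(!(failsafe_active && cost_active));
	if (failsafe_active)
		return SKETCH_COST_INACTIVE_LOCKED;
	return cost_active ? SKETCH_COST_ACTIVE : SKETCH_COST_INACTIVE;
}
```

The assert here is exactly the invariant that a two-variable scheme would have to re-check at every site that flips either flag.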
And, since the function hides\nwhether or not VacuumCostActive was actually set to the passed-in value,\nwe can't easily do other necessary maintenance -- like zero out\nVacuumCostBalance if we disabled vacuum cost.\n\n> > > 0003:\n> > >\n> > > + * Reload the configuration file if requested. This allows changes to\n> > > + * vacuum_cost_limit and vacuum_cost_delay to take effect while a table is\n> > > + * being vacuumed or analyzed. Analyze should not reload configuration\n> > > + * file if it is in an outer transaction, as GUC values shouldn't be\n> > > + * allowed to refer to some uncommitted state (e.g. database objects\n> > > + * created in this transaction).\n> > >\n> > > I'm not sure GUC reload is or should be related to transactions. For\n> > > instance, work_mem can be changed by a reload during a transaction\n> > > unless it has been set in the current transaction. I don't think we\n> > > need to deliberately suppress changes in variables caused by realods\n> > > during transactions only for analzye. If analyze doesn't like changes\n> > > to certain GUC variables, their values should be snapshotted before\n> > > starting the process.\n> >\n> > Currently, we only reload the config file in top-level statements. We\n> > don't reload the configuration file from within a nested transaction\n> > command. BEGIN starts a transaction but not a transaction command. So\n> > BEGIN; ANALYZE; probably wouldn't violate this rule. But it is simpler\n> > to just forbid reloading when it is not a top-level transaction command.\n> > I have updated the comment to reflect this.\n>\n> I feel it's a bit fragile. We may not be able to manage the reload\n> timeing perfectly. I think we might accidentally add a reload\n> timing. In that case, the assumption could break. In most cases, I\n> think we use snapshotting in various ways to avoid unintended variable\n> changes. 
(And I beilieve the analyze code also does that.)\n\nI'm not sure I fully understand the problem you are thinking of. What do\nyou mean about managing the reload timing? Are you suggesting there is a\nproblem with excluding analzye in an outer transaction from doing the\nreload or with doing the reload during vacuum and analyze when they are\ntop-level statements?\n\nAnd, by snapshotting do you mean how vacuum_rel() and do_analyze_rel() do\nNewGUCNestLevel() so that they can then do AtEOXact_GUC() and rollback\nguc changes done during that operation?\nHow are you envisioning that being used here?\n\nOn Wed, Mar 29, 2023 at 2:00 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 29 Mar 2023 13:21:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > autovacuum.c:2893\n> > /*\n> > * If any of the cost delay parameters has been set individually for\n> > * this table, disable the balancing algorithm.\n> > */\n> > tab->at_dobalance =\n> > !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> > avopts->vacuum_cost_delay > 0));\n> >\n> > So, sorry for the noise. I'll review it while this into cnosideration.\n>\n> Then I found that the code is quite confusing as it is.\n>\n> For the tables that don't have cost_delay and cost_limit specified\n> indificually, at_vacuum_cost_limit and _delay store the system global\n> values detemined by GUCs. wi_cost_delay, _limit and _limit_base stores\n> the same values with them. As the result I concluded tha\n> autovac_balance_cost() does exactly what Melanie's patch does, except\n> that nworkers_for_balance is not stored in shared memory.\n>\n> I discovered that commit 1021bd6a89 brought in do_balance.\n>\n> > Since the mechanism is already complicated, just disable it for those\n> > cases rather than trying to make it cope. There are undesirable\n>\n> After reading this, I get why the code is so complex. 
It is a remnant\n> of when balancing was done with tables that had individually specified\n> cost parameters. And I found the following description in the doc.\n>\n> https://www.postgresql.org/docs/devel/routine-vacuuming.html\n> > When multiple workers are running, the autovacuum cost delay\n> > parameters (see Section 20.4.4) are “balanced” among all the running\n> > workers, so that the total I/O impact on the system is the same\n> > regardless of the number of workers actually running. However, any\n> > workers processing tables whose per-table\n> > autovacuum_vacuum_cost_delay or autovacuum_vacuum_cost_limit storage\n> > parameters have been set are not considered in the balancing\n> > algorithm.\n>\n> The initial balancing mechanism was brought in by e2a186b03c back in\n> 2007. The balancing code has had that unnecessarily complexity ever\n> since.\n>\n> Since I can't think of a better idea than Melanie's proposal for\n> handling this code, I'll keep reviewing it with that approach in mind.\n\nThanks for doing this archaeology. I didn't know the history of dobalance\nand hadn't looked into 1021bd6a89.\nI was a bit confused by why dobalance was false even if only table\noption cost delay is set and not table option cost limit.\n\nI think we can retain this behavior for now, but it may be worth\nre-examining in the future.\n\nOn Wed, Mar 29, 2023 at 4:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 29 Mar 2023 13:21:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > So, sorry for the noise. I'll review it while this into cnosideration.\n>\n> 0003:\n>\n> It's not this patche's fault, but I don't like the fact that the\n> variables used for GUC, VacuumCostDelay and VacuumCostLimit, are\n> updated outside the GUC mechanism. 
Also I don't like the incorrect\n> sorting of variables, where some working variables are referred to as\n> GUC parameters or vise versa.\n>\n> Although it's somewhat unrelated to the goal of this patch, I think we\n> should clean up the code tidy before proceeding. Shouldn't we separate\n> the actual parameters from the GUC base variables, and sort out the\n> all related variaghble? (something like the attached, on top of your\n> patch.)\n\nSo, I agree we should separate the parameters used in the code from the\nGUC variables -- since there are multiple users with different needs\n(autovac workers, parallel vac workers, and vacuum). However, I was\nhesitant to tackle that here.\n\nI'm not sure how these changes will impact extensions that rely on\nthese vacuum parameters and their direct relationship to the guc values.\n\nIn your patch, you didn't update the parameter with the guc value of\nvacuum_cost_limit and vacuum_cost_delay, but were we to do so, we would\nneed to make sure it was updated every time after a config reload. This\nisn't hard to do in the current code, but I'm not sure how we can ensure\nthat future callers of ProcessConfigFile() in vacuum code always update\nthese values afterward. Perhaps we could add some after_reload hook?\nWhich does seem like a larger project.\n\n> I have some comments on 0003 as-is.\n>\n> + tab->at_relopt_vac_cost_limit = avopts ?\n> + avopts->vacuum_cost_limit : 0;\n> + tab->at_relopt_vac_cost_delay = avopts ?\n> + avopts->vacuum_cost_delay : -1;\n>\n> The value is not used when do_balance is false, so I don't see a\n> specific reason for these variables to be different when avopts is\n> null.\n\nActually we need to set these to 0 and -1, because we set\nav_relopt_cost_limit and av_relopt_cost_delay with them and those values\nare checked regardless of wi_dobalance.\n\nWe need to do this because we want to use the correct value to override\nVacuumCostLimit and VacuumCostDelay. 
wi_dobalance may be false because\nwe have a table option cost delay but we have no table option cost\nlimit. When we override VacuumCostDelay, we want to use the table option\nvalue but when we override VacuumCostLimit, we want to use the regular\nvalue. We need these initialized to values that will allow us to do\nthat.\n\n> +autovac_recalculate_workers_for_balance(void)\n> +{\n> + dlist_iter iter;\n> + int orig_nworkers_for_balance;\n> + int nworkers_for_balance = 0;\n> +\n> + if (autovacuum_vac_cost_delay == 0 ||\n> + (autovacuum_vac_cost_delay == -1 && VacuumCostDelay == 0))\n> return;\n> + if (autovacuum_vac_cost_limit <= 0 && VacuumCostLimit <= 0)\n> + return;\n> +\n>\n> I'm not quite sure how these conditions relate to the need to count\n> workers that shares the global I/O cost.\n\nAh, this is a good point, we should still keep this number up-to-date\neven if the costs are disabled at the time we are checking it in case\ncost-based delays are re-enabled later before we recalculate this\nnumber. I had this code originally because autovac_balance_cost() would\nexit early if cost-based delays were disabled -- but this only worked\nbecause they couldn't be re-enabled during vacuuming a table and\nautovac_balance_cost() was called always in between vacuuming tables.\n\nI've removed these lines.\n\nAnd perhaps there is an argument for calling\nautovac_recalculate_workers_for_balance() in vacuum_delay_point() after\nreloading the config file...\nI have not done so in attached version.\n\n> (Though I still believe this funtion might not be necessary.)\n\nI don't see how we can do without this function. 
We need an up-to-date\ncount of the number of autovacuum workers vacuuming tables which do not\nhave vacuum cost-related table options.\n\n\n> + if (av_relopt_cost_limit > 0)\n> + VacuumCostLimit = av_relopt_cost_limit;\n> + else\n> + {\n> + av_base_cost_limit = autovacuum_vac_cost_limit > 0 ?\n> + autovacuum_vac_cost_limit : VacuumCostLimit;\n> +\n> + AutoVacuumBalanceLimit();\n>\n> I think each worker should use MyWorkerInfo->wi_dobalance to identyify\n> whether the worker needs to use balanced cost values.\n\nAh, there is a bug here. I have fixed it by making wi_dobalance an\natomic flag so that we can check it before calling\nAutoVacuumBalanceLimit() (without taking a lock).\n\nI don't see other (non-test code) callers using atomic flags, so I can't\ntell if we need to loop to ensure that pg_atomic_test_set_flag() returns\ntrue.\n\n> +void\n> +AutoVacuumBalanceLimit(void)\n>\n> I'm not sure this function needs to be a separate function.\n\nWe need to call it more often than we can call AutoVacuumUpdateLimit(),\nso the logic needs to be separate. Are you suggesting we inline the\nlogic in the two places it is needed?\n\nv11 attached with updates mentioned above.\n\n- Melanie",
"msg_date": "Wed, 29 Mar 2023 16:01:00 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nThank you for updating the patches.\n\nOn Thu, Mar 30, 2023 at 5:01 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Thanks for the detailed review!\n>\n> On Tue, Mar 28, 2023 at 11:09 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 28 Mar 2023 20:35:28 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > > On Tue, Mar 28, 2023 at 4:21 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Mon, 27 Mar 2023 14:12:03 -0400, Melanie Plageman <melanieplageman@gmail.com> wrote in\n> > > >\n> > > > 0002:\n> > > >\n> > > > I felt a bit uneasy on this. It seems somewhat complex (and makes the\n> > > > succeeding patches complex),\n> > >\n> > > Even if we introduced a second global variable to indicate that failsafe\n> > > mode has been engaged, we would still require the additional checks\n> > > of VacuumCostInactive.\n> > >\n> > > > has confusing names,\n> > >\n> > > I would be happy to rename the values of the enum to make them less\n> > > confusing. Are you thinking \"force\" instead of \"locked\"?\n> > > maybe:\n> > > VACUUM_COST_FORCE_INACTIVE and\n> > > VACUUM_COST_INACTIVE\n> > > ?\n> > >\n> > > > and doesn't seem like self-contained.\n> > >\n> > > By changing the variable from VacuumCostActive to VacuumCostInactive, I\n> > > have kept all non-vacuum code from having to distinguish between it\n> > > being inactive due to failsafe mode or due to user settings.\n> >\n> > My concern is that VacuumCostActive is logic-inverted and turned into\n> > a ternary variable in a subtle way. The expression\n> > \"!VacuumCostInactive\" is quite confusing. (I sometimes feel the same\n> > way about \"!XLogRecPtrIsInvalid(lsn)\", and I believe most people write\n> > it with another macro like \"lsn != InvalidXLogrecPtr\"). Additionally,\n> > the constraint in this patch will be implemented as open code. So I\n> > wanted to suggest something like the attached. 
The main idea is to use\n> > a wrapper function to enforce the restriction, and by doing so, we\n> > eliminated the need to make the variable into a ternary without a good\n> > reason.\n>\n> So, the rationale for making it a ternary is that the variable is the\n> combination of two pieces of information which has only has 3 valid\n> states:\n> failsafe inactive + cost active = cost active\n> failsafe inactive + cost inactive = cost inactive\n> failsafe active + cost inactive = cost inactive and locked\n> the fourth is invalid\n> failsafe active + cost active = invalid\n> That is harder to enforce with two variables.\n> Also, the two pieces of information are not meaningful individually.\n> So, I thought it made sense to make a single variable.\n>\n> Your suggested patch introduces an additional variable which shadows\n> LVRelState->failsafe_active but doesn't actually get set/reset at all of\n> the correct places. If we did introduce a second global variable, I\n> don't think we should also keep LVRelState->failsafe_active, as keeping\n> them in sync will be difficult.\n>\n> As for the double negative (!VacuumCostInactive), I agree that it is not\n> ideal, however, if we use a ternary and keep VacuumCostActive, there is\n> no way for non-vacuum code to treat it as a boolean.\n> With the ternary VacuumCostInactive, only vacuum code has to know about\n> the distinction between inactive+failsafe active and inactive+failsafe\n> inactive.\n\nAs another idea, why don't we use macros for that? 
For example,\nsuppose VacuumCostStatus is like:\n\ntypedef enum VacuumCostStatus\n{\n VACUUM_COST_INACTIVE_LOCKED = 0,\n VACUUM_COST_INACTIVE,\n VACUUM_COST_ACTIVE,\n} VacuumCostStatus;\nVacuumCostStatus VacuumCost;\n\nnon-vacuum code can use the following macros:\n\n#define VacuumCostActive() (VacuumCost == VACUUM_COST_ACTIVE)\n#define VacuumCostInactive() (VacuumCost <= VACUUM_COST_INACTIVE) //\nor we can use !VacuumCostActive() instead.\n\nOr is there any reason why we need to keep VacuumCostActive and treat\nit as a boolean?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:57:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 30 Mar 2023, at 04:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> As another idea, why don't we use macros for that? For example,\n> suppose VacuumCostStatus is like:\n> \n> typedef enum VacuumCostStatus\n> {\n> VACUUM_COST_INACTIVE_LOCKED = 0,\n> VACUUM_COST_INACTIVE,\n> VACUUM_COST_ACTIVE,\n> } VacuumCostStatus;\n> VacuumCostStatus VacuumCost;\n> \n> non-vacuum code can use the following macros:\n> \n> #define VacuumCostActive() (VacuumCost == VACUUM_COST_ACTIVE)\n> #define VacuumCostInactive() (VacuumCost <= VACUUM_COST_INACTIVE) //\n> or we can use !VacuumCostActive() instead.\n\nI'm in favor of something along these lines. A variable with a name that\nimplies a boolean value (active/inactive) but actually contains a tri-value is\neasily misunderstood. A VacuumCostState tri-value variable (or a better name)\nwith a set of convenient macros for extracting the boolean active/inactive that\nmost of the code needs to be concerned with would more for more readable code I\nthink.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 21:26:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 3:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 30 Mar 2023, at 04:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > As another idea, why don't we use macros for that? For example,\n> > suppose VacuumCostStatus is like:\n> >\n> > typedef enum VacuumCostStatus\n> > {\n> > VACUUM_COST_INACTIVE_LOCKED = 0,\n> > VACUUM_COST_INACTIVE,\n> > VACUUM_COST_ACTIVE,\n> > } VacuumCostStatus;\n> > VacuumCostStatus VacuumCost;\n> >\n> > non-vacuum code can use the following macros:\n> >\n> > #define VacuumCostActive() (VacuumCost == VACUUM_COST_ACTIVE)\n> > #define VacuumCostInactive() (VacuumCost <= VACUUM_COST_INACTIVE) //\n> > or we can use !VacuumCostActive() instead.\n>\n> I'm in favor of something along these lines. A variable with a name that\n> implies a boolean value (active/inactive) but actually contains a tri-value is\n> easily misunderstood. A VacuumCostState tri-value variable (or a better name)\n> with a set of convenient macros for extracting the boolean active/inactive that\n> most of the code needs to be concerned with would more for more readable code I\n> think.\n\nThe macros are very error-prone. I was just implementing this idea and\nmistakenly tried to set the macro instead of the variable in multiple\nplaces. Avoiding this involves another set of macros, and, in the end, I\nthink the complexity is much worse. Given the reviewers' uniform dislike\nof VacuumCostInactive, I favor going back to two variables\n(VacuumCostActive + VacuumFailsafeActive) and moving\nLVRelState->failsafe_active to the global VacuumFailsafeActive.\n\nI will reimplement this in the next version.\n\nOn the subject of globals, the next version will implement\nHoriguchi-san's proposal to separate GUC variables from the globals used\nin the code (quoted below). 
It should hopefully reduce the complexity of\nthis patchset.\n\n> Although it's somewhat unrelated to the goal of this patch, I think we\n> should tidy up the code before proceeding. Shouldn't we separate\n> the actual parameters from the GUC base variables, and sort out all\n> the related variables? (something like the attached, on top of your\n> patch.)\n\n- Melanie\n\n\n",
"msg_date": "Fri, 31 Mar 2023 10:31:10 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 10:31 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Thu, Mar 30, 2023 at 3:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 30 Mar 2023, at 04:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > As another idea, why don't we use macros for that? For example,\n> > > suppose VacuumCostStatus is like:\n> > >\n> > > typedef enum VacuumCostStatus\n> > > {\n> > > VACUUM_COST_INACTIVE_LOCKED = 0,\n> > > VACUUM_COST_INACTIVE,\n> > > VACUUM_COST_ACTIVE,\n> > > } VacuumCostStatus;\n> > > VacuumCostStatus VacuumCost;\n> > >\n> > > non-vacuum code can use the following macros:\n> > >\n> > > #define VacuumCostActive() (VacuumCost == VACUUM_COST_ACTIVE)\n> > > #define VacuumCostInactive() (VacuumCost <= VACUUM_COST_INACTIVE) //\n> > > or we can use !VacuumCostActive() instead.\n> >\n> > I'm in favor of something along these lines. A variable with a name that\n> > implies a boolean value (active/inactive) but actually contains a tri-value is\n> > easily misunderstood. A VacuumCostState tri-value variable (or a better name)\n> > with a set of convenient macros for extracting the boolean active/inactive that\n> > most of the code needs to be concerned with would more for more readable code I\n> > think.\n>\n> The macros are very error-prone. I was just implementing this idea and\n> mistakenly tried to set the macro instead of the variable in multiple\n> places. Avoiding this involves another set of macros, and, in the end, I\n> think the complexity is much worse. 
Given the reviewers' uniform dislike\n> of VacuumCostInactive, I favor going back to two variables\n> (VacuumCostActive + VacuumFailsafeActive) and moving\n> LVRelState->failsafe_active to the global VacuumFailsafeActive.\n>\n> I will reimplement this in the next version.\n>\n> On the subject of globals, the next version will implement\n> Horiguchi-san's proposal to separate GUC variables from the globals used\n> in the code (quoted below). It should hopefully reduce the complexity of\n> this patchset.\n>\n> > Although it's somewhat unrelated to the goal of this patch, I think we\n> > should tidy up the code before proceeding. Shouldn't we separate\n> > the actual parameters from the GUC base variables, and sort out all\n> > the related variables? (something like the attached, on top of your\n> > patch.)\n\nAttached is v12. It has a number of updates, including a commit to\nseparate VacuumCostLimit and VacuumCostDelay from the gucs\nvacuum_cost_limit and vacuum_cost_delay, and a return to\nVacuumCostActive.\n\n- Melanie",
"msg_date": "Fri, 31 Mar 2023 15:09:21 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sat, Apr 1, 2023 at 4:09 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Mar 31, 2023 at 10:31 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Mar 30, 2023 at 3:26 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 30 Mar 2023, at 04:57, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > As another idea, why don't we use macros for that? For example,\n> > > > suppose VacuumCostStatus is like:\n> > > >\n> > > > typedef enum VacuumCostStatus\n> > > > {\n> > > > VACUUM_COST_INACTIVE_LOCKED = 0,\n> > > > VACUUM_COST_INACTIVE,\n> > > > VACUUM_COST_ACTIVE,\n> > > > } VacuumCostStatus;\n> > > > VacuumCostStatus VacuumCost;\n> > > >\n> > > > non-vacuum code can use the following macros:\n> > > >\n> > > > #define VacuumCostActive() (VacuumCost == VACUUM_COST_ACTIVE)\n> > > > #define VacuumCostInactive() (VacuumCost <= VACUUM_COST_INACTIVE) //\n> > > > or we can use !VacuumCostActive() instead.\n> > >\n> > > I'm in favor of something along these lines. A variable with a name that\n> > > implies a boolean value (active/inactive) but actually contains a tri-value is\n> > > easily misunderstood. A VacuumCostState tri-value variable (or a better name)\n> > > with a set of convenient macros for extracting the boolean active/inactive that\n> > > most of the code needs to be concerned with would more for more readable code I\n> > > think.\n> >\n> > The macros are very error-prone. I was just implementing this idea and\n> > mistakenly tried to set the macro instead of the variable in multiple\n> > places. Avoiding this involves another set of macros, and, in the end, I\n> > think the complexity is much worse. 
Given the reviewers' uniform dislike\n> > of VacuumCostInactive, I favor going back to two variables\n> > (VacuumCostActive + VacuumFailsafeActive) and moving\n> > LVRelState->failsafe_active to the global VacuumFailsafeActive.\n> >\n> > I will reimplement this in the next version.\n\nThank you for updating the patches. Here are comments for 0001, 0002,\nand 0003 patches:\n\n 0001:\n\n@@ -391,7 +389,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);\n Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&\n params->truncate != VACOPTVALUE_AUTO);\n- vacrel->failsafe_active = false;\n+ VacuumFailsafeActive = false;\n\nIf we go with the idea of using VacuumCostActive +\nVacuumFailsafeActive, we need to make sure that both are cleared at\nthe end of the vacuum per table. Since the patch clears it only here,\nit remains true even after vacuum() if we trigger the failsafe mode\nfor the last table in the table list.\n\nIn addition to that, to ensure that also in an error case, I think we\nneed to clear it also in PG_FINALLY() block in vacuum().\n\n---\n@@ -306,6 +306,7 @@ extern PGDLLIMPORT pg_atomic_uint32\n*VacuumSharedCostBalance;\n extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n extern PGDLLIMPORT int VacuumCostBalanceLocal;\n\n+extern bool VacuumFailsafeActive;\n\nDo we need PGDLLIMPORT for VacuumFailSafeActive?\n\n0002:\n\n@@ -2388,6 +2398,7 @@ vac_max_items_to_alloc_size(int max_items)\n return offsetof(VacDeadItems, items) +\nsizeof(ItemPointerData) * max_items;\n }\n\n+\n /*\n * vac_tid_reaped() -- is a particular tid deletable?\n *\n\nUnnecessary new line. 
There are some other unnecessary new lines in this patch.\n\n---\n@@ -307,6 +309,8 @@ extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n extern PGDLLIMPORT int VacuumCostBalanceLocal;\n\n extern bool VacuumFailsafeActive;\n+extern int VacuumCostLimit;\n+extern double VacuumCostDelay;\n\nand\n\n@@ -266,8 +266,6 @@ extern PGDLLIMPORT int max_parallel_maintenance_workers;\n extern PGDLLIMPORT int VacuumCostPageHit;\n extern PGDLLIMPORT int VacuumCostPageMiss;\n extern PGDLLIMPORT int VacuumCostPageDirty;\n-extern PGDLLIMPORT int VacuumCostLimit;\n-extern PGDLLIMPORT double VacuumCostDelay;\n\nDo we need PGDLLIMPORT too?\n\n---\n@@ -1773,20 +1773,33 @@ FreeWorkerInfo(int code, Datum arg)\n }\n }\n\n+\n /*\n- * Update the cost-based delay parameters, so that multiple workers consume\n- * each a fraction of the total available I/O.\n+ * Update vacuum cost-based delay-related parameters for autovacuum workers and\n+ * backends executing VACUUM or ANALYZE using the value of relevant gucs and\n+ * global state. This must be called during setup for vacuum and after every\n+ * config reload to ensure up-to-date values.\n */\n void\n-AutoVacuumUpdateDelay(void)\n+VacuumUpdateCosts(void)\n {\n\nIsn't it better to define VacuumUpdateCosts() in vacuum.c rather than\nautovacuum.c as this is now a common code for both vacuum and\nautovacuum?\n\n0003:\n\n@@ -501,9 +502,9 @@ vacuum(List *relations, VacuumParams *params,\n {\n ListCell *cur;\n\n- VacuumUpdateCosts();\n in_vacuum = true;\n- VacuumCostActive = (VacuumCostDelay > 0);\n+ VacuumFailsafeActive = false;\n+ VacuumUpdateCosts();\n\nHmm, if we initialize VacuumFailsafeActive here, should it be included\nin 0001 patch?\n\n---\n+ if (VacuumCostDelay > 0)\n+ VacuumCostActive = true;\n+ else\n+ {\n+ VacuumCostActive = false;\n+ VacuumCostBalance = 0;\n+ }\n\nI agree to update VacuumCostActive in VacuumUpdateCosts(). 
But if we\ndo that I think this change should be included in 0002 patch.\n\n---\n+ if (ConfigReloadPending && !analyze_in_outer_xact)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ VacuumUpdateCosts();\n+ }\n\nSince analyze_in_outer_xact is false by default, we reload the config\nfile in vacuum_delay_point() by default. We need to note that\nvacuum_delay_point() could be called via other paths, for example\ngin_cleanup_pending_list() and ambulkdelete() called by\nvalidate_index(). So it seems to me that we should do the opposite; we\nhave another global variable, say vacuum_can_reload_config, which is\nfalse by default, and is set to true only when vacuum() allows it. In\nvacuum_delay_point(), we reload the config file iff\n(ConfigReloadPending && vacuum_can_reload_config).\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 11:27:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Sun, Apr 2, 2023 at 10:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thank you for updating the patches. Here are comments for 0001, 0002,\n> and 0003 patches:\n\nThanks for the review!\n\nv13 attached with requested updates.\n\n> 0001:\n>\n> @@ -391,7 +389,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);\n> Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&\n> params->truncate != VACOPTVALUE_AUTO);\n> - vacrel->failsafe_active = false;\n> + VacuumFailsafeActive = false;\n>\n> If we go with the idea of using VacuumCostActive +\n> VacuumFailsafeActive, we need to make sure that both are cleared at\n> the end of the vacuum per table. Since the patch clears it only here,\n> it remains true even after vacuum() if we trigger the failsafe mode\n> for the last table in the table list.\n>\n> In addition to that, to ensure that also in an error case, I think we\n> need to clear it also in PG_FINALLY() block in vacuum().\n\nSo, in 0001, I tried to keep it exactly the same as\nLVRelState->failsafe_active except for it being a global. We don't\nactually use VacuumFailsafeActive in this commit except in vacuumlazy.c,\nwhich does its own management of the value (it resets it to false at the\ntop of heap_vacuum_rel()).\n\nIn the later commit which references VacuumFailsafeActive outside of\nvacuumlazy.c, I had reset it in PG_FINALLY(). I hadn't reset it in the\nrelation list loop in vacuum(). Autovacuum calls vacuum() for each\nrelation. 
However, you are right that for VACUUM with a list of\nrelations for a table access method other than heap, once set to true,\nif the table AM forgets to reset the value to false at the end of\nvacuuming the relation, it would stay true.\n\nI've set it to false now at the bottom of the loop through relations in\nvacuum().\n\n> ---\n> @@ -306,6 +306,7 @@ extern PGDLLIMPORT pg_atomic_uint32\n> *VacuumSharedCostBalance;\n> extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n> extern PGDLLIMPORT int VacuumCostBalanceLocal;\n>\n> +extern bool VacuumFailsafeActive;\n>\n> Do we need PGDLLIMPORT for VacuumFailSafeActive?\n\nI didn't add one because I thought extensions and other code probably\nshouldn't access this variable. I thought PGDLLIMPORT was only needed\nfor extensions built on windows to access variables.\n\n> 0002:\n>\n> @@ -2388,6 +2398,7 @@ vac_max_items_to_alloc_size(int max_items)\n> return offsetof(VacDeadItems, items) +\n> sizeof(ItemPointerData) * max_items;\n> }\n>\n> +\n> /*\n> * vac_tid_reaped() -- is a particular tid deletable?\n> *\n>\n> Unnecessary new line. There are some other unnecessary new lines in this patch.\n\nThanks! I think I got them all.\n\n> ---\n> @@ -307,6 +309,8 @@ extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n> extern PGDLLIMPORT int VacuumCostBalanceLocal;\n>\n> extern bool VacuumFailsafeActive;\n> +extern int VacuumCostLimit;\n> +extern double VacuumCostDelay;\n>\n> and\n>\n> @@ -266,8 +266,6 @@ extern PGDLLIMPORT int max_parallel_maintenance_workers;\n> extern PGDLLIMPORT int VacuumCostPageHit;\n> extern PGDLLIMPORT int VacuumCostPageMiss;\n> extern PGDLLIMPORT int VacuumCostPageDirty;\n> -extern PGDLLIMPORT int VacuumCostLimit;\n> -extern PGDLLIMPORT double VacuumCostDelay;\n>\n> Do we need PGDLLIMPORT too?\n\nI was on the fence about this. 
I annotated the new guc variables\nvacuum_cost_delay and vacuum_cost_limit with PGDLLIMPORT, but I did not\nannotate the variables used in vacuum code (VacuumCostLimit/Delay). I\nthink whether or not this is the right choice depends on two things:\nwhether or not my understanding of PGDLLIMPORT is correct and, if it is,\nwhether or not we want extensions to be able to access\nVacuumCostLimit/Delay or if just access to the guc variables is\nsufficient/desirable.\n\n> ---\n> @@ -1773,20 +1773,33 @@ FreeWorkerInfo(int code, Datum arg)\n> }\n> }\n>\n> +\n> /*\n> - * Update the cost-based delay parameters, so that multiple workers consume\n> - * each a fraction of the total available I/O.\n> + * Update vacuum cost-based delay-related parameters for autovacuum workers and\n> + * backends executing VACUUM or ANALYZE using the value of relevant gucs and\n> + * global state. This must be called during setup for vacuum and after every\n> + * config reload to ensure up-to-date values.\n> */\n> void\n> -AutoVacuumUpdateDelay(void)\n> +VacuumUpdateCosts(void\n>\n> Isn't it better to define VacuumUpdateCosts() in vacuum.c rather than\n> autovacuum.c as this is now a common code for both vacuum and\n> autovacuum?\n\nWe can't access members of WorkerInfoData from inside vacuum.c\n\n> 0003:\n>\n> @@ -501,9 +502,9 @@ vacuum(List *relations, VacuumParams *params,\n> {\n> ListCell *cur;\n>\n> - VacuumUpdateCosts();\n> in_vacuum = true;\n> - VacuumCostActive = (VacuumCostDelay > 0);\n> + VacuumFailsafeActive = false;\n> + VacuumUpdateCosts();\n>\n> Hmm, if we initialize VacuumFailsafeActive here, should it be included\n> in 0001 patch?\n\nSee comment above. This is the first patch where we use or reference it\noutside of vacuumlazy.c\n\n> ---\n> + if (VacuumCostDelay > 0)\n> + VacuumCostActive = true;\n> + else\n> + {\n> + VacuumCostActive = false;\n> + VacuumCostBalance = 0;\n> + }\n>\n> I agree to update VacuumCostActive in VacuumUpdateCosts(). 
But if we\n> do that I think this change should be included in 0002 patch.\n\nI'm a bit hesitant to do this because in 0002 VacuumCostActive cannot\nchange status while vacuuming a table or even between tables for VACUUM\nwhen a list of relations is specified (except for being disabled by\nfailsafe mode) Adding it to VacuumUpdateCosts() in 0003 makes it clear\nthat it could change while vacuuming a table, so we must update it.\n\nI previously had 0002 introduce AutoVacuumUpdateLimit(), which only\nupdated VacuumCostLimit with wi_cost_limit for autovacuum workers and\nthen called that in vacuum_delay_point() (instead of\nAutoVacuumUpdateDelay() or VacuumUpdateCosts()). I abandoned that idea\nin favor of the simplicity of having VacuumUpdateCosts() just update\nthose variables for everyone, since it could be reused in 0003.\n\nNow, I'm thinking the previous method might be more clear?\nOr is what I have okay?\n\n> ---\n> + if (ConfigReloadPending && !analyze_in_outer_xact)\n> + {\n> + ConfigReloadPending = false;\n> + ProcessConfigFile(PGC_SIGHUP);\n> + VacuumUpdateCosts();\n> + }\n>\n> Since analyze_in_outer_xact is false by default, we reload the config\n> file in vacuum_delay_point() by default. We need to note that\n> vacuum_delay_point() could be called via other paths, for example\n> gin_cleanup_pending_list() and ambulkdelete() called by\n> validate_index(). So it seems to me that we should do the opposite; we\n> have another global variable, say vacuum_can_reload_config, which is\n> false by default, and is set to true only when vacuum() allows it. In\n> vacuum_delay_point(), we reload the config file iff\n> (ConfigReloadPending && vacuum_can_reload_config).\n\nWow, great point. Thanks for catching this. I've made the update you\nsuggested. I also set vacuum_can_reload_config to false in PG_FINALLY()\nin vacuum().\n\n- Melanie",
"msg_date": "Mon, 3 Apr 2023 12:40:49 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> v13 attached with requested updates.\n\nI'm afraid I'd not been paying any attention to this discussion,\nbut better late than never. I'm okay with letting autovacuum\nprocesses reload config files more often than now. However,\nI object to allowing ProcessConfigFile to be called from within\ncommands in a normal user backend. The existing semantics are\nthat user backends respond to SIGHUP only at the start of processing\na user command, and I'm uncomfortable with suddenly deciding that\nthat can work differently if the command happens to be VACUUM.\nIt seems unprincipled and perhaps actively unsafe.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 14:43:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-03 14:43:14 -0400, Tom Lane wrote:\n> Melanie Plageman <melanieplageman@gmail.com> writes:\n> > v13 attached with requested updates.\n> \n> I'm afraid I'd not been paying any attention to this discussion,\n> but better late than never. I'm okay with letting autovacuum\n> processes reload config files more often than now. However,\n> I object to allowing ProcessConfigFile to be called from within\n> commands in a normal user backend. The existing semantics are\n> that user backends respond to SIGHUP only at the start of processing\n> a user command, and I'm uncomfortable with suddenly deciding that\n> that can work differently if the command happens to be VACUUM.\n> It seems unprincipled and perhaps actively unsafe.\n\nI think it should be ok in commands like VACUUM that already internally start\ntheir own transactions, and thus require to be run outside of a transaction\nand at the toplevel. I share your concerns about allowing config reload in\narbitrary places. While we might want to go there, it would require a lot more\nanalysis.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 3 Apr 2023 12:08:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 3:08 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-04-03 14:43:14 -0400, Tom Lane wrote:\n> > Melanie Plageman <melanieplageman@gmail.com> writes:\n> > > v13 attached with requested updates.\n> >\n> > I'm afraid I'd not been paying any attention to this discussion,\n> > but better late than never. I'm okay with letting autovacuum\n> > processes reload config files more often than now. However,\n> > I object to allowing ProcessConfigFile to be called from within\n> > commands in a normal user backend. The existing semantics are\n> > that user backends respond to SIGHUP only at the start of processing\n> > a user command, and I'm uncomfortable with suddenly deciding that\n> > that can work differently if the command happens to be VACUUM.\n> > It seems unprincipled and perhaps actively unsafe.\n>\n> I think it should be ok in commands like VACUUM that already internally start\n> their own transactions, and thus require to be run outside of a transaction\n> and at the toplevel. I share your concerns about allowing config reload in\n> arbitrary places. While we might want to go there, it would require a lot more\n> analysis.\n\nAs an alternative for your consideration, attached v14 set implements\nthe config file reload for autovacuum only (in 0003) and then enables it\nfor VACUUM and ANALYZE not in a nested transaction command (in 0004).\n\nPreviously I had the commits in the reverse order for ease of review (to\nseparate changes to worker limit balancing logic from config reload\ncode).\n\n- Melanie",
"msg_date": "Mon, 3 Apr 2023 18:35:07 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 1:41 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Sun, Apr 2, 2023 at 10:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Thank you for updating the patches. Here are comments for 0001, 0002,\n> > and 0003 patches:\n>\n> Thanks for the review!\n>\n> v13 attached with requested updates.\n>\n> > 0001:\n> >\n> > @@ -391,7 +389,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> > Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);\n> > Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&\n> > params->truncate != VACOPTVALUE_AUTO);\n> > - vacrel->failsafe_active = false;\n> > + VacuumFailsafeActive = false;\n> >\n> > If we go with the idea of using VacuumCostActive +\n> > VacuumFailsafeActive, we need to make sure that both are cleared at\n> > the end of the vacuum per table. Since the patch clears it only here,\n> > it remains true even after vacuum() if we trigger the failsafe mode\n> > for the last table in the table list.\n> >\n> > In addition to that, to ensure that also in an error case, I think we\n> > need to clear it also in PG_FINALLY() block in vacuum().\n>\n> So, in 0001, I tried to keep it exactly the same as\n> LVRelState->failsafe_active except for it being a global. We don't\n> actually use VacuumFailsafeActive in this commit except in vacuumlazy.c,\n> which does its own management of the value (it resets it to false at the\n> top of heap_vacuum_rel()).\n>\n> In the later commit which references VacuumFailsafeActive outside of\n> vacuumlazy.c, I had reset it in PG_FINALLY(). I hadn't reset it in the\n> relation list loop in vacuum(). Autovacuum calls vacuum() for each\n> relation. 
However, you are right that for VACUUM with a list of\n> relations for a table access method other than heap, once set to true,\n> if the table AM forgets to reset the value to false at the end of\n> vacuuming the relation, it would stay true.\n>\n> I've set it to false now at the bottom of the loop through relations in\n> vacuum().\n\nAgreed. Probably we can merge 0001 into 0003 but I leave it to\ncommitters. The 0001 patch mostly looks good to me except for one\npoint:\n\n@@ -391,7 +389,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);\n Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&\n params->truncate != VACOPTVALUE_AUTO);\n- vacrel->failsafe_active = false;\n+ VacuumFailsafeActive = false;\n vacrel->consider_bypass_optimization = true;\n vacrel->do_index_vacuuming = true;\n\nLooking at the 0003 patch, we set VacuumFailsafeActive to false per table:\n\n+ /*\n+ * Ensure VacuumFailsafeActive has been reset\nbefore vacuuming the\n+ * next relation relation.\n+ */\n+ VacuumFailsafeActive = false;\n\nGiven that we ensure it's reset before vacuuming the next table, do we\nneed to reset it in heap_vacuum_rel?\n\n(there is a typo; s/relation relation/relation/)\n\n>\n> > ---\n> > @@ -306,6 +306,7 @@ extern PGDLLIMPORT pg_atomic_uint32\n> > *VacuumSharedCostBalance;\n> > extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n> > extern PGDLLIMPORT int VacuumCostBalanceLocal;\n> >\n> > +extern bool VacuumFailsafeActive;\n> >\n> > Do we need PGDLLIMPORT for VacuumFailSafeActive?\n>\n> I didn't add one because I thought extensions and other code probably\n> shouldn't access this variable. 
I thought PGDLLIMPORT was only needed\n> for extensions built on windows to access variables.\n\nAgreed.\n\n>\n> > 0002:\n> >\n> > @@ -2388,6 +2398,7 @@ vac_max_items_to_alloc_size(int max_items)\n> > return offsetof(VacDeadItems, items) +\n> > sizeof(ItemPointerData) * max_items;\n> > }\n> >\n> > +\n> > /*\n> > * vac_tid_reaped() -- is a particular tid deletable?\n> > *\n> >\n> > Unnecessary new line. There are some other unnecessary new lines in this patch.\n>\n> Thanks! I think I got them all.\n>\n> > ---\n> > @@ -307,6 +309,8 @@ extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n> > extern PGDLLIMPORT int VacuumCostBalanceLocal;\n> >\n> > extern bool VacuumFailsafeActive;\n> > +extern int VacuumCostLimit;\n> > +extern double VacuumCostDelay;\n> >\n> > and\n> >\n> > @@ -266,8 +266,6 @@ extern PGDLLIMPORT int max_parallel_maintenance_workers;\n> > extern PGDLLIMPORT int VacuumCostPageHit;\n> > extern PGDLLIMPORT int VacuumCostPageMiss;\n> > extern PGDLLIMPORT int VacuumCostPageDirty;\n> > -extern PGDLLIMPORT int VacuumCostLimit;\n> > -extern PGDLLIMPORT double VacuumCostDelay;\n> >\n> > Do we need PGDLLIMPORT too?\n>\n> I was on the fence about this. I annotated the new guc variables\n> vacuum_cost_delay and vacuum_cost_limit with PGDLLIMPORT, but I did not\n> annotate the variables used in vacuum code (VacuumCostLimit/Delay). I\n> think whether or not this is the right choice depends on two things:\n> whether or not my understanding of PGDLLIMPORT is correct and, if it is,\n> whether or not we want extensions to be able to access\n> VacuumCostLimit/Delay or if just access to the guc variables is\n> sufficient/desirable.\n\nI guess it would be better to keep both accessible for backward\ncompatibility. 
Extensions are able to access both GUC values and\nvalues that are actually used for vacuum delays (as we used to use the\nsame variables).\n\n>\n> > ---\n> > @@ -1773,20 +1773,33 @@ FreeWorkerInfo(int code, Datum arg)\n> > }\n> > }\n> >\n> > +\n> > /*\n> > - * Update the cost-based delay parameters, so that multiple workers consume\n> > - * each a fraction of the total available I/O.\n> > + * Update vacuum cost-based delay-related parameters for autovacuum workers and\n> > + * backends executing VACUUM or ANALYZE using the value of relevant gucs and\n> > + * global state. This must be called during setup for vacuum and after every\n> > + * config reload to ensure up-to-date values.\n> > */\n> > void\n> > -AutoVacuumUpdateDelay(void)\n> > +VacuumUpdateCosts(void\n> >\n> > Isn't it better to define VacuumUpdateCosts() in vacuum.c rather than\n> > autovacuum.c as this is now a common code for both vacuum and\n> > autovacuum?\n>\n> We can't access members of WorkerInfoData from inside vacuum.c\n\nOops, you're right.\n\n>\n> > 0003:\n> >\n> > @@ -501,9 +502,9 @@ vacuum(List *relations, VacuumParams *params,\n> > {\n> > ListCell *cur;\n> >\n> > - VacuumUpdateCosts();\n> > in_vacuum = true;\n> > - VacuumCostActive = (VacuumCostDelay > 0);\n> > + VacuumFailsafeActive = false;\n> > + VacuumUpdateCosts();\n> >\n> > Hmm, if we initialize VacuumFailsafeActive here, should it be included\n> > in 0001 patch?\n>\n> See comment above. This is the first patch where we use or reference it\n> outside of vacuumlazy.c\n>\n> > ---\n> > + if (VacuumCostDelay > 0)\n> > + VacuumCostActive = true;\n> > + else\n> > + {\n> > + VacuumCostActive = false;\n> > + VacuumCostBalance = 0;\n> > + }\n> >\n> > I agree to update VacuumCostActive in VacuumUpdateCosts(). 
But if we\n> > do that I think this change should be included in 0002 patch.\n>\n> I'm a bit hesitant to do this because in 0002 VacuumCostActive cannot\n> change status while vacuuming a table or even between tables for VACUUM\n> when a list of relations is specified (except for being disabled by\n> failsafe mode) Adding it to VacuumUpdateCosts() in 0003 makes it clear\n> that it could change while vacuuming a table, so we must update it.\n>\n\nAgreed.\n\n> I previously had 0002 introduce AutoVacuumUpdateLimit(), which only\n> updated VacuumCostLimit with wi_cost_limit for autovacuum workers and\n> then called that in vacuum_delay_point() (instead of\n> AutoVacuumUpdateDelay() or VacuumUpdateCosts()). I abandoned that idea\n> in favor of the simplicity of having VacuumUpdateCosts() just update\n> those variables for everyone, since it could be reused in 0003.\n>\n> Now, I'm thinking the previous method might be more clear?\n> Or is what I have okay?\n\nI'm fine with the current one.\n\n>\n> > ---\n> > + if (ConfigReloadPending && !analyze_in_outer_xact)\n> > + {\n> > + ConfigReloadPending = false;\n> > + ProcessConfigFile(PGC_SIGHUP);\n> > + VacuumUpdateCosts();\n> > + }\n> >\n> > Since analyze_in_outer_xact is false by default, we reload the config\n> > file in vacuum_delay_point() by default. We need to note that\n> > vacuum_delay_point() could be called via other paths, for example\n> > gin_cleanup_pending_list() and ambulkdelete() called by\n> > validate_index(). So it seems to me that we should do the opposite; we\n> > have another global variable, say vacuum_can_reload_config, which is\n> > false by default, and is set to true only when vacuum() allows it. In\n> > vacuum_delay_point(), we reload the config file iff\n> > (ConfigReloadPending && vacuum_can_reload_config).\n>\n> Wow, great point. Thanks for catching this. I've made the update you\n> suggested. 
I also set vacuum_can_reload_config to false in PG_FINALLY()\n> in vacuum().\n\nHere are some review comments for 0002-0004 patches:\n\n0002:\n- if (MyWorkerInfo)\n+ if (am_autovacuum_launcher)\n+ return;\n+\n+ if (am_autovacuum_worker)\n {\n- VacuumCostDelay = MyWorkerInfo->wi_cost_delay;\n VacuumCostLimit = MyWorkerInfo->wi_cost_limit;\n+ VacuumCostDelay = MyWorkerInfo->wi_cost_delay;\n+ }\n\nIsn't it a bit safer to check MyWorkerInfo instead of\nam_autovacuum_worker? Also, I don't think there is any reason why we\nwant to exclude only the autovacuum launcher.\n\n---\n+ * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n+ * invalid values?\n\nHow about using the default value of normal backends, 200 and 0?\n\n0003:\n\n@@ -83,6 +84,7 @@ int vacuum_cost_limit;\n */\n int VacuumCostLimit = 0;\n double VacuumCostDelay = -1;\n+static bool vacuum_can_reload_config = false;\n\nIn vacuum.c, we use snake case for GUC parameters and camel case for\nother global variables, so it seems better to rename it\nVacuumCanReloadConfig. Sorry, that's my fault.\n\n0004:\n\n+ if (tab->at_dobalance)\n+ pg_atomic_test_set_flag(&MyWorkerInfo->wi_dobalance);\n+ else\n\nThe comment of pg_atomic_test_set_flag() says that it returns false if\nthe flag has not successfully been set:\n\n * pg_atomic_test_set_flag - TAS()\n *\n * Returns true if the flag has successfully been set, false otherwise.\n *\n * Acquire (including read barrier) semantics.\n\nBut IIUC we don't need to worry about that as only one process updates\nthe flag, right? It might be a good idea to add some comments why we\ndon't need to check the return value.\n\n---\n- if (worker->wi_proc != NULL)\n- elog(DEBUG2, \"autovac_balance_cost(pid=%d\ndb=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\ncost_delay=%g)\",\n- worker->wi_proc->pid,\nworker->wi_dboid, worker->wi_tableoid,\n- worker->wi_dobalance ? 
\"yes\" : \"no\",\n- worker->wi_cost_limit,\nworker->wi_cost_limit_base,\n- worker->wi_cost_delay);\n\nI think it's better to keep this kind of log in some form for\ndebugging. For example, we can show these values of autovacuum workers\nin VacuumUpdateCosts().\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 17:26:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 4 Apr 2023, at 00:35, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> \n> On Mon, Apr 3, 2023 at 3:08 PM Andres Freund <andres@anarazel.de> wrote:\n>> On 2023-04-03 14:43:14 -0400, Tom Lane wrote:\n>>> Melanie Plageman <melanieplageman@gmail.com> writes:\n>>>> v13 attached with requested updates.\n>>> \n>>> I'm afraid I'd not been paying any attention to this discussion,\n>>> but better late than never. I'm okay with letting autovacuum\n>>> processes reload config files more often than now. However,\n>>> I object to allowing ProcessConfigFile to be called from within\n>>> commands in a normal user backend. The existing semantics are\n>>> that user backends respond to SIGHUP only at the start of processing\n>>> a user command, and I'm uncomfortable with suddenly deciding that\n>>> that can work differently if the command happens to be VACUUM.\n>>> It seems unprincipled and perhaps actively unsafe.\n>> \n>> I think it should be ok in commands like VACUUM that already internally start\n>> their own transactions, and thus require to be run outside of a transaction\n>> and at the toplevel. I share your concerns about allowing config reload in\n>> arbitrary places. 
While we might want to go there, it would require a lot more\n>> analysis.\n\nThinking more on this I'm leaning towards going with allowing more frequent\nreloads in autovacuum, and saving the same for VACUUM for more careful study.\nThe general case is probably fine but I'm not convinced that there aren't error\ncases which can present unpleasant scenarios.\n\nRegarding the autovacuum part of this patch I think we are down to the final\ndetails and I think it's doable to finish this in time for 16.\n\n> As an alternative for your consideration, attached v14 set implements\n> the config file reload for autovacuum only (in 0003) and then enables it\n> for VACUUM and ANALYZE not in a nested transaction command (in 0004).\n> \n> Previously I had the commits in the reverse order for ease of review (to\n> separate changes to worker limit balancing logic from config reload\n> code).\n\nA few comments on top of already submitted reviews, will do another pass over\nthis later today.\n\n+ * VacuumFailsafeActive is a defined as a global so that we can determine\n+ * whether or not to re-enable cost-based vacuum delay when vacuuming a table.\n\nThis comment should be expanded to document who we expect to inspect this\nvariable in order to decide on cost-based vacuum.\n\nMoving the failsafe switch into a global context means we face the risk of an\nextension changing it independently of the GUCs that control it (or the code\nrelying on it) such that these are out of sync. External code messing up\ninternal state is not new and of course outside of our control, but it's worth\nat least considering. There isn't too much we can do here, but perhaps expand\nthis comment to include a \"do not change this\" note?\n\n+extern bool VacuumFailsafeActive;\n\nWhile I agree with upthread review comments that extensions shouldn't poke at\nthis, not decorating it with PGDLLEXPORT adds little protection and only causes\ninconsistencies in symbol exports across platforms. 
We only explicitly hide\nsymbols in shared libraries IIRC.\n\n+extern int VacuumCostLimit;\n+extern double VacuumCostDelay;\n ...\n-extern PGDLLIMPORT int VacuumCostLimit;\n-extern PGDLLIMPORT double VacuumCostDelay;\n\nSame with these, I don't think this is according to our default visibility.\nMoreover, I'm not sure it's a good idea to perform this rename. This will keep\nVacuumCostLimit and VacuumCostDelay exported, but change their meaning. Any\nexternal code referring to these thinking they are backing the GUCs will still\ncompile, but may be broken in subtle ways. Is there a reason for not keeping\nthe current GUC variables and instead add net new ones?\n\n\n+ * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n+ * invalid values?\n+ */\n+int VacuumCostLimit = 0;\n+double VacuumCostDelay = -1;\n\nI think the important part is to make sure they are never accessed without\nVacuumUpdateCosts having been called first. I think that's the case here, but\nit's not entirely clear. Do you see a codepath where that could happen? If\nthey are initialized to a sentinel value we also need to check for that, so\ninitializing to the defaults from the corresponding GUCs seems better.\n\n+* Update VacuumCostLimit with the correct value for an autovacuum worker, given\n\nTrivial whitespace error in function comment.\n\n\n+static double av_relopt_cost_delay = -1;\n+static int av_relopt_cost_limit = 0;\n\nThese need a comment IMO, ideally one that explain why they are initialized to\nthose values.\n\n\n+ /* There is at least 1 autovac worker (this worker). */\n+ Assert(nworkers_for_balance > 0);\n\nIs there a scenario where this is expected to fail? If so I think this should\nbe handled and not just an Assert.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Apr 2023 15:36:28 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> The 0001 patch mostly looks good to me except for one\n> point:\n>\n> @@ -391,7 +389,7 @@ heap_vacuum_rel(Relation rel, VacuumParams *params,\n> Assert(params->index_cleanup != VACOPTVALUE_UNSPECIFIED);\n> Assert(params->truncate != VACOPTVALUE_UNSPECIFIED &&\n> params->truncate != VACOPTVALUE_AUTO);\n> - vacrel->failsafe_active = false;\n> + VacuumFailsafeActive = false;\n> vacrel->consider_bypass_optimization = true;\n> vacrel->do_index_vacuuming = true;\n>\n> Looking at the 0003 patch, we set VacuumFailsafeActive to false per table:\n>\n> + /*\n> + * Ensure VacuumFailsafeActive has been reset\n> before vacuuming the\n> + * next relation relation.\n> + */\n> + VacuumFailsafeActive = false;\n>\n> Given that we ensure it's reset before vacuuming the next table, do we\n> need to reset it in heap_vacuum_rel?\n\nI've changed the one in heap_vacuum_rel() to an assert.\n\n> (there is a typo; s/relation relation/relation/)\n\nThanks! fixed.\n\n> > > 0002:\n> > >\n> > > @@ -2388,6 +2398,7 @@ vac_max_items_to_alloc_size(int max_items)\n> > > return offsetof(VacDeadItems, items) +\n> > > sizeof(ItemPointerData) * max_items;\n> > > }\n> > >\n> > > +\n> > > /*\n> > > * vac_tid_reaped() -- is a particular tid deletable?\n> > > *\n> > >\n> > > Unnecessary new line. There are some other unnecessary new lines in this patch.\n> >\n> > Thanks! 
I think I got them all.\n> >\n> > > ---\n> > > @@ -307,6 +309,8 @@ extern PGDLLIMPORT pg_atomic_uint32 *VacuumActiveNWorkers;\n> > > extern PGDLLIMPORT int VacuumCostBalanceLocal;\n> > >\n> > > extern bool VacuumFailsafeActive;\n> > > +extern int VacuumCostLimit;\n> > > +extern double VacuumCostDelay;\n> > >\n> > > and\n> > >\n> > > @@ -266,8 +266,6 @@ extern PGDLLIMPORT int max_parallel_maintenance_workers;\n> > > extern PGDLLIMPORT int VacuumCostPageHit;\n> > > extern PGDLLIMPORT int VacuumCostPageMiss;\n> > > extern PGDLLIMPORT int VacuumCostPageDirty;\n> > > -extern PGDLLIMPORT int VacuumCostLimit;\n> > > -extern PGDLLIMPORT double VacuumCostDelay;\n> > >\n> > > Do we need PGDLLIMPORT too?\n> >\n> > I was on the fence about this. I annotated the new guc variables\n> > vacuum_cost_delay and vacuum_cost_limit with PGDLLIMPORT, but I did not\n> > annotate the variables used in vacuum code (VacuumCostLimit/Delay). I\n> > think whether or not this is the right choice depends on two things:\n> > whether or not my understanding of PGDLLIMPORT is correct and, if it is,\n> > whether or not we want extensions to be able to access\n> > VacuumCostLimit/Delay or if just access to the guc variables is\n> > sufficient/desirable.\n>\n> I guess it would be better to keep both accessible for backward\n> compatibility. Extensions are able to access both GUC values and\n> values that are actually used for vacuum delays (as we used to use the\n> same variables).\n\n> Here are some review comments for 0002-0004 patches:\n>\n> 0002:\n> - if (MyWorkerInfo)\n> + if (am_autovacuum_launcher)\n> + return;\n> +\n> + if (am_autovacuum_worker)\n> {\n> - VacuumCostDelay = MyWorkerInfo->wi_cost_delay;\n> VacuumCostLimit = MyWorkerInfo->wi_cost_limit;\n> + VacuumCostDelay = MyWorkerInfo->wi_cost_delay;\n> + }\n>\n> Isn't it a bit safer to check MyWorkerInfo instead of\n> am_autovacuum_worker?\n\nAh, since we access it. 
I've made the change.\n\n> Also, I don't think there is any reason why we want to exclude only\n> the autovacuum launcher.\n\nMy rationale is that the launcher is the only other process type which\nmight reasonably be executing this code besides autovac workers, client\nbackends doing VACUUM/ANALYZE, and parallel vacuum workers. Is it\nconfusing to have the launcher have VacuumCostLimit and VacuumCostDelay\nset to the guc values for explicit VACUUM and ANALYZE -- even if the\nlauncher doesn't use these variables?\n\nI've removed the check, because I do agree with you that it may be\nunnecessarily confusing in the code.\n\n> ---\n> + * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n> + * invalid values?\n>\n> How about using the default value of normal backends, 200 and 0?\n\nI've gone with this suggestion.\n\n> 0003:\n>\n> @@ -83,6 +84,7 @@ int vacuum_cost_limit;\n> */\n> int VacuumCostLimit = 0;\n> double VacuumCostDelay = -1;\n> +static bool vacuum_can_reload_config = false;\n>\n> In vacuum.c, we use snake case for GUC parameters and camel case for\n> other global variables, so it seems better to rename it\n> VacuumCanReloadConfig. Sorry, that's my fault.\n\nI have renamed it.\n\n> 0004:\n>\n> + if (tab->at_dobalance)\n> + pg_atomic_test_set_flag(&MyWorkerInfo->wi_dobalance);\n> + else\n>\n> The comment of pg_atomic_test_set_flag() says that it returns false if\n> the flag has not successfully been set:\n>\n> * pg_atomic_test_set_flag - TAS()\n> *\n> * Returns true if the flag has successfully been set, false otherwise.\n> *\n> * Acquire (including read barrier) semantics.\n>\n> But IIUC we don't need to worry about that as only one process updates\n> the flag, right? 
It might be a good idea to add some comments why we\n> don't need to check the return value.\n\nI have added this comment.\n\n> ---\n> - if (worker->wi_proc != NULL)\n> - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> cost_delay=%g)\",\n> - worker->wi_proc->pid,\n> worker->wi_dboid, worker->wi_tableoid,\n> - worker->wi_dobalance ? \"yes\" : \"no\",\n> - worker->wi_cost_limit,\n> worker->wi_cost_limit_base,\n> - worker->wi_cost_delay);\n>\n> I think it's better to keep this kind of log in some form for\n> debugging. For example, we can show these values of autovacuum workers\n> in VacuumUpdateCosts().\n\nI added a message to do_autovacuum() after calling VacuumUpdateCosts()\nin the loop vacuuming each table. That means it will happen once per\ntable. It's not ideal that I had to move the call to VacuumUpdateCosts()\nbehind the shared lock in that loop so that we could access the pid and\nsuch in the logging message after updating the cost and delay, but it is\nprobably okay. Though no one is going to be changing those at this\npoint, it still seemed better to access them under the lock.\n\nThis does mean we won't log anything when we do change the values of\nVacuumCostDelay and VacuumCostLimit while vacuuming a table. Is it worth\nadding some code to do that in VacuumUpdateCosts() (only when the value\nhas changed, not on every call to VacuumUpdateCosts())? 
Or perhaps we\ncould add it in the config reload branch that is already in\nvacuum_delay_point()?\n\n\nOn Tue, Apr 4, 2023 at 9:36 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n\nThanks for the review!\n\n> > On 4 Apr 2023, at 00:35, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >\n> > On Mon, Apr 3, 2023 at 3:08 PM Andres Freund <andres@anarazel.de> wrote:\n> >> On 2023-04-03 14:43:14 -0400, Tom Lane wrote:\n> >>> Melanie Plageman <melanieplageman@gmail.com> writes:\n> >>>> v13 attached with requested updates.\n> >>>\n> >>> I'm afraid I'd not been paying any attention to this discussion,\n> >>> but better late than never. I'm okay with letting autovacuum\n> >>> processes reload config files more often than now. However,\n> >>> I object to allowing ProcessConfigFile to be called from within\n> >>> commands in a normal user backend. The existing semantics are\n> >>> that user backends respond to SIGHUP only at the start of processing\n> >>> a user command, and I'm uncomfortable with suddenly deciding that\n> >>> that can work differently if the command happens to be VACUUM.\n> >>> It seems unprincipled and perhaps actively unsafe.\n> >>\n> >> I think it should be ok in commands like VACUUM that already internally start\n> >> their own transactions, and thus require to be run outside of a transaction\n> >> and at the toplevel. I share your concerns about allowing config reload in\n> >> arbitrary places. 
While we might want to go there, it would require a lot more\n> >> analysis.\n>\n> Thinking more on this I'm leaning towards going with allowing more frequent\n> reloads in autovacuum, and saving the same for VACUUM for more careful study.\n> The general case is probably fine but I'm not convinced that there aren't error\n> cases which can present unpleasant scenarios.\n\nIn attached v15, I've dropped support for VACUUM and non-nested ANALYZE.\nIt is like a 5 line change and could be added back at any time.\n\n> > As an alternative for your consideration, attached v14 set implements\n> > the config file reload for autovacuum only (in 0003) and then enables it\n> > for VACUUM and ANALYZE not in a nested transaction command (in 0004).\n> >\n> > Previously I had the commits in the reverse order for ease of review (to\n> > separate changes to worker limit balancing logic from config reload\n> > code).\n>\n> A few comments on top of already submitted reviews, will do another pass over\n> this later today.\n>\n> + * VacuumFailsafeActive is a defined as a global so that we can determine\n> + * whether or not to re-enable cost-based vacuum delay when vacuuming a table.\n>\n> This comment should be expanded to document who we expect to inspect this\n> variable in order to decide on cost-based vacuum.\n>\n> Moving the failsafe switch into a global context means we face the risk of an\n> extension changing it independently of the GUCs that control it (or the code\n> relying on it) such that these are out of sync. External code messing up\n> internal state is not new and of course outside of our control, but it's worth\n> at least considering. 
There isn't too much we can do here, but perhaps expand\n> this comment to include a \"do not change this\" note?\n\nI've updated the comment to mention how table AM-agnostic VACUUM code\nuses it and to say that table AMs can set it if they want that behavior.\n\n> +extern bool VacuumFailsafeActive;\n>\n> While I agree with upthread review comments that extensions shoulnd't poke at\n> this, not decorating it with PGDLLEXPORT adds little protection and only cause\n> inconsistencies in symbol exports across platforms. We only explicitly hide\n> symbols in shared libraries IIRC.\n\nI've updated this.\n\n> +extern int VacuumCostLimit;\n> +extern double VacuumCostDelay;\n> ...\n> -extern PGDLLIMPORT int VacuumCostLimit;\n> -extern PGDLLIMPORT double VacuumCostDelay;\n>\n> Same with these, I don't think this is according to our default visibility.\n> Moreover, I'm not sure it's a good idea to perform this rename. This will keep\n> VacuumCostLimit and VacuumCostDelay exported, but change their meaning. Any\n> external code referring to these thinking they are backing the GUCs will still\n> compile, but may be broken in subtle ways. Is there a reason for not keeping\n> the current GUC variables and instead add net new ones?\n\nWhen VacuumCostLimit was the same variable in the code and for the GUC\nvacuum_cost_limit, everytime we reload the config file, VacuumCostLimit\nis overwritten. Autovacuum workers have to overwrite this value with the\nappropriate one for themselves given the balancing logic and the value\nof autovacuum_vacuum_cost_limit. 
However, the problem is, because you\ncan specify -1 for autovacuum_vacuum_cost_limit to indicate it should\nfall back to vacuum_cost_limit, we have to reference the value of\nVacuumCostLimit when calculating the new autovacuum worker's cost limit\nafter a config reload.\n\nBut, you have to be sure you *only* do this after a config reload when\nthe value of VacuumCostLimit is fresh and unmodified or you risk\ndividing the value of VacuumCostLimit over and over. That means it is\nunsafe to call functions updating the cost limit more than once.\n\nThis orchestration wasn't as difficult when we only reloaded the config\nfile once every table. We were careful about it and also kept the\noriginal \"base\" cost limit around from table_recheck_autovac(). However,\nonce we started reloading the config file more often, this no longer\nworks.\n\nBy separating the variables modified when the gucs are set and the ones\nused the code, we can make sure we always have the original value the\nguc was set to in vacuum_cost_limit and autovacuum_vacuum_cost_limit,\nwhenever we need to reference it.\n\nThat being said, perhaps we should document what extensions should do?\nDo you think they will want to use the variables backing the gucs or to\nbe able to overwrite the variables being used in the code?\n\nOh, also I've annotated these with PGDLLIMPORT too.\n\n> + * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n> + * invalid values?\n> + */\n> +int VacuumCostLimit = 0;\n> +double VacuumCostDelay = -1;\n>\n> I think the important part is to make sure they are never accessed without\n> VacuumUpdateCosts having been called first. I think that's the case here, but\n> it's not entirely clear. Do you see a codepath where that could happen? 
If\n> they are initialized to a sentinel value we also need to check for that, so\n> initializing to the defaults from the corresponding GUCs seems better.\n\nI don't see a case where autovacuum could access these without calling\nVacuumUpdateCosts() first. I think the other callers of\nvacuum_delay_point() are the issue (gist/gin/hash/etc).\n\nIt might need a bit more thought.\n\nMy concern was that these variables correspond to multiple GUCs each\ndepending on the backend type, and those backends have different\ndefaults (e.g. autovac workers default cost delay is different than\nclient backend doing vacuum cost delay).\n\nHowever, what I have done in this version is initialize them to the\ndefaults for a client backend executing VACUUM or ANALYZE, since I am\nfairly confident that autovacuum will not use them without calling\nVacuumUpdateCosts().\n\n>\n> +* Update VacuumCostLimit with the correct value for an autovacuum worker, given\n>\n> Trivial whitespace error in function comment.\n\nFixed.\n\n> +static double av_relopt_cost_delay = -1;\n> +static int av_relopt_cost_limit = 0;\n>\n> These need a comment IMO, ideally one that explain why they are initialized to\n> those values.\n\nI've added a comment.\n\n> + /* There is at least 1 autovac worker (this worker). */\n> + Assert(nworkers_for_balance > 0);\n>\n> Is there a scenario where this is expected to fail? If so I think this should\n> be handled and not just an Assert.\n\nNo, this isn't expected to happen because an autovacuum worker would\nhave called autovac_recalculate_workers_for_balance() before calling\nVacuumUpdateCosts() (which calls AutoVacuumUpdateLimit()) in\ndo_autovacuum(). But, if someone were to move around or add a call to\nVacuumUpdateCosts() there is a chance it could happen.\n\n- Melanie",
"msg_date": "Tue, 4 Apr 2023 16:04:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 5:05 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > ---\n> > - if (worker->wi_proc != NULL)\n> > - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> > db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> > cost_delay=%g)\",\n> > - worker->wi_proc->pid,\n> > worker->wi_dboid, worker->wi_tableoid,\n> > - worker->wi_dobalance ? \"yes\" : \"no\",\n> > - worker->wi_cost_limit,\n> > worker->wi_cost_limit_base,\n> > - worker->wi_cost_delay);\n> >\n> > I think it's better to keep this kind of log in some form for\n> > debugging. For example, we can show these values of autovacuum workers\n> > in VacuumUpdateCosts().\n>\n> I added a message to do_autovacuum() after calling VacuumUpdateCosts()\n> in the loop vacuuming each table. That means it will happen once per\n> table. It's not ideal that I had to move the call to VacuumUpdateCosts()\n> behind the shared lock in that loop so that we could access the pid and\n> such in the logging message after updating the cost and delay, but it is\n> probably okay. Though noone is going to be changing those at this\n> point, it still seemed better to access them under the lock.\n>\n> This does mean we won't log anything when we do change the values of\n> VacuumCostDelay and VacuumCostLimit while vacuuming a table. Is it worth\n> adding some code to do that in VacuumUpdateCosts() (only when the value\n> has changed not on every call to VacuumUpdateCosts())? Or perhaps we\n> could add it in the config reload branch that is already in\n> vacuum_delay_point()?\n\nPreviously, we used to show the pid in the log since a worker/launcher\nset other workers' delay costs. But now that the worker sets its delay\ncosts, we don't need to show the pid in the log. Also, I think it's\nuseful for debugging and investigating the system if we log it when\nchanging the values. 
The log I imagined to add was like:\n\n@@ -1801,6 +1801,13 @@ VacuumUpdateCosts(void)\n VacuumCostDelay = vacuum_cost_delay;\n\n AutoVacuumUpdateLimit();\n+\n+ elog(DEBUG2, \"autovacuum update costs (db=%u, rel=%u,\ndobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n+ MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n+ pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance)\n? \"no\" : \"yes\",\n+ VacuumCostLimit, VacuumCostDelay,\n+ VacuumCostDelay > 0 ? \"yes\" : \"no\",\n+ VacuumFailsafeActive ? \"yes\" : \"no\");\n }\n else\n {\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 14:56:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Hi.\n\nAbout 0001:\n\n+ * VacuumFailsafeActive is a defined as a global so that we can determine\n+ * whether or not to re-enable cost-based vacuum delay when vacuuming a table.\n+ * If failsafe mode has been engaged, we will not re-enable cost-based delay\n+ * for the table until after vacuuming has completed, regardless of other\n+ * settings. Only VACUUM code should inspect this variable and only table\n+ * access methods should set it. In Table AM-agnostic VACUUM code, this\n+ * variable controls whether or not to allow cost-based delays. Table AMs are\n+ * free to use it if they desire this behavior.\n+ */\n+bool\t\tVacuumFailsafeActive = false;\n\nIf I understand this correctly, there seems to be an issue. The\nAM-agnostic VACUUM code is setting it and no table AMs actually do\nthat.\n\n\n0003:\n+\n+\t\t\t/*\n+\t\t\t * Ensure VacuumFailsafeActive has been reset before vacuuming the\n+\t\t\t * next relation.\n+\t\t\t */\n+\t\t\tVacuumFailsafeActive = false;\n \t\t}\n \t}\n \tPG_FINALLY();\n \t{\n \t\tin_vacuum = false;\n \t\tVacuumCostActive = false;\n+\t\tVacuumFailsafeActive = false;\n+\t\tVacuumCostBalance = 0;\n\nThere is no need to reset VacuumFailsafeActive in the PG_TRY() block.\n\n\n+\t/*\n+\t * Reload the configuration file if requested. 
This allows changes to\n+\t * autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay to take\n+\t * effect while a table is being vacuumed or analyzed.\n+\t */\n+\tif (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n+\t{\n+\t\tConfigReloadPending = false;\n+\t\tProcessConfigFile(PGC_SIGHUP);\n+\t\tVacuumUpdateCosts();\n+\t}\n\nI believe we should prevent unnecessary reloading when\nVacuumFailsafeActive is true.\n\n\n+\t\tAutoVacuumUpdateLimit();\n\nI'm not entirely sure, but it might be better to name this\nAutoVacuumUpdateCostLimit().\n\n\n+\tpg_atomic_flag wi_dobalance;\n...\n+\t\t/*\n+\t\t * We only expect this worker to ever set the flag, so don't bother\n+\t\t * checking the return value. We shouldn't have to retry.\n+\t\t */\n+\t\tif (tab->at_dobalance)\n+\t\t\tpg_atomic_test_set_flag(&MyWorkerInfo->wi_dobalance);\n+\t\telse\n+\t\t\tpg_atomic_clear_flag(&MyWorkerInfo->wi_dobalance);\n\n\t\tLWLockAcquire(AutovacuumLock, LW_SHARED);\n\n\t\tautovac_recalculate_workers_for_balance();\n\nI don't see the need for using atomic here. The code is executed\ninfrequently and we already take a lock while counting do_balance\nworkers. So sticking with the old locking method (taking LW_EXCLUSIVE\nthen set wi_dobalance then do balance) should be fine.\n\n\n+void\n+AutoVacuumUpdateLimit(void)\n...\n+\tif (av_relopt_cost_limit > 0)\n+\t\tVacuumCostLimit = av_relopt_cost_limit;\n+\telse\n\nI think we should use wi_dobalance to decide if we need to do balance\nor not. We don't need to take a lock to do that since only the process\nupdates it.\n\n\n/*\n \t\t * Remove my info from shared memory. 
We could, but intentionally\n-\t\t * don't, clear wi_cost_limit and friends --- this is on the\n-\t\t * assumption that we probably have more to do with similar cost\n-\t\t * settings, so we don't want to give up our share of I/O for a very\n-\t\t * short interval and thereby thrash the global balance.\n+\t\t * don't, unset wi_dobalance on the assumption that we are more likely\n+\t\t * than not to vacuum a table with no table options next, so we don't\n+\t\t * want to give up our share of I/O for a very short interval and\n+\t\t * thereby thrash the global balance.\n \t\t */\n \t\tLWLockAcquire(AutovacuumScheduleLock, LW_EXCLUSIVE);\n \t\tMyWorkerInfo->wi_tableoid = InvalidOid;\n\nThe comment mentions wi_dobalance, but it doesn't appear here..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 05 Apr 2023 16:16:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 4 Apr 2023, at 22:04, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> \n> On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n>> Also, I don't think there is any reason why we want to exclude only\n>> the autovacuum launcher.\n> \n> My rationale is that the launcher is the only other process type which\n> might reasonably be executing this code besides autovac workers, client\n> backends doing VACUUM/ANALYZE, and parallel vacuum workers. Is it\n> confusing to have the launcher have VacuumCostLimt and VacuumCostDelay\n> set to the guc values for explicit VACUUM and ANALYZE -- even if the\n> launcher doesn't use these variables?\n> \n> I've removed the check, because I do agree with you that it may be\n> unnecessarily confusing in the code.\n\n+1\n\n> On Tue, Apr 4, 2023 at 9:36 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 4 Apr 2023, at 00:35, Melanie Plageman <melanieplageman@gmail.com> wrote:\n\n>> Thinking more on this I'm leaning towards going with allowing more frequent\n>> reloads in autovacuum, and saving the same for VACUUM for more careful study.\n>> The general case is probably fine but I'm not convinced that there aren't error\n>> cases which can present unpleasant scenarios.\n> \n> In attached v15, I've dropped support for VACUUM and non-nested ANALYZE.\n> It is like a 5 line change and could be added back at any time.\n\nI think thats the best option for now.\n\n>> +extern int VacuumCostLimit;\n>> +extern double VacuumCostDelay;\n>> ...\n>> -extern PGDLLIMPORT int VacuumCostLimit;\n>> -extern PGDLLIMPORT double VacuumCostDelay;\n>> \n>> Same with these, I don't think this is according to our default visibility.\n>> Moreover, I'm not sure it's a good idea to perform this rename. This will keep\n>> VacuumCostLimit and VacuumCostDelay exported, but change their meaning. 
Any\n>> external code referring to these thinking they are backing the GUCs will still\n>> compile, but may be broken in subtle ways. Is there a reason for not keeping\n>> the current GUC variables and instead add net new ones?\n> \n> When VacuumCostLimit was the same variable in the code and for the GUC\n> vacuum_cost_limit, everytime we reload the config file, VacuumCostLimit\n> is overwritten. Autovacuum workers have to overwrite this value with the\n> appropriate one for themselves given the balancing logic and the value\n> of autovacuum_vacuum_cost_limit. However, the problem is, because you\n> can specify -1 for autovacuum_vacuum_cost_limit to indicate it should\n> fall back to vacuum_cost_limit, we have to reference the value of\n> VacuumCostLimit when calculating the new autovacuum worker's cost limit\n> after a config reload.\n> \n> But, you have to be sure you *only* do this after a config reload when\n> the value of VacuumCostLimit is fresh and unmodified or you risk\n> dividing the value of VacuumCostLimit over and over. That means it is\n> unsafe to call functions updating the cost limit more than once.\n> \n> This orchestration wasn't as difficult when we only reloaded the config\n> file once every table. We were careful about it and also kept the\n> original \"base\" cost limit around from table_recheck_autovac(). However,\n> once we started reloading the config file more often, this no longer\n> works.\n> \n> By separating the variables modified when the gucs are set and the ones\n> used the code, we can make sure we always have the original value the\n> guc was set to in vacuum_cost_limit and autovacuum_vacuum_cost_limit,\n> whenever we need to reference it.\n> \n> That being said, perhaps we should document what extensions should do?\n> Do you think they will want to use the variables backing the gucs or to\n> be able to overwrite the variables being used in the code?\n\nI think I wasn't clear in my comment, sorry. 
I don't have a problem with\nintroducing a new variable to split the balanced value from the GUC value.\nWhat I don't think we should do is repurpose an exported symbol into doing a\nnew thing. In the case at hand I think VacuumCostLimit and VacuumCostDelay\nshould remain the backing variables for the GUCs, with vacuum_cost_limit and\nvacuum_cost_delay carrying the balanced values. So the inverse of what is in\nthe patch now.\n\nThe risk of these symbols being used in extensions might be very low but on\nprinciple it seems unwise to alter a symbol and risk subtle breakage.\n\n> Oh, also I've annotated these with PGDLLIMPORT too.\n> \n>> + * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n>> + * invalid values?\n>> + */\n>> +int VacuumCostLimit = 0;\n>> +double VacuumCostDelay = -1;\n>> \n>> I think the important part is to make sure they are never accessed without\n>> VacuumUpdateCosts having been called first. I think that's the case here, but\n>> it's not entirely clear. Do you see a codepath where that could happen? If\n>> they are initialized to a sentinel value we also need to check for that, so\n>> initializing to the defaults from the corresponding GUCs seems better.\n> \n> I don't see a case where autovacuum could access these without calling\n> VacuumUpdateCosts() first. I think the other callers of\n> vacuum_delay_point() are the issue (gist/gin/hash/etc).\n> \n> It might need a bit more thought.\n> \n> My concern was that these variables correspond to multiple GUCs each\n> depending on the backend type, and those backends have different\n> defaults (e.g. 
autovac workers default cost delay is different than\n> client backend doing vacuum cost delay).\n> \n> However, what I have done in this version is initialize them to the\n> defaults for a client backend executing VACUUM or ANALYZE, since I am\n> fairly confident that autovacuum will not use them without calling\n> VacuumUpdateCosts().\n\nAnother question along these lines, we only call AutoVacuumUpdateLimit() in\ncase there is a sleep in vacuum_delay_point():\n\n+ /*\n+ * Balance and update limit values for autovacuum workers. We must\n+ * always do this in case the autovacuum launcher or another\n+ * autovacuum worker has recalculated the number of workers across\n+ * which we must balance the limit. This is done by the launcher when\n+ * launching a new worker and by workers before vacuuming each table.\n+ */\n+ AutoVacuumUpdateLimit();\n\nShouldn't we always call that in case we had a config reload, or am I being\nthick?\n\n>> +static double av_relopt_cost_delay = -1;\n>> +static int av_relopt_cost_limit = 0;\n\nSorry, I didn't catch this earlier, shouldn't this be -1 to match the default\nvalue of autovacuum_vacuum_cost_limit?\n\n>> These need a comment IMO, ideally one that explain why they are initialized to\n>> those values.\n> \n> I've added a comment.\n\n+ * Variables to save the cost-related table options for the current relation\n\nThe \"table options\" nomenclature is right now only used for FDW foreign table\noptions, I think we should use \"storage parameters\" or \"relation options\" here.\n\n>> + /* There is at least 1 autovac worker (this worker). */\n>> + Assert(nworkers_for_balance > 0);\n>> \n>> Is there a scenario where this is expected to fail? If so I think this should\n>> be handled and not just an Assert.\n> \n> No, this isn't expected to happen because an autovacuum worker would\n> have called autovac_recalculate_workers_for_balance() before calling\n> VacuumUpdateCosts() (which calls AutoVacuumUpdateLimit()) in\n> do_autovacuum(). 
But, if someone were to move around or add a call to\n> VacuumUpdateCosts() there is a chance it could happen.\n\nThinking more on this I'm tempted to recommend that we promote this to an\nelog(), mainly due to the latter. An accidental call to VacuumUpdateCosts()\ndoesn't seem entirely unlikely to happen.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:10:55 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "Thanks all for the reviews.\n\nv16 attached. I put it together rather quickly, so there might be a few\nspurious whitespaces or similar. There is one rather annoying pgindent\noutlier that I have to figure out what to do about as well.\n\nThe remaining functional TODOs that I know of are:\n\n- Resolve what to do about names of GUC and vacuum variables for cost\n limit and cost delay (since it may affect extensions)\n\n- Figure out what to do about the logging message which accesses dboid\n and tableoid (lock/no lock, where to put it, etc)\n\n- I see several places in docs which reference the balancing algorithm\n for autovac workers. I did not read them in great detail, but we may\n want to review them to see if any require updates.\n\n- Consider whether or not the initial two commits should just be\n squashed with the third commit\n\n- Anything else reviewers are still unhappy with\n\n\nOn Wed, Apr 5, 2023 at 1:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 5, 2023 at 5:05 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > ---\n> > > - if (worker->wi_proc != NULL)\n> > > - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> > > db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> > > cost_delay=%g)\",\n> > > - worker->wi_proc->pid,\n> > > worker->wi_dboid, worker->wi_tableoid,\n> > > - worker->wi_dobalance ? \"yes\" : \"no\",\n> > > - worker->wi_cost_limit,\n> > > worker->wi_cost_limit_base,\n> > > - worker->wi_cost_delay);\n> > >\n> > > I think it's better to keep this kind of log in some form for\n> > > debugging. For example, we can show these values of autovacuum workers\n> > > in VacuumUpdateCosts().\n> >\n> > I added a message to do_autovacuum() after calling VacuumUpdateCosts()\n> > in the loop vacuuming each table. That means it will happen once per\n> > table. 
It's not ideal that I had to move the call to VacuumUpdateCosts()\n> > behind the shared lock in that loop so that we could access the pid and\n> > such in the logging message after updating the cost and delay, but it is\n> > probably okay. Though noone is going to be changing those at this\n> > point, it still seemed better to access them under the lock.\n> >\n> > This does mean we won't log anything when we do change the values of\n> > VacuumCostDelay and VacuumCostLimit while vacuuming a table. Is it worth\n> > adding some code to do that in VacuumUpdateCosts() (only when the value\n> > has changed not on every call to VacuumUpdateCosts())? Or perhaps we\n> > could add it in the config reload branch that is already in\n> > vacuum_delay_point()?\n>\n> Previously, we used to show the pid in the log since a worker/launcher\n> set other workers' delay costs. But now that the worker sets its delay\n> costs, we don't need to show the pid in the log. Also, I think it's\n> useful for debugging and investigating the system if we log it when\n> changing the values. The log I imagined to add was like:\n>\n> @@ -1801,6 +1801,13 @@ VacuumUpdateCosts(void)\n> VacuumCostDelay = vacuum_cost_delay;\n>\n> AutoVacuumUpdateLimit();\n> +\n> + elog(DEBUG2, \"autovacuum update costs (db=%u, rel=%u,\n> dobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n> + MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n> + pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance)\n> ? \"no\" : \"yes\",\n> + VacuumCostLimit, VacuumCostDelay,\n> + VacuumCostDelay > 0 ? \"yes\" : \"no\",\n> + VacuumFailsafeActive ? \"yes\" : \"no\");\n> }\n> else\n> {\n\nMakes sense. I've updated the log message to roughly what you suggested.\nI also realized I think it does make sense to call it in\nVacuumUpdateCosts() -- only for autovacuum workers of course. I've done\nthis. 
I haven't taken the lock though and can't decide if I must since\nthey access dboid and tableoid -- those are not going to change at this\npoint, but I still don't know if I can access them lock-free...\nPerhaps there is a way to condition it on the log level?\n\nIf I have to take a lock, then I don't know if we should put these in\nVacuumUpdateCosts()...\n\nOn Wed, Apr 5, 2023 at 3:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> About 0001:\n>\n> + * VacuumFailsafeActive is a defined as a global so that we can determine\n> + * whether or not to re-enable cost-based vacuum delay when vacuuming a table.\n> + * If failsafe mode has been engaged, we will not re-enable cost-based delay\n> + * for the table until after vacuuming has completed, regardless of other\n> + * settings. Only VACUUM code should inspect this variable and only table\n> + * access methods should set it. In Table AM-agnostic VACUUM code, this\n> + * variable controls whether or not to allow cost-based delays. Table AMs are\n> + * free to use it if they desire this behavior.\n> + */\n> +bool VacuumFailsafeActive = false;\n>\n> If I understand this correctly, there seems to be an issue. The\n> AM-agnostic VACUUM code is setting it and no table AMs actually do\n> that.\n\nNo, it is not set in table AM-agnostic VACUUM code. I meant it is\nused/read from/inspected in table AM-agnostic VACUUM code. Table AMs can\nset it if they want to avoid cost-based delays being re-enabled. It is\nonly set to true heap-specific code and is initialized to false and\nreset in table AM-agnostic code back to false in between each relation\nbeing vacuumed. I updated the comment to reflect this. 
Let me know if\nyou think it is clear.\n\n> 0003:\n> +\n> + /*\n> + * Ensure VacuumFailsafeActive has been reset before vacuuming the\n> + * next relation.\n> + */\n> + VacuumFailsafeActive = false;\n> }\n> }\n> PG_FINALLY();\n> {\n> in_vacuum = false;\n> VacuumCostActive = false;\n> + VacuumFailsafeActive = false;\n> + VacuumCostBalance = 0;\n>\n> There is no need to reset VacuumFailsafeActive in the PG_TRY() block.\n\nI think that is true -- since it is initialized to false and reset to\nfalse after vacuuming every relation. However, I am leaning toward\nkeeping it because I haven't thought through every codepath and\ndetermined if there is ever a way where it could be true here.\n\n> + /*\n> + * Reload the configuration file if requested. This allows changes to\n> + * autovacuum_vacuum_cost_limit and autovacuum_vacuum_cost_delay to take\n> + * effect while a table is being vacuumed or analyzed.\n> + */\n> + if (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n> + {\n> + ConfigReloadPending = false;\n> + ProcessConfigFile(PGC_SIGHUP);\n> + VacuumUpdateCosts();\n> + }\n>\n> I believe we should prevent unnecessary reloading when\n> VacuumFailsafeActive is true.\n\nThis is in conflict with two of the other reviewers feedback:\nSawada-san:\n\n> + * Reload the configuration file if requested. This allows changes to\n> + * [autovacuum_]vacuum_cost_limit and [autovacuum_]vacuum_cost_delay to\n> + * take effect while a table is being vacuumed or analyzed.\n> + */\n> + if (ConfigReloadPending && !analyze_in_outer_xact)\n> + {\n> + ConfigReloadPending = false;\n> + ProcessConfigFile(PGC_SIGHUP);\n> + AutoVacuumUpdateDelay();\n> + AutoVacuumUpdateLimit();\n> + }\n>\n> It makes sense to me that we need to reload the config file even when\n> vacuum-delay is disabled. But I think it's not convenient for users\n> that we don't reload the configuration file once the failsafe is\n> triggered. 
I think users might want to change some GUCs such as\n> log_autovacuum_min_duration.\n\nand Daniel in response to this:\n\n> > It makes sense to me that we need to reload the config file even when\n> > vacuum-delay is disabled. But I think it's not convenient for users\n> > that we don't reload the configuration file once the failsafe is\n> > triggered. I think users might want to change some GUCs such as\n> > log_autovacuum_min_duration.\n>\n> I agree with this.\n\n> + AutoVacuumUpdateLimit();\n>\n> I'm not entirely sure, but it might be better to name this\n> AutoVacuumUpdateCostLimit().\n\nI have made this change.\n\n> + pg_atomic_flag wi_dobalance;\n> ...\n> + /*\n> + * We only expect this worker to ever set the flag, so don't bother\n> + * checking the return value. We shouldn't have to retry.\n> + */\n> + if (tab->at_dobalance)\n> + pg_atomic_test_set_flag(&MyWorkerInfo->wi_dobalance);\n> + else\n> + pg_atomic_clear_flag(&MyWorkerInfo->wi_dobalance);\n>\n> LWLockAcquire(AutovacuumLock, LW_SHARED);\n>\n> autovac_recalculate_workers_for_balance();\n>\n> I don't see the need for using atomic here. The code is executed\n> infrequently and we already take a lock while counting do_balance\n> workers. So sticking with the old locking method (taking LW_EXCLUSIVE\n> then set wi_dobalance then do balance) should be fine.\n\nWe access wi_dobalance on every call to AutoVacuumUpdateLimit() which is\nexecuted in vacuum_delay_point(). I do not think we can justify take a\nshared lock in a function that is called so frequently.\n\n> +void\n> +AutoVacuumUpdateLimit(void)\n> ...\n> + if (av_relopt_cost_limit > 0)\n> + VacuumCostLimit = av_relopt_cost_limit;\n> + else\n>\n> I think we should use wi_dobalance to decide if we need to do balance\n> or not. We don't need to take a lock to do that since only the process\n> updates it.\n\nWe do do that below in the \"else\" before balancing. But we for sure\ndon't need to balance if relopt for cost limit is set. 
We can save an\naccess to an atomic variable this way. I think the atomic is a\nrelatively cheap way of avoiding this whole locking question.\n\n> /*\n> * Remove my info from shared memory. We could, but intentionally\n> - * don't, clear wi_cost_limit and friends --- this is on the\n> - * assumption that we probably have more to do with similar cost\n> - * settings, so we don't want to give up our share of I/O for a very\n> - * short interval and thereby thrash the global balance.\n> + * don't, unset wi_dobalance on the assumption that we are more likely\n> + * than not to vacuum a table with no table options next, so we don't\n> + * want to give up our share of I/O for a very short interval and\n> + * thereby thrash the global balance.\n> */\n> LWLockAcquire(AutovacuumScheduleLock, LW_EXCLUSIVE);\n> MyWorkerInfo->wi_tableoid = InvalidOid;\n>\n> The comment mentions wi_dobalance, but it doesn't appear here..\n\nThe point of the comment is that we don't do anything with wi_dobalance\nhere. It is explaining why it doesn't appear. The previous comment\nmentioned not doing anything with wi_cost_delay and wi_cost_limit which\nalso didn't appear here.\n\nOn Wed, Apr 5, 2023 at 9:10 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 4 Apr 2023, at 22:04, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >> +extern int VacuumCostLimit;\n> >> +extern double VacuumCostDelay;\n> >> ...\n> >> -extern PGDLLIMPORT int VacuumCostLimit;\n> >> -extern PGDLLIMPORT double VacuumCostDelay;\n> >>\n> >> Same with these, I don't think this is according to our default visibility.\n> >> Moreover, I'm not sure it's a good idea to perform this rename. This will keep\n> >> VacuumCostLimit and VacuumCostDelay exported, but change their meaning. Any\n> >> external code referring to these thinking they are backing the GUCs will still\n> >> compile, but may be broken in subtle ways. 
Is there a reason for not keeping\n> >> the current GUC variables and instead add net new ones?\n> >\n> > When VacuumCostLimit was the same variable in the code and for the GUC\n> > vacuum_cost_limit, everytime we reload the config file, VacuumCostLimit\n> > is overwritten. Autovacuum workers have to overwrite this value with the\n> > appropriate one for themselves given the balancing logic and the value\n> > of autovacuum_vacuum_cost_limit. However, the problem is, because you\n> > can specify -1 for autovacuum_vacuum_cost_limit to indicate it should\n> > fall back to vacuum_cost_limit, we have to reference the value of\n> > VacuumCostLimit when calculating the new autovacuum worker's cost limit\n> > after a config reload.\n> >\n> > But, you have to be sure you *only* do this after a config reload when\n> > the value of VacuumCostLimit is fresh and unmodified or you risk\n> > dividing the value of VacuumCostLimit over and over. That means it is\n> > unsafe to call functions updating the cost limit more than once.\n> >\n> > This orchestration wasn't as difficult when we only reloaded the config\n> > file once every table. We were careful about it and also kept the\n> > original \"base\" cost limit around from table_recheck_autovac(). However,\n> > once we started reloading the config file more often, this no longer\n> > works.\n> >\n> > By separating the variables modified when the gucs are set and the ones\n> > used the code, we can make sure we always have the original value the\n> > guc was set to in vacuum_cost_limit and autovacuum_vacuum_cost_limit,\n> > whenever we need to reference it.\n> >\n> > That being said, perhaps we should document what extensions should do?\n> > Do you think they will want to use the variables backing the gucs or to\n> > be able to overwrite the variables being used in the code?\n>\n> I think I wasn't clear in my comment, sorry. 
I don't have a problem with\n> introducing a new variable to split the balanced value from the GUC value.\n> What I don't think we should do is repurpose an exported symbol into doing a\n> new thing. In the case at hand I think VacuumCostLimit and VacuumCostDelay\n> should remain the backing variables for the GUCs, with vacuum_cost_limit and\n> vacuum_cost_delay carrying the balanced values. So the inverse of what is in\n> the patch now.\n>\n> The risk of these symbols being used in extensions might be very low but on\n> principle it seems unwise to alter a symbol and risk subtle breakage.\n\nI totally see what you are saying. The only complication is that all of\nthe other variables used in vacuum code are the camelcase and the gucs\nfollow the snake case -- as pointed out in a previous review comment by\nSawada-san:\n\n> @@ -83,6 +84,7 @@ int vacuum_cost_limit;\n> */\n> int VacuumCostLimit = 0;\n> double VacuumCostDelay = -1;\n> +static bool vacuum_can_reload_config = false;\n>\n> In vacuum.c, we use snake case for GUC parameters and camel case for\n> other global variables, so it seems better to rename it\n> VacuumCanReloadConfig. Sorry, that's my fault.\n\nThis is less of a compelling argument than subtle breakage for extension\ncode, though.\n\nI am, however, wondering if extensions expect to have access to the guc\nvariable or the global variable -- or both?\n\nLeft it as is in this version until we resolve the question.\n\n> > Oh, also I've annotated these with PGDLLIMPORT too.\n> >\n> >> + * TODO: should VacuumCostLimit and VacuumCostDelay be initialized to valid or\n> >> + * invalid values?\n> >> + */\n> >> +int VacuumCostLimit = 0;\n> >> +double VacuumCostDelay = -1;\n> >>\n> >> I think the important part is to make sure they are never accessed without\n> >> VacuumUpdateCosts having been called first. I think that's the case here, but\n> >> it's not entirely clear. Do you see a codepath where that could happen? 
If\n> >> they are initialized to a sentinel value we also need to check for that, so\n> >> initializing to the defaults from the corresponding GUCs seems better.\n> >\n> > I don't see a case where autovacuum could access these without calling\n> > VacuumUpdateCosts() first. I think the other callers of\n> > vacuum_delay_point() are the issue (gist/gin/hash/etc).\n> >\n> > It might need a bit more thought.\n> >\n> > My concern was that these variables correspond to multiple GUCs each\n> > depending on the backend type, and those backends have different\n> > defaults (e.g. autovac workers default cost delay is different than\n> > client backend doing vacuum cost delay).\n> >\n> > However, what I have done in this version is initialize them to the\n> > defaults for a client backend executing VACUUM or ANALYZE, since I am\n> > fairly confident that autovacuum will not use them without calling\n> > VacuumUpdateCosts().\n>\n> Another question along these lines, we only call AutoVacuumUpdateLimit() in\n> case there is a sleep in vacuum_delay_point():\n>\n> + /*\n> + * Balance and update limit values for autovacuum workers. We must\n> + * always do this in case the autovacuum launcher or another\n> + * autovacuum worker has recalculated the number of workers across\n> + * which we must balance the limit. This is done by the launcher when\n> + * launching a new worker and by workers before vacuuming each table.\n> + */\n> + AutoVacuumUpdateLimit();\n>\n> Shouldn't we always call that in case we had a config reload, or am I being\n> thick?\n\nWe actually also call it from inside VacuumUpdateCosts(), which is\nalways called in the case of a config reload.\n\n> >> +static double av_relopt_cost_delay = -1;\n> >> +static int av_relopt_cost_limit = 0;\n>\n> Sorry, I didn't catch this earlier, shouldn't this be -1 to match the default\n> value of autovacuum_vacuum_cost_limit?\n\nYea, this is a bit tricky. 
Initial values of -1 and 0 have the same\neffect when we are referencing av_relopt_vacuum_cost_limit in\nAutoVacuumUpdateCostLimit(). However, I was trying to initialize both\nav_relopt_vacuum_cost_limit and av_relopt_vacuum_cost_delay to \"invalid\"\nvalues which were not the default for the associated autovacuum gucs,\nsince initializing av_relopt_cost_delay to the default for\nautovacuum_vacuum_cost_delay (2 ms) would cause it to be used even if\nstorage params were not set for the relation.\n\nI have updated the initial value to -1, as you suggested -- but I don't\nknow if it is more or less confusing the explain what I just explained\nin the comment above it.\n\n> >> These need a comment IMO, ideally one that explain why they are initialized to\n> >> those values.\n> >\n> > I've added a comment.\n>\n> + * Variables to save the cost-related table options for the current relation\n>\n> The \"table options\" nomenclature is right now only used for FDW foreign table\n> options, I think we should use \"storage parameters\" or \"relation options\" here.\n\nI've updated these to \"storage parameters\" to match the docs. I poked\naround looking for other places I referred to them as table options and\ntried to fix those as well. I've also changed all relevant variable\nnames.\n\n> >> + /* There is at least 1 autovac worker (this worker). */\n> >> + Assert(nworkers_for_balance > 0);\n> >>\n> >> Is there a scenario where this is expected to fail? If so I think this should\n> >> be handled and not just an Assert.\n> >\n> > No, this isn't expected to happen because an autovacuum worker would\n> > have called autovac_recalculate_workers_for_balance() before calling\n> > VacuumUpdateCosts() (which calls AutoVacuumUpdateLimit()) in\n> > do_autovacuum(). 
But, if someone were to move around or add a call to\n> > VacuumUpdateCosts() there is a chance it could happen.\n>\n> Thinking more on this I'm tempted to recommend that we promote this to an\n> elog(), mainly due to the latter. An accidental call to VacuumUpdateCosts()\n> doesn't seem entirely unlikely to happen.\n\nMakes sense. I've added a trivial elog ERROR, but I didn't spend quite\nenough time thinking about what (if any) other context to include in it.\n\n- Melanie",
"msg_date": "Wed, 5 Apr 2023 11:29:30 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 5 Apr 2023, at 17:29, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>> \n>> I think I wasn't clear in my comment, sorry. I don't have a problem with\n>> introducing a new variable to split the balanced value from the GUC value.\n>> What I don't think we should do is repurpose an exported symbol into doing a\n>> new thing. In the case at hand I think VacuumCostLimit and VacuumCostDelay\n>> should remain the backing variables for the GUCs, with vacuum_cost_limit and\n>> vacuum_cost_delay carrying the balanced values. So the inverse of what is in\n>> the patch now.\n>> \n>> The risk of these symbols being used in extensions might be very low but on\n>> principle it seems unwise to alter a symbol and risk subtle breakage.\n> \n> I totally see what you are saying. The only complication is that all of\n> the other variables used in vacuum code are the camelcase and the gucs\n> follow the snake case -- as pointed out in a previous review comment by\n> Sawada-san:\n\nFair point.\n\n>> @@ -83,6 +84,7 @@ int vacuum_cost_limit;\n>> */\n>> int VacuumCostLimit = 0;\n>> double VacuumCostDelay = -1;\n>> +static bool vacuum_can_reload_config = false;\n>> \n>> In vacuum.c, we use snake case for GUC parameters and camel case for\n>> other global variables, so it seems better to rename it\n>> VacuumCanReloadConfig. Sorry, that's my fault.\n> \n> This is less of a compelling argument than subtle breakage for extension\n> code, though.\n\nHow about if we rename the variable into something which also acts at bit as\nself documenting why there are two in the first place? 
Perhaps\nBalancedVacuumCostLimit or something similar (I'm terrible with names)?\n\n> I am, however, wondering if extensions expect to have access to the guc\n> variable or the global variable -- or both?\n\nExtensions have access to all exported symbols, and I think it's not uncommon\nfor extension authors to expect to have access to at least read GUC variables.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 17:54:46 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 11:29 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Thanks all for the reviews.\n>\n> v16 attached. I put it together rather quickly, so there might be a few\n> spurious whitespaces or similar. There is one rather annoying pgindent\n> outlier that I have to figure out what to do about as well.\n>\n> The remaining functional TODOs that I know of are:\n>\n> - Resolve what to do about names of GUC and vacuum variables for cost\n> limit and cost delay (since it may affect extensions)\n>\n> - Figure out what to do about the logging message which accesses dboid\n> and tableoid (lock/no lock, where to put it, etc)\n>\n> - I see several places in docs which reference the balancing algorithm\n> for autovac workers. I did not read them in great detail, but we may\n> want to review them to see if any require updates.\n>\n> - Consider whether or not the initial two commits should just be\n> squashed with the third commit\n>\n> - Anything else reviewers are still unhappy with\n\nI really like having the first couple of patches split out -- it makes\nthem super-easy to understand. A committer can always choose to squash\nat commit time if they want. I kind of wish the patch set were split\nup more, for even easier understanding. I don't think that's a thing\nto get hung up on, but it's an opinion that I have.\n\nI strongly agree with the goals of the patch set, as I understand\nthem. Being able to change the config file and SIGHUP the server and\nhave the new values affect running autovacuum workers seems pretty\nhuge. It would make it possible to solve problems that currently can\nonly be solved by using gdb on a production instance, which is not a\nfun thing to be doing.\n\n+ /*\n+ * Balance and update limit values for autovacuum workers. We must\n+ * always do this in case the autovacuum launcher or another\n+ * autovacuum worker has recalculated the number of workers across\n+ * which we must balance the limit. 
This is done by the launcher when\n+ * launching a new worker and by workers before vacuuming each table.\n+ */\n\nI don't quite understand what's going on here. A big reason that I'm\nworried about this whole issue in the first place is that sometimes\nthere's a vacuum going on a giant table and you can't get it to go\nfast. You want it to absorb new settings, and to do so quickly. I\nrealize that this is about the number of workers, not the actual cost\nlimit, so that makes what I'm about to say less important. But ... is\nthis often enough? Like, the time before we move onto the next table\ncould be super long. The time before a new worker is launched should\nbe ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\nsettings, so that's not horrible, but I'm kind of struggling to\nunderstand the rationale for this particular choice. Maybe it's fine.\n\nTo be honest, I think that the whole system where we divide the cost\nlimit across the workers is the wrong idea. Does anyone actually like\nthat behavior? This patch probably shouldn't touch that, just in the\ninterest of getting something done that is an improvement over where\nwe are now, but I think this behavior is really counterintuitive.\nPeople expect that they can increase autovacuum_max_workers to get\nmore vacuuming done, and actually in most cases that does not work.\nAnd if that behavior didn't exist, this patch would also be a whole\nlot simpler. 
Again, I don't think this is something we should try to\naddress right now under time pressure, but in the future, I think we\nshould consider ripping this behavior out.\n\n+ if (autovacuum_vac_cost_limit > 0)\n+ VacuumCostLimit = autovacuum_vac_cost_limit;\n+ else\n+ VacuumCostLimit = vacuum_cost_limit;\n+\n+ /* Only balance limit if no cost-related storage\nparameters specified */\n+ if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n+ return;\n+ Assert(VacuumCostLimit > 0);\n+\n+ nworkers_for_balance = pg_atomic_read_u32(\n+\n&AutoVacuumShmem->av_nworkersForBalance);\n+\n+ /* There is at least 1 autovac worker (this worker). */\n+ if (nworkers_for_balance <= 0)\n+ elog(ERROR, \"nworkers_for_balance must be > 0\");\n+\n+ VacuumCostLimit = Max(VacuumCostLimit /\nnworkers_for_balance, 1);\n\nI think it would be better stylistically to use a temporary variable\nhere and only assign the final value to VacuumCostLimit.\n\nDaniel: Are you intending to commit this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 14:55:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 5 Apr 2023, at 20:55, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Again, I don't think this is something we should try to\n> address right now under time pressure, but in the future, I think we\n> should consider ripping this behavior out.\n\nI would not be opposed to that, but I wholeheartedly agree that it's not the\njob of this patch (or any patch at this point in the cycle).\n\n> + if (autovacuum_vac_cost_limit > 0)\n> + VacuumCostLimit = autovacuum_vac_cost_limit;\n> + else\n> + VacuumCostLimit = vacuum_cost_limit;\n> +\n> + /* Only balance limit if no cost-related storage\n> parameters specified */\n> + if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n> + return;\n> + Assert(VacuumCostLimit > 0);\n> +\n> + nworkers_for_balance = pg_atomic_read_u32(\n> +\n> &AutoVacuumShmem->av_nworkersForBalance);\n> +\n> + /* There is at least 1 autovac worker (this worker). */\n> + if (nworkers_for_balance <= 0)\n> + elog(ERROR, \"nworkers_for_balance must be > 0\");\n> +\n> + VacuumCostLimit = Max(VacuumCostLimit /\n> nworkers_for_balance, 1);\n> \n> I think it would be better stylistically to use a temporary variable\n> here and only assign the final value to VacuumCostLimit.\n\nI can agree with that. Another supertiny nitpick on the above is to not end a\nsingle-line comment with a period.\n\n> Daniel: Are you intending to commit this?\n\nYes, my plan is to get it in before feature freeze. I notice now that I had\nmissed setting myself as committer in the CF to signal this intent, sorry about\nthat.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 21:03:53 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 3:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > Daniel: Are you intending to commit this?\n>\n> Yes, my plan is to get it in before feature freeze.\n\nAll right, let's make it happen! I think this is pretty close to ready\nto ship, and it would solve a problem that is real, annoying, and\nserious.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:42:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> + /*\n> + * Balance and update limit values for autovacuum workers. We must\n> + * always do this in case the autovacuum launcher or another\n> + * autovacuum worker has recalculated the number of workers across\n> + * which we must balance the limit. This is done by the launcher when\n> + * launching a new worker and by workers before vacuuming each table.\n> + */\n>\n> I don't quite understand what's going on here. A big reason that I'm\n> worried about this whole issue in the first place is that sometimes\n> there's a vacuum going on a giant table and you can't get it to go\n> fast. You want it to absorb new settings, and to do so quickly. I\n> realize that this is about the number of workers, not the actual cost\n> limit, so that makes what I'm about to say less important. But ... is\n> this often enough? Like, the time before we move onto the next table\n> could be super long. The time before a new worker is launched should\n> be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\n> settings, so that's not horrible, but I'm kind of struggling to\n> understand the rationale for this particular choice. Maybe it's fine.\n\nVacuumUpdateCosts() also calls AutoVacuumUpdateCostLimit(), so this will\nhappen if a config reload is pending the next time vacuum_delay_point()\nis called (which is pretty often -- roughly once per block vacuumed but\ndefinitely more than once per table).\n\nRelevant code is at the top of vacuum_delay_point():\n\n if (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n {\n ConfigReloadPending = false;\n ProcessConfigFile(PGC_SIGHUP);\n VacuumUpdateCosts();\n }\n\n- Melanie\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:43:59 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 3:44 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> VacuumUpdateCosts() also calls AutoVacuumUpdateCostLimit(), so this will\n> happen if a config reload is pending the next time vacuum_delay_point()\n> is called (which is pretty often -- roughly once per block vacuumed but\n> definitely more than once per table).\n>\n> Relevant code is at the top of vacuum_delay_point():\n>\n> if (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n> {\n> ConfigReloadPending = false;\n> ProcessConfigFile(PGC_SIGHUP);\n> VacuumUpdateCosts();\n> }\n\nYeah, that all makes sense, and I did see that logic, but I'm\nstruggling to reconcile it with what that comment says.\n\nMaybe I'm just confused about what that comment is trying to explain.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Apr 2023 15:58:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 11:56 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> To be honest, I think that the whole system where we divide the cost\n> limit across the workers is the wrong idea. Does anyone actually like\n> that behavior? This patch probably shouldn't touch that, just in the\n> interest of getting something done that is an improvement over where\n> we are now, but I think this behavior is really counterintuitive.\n> People expect that they can increase autovacuum_max_workers to get\n> more vacuuming done, and actually in most cases that does not work.\n\nI disagree. Increasing autovacuum_max_workers as a method of\nincreasing the overall aggressiveness of autovacuum seems like the\nwrong idea. I'm sure that users do that at times, but they really\nought to have a better way of getting the same result.\n\nISTM that autovacuum_max_workers confuses the question of what the\nmaximum possible number of workers should ever be (in extreme cases)\nwith the question of how many workers might be a good idea given\npresent conditions.\n\n> And if that behavior didn't exist, this patch would also be a whole\n> lot simpler.\n\nProbably, but the fact remains that the system level view of things is\nmostly what matters. The competition between the amount of vacuuming\nthat we can afford to do right now and the amount of vacuuming that\nwe'd ideally be able to do really matters. In fact, I'd argue that the\namount of vacuuming that we'd ideally be able to do isn't a\nparticularly meaningful concept on its own. It's just too hard to\nmodel what we need to do accurately -- emphasizing what we can afford\nto do seems much more promising.\n\n> Again, I don't think this is something we should try to\n> address right now under time pressure, but in the future, I think we\n> should consider ripping this behavior out.\n\n-1. The delay stuff might not work as well as it should, but it at\nleast seems like roughly the right idea. 
The bigger problem seems to\nbe everything else -- the way that tuning autovacuum_max_workers kinda\nmakes sense (it shouldn't be an interesting tunable), and the problems\nwith the autovacuum.c scheduling being so primitive.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 5 Apr 2023 13:19:39 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 5 Apr 2023, at 22:19, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> The bigger problem seems to\n> be everything else -- the way that tuning autovacuum_max_workers kinda\n> makes sense (it shouldn't be an interesting tunable)\n\nNot to derail this thread, and pre-empt a thread where this can be discussed in\nits own context, but isn't that kind of the main problem? Tuning autovacuum is\nreally complicated and one of the parameters that I think universally seem to\nmake sense to users is just autovacuum_max_workers. I agree that it doesn't do\nwhat most think it should, but a quick skim of the name and docs can probably\nlead to a lot of folks trying to use it as hammer.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 22:38:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 1:38 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Not to derail this thread, and pre-empt a thread where this can be discussed in\n> its own context, but isn't that kind of the main problem? Tuning autovacuum is\n> really complicated and one of the parameters that I think universally seem to\n> make sense to users is just autovacuum_max_workers. I agree that it doesn't do\n> what most think it should, but a quick skim of the name and docs can probably\n> lead to a lot of folks trying to use it as hammer.\n\nI think that I agree. I think that the difficulty of tuning autovacuum\nis the actual real problem. (Or maybe it's just very closely related\nto the real problem -- the precise definition doesn't seem important.)\n\nThere seems to be a kind of physics envy to some of these things.\nFalse precision. The way that the mechanisms actually work (the\nautovacuum scheduling, freeze_min_age, and quite a few other things)\n*are* simple. But so are the rules of Conway's game of life, yet\npeople seem to have a great deal of difficulty predicting how it will\nbehave in any given situation. Any design that focuses on the\nimmediate consequences of any particular policy while ignoring second\norder effects isn't going to work particularly well. Users ought to be\nable to constrain the behavior of autovacuum using settings that\nexpress what they want in high level terms. And VACUUM ought to have\nmuch more freedom around finding the best way to meet those high level\ngoals over time (e.g., very loose rules about how much we need to\nadvance relfrozenxid by during any individual VACUUM).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 5 Apr 2023 13:59:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 3:43 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > + /*\n> > + * Balance and update limit values for autovacuum workers. We must\n> > + * always do this in case the autovacuum launcher or another\n> > + * autovacuum worker has recalculated the number of workers across\n> > + * which we must balance the limit. This is done by the launcher when\n> > + * launching a new worker and by workers before vacuuming each table.\n> > + */\n> >\n> > I don't quite understand what's going on here. A big reason that I'm\n> > worried about this whole issue in the first place is that sometimes\n> > there's a vacuum going on a giant table and you can't get it to go\n> > fast. You want it to absorb new settings, and to do so quickly. I\n> > realize that this is about the number of workers, not the actual cost\n> > limit, so that makes what I'm about to say less important. But ... is\n> > this often enough? Like, the time before we move onto the next table\n> > could be super long. The time before a new worker is launched should\n> > be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\n> > settings, so that's not horrible, but I'm kind of struggling to\n> > understand the rationale for this particular choice. Maybe it's fine.\n>\n> VacuumUpdateCosts() also calls AutoVacuumUpdateCostLimit(), so this will\n> happen if a config reload is pending the next time vacuum_delay_point()\n> is called (which is pretty often -- roughly once per block vacuumed but\n> definitely more than once per table).\n>\n> Relevant code is at the top of vacuum_delay_point():\n>\n> if (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n> {\n> ConfigReloadPending = false;\n> ProcessConfigFile(PGC_SIGHUP);\n> VacuumUpdateCosts();\n> }\n>\n\nGah, I think I misunderstood you. 
You are saying that only calling\nAutoVacuumUpdateCostLimit() after napping while vacuuming a table may\nnot be enough. The frequency at which the number of workers changes will\nlikely be different. This is a good point.\nIt's kind of weird to call AutoVacuumUpdateCostLimit() only after napping...\n\nHmm. Well, I don't think we want to call AutoVacuumUpdateCostLimit() on\nevery call to vacuum_delay_point(), though, do we? It includes two\natomic operations. Maybe that pales in comparison to what we are doing\non each page we are vacuuming. I haven't properly thought about it.\n\nIs there some other relevant condition we can use to determine whether\nor not to call AutoVacuumUpdateCostLimit() on a given invocation of\nvacuum_delay_point()? Maybe something with naptime/max workers?\n\nI'm not sure if there is a more reliable place than vacuum_delay_point()\nfor us to do this. I poked around heap_vacuum_rel(), but I think we\nwould want this cost limit update to happen table AM-agnostically.\n\nThank you for bringing this up!\n\n- Melanie\n\n\n",
"msg_date": "Wed, 5 Apr 2023 23:10:00 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 12:29 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Thanks all for the reviews.\n>\n> v16 attached. I put it together rather quickly, so there might be a few\n> spurious whitespaces or similar. There is one rather annoying pgindent\n> outlier that I have to figure out what to do about as well.\n>\n> The remaining functional TODOs that I know of are:\n>\n> - Resolve what to do about names of GUC and vacuum variables for cost\n> limit and cost delay (since it may affect extensions)\n>\n> - Figure out what to do about the logging message which accesses dboid\n> and tableoid (lock/no lock, where to put it, etc)\n>\n> - I see several places in docs which reference the balancing algorithm\n> for autovac workers. I did not read them in great detail, but we may\n> want to review them to see if any require updates.\n>\n> - Consider whether or not the initial two commits should just be\n> squashed with the third commit\n>\n> - Anything else reviewers are still unhappy with\n>\n>\n> On Wed, Apr 5, 2023 at 1:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 5, 2023 at 5:05 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > ---\n> > > > - if (worker->wi_proc != NULL)\n> > > > - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> > > > db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> > > > cost_delay=%g)\",\n> > > > - worker->wi_proc->pid,\n> > > > worker->wi_dboid, worker->wi_tableoid,\n> > > > - worker->wi_dobalance ? \"yes\" : \"no\",\n> > > > - worker->wi_cost_limit,\n> > > > worker->wi_cost_limit_base,\n> > > > - worker->wi_cost_delay);\n> > > >\n> > > > I think it's better to keep this kind of log in some form for\n> > > > debugging. 
For example, we can show these values of autovacuum workers\n> > > > in VacuumUpdateCosts().\n> > >\n> > > I added a message to do_autovacuum() after calling VacuumUpdateCosts()\n> > > in the loop vacuuming each table. That means it will happen once per\n> > > table. It's not ideal that I had to move the call to VacuumUpdateCosts()\n> > > behind the shared lock in that loop so that we could access the pid and\n> > > such in the logging message after updating the cost and delay, but it is\n> > > probably okay. Though noone is going to be changing those at this\n> > > point, it still seemed better to access them under the lock.\n> > >\n> > > This does mean we won't log anything when we do change the values of\n> > > VacuumCostDelay and VacuumCostLimit while vacuuming a table. Is it worth\n> > > adding some code to do that in VacuumUpdateCosts() (only when the value\n> > > has changed not on every call to VacuumUpdateCosts())? Or perhaps we\n> > > could add it in the config reload branch that is already in\n> > > vacuum_delay_point()?\n> >\n> > Previously, we used to show the pid in the log since a worker/launcher\n> > set other workers' delay costs. But now that the worker sets its delay\n> > costs, we don't need to show the pid in the log. Also, I think it's\n> > useful for debugging and investigating the system if we log it when\n> > changing the values. The log I imagined to add was like:\n> >\n> > @@ -1801,6 +1801,13 @@ VacuumUpdateCosts(void)\n> > VacuumCostDelay = vacuum_cost_delay;\n> >\n> > AutoVacuumUpdateLimit();\n> > +\n> > + elog(DEBUG2, \"autovacuum update costs (db=%u, rel=%u,\n> > dobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n> > + MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n> > + pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance)\n> > ? \"no\" : \"yes\",\n> > + VacuumCostLimit, VacuumCostDelay,\n> > + VacuumCostDelay > 0 ? \"yes\" : \"no\",\n> > + VacuumFailsafeActive ? 
\"yes\" : \"no\");\n> > }\n> > else\n> > {\n>\n> Makes sense. I've updated the log message to roughly what you suggested.\n> I also realized I think it does make sense to call it in\n> VacuumUpdateCosts() -- only for autovacuum workers of course. I've done\n> this. I haven't taken the lock though and can't decide if I must since\n> they access dboid and tableoid -- those are not going to change at this\n> point, but I still don't know if I can access them lock-free...\n> Perhaps there is a way to condition it on the log level?\n>\n> If I have to take a lock, then I don't know if we should put these in\n> VacuumUpdateCosts()...\n\nI think we don't need to acquire a lock there as both values are\nupdated only by workers reporting this message. Also I agree with\nwhere to put the log but I think the log message should start with\nlower cases:\n\n+ elog(DEBUG2,\n+ \"Autovacuum VacuumUpdateCosts(db=%u, rel=%u,\ndobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n+ MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n+\npg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance) ? \"no\" :\n\"yes\",\n+ VacuumCostLimit, VacuumCostDelay,\n+ VacuumCostDelay > 0 ? \"yes\" : \"no\",\n+ VacuumFailsafeActive ? 
\"yes\" : \"no\");\n\nSome minor comments on 0003:\n\n+/*\n+ * autovac_recalculate_workers_for_balance\n+ * Recalculate the number of workers to consider, given\ncost-related\n+ * storage parameters and the current number of active workers.\n+ *\n+ * Caller must hold the AutovacuumLock in at least shared mode to access\n+ * worker->wi_proc.\n+ */\n\nDoes it make sense to add Assert(LWLockHeldByMe(AutovacuumLock)) at\nthe beginning of this function?\n\n---\n /* rebalance in case the default cost parameters changed */\n- LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);\n- autovac_balance_cost();\n+ LWLockAcquire(AutovacuumLock, LW_SHARED);\n+ autovac_recalculate_workers_for_balance();\n LWLockRelease(AutovacuumLock);\n\nDo we really need to have the autovacuum launcher recalculate\nav_nworkersForBalance after reloading the config file? Since the cost\nparameters are not used inautovac_recalculate_workers_for_balance()\nthe comment also needs to be updated.\n\n---\n+ /*\n+ * Balance and update limit values for autovacuum\nworkers. We must\n+ * always do this in case the autovacuum launcher or another\n+ * autovacuum worker has recalculated the number of\nworkers across\n+ * which we must balance the limit. This is done by\nthe launcher when\n+ * launching a new worker and by workers before\nvacuuming each table.\n+ */\n+ AutoVacuumUpdateCostLimit();\n\nI think the last sentence is not correct. IIUC recalculation of\nav_nworkersForBalance is done by the launcher after a worker finished\nand by workers before vacuuming each table.\n\n---\nIt's not a problem of this patch, but IIUC since we don't reset\nwi_dobalance after vacuuming each table we use the last value of\nwi_dobalance for performing autovacuum items. At end of the loop for\ntables in do_autovacuum() we have the following code that explains why\nwe don't reset wi_dobalance:\n\n /*\n * Remove my info from shared memory. 
We could, but intentionally\n * don't, unset wi_dobalance on the assumption that we are more likely\n * than not to vacuum a table with no cost-related storage parameters\n * next, so we don't want to give up our share of I/O for a very short\n * interval and thereby thrash the global balance.\n */\n LWLockAcquire(AutovacuumScheduleLock, LW_EXCLUSIVE);\n MyWorkerInfo->wi_tableoid = InvalidOid;\n MyWorkerInfo->wi_sharedrel = false;\n LWLockRelease(AutovacuumScheduleLock);\n\nAssuming we agree with that, probably we need to reset it to true\nafter vacuuming all tables?\n\n0001 and 0002 patches look good to me except for the renaming GUCs\nstuff as the discussion is ongoing.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 6 Apr 2023 15:39:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 6 Apr 2023, at 08:39, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> Also I agree with\n> where to put the log but I think the log message should start with\n> lower cases:\n> \n> + elog(DEBUG2,\n> + \"Autovacuum VacuumUpdateCosts(db=%u, rel=%u,\n\nIn principle I agree, but in this case Autovacuum is a name and should IMO in\nuser-facing messages start with capital A.\n\n> +/*\n> + * autovac_recalculate_workers_for_balance\n> + * Recalculate the number of workers to consider, given\n> cost-related\n> + * storage parameters and the current number of active workers.\n> + *\n> + * Caller must hold the AutovacuumLock in at least shared mode to access\n> + * worker->wi_proc.\n> + */\n> \n> Does it make sense to add Assert(LWLockHeldByMe(AutovacuumLock)) at\n> the beginning of this function?\n\nThat's probably not a bad idea.\n\n> ---\n> /* rebalance in case the default cost parameters changed */\n> - LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);\n> - autovac_balance_cost();\n> + LWLockAcquire(AutovacuumLock, LW_SHARED);\n> + autovac_recalculate_workers_for_balance();\n> LWLockRelease(AutovacuumLock);\n> \n> Do we really need to have the autovacuum launcher recalculate\n> av_nworkersForBalance after reloading the config file? Since the cost\n> parameters are not used in autovac_recalculate_workers_for_balance()\n> the comment also needs to be updated.\n\nIf I understand this comment right; there was a discussion upthread that simply\ndoing it in both launcher and worker simplifies the code with little overhead.\nA comment can reflect that choice though.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 14:29:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 11:10 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Apr 5, 2023 at 3:43 PM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > On Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > + /*\n> > > + * Balance and update limit values for autovacuum workers. We must\n> > > + * always do this in case the autovacuum launcher or another\n> > > + * autovacuum worker has recalculated the number of workers across\n> > > + * which we must balance the limit. This is done by the launcher when\n> > > + * launching a new worker and by workers before vacuuming each table.\n> > > + */\n> > >\n> > > I don't quite understand what's going on here. A big reason that I'm\n> > > worried about this whole issue in the first place is that sometimes\n> > > there's a vacuum going on a giant table and you can't get it to go\n> > > fast. You want it to absorb new settings, and to do so quickly. I\n> > > realize that this is about the number of workers, not the actual cost\n> > > limit, so that makes what I'm about to say less important. But ... is\n> > > this often enough? Like, the time before we move onto the next table\n> > > could be super long. The time before a new worker is launched should\n> > > be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\n> > > settings, so that's not horrible, but I'm kind of struggling to\n> > > understand the rationale for this particular choice. 
Maybe it's fine.\n> >\n> > VacuumUpdateCosts() also calls AutoVacuumUpdateCostLimit(), so this will\n> > happen if a config reload is pending the next time vacuum_delay_point()\n> > is called (which is pretty often -- roughly once per block vacuumed but\n> > definitely more than once per table).\n> >\n> > Relevant code is at the top of vacuum_delay_point():\n> >\n> > if (ConfigReloadPending && IsAutoVacuumWorkerProcess())\n> > {\n> > ConfigReloadPending = false;\n> > ProcessConfigFile(PGC_SIGHUP);\n> > VacuumUpdateCosts();\n> > }\n> >\n>\n> Gah, I think I misunderstood you. You are saying that only calling\n> AutoVacuumUpdateCostLimit() after napping while vacuuming a table may\n> not be enough. The frequency at which the number of workers changes will\n> likely be different. This is a good point.\n> It's kind of weird to call AutoVacuumUpdateCostLimit() only after napping...\n\nA not fully baked idea for a solution:\n\nWhy not keep the balanced limit in the atomic instead of the number of\nworkers for balance. If we expect all of the workers to have the same\nvalue for cost limit, then why would we just count the workers and not\nalso do the division and store that in the atomic variable. We are\nworried about the division not being done often enough, not the number\nof workers being out of date. This solves that, right?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 6 Apr 2023 11:52:25 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 11:52 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > Gah, I think I misunderstood you. You are saying that only calling\n> > AutoVacuumUpdateCostLimit() after napping while vacuuming a table may\n> > not be enough. The frequency at which the number of workers changes will\n> > likely be different. This is a good point.\n> > It's kind of weird to call AutoVacuumUpdateCostLimit() only after napping...\n>\n> A not fully baked idea for a solution:\n>\n> Why not keep the balanced limit in the atomic instead of the number of\n> workers for balance. If we expect all of the workers to have the same\n> value for cost limit, then why would we just count the workers and not\n> also do the division and store that in the atomic variable. We are\n> worried about the division not being done often enough, not the number\n> of workers being out of date. This solves that, right?\n\nA bird in the hand is worth two in the bush, though. We don't really\nhave time to redesign the patch before feature freeze, and I can't\nconvince myself that there's a big enough problem with what you\nalready did that it would be worth putting off fixing this for another\nyear. Reading your newer emails, I think that the answer to my\noriginal question is \"we don't want to do it at every\nvacuum_delay_point because it might be too costly,\" which is\nreasonable.\n\nI don't particularly like this new idea, either, I think. While it may\nbe true that we expect all the workers to come up with the same\nanswer, they need not, because rereading the configuration file isn't\nsynchronized. It would be pretty lame if a worker that had reread an\nupdated value from the configuration file recomputed the value, and\nthen another worker that still had an older value recalculated it\nagain just afterward. 
Keeping only the number of workers in memory\navoids the possibility of thrashing around in situations like that.\n\nI do kind of wonder if it would be possible to rejigger things so that\nwe didn't have to keep recalculating av_nworkersForBalance, though.\nPerhaps now is not the time due to the impending freeze, but maybe we\nshould explore maintaining that value in such a way that it is correct\nat every instant, instead of recalculating it at intervals.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Apr 2023 13:18:34 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 5, 2023 at 4:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that I agree. I think that the difficulty of tuning autovacuum\n> is the actual real problem. (Or maybe it's just very closely related\n> to the real problem -- the precise definition doesn't seem important.)\n\nI agree, and I think that bad choices around what the parameters do\nare a big part of the problem. autovacuum_max_workers is one example\nof that, but there are a bunch of others. It's not at all intuitive\nthat if your database gets really big you either need to raise\nautovacuum_vacuum_cost_limit or lower autovacuum_vacuum_cost_delay.\nAnd, it's not intuitive either that raising autovacuum_max_workers\ndoesn't increase the amount of vacuuming that gets done. In my\nexperience, it's very common for people to observe that autovacuum is\nrunning constantly, and to figure out that the number of running\nworkers is equal to autovacuum_max_workers at all times, and to then\nconclude that they need more workers. So they raise\nautovacuum_max_workers and nothing gets any better. In fact, things\nmight get *worse*, because the time required to complete vacuuming of\na large table can increase if the available bandwidth is potentially\nspread across more workers, and it's very often the time to vacuum the\nlargest tables that determines whether things hold together adequately\nor not.\n\nThis kind of stuff drives me absolutely batty. It's impossible to make\nevery database behavior completely intuitive, but here we have a\nparameter that seems like it is exactly the right thing to solve the\nproblem that the user knows they have, and it actually does nothing on\na good day and causes a regression on a bad one. That's incredibly\npoor design.\n\nThe way it works at the implementation level is pretty kooky, too. 
The\navailable resources are split between the workers, but if any of the\nrelevant vacuum parameters are set for the table currently being\nvacuumed, then that worker gets the full resources configured for that\ntable, and everyone else divides up the amount that's configured\nglobally. So if you went and set the cost delay and cost limit for all\nof your tables to exactly the same values that are configured\nglobally, you'd vacuum 3 times faster than if you relied on the\nidentical global defaults (or N times faster, where N is the value\nyou've picked for autovacuum_max_workers). If you have one really big\ntable that requires continuous vacuuming, you could slow down\nvacuuming on that table through manual configuration settings and\nstill end up speeding up vacuuming overall, because the remaining\nworkers would be dividing the budget implied by the default settings\namong N-1 workers instead of N workers. As far as I can see, none of\nthis is documented, which is perhaps for the best, because IMV it\nmakes no sense.\n\nI think we need to move more toward a model where VACUUM just keeps\nup. Emergency mode is a step in that direction, because the definition\nof an emergency is that we're definitely not keeping up, but I think\nwe need something less Boolean. If the database gets bigger or smaller\nor more or less active, autovacuum should somehow just adjust to that,\nwithout so much manual fiddling. I think it's good to have the\npossibility of some manual fiddling to handle problematic situations,\nbut you shouldn't have to do it just because you made a table bigger.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 6 Apr 2023 13:55:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 6 Apr 2023, at 19:18, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Thu, Apr 6, 2023 at 11:52 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>>> Gah, I think I misunderstood you. You are saying that only calling\n>>> AutoVacuumUpdateCostLimit() after napping while vacuuming a table may\n>>> not be enough. The frequency at which the number of workers changes will\n>>> likely be different. This is a good point.\n>>> It's kind of weird to call AutoVacuumUpdateCostLimit() only after napping...\n>> \n>> A not fully baked idea for a solution:\n>> \n>> Why not keep the balanced limit in the atomic instead of the number of\n>> workers for balance. If we expect all of the workers to have the same\n>> value for cost limit, then why would we just count the workers and not\n>> also do the division and store that in the atomic variable. We are\n>> worried about the division not being done often enough, not the number\n>> of workers being out of date. This solves that, right?\n> \n> A bird in the hand is worth two in the bush, though. We don't really\n> have time to redesign the patch before feature freeze, and I can't\n> convince myself that there's a big enough problem with what you\n> already did that it would be worth putting off fixing this for another\n> year.\n\n+1, I'd rather see we did a conservative version of the feature first and\nexpand upon it in the 17 cycle.\n\n> Reading your newer emails, I think that the answer to my\n> original question is \"we don't want to do it at every\n> vacuum_delay_point because it might be too costly,\" which is\n> reasonable.\n\nI think we kind of need to get to that granularity eventually, but it's not a\nshowstopper for this feature, and can probably benefit from being done in the\ncontext of a larger av-worker re-think (the importance of which discussed\ndownthread).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 20:55:02 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "v17 attached does not yet fix the logging problem or variable naming\nproblem.\n\nI have not changed where AutoVacuumUpdateCostLimit() is called either.\n\nThis is effectively just a round of cleanup. I hope I have managed to\naddress all other code review feedback so far, though some may have\nslipped through the cracks.\n\nOn Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Apr 5, 2023 at 11:29 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> + /*\n> + * Balance and update limit values for autovacuum workers. We must\n> + * always do this in case the autovacuum launcher or another\n> + * autovacuum worker has recalculated the number of workers across\n> + * which we must balance the limit. This is done by the launcher when\n> + * launching a new worker and by workers before vacuuming each table.\n> + */\n>\n> I don't quite understand what's going on here. A big reason that I'm\n> worried about this whole issue in the first place is that sometimes\n> there's a vacuum going on a giant table and you can't get it to go\n> fast. You want it to absorb new settings, and to do so quickly. I\n> realize that this is about the number of workers, not the actual cost\n> limit, so that makes what I'm about to say less important. But ... is\n> this often enough? Like, the time before we move onto the next table\n> could be super long. The time before a new worker is launched should\n> be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\n> settings, so that's not horrible, but I'm kind of struggling to\n> understand the rationale for this particular choice. 
Maybe it's fine.\n\nI've at least updated this comment to be more correct/less misleading.\n\n>\n> + if (autovacuum_vac_cost_limit > 0)\n> + VacuumCostLimit = autovacuum_vac_cost_limit;\n> + else\n> + VacuumCostLimit = vacuum_cost_limit;\n> +\n> + /* Only balance limit if no cost-related storage\n> parameters specified */\n> + if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n> + return;\n> + Assert(VacuumCostLimit > 0);\n> +\n> + nworkers_for_balance = pg_atomic_read_u32(\n> +\n> &AutoVacuumShmem->av_nworkersForBalance);\n> +\n> + /* There is at least 1 autovac worker (this worker). */\n> + if (nworkers_for_balance <= 0)\n> + elog(ERROR, \"nworkers_for_balance must be > 0\");\n> +\n> + VacuumCostLimit = Max(VacuumCostLimit /\n> nworkers_for_balance, 1);\n>\n> I think it would be better stylistically to use a temporary variable\n> here and only assign the final value to VacuumCostLimit.\n\nI tried that and thought it adding confusing clutter. If it is a code\ncleanliness issue, I am willing to change it, though.\n\nOn Wed, Apr 5, 2023 at 3:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 5 Apr 2023, at 20:55, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > Again, I don't think this is something we should try to\n> > address right now under time pressure, but in the future, I think we\n> > should consider ripping this behavior out.\n>\n> I would not be opposed to that, but I wholeheartedly agree that it's not the\n> job of this patch (or any patch at this point in the cycle).\n>\n> > + if (autovacuum_vac_cost_limit > 0)\n> > + VacuumCostLimit = autovacuum_vac_cost_limit;\n> > + else\n> > + VacuumCostLimit = vacuum_cost_limit;\n> > +\n> > + /* Only balance limit if no cost-related storage\n> > parameters specified */\n> > + if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n> > + return;\n> > + Assert(VacuumCostLimit > 0);\n> > +\n> > + nworkers_for_balance = pg_atomic_read_u32(\n> > +\n> > 
&AutoVacuumShmem->av_nworkersForBalance);\n> > +\n> > + /* There is at least 1 autovac worker (this worker). */\n> > + if (nworkers_for_balance <= 0)\n> > + elog(ERROR, \"nworkers_for_balance must be > 0\");\n> > +\n> > + VacuumCostLimit = Max(VacuumCostLimit /\n> > nworkers_for_balance, 1);\n> >\n> > I think it would be better stylistically to use a temporary variable\n> > here and only assign the final value to VacuumCostLimit.\n>\n> I can agree with that. Another supertiny nitpick on the above is to not end a\n> single-line comment with a period.\n\nI have fixed this.\n\nOn Thu, Apr 6, 2023 at 2:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 6, 2023 at 12:29 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > Thanks all for the reviews.\n> >\n> > v16 attached. I put it together rather quickly, so there might be a few\n> > spurious whitespaces or similar. There is one rather annoying pgindent\n> > outlier that I have to figure out what to do about as well.\n> >\n> > The remaining functional TODOs that I know of are:\n> >\n> > - Resolve what to do about names of GUC and vacuum variables for cost\n> > limit and cost delay (since it may affect extensions)\n> >\n> > - Figure out what to do about the logging message which accesses dboid\n> > and tableoid (lock/no lock, where to put it, etc)\n> >\n> > - I see several places in docs which reference the balancing algorithm\n> > for autovac workers. 
I did not read them in great detail, but we may\n> > want to review them to see if any require updates.\n> >\n> > - Consider whether or not the initial two commits should just be\n> > squashed with the third commit\n> >\n> > - Anything else reviewers are still unhappy with\n> >\n> >\n> > On Wed, Apr 5, 2023 at 1:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 5, 2023 at 5:05 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > ---\n> > > > > - if (worker->wi_proc != NULL)\n> > > > > - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> > > > > db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> > > > > cost_delay=%g)\",\n> > > > > - worker->wi_proc->pid,\n> > > > > worker->wi_dboid, worker->wi_tableoid,\n> > > > > - worker->wi_dobalance ? \"yes\" : \"no\",\n> > > > > - worker->wi_cost_limit,\n> > > > > worker->wi_cost_limit_base,\n> > > > > - worker->wi_cost_delay);\n> > > > >\n> > > > > I think it's better to keep this kind of log in some form for\n> > > > > debugging. For example, we can show these values of autovacuum workers\n> > > > > in VacuumUpdateCosts().\n> > > >\n> > > > I added a message to do_autovacuum() after calling VacuumUpdateCosts()\n> > > > in the loop vacuuming each table. That means it will happen once per\n> > > > table. It's not ideal that I had to move the call to VacuumUpdateCosts()\n> > > > behind the shared lock in that loop so that we could access the pid and\n> > > > such in the logging message after updating the cost and delay, but it is\n> > > > probably okay. Though noone is going to be changing those at this\n> > > > point, it still seemed better to access them under the lock.\n> > > >\n> > > > This does mean we won't log anything when we do change the values of\n> > > > VacuumCostDelay and VacuumCostLimit while vacuuming a table. 
Is it worth\n> > > > adding some code to do that in VacuumUpdateCosts() (only when the value\n> > > > has changed not on every call to VacuumUpdateCosts())? Or perhaps we\n> > > > could add it in the config reload branch that is already in\n> > > > vacuum_delay_point()?\n> > >\n> > > Previously, we used to show the pid in the log since a worker/launcher\n> > > set other workers' delay costs. But now that the worker sets its delay\n> > > costs, we don't need to show the pid in the log. Also, I think it's\n> > > useful for debugging and investigating the system if we log it when\n> > > changing the values. The log I imagined to add was like:\n> > >\n> > > @@ -1801,6 +1801,13 @@ VacuumUpdateCosts(void)\n> > > VacuumCostDelay = vacuum_cost_delay;\n> > >\n> > > AutoVacuumUpdateLimit();\n> > > +\n> > > + elog(DEBUG2, \"autovacuum update costs (db=%u, rel=%u,\n> > > dobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n> > > + MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n> > > + pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance)\n> > > ? \"no\" : \"yes\",\n> > > + VacuumCostLimit, VacuumCostDelay,\n> > > + VacuumCostDelay > 0 ? \"yes\" : \"no\",\n> > > + VacuumFailsafeActive ? \"yes\" : \"no\");\n> > > }\n> > > else\n> > > {\n> >\n> > Makes sense. I've updated the log message to roughly what you suggested.\n> > I also realized I think it does make sense to call it in\n> > VacuumUpdateCosts() -- only for autovacuum workers of course. I've done\n> > this. 
I haven't taken the lock though and can't decide if I must since\n> > they access dboid and tableoid -- those are not going to change at this\n> > point, but I still don't know if I can access them lock-free...\n> > Perhaps there is a way to condition it on the log level?\n> >\n> > If I have to take a lock, then I don't know if we should put these in\n> > VacuumUpdateCosts()...\n>\n> I think we don't need to acquire a lock there as both values are\n> updated only by workers reporting this message.\n\nI dunno. I just don't feel that comfortable saying, oh it's okay to\naccess these without a lock probably. I propose we do one of the\nfollowing:\n\n- Take a shared lock inside VacuumUpdateCosts() (it is not called on every\n call to vacuum_delay_point()) before reading from these variables.\n\n Pros:\n - We will log whenever there is a change to these parameters\n Cons:\n - This adds overhead in the common case when log level is < DEBUG2.\n Is there a way to check the log level before taking the lock?\n - Acquiring the lock inside the function is inconsistent with the\n pattern that some of the other autovacuum functions requiring\n locks use (they assume you have a lock if needed inside of the\n function). 
But, we could assert that the lock is not already held.\n - If we later decide we don't like this choice and want to move the\n logging elsewhere, it will necessarily log less frequently which\n seems like a harder change to make than logging more frequently.\n\n- Move this logging into the loop through relations in do_autovacuum()\n and the config reload code and take the shared lock before doing the\n logging.\n\n Pros:\n - Seems safe and not expensive\n - Covers most of the times we would want the logging\n Cons:\n - duplicates logging in two places\n\n> Some minor comments on 0003:\n>\n> +/*\n> + * autovac_recalculate_workers_for_balance\n> + * Recalculate the number of workers to consider, given\n> cost-related\n> + * storage parameters and the current number of active workers.\n> + *\n> + * Caller must hold the AutovacuumLock in at least shared mode to access\n> + * worker->wi_proc.\n> + */\n>\n> Does it make sense to add Assert(LWLockHeldByMe(AutovacuumLock)) at\n> the beginning of this function?\n\nI've added this. It is called infrequently enough to be okay, I think.\n\n\n> /* rebalance in case the default cost parameters changed */\n> - LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);\n> - autovac_balance_cost();\n> + LWLockAcquire(AutovacuumLock, LW_SHARED);\n> + autovac_recalculate_workers_for_balance();\n> LWLockRelease(AutovacuumLock);\n>\n> Do we really need to have the autovacuum launcher recalculate\n> av_nworkersForBalance after reloading the config file? Since the cost\n> parameters are not used inautovac_recalculate_workers_for_balance()\n> the comment also needs to be updated.\n\nYep, almost certainly don't need this. I've removed this call to\nautovac_recalculate_workers_for_balance().\n\n> + /*\n> + * Balance and update limit values for autovacuum\n> workers. We must\n> + * always do this in case the autovacuum launcher or another\n> + * autovacuum worker has recalculated the number of\n> workers across\n> + * which we must balance the limit. 
This is done by\n> the launcher when\n> + * launching a new worker and by workers before\n> vacuuming each table.\n> + */\n> + AutoVacuumUpdateCostLimit();\n>\n> I think the last sentence is not correct. IIUC recalculation of\n> av_nworkersForBalance is done by the launcher after a worker finished\n> and by workers before vacuuming each table.\n\nYes, you are right. However, I think the comment was generally\nmisleading and I have reworded it.\n\n> It's not a problem of this patch, but IIUC since we don't reset\n> wi_dobalance after vacuuming each table we use the last value of\n> wi_dobalance for performing autovacuum items. At end of the loop for\n> tables in do_autovacuum() we have the following code that explains why\n> we don't reset wi_dobalance:\n>\n> /*\n> * Remove my info from shared memory. We could, but intentionally\n> * don't, unset wi_dobalance on the assumption that we are more likely\n> * than not to vacuum a table with no cost-related storage parameters\n> * next, so we don't want to give up our share of I/O for a very short\n> * interval and thereby thrash the global balance.\n> */\n> LWLockAcquire(AutovacuumScheduleLock, LW_EXCLUSIVE);\n> MyWorkerInfo->wi_tableoid = InvalidOid;\n> MyWorkerInfo->wi_sharedrel = false;\n> LWLockRelease(AutovacuumScheduleLock);\n>\n> Assuming we agree with that, probably we need to reset it to true\n> after vacuuming all tables?\n\nAh, great point. 
I have done this.\n\nOn Thu, Apr 6, 2023 at 8:29 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 6 Apr 2023, at 08:39, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > Also I agree with\n> > where to put the log but I think the log message should start with\n> > lower cases:\n> >\n> > + elog(DEBUG2,\n> > + \"Autovacuum VacuumUpdateCosts(db=%u, rel=%u,\n>\n> In principle I agree, but in this case Autovacuum is a name and should IMO in\n> userfacing messages start with capital A.\n\nI've left this unchanged while I agonize over what to do with the\nplacement of the log message in general. But I am happy to keep it\nuppercase.\n\n> > +/*\n> > + * autovac_recalculate_workers_for_balance\n> > + * Recalculate the number of workers to consider, given\n> > cost-related\n> > + * storage parameters and the current number of active workers.\n> > + *\n> > + * Caller must hold the AutovacuumLock in at least shared mode to access\n> > + * worker->wi_proc.\n> > + */\n> >\n> > Does it make sense to add Assert(LWLockHeldByMe(AutovacuumLock)) at\n> > the beginning of this function?\n>\n> That's probably not a bad idea.\n\nDone.\n\n> > ---\n> > /* rebalance in case the default cost parameters changed */\n> > - LWLockAcquire(AutovacuumLock, LW_EXCLUSIVE);\n> > - autovac_balance_cost();\n> > + LWLockAcquire(AutovacuumLock, LW_SHARED);\n> > + autovac_recalculate_workers_for_balance();\n> > LWLockRelease(AutovacuumLock);\n> >\n> > Do we really need to have the autovacuum launcher recalculate\n> > av_nworkersForBalance after reloading the config file? 
Since the cost\n> > parameters are not used inautovac_recalculate_workers_for_balance()\n> > the comment also needs to be updated.\n>\n> If I understand this comment right; there was a discussion upthread that simply\n> doing it in both launcher and worker simplifies the code with little overhead.\n> A comment can reflect that choice though.\n\nYes, but now that this function no longer deals with the cost limit and\ndelay values itself, we can remove it.\n\n- Melanie",
"msg_date": "Thu, 6 Apr 2023 15:09:08 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "I think attached v18 addresses all outstanding issues except a run\nthrough the docs making sure all mentions of the balancing algorithm are\nstill correct.\n\nOn Wed, Apr 5, 2023 at 9:10 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 4 Apr 2023, at 22:04, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >> +extern int VacuumCostLimit;\n> >> +extern double VacuumCostDelay;\n> >> ...\n> >> -extern PGDLLIMPORT int VacuumCostLimit;\n> >> -extern PGDLLIMPORT double VacuumCostDelay;\n> >>\n> >> Same with these, I don't think this is according to our default visibility.\n> >> Moreover, I'm not sure it's a good idea to perform this rename. This will keep\n> >> VacuumCostLimit and VacuumCostDelay exported, but change their meaning. Any\n> >> external code referring to these thinking they are backing the GUCs will still\n> >> compile, but may be broken in subtle ways. Is there a reason for not keeping\n> >> the current GUC variables and instead add net new ones?\n> >\n> > When VacuumCostLimit was the same variable in the code and for the GUC\n> > vacuum_cost_limit, everytime we reload the config file, VacuumCostLimit\n> > is overwritten. Autovacuum workers have to overwrite this value with the\n> > appropriate one for themselves given the balancing logic and the value\n> > of autovacuum_vacuum_cost_limit. However, the problem is, because you\n> > can specify -1 for autovacuum_vacuum_cost_limit to indicate it should\n> > fall back to vacuum_cost_limit, we have to reference the value of\n> > VacuumCostLimit when calculating the new autovacuum worker's cost limit\n> > after a config reload.\n> >\n> > But, you have to be sure you *only* do this after a config reload when\n> > the value of VacuumCostLimit is fresh and unmodified or you risk\n> > dividing the value of VacuumCostLimit over and over. 
That means it is\n> > unsafe to call functions updating the cost limit more than once.\n> >\n> > This orchestration wasn't as difficult when we only reloaded the config\n> > file once every table. We were careful about it and also kept the\n> > original \"base\" cost limit around from table_recheck_autovac(). However,\n> > once we started reloading the config file more often, this no longer\n> > works.\n> >\n> > By separating the variables modified when the gucs are set and the ones\n> > used the code, we can make sure we always have the original value the\n> > guc was set to in vacuum_cost_limit and autovacuum_vacuum_cost_limit,\n> > whenever we need to reference it.\n> >\n> > That being said, perhaps we should document what extensions should do?\n> > Do you think they will want to use the variables backing the gucs or to\n> > be able to overwrite the variables being used in the code?\n>\n> I think I wasn't clear in my comment, sorry. I don't have a problem with\n> introducing a new variable to split the balanced value from the GUC value.\n> What I don't think we should do is repurpose an exported symbol into doing a\n> new thing. In the case at hand I think VacuumCostLimit and VacuumCostDelay\n> should remain the backing variables for the GUCs, with vacuum_cost_limit and\n> vacuum_cost_delay carrying the balanced values. So the inverse of what is in\n> the patch now.\n>\n> The risk of these symbols being used in extensions might be very low but on\n> principle it seems unwise to alter a symbol and risk subtle breakage.\n\nIn attached v18, I have flipped them. 
Existing (in master) GUCs which\nwere exported for VacuumCostLimit and VacuumCostDelay retain their names\nand new globals vacuum_cost_limit and vacuum_cost_delay have been\nintroduced for use in the code.\n\nFlipping these kind of melted my mind, so I could definitely use another\nset of eyes double checking that the correct ones are being used in the\ncorrect places throughout 0002 and 0003.\n\nOn Thu, Apr 6, 2023 at 3:09 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> v17 attached does not yet fix the logging problem or variable naming\n> problem.\n>\n> I have not changed where AutoVacuumUpdateCostLimit() is called either.\n>\n> This is effectively just a round of cleanup. I hope I have managed to\n> address all other code review feedback so far, though some may have\n> slipped through the cracks.\n>\n> On Wed, Apr 5, 2023 at 2:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Wed, Apr 5, 2023 at 11:29 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > + /*\n> > + * Balance and update limit values for autovacuum workers. We must\n> > + * always do this in case the autovacuum launcher or another\n> > + * autovacuum worker has recalculated the number of workers across\n> > + * which we must balance the limit. This is done by the launcher when\n> > + * launching a new worker and by workers before vacuuming each table.\n> > + */\n> >\n> > I don't quite understand what's going on here. A big reason that I'm\n> > worried about this whole issue in the first place is that sometimes\n> > there's a vacuum going on a giant table and you can't get it to go\n> > fast. You want it to absorb new settings, and to do so quickly. I\n> > realize that this is about the number of workers, not the actual cost\n> > limit, so that makes what I'm about to say less important. But ... is\n> > this often enough? Like, the time before we move onto the next table\n> > could be super long. 
The time before a new worker is launched should\n> > be ~autovacuum_naptime/autovacuum_max_workers or ~20s with default\n> > settings, so that's not horrible, but I'm kind of struggling to\n> > understand the rationale for this particular choice. Maybe it's fine.\n>\n> I've at least updated this comment to be more correct/less misleading.\n>\n> >\n> > + if (autovacuum_vac_cost_limit > 0)\n> > + VacuumCostLimit = autovacuum_vac_cost_limit;\n> > + else\n> > + VacuumCostLimit = vacuum_cost_limit;\n> > +\n> > + /* Only balance limit if no cost-related storage\n> > parameters specified */\n> > + if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n> > + return;\n> > + Assert(VacuumCostLimit > 0);\n> > +\n> > + nworkers_for_balance = pg_atomic_read_u32(\n> > +\n> > &AutoVacuumShmem->av_nworkersForBalance);\n> > +\n> > + /* There is at least 1 autovac worker (this worker). */\n> > + if (nworkers_for_balance <= 0)\n> > + elog(ERROR, \"nworkers_for_balance must be > 0\");\n> > +\n> > + VacuumCostLimit = Max(VacuumCostLimit /\n> > nworkers_for_balance, 1);\n> >\n> > I think it would be better stylistically to use a temporary variable\n> > here and only assign the final value to VacuumCostLimit.\n>\n> I tried that and thought it adding confusing clutter. 
If it is a code\n> cleanliness issue, I am willing to change it, though.\n>\n> On Wed, Apr 5, 2023 at 3:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 5 Apr 2023, at 20:55, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > Again, I don't think this is something we should try to\n> > > address right now under time pressure, but in the future, I think we\n> > > should consider ripping this behavior out.\n> >\n> > I would not be opposed to that, but I wholeheartedly agree that it's not the\n> > job of this patch (or any patch at this point in the cycle).\n> >\n> > > + if (autovacuum_vac_cost_limit > 0)\n> > > + VacuumCostLimit = autovacuum_vac_cost_limit;\n> > > + else\n> > > + VacuumCostLimit = vacuum_cost_limit;\n> > > +\n> > > + /* Only balance limit if no cost-related storage\n> > > parameters specified */\n> > > + if (pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance))\n> > > + return;\n> > > + Assert(VacuumCostLimit > 0);\n> > > +\n> > > + nworkers_for_balance = pg_atomic_read_u32(\n> > > +\n> > > &AutoVacuumShmem->av_nworkersForBalance);\n> > > +\n> > > + /* There is at least 1 autovac worker (this worker). */\n> > > + if (nworkers_for_balance <= 0)\n> > > + elog(ERROR, \"nworkers_for_balance must be > 0\");\n> > > +\n> > > + VacuumCostLimit = Max(VacuumCostLimit /\n> > > nworkers_for_balance, 1);\n> > >\n> > > I think it would be better stylistically to use a temporary variable\n> > > here and only assign the final value to VacuumCostLimit.\n> >\n> > I can agree with that. Another supertiny nitpick on the above is to not end a\n> > single-line comment with a period.\n>\n> I have fixed this.\n>\n> On Thu, Apr 6, 2023 at 2:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 6, 2023 at 12:29 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > >\n> > > Thanks all for the reviews.\n> > >\n> > > v16 attached. 
I put it together rather quickly, so there might be a few\n> > > spurious whitespaces or similar. There is one rather annoying pgindent\n> > > outlier that I have to figure out what to do about as well.\n> > >\n> > > The remaining functional TODOs that I know of are:\n> > >\n> > > - Resolve what to do about names of GUC and vacuum variables for cost\n> > > limit and cost delay (since it may affect extensions)\n> > >\n> > > - Figure out what to do about the logging message which accesses dboid\n> > > and tableoid (lock/no lock, where to put it, etc)\n> > >\n> > > - I see several places in docs which reference the balancing algorithm\n> > > for autovac workers. I did not read them in great detail, but we may\n> > > want to review them to see if any require updates.\n> > >\n> > > - Consider whether or not the initial two commits should just be\n> > > squashed with the third commit\n> > >\n> > > - Anything else reviewers are still unhappy with\n> > >\n> > >\n> > > On Wed, Apr 5, 2023 at 1:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 5, 2023 at 5:05 AM Melanie Plageman\n> > > > <melanieplageman@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Apr 4, 2023 at 4:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > ---\n> > > > > > - if (worker->wi_proc != NULL)\n> > > > > > - elog(DEBUG2, \"autovac_balance_cost(pid=%d\n> > > > > > db=%u, rel=%u, dobalance=%s cost_limit=%d, cost_limit_base=%d,\n> > > > > > cost_delay=%g)\",\n> > > > > > - worker->wi_proc->pid,\n> > > > > > worker->wi_dboid, worker->wi_tableoid,\n> > > > > > - worker->wi_dobalance ? \"yes\" : \"no\",\n> > > > > > - worker->wi_cost_limit,\n> > > > > > worker->wi_cost_limit_base,\n> > > > > > - worker->wi_cost_delay);\n> > > > > >\n> > > > > > I think it's better to keep this kind of log in some form for\n> > > > > > debugging. 
For example, we can show these values of autovacuum workers\n> > > > > > in VacuumUpdateCosts().\n> > > > >\n> > > > > I added a message to do_autovacuum() after calling VacuumUpdateCosts()\n> > > > > in the loop vacuuming each table. That means it will happen once per\n> > > > > table. It's not ideal that I had to move the call to VacuumUpdateCosts()\n> > > > > behind the shared lock in that loop so that we could access the pid and\n> > > > > such in the logging message after updating the cost and delay, but it is\n> > > > > probably okay. Though noone is going to be changing those at this\n> > > > > point, it still seemed better to access them under the lock.\n> > > > >\n> > > > > This does mean we won't log anything when we do change the values of\n> > > > > VacuumCostDelay and VacuumCostLimit while vacuuming a table. Is it worth\n> > > > > adding some code to do that in VacuumUpdateCosts() (only when the value\n> > > > > has changed not on every call to VacuumUpdateCosts())? Or perhaps we\n> > > > > could add it in the config reload branch that is already in\n> > > > > vacuum_delay_point()?\n> > > >\n> > > > Previously, we used to show the pid in the log since a worker/launcher\n> > > > set other workers' delay costs. But now that the worker sets its delay\n> > > > costs, we don't need to show the pid in the log. Also, I think it's\n> > > > useful for debugging and investigating the system if we log it when\n> > > > changing the values. The log I imagined to add was like:\n> > > >\n> > > > @@ -1801,6 +1801,13 @@ VacuumUpdateCosts(void)\n> > > > VacuumCostDelay = vacuum_cost_delay;\n> > > >\n> > > > AutoVacuumUpdateLimit();\n> > > > +\n> > > > + elog(DEBUG2, \"autovacuum update costs (db=%u, rel=%u,\n> > > > dobalance=%s, cost_limit=%d, cost_delay=%g active=%s failsafe=%s)\",\n> > > > + MyWorkerInfo->wi_dboid, MyWorkerInfo->wi_tableoid,\n> > > > + pg_atomic_unlocked_test_flag(&MyWorkerInfo->wi_dobalance)\n> > > > ? 
\"no\" : \"yes\",\n> > > > + VacuumCostLimit, VacuumCostDelay,\n> > > > + VacuumCostDelay > 0 ? \"yes\" : \"no\",\n> > > > + VacuumFailsafeActive ? \"yes\" : \"no\");\n> > > > }\n> > > > else\n> > > > {\n> > >\n> > > Makes sense. I've updated the log message to roughly what you suggested.\n> > > I also realized I think it does make sense to call it in\n> > > VacuumUpdateCosts() -- only for autovacuum workers of course. I've done\n> > > this. I haven't taken the lock though and can't decide if I must since\n> > > they access dboid and tableoid -- those are not going to change at this\n> > > point, but I still don't know if I can access them lock-free...\n> > > Perhaps there is a way to condition it on the log level?\n> > >\n> > > If I have to take a lock, then I don't know if we should put these in\n> > > VacuumUpdateCosts()...\n> >\n> > I think we don't need to acquire a lock there as both values are\n> > updated only by workers reporting this message.\n>\n> I dunno. I just don't feel that comfortable saying, oh it's okay to\n> access these without a lock probably. I propose we do one of the\n> following:\n>\n> - Take a shared lock inside VacuumUpdateCosts() (it is not called on every\n> call to vacuum_delay_point()) before reading from these variables.\n>\n> Pros:\n> - We will log whenever there is a change to these parameters\n> Cons:\n> - This adds overhead in the common case when log level is < DEBUG2.\n> Is there a way to check the log level before taking the lock?\n> - Acquiring the lock inside the function is inconsistent with the\n> pattern that some of the other autovacuum functions requiring\n> locks use (they assume you have a lock if needed inside of the\n> function). 
But, we could assert that the lock is not already held.\n> - If we later decide we don't like this choice and want to move the\n> logging elsewhere, it will necessarily log less frequently which\n> seems like a harder change to make than logging more frequently.\n>\n> - Move this logging into the loop through relations in do_autovacuum()\n> and the config reload code and take the shared lock before doing the\n> logging.\n>\n> Pros:\n> - Seems safe and not expensive\n> - Covers most of the times we would want the logging\n> Cons:\n> - duplicates logging in two places\n\nOkay, in an attempt to wrap up this saga, I have made the following\nchange:\n\nAutovacuum workers, at the end of VacuumUpdateCosts(), check if cost\nlimit or cost delay have been changed. If they have, they assert that\nthey don't already hold the AutovacuumLock, take it in shared mode, and\ndo the logging.\n\n- Melanie",
"msg_date": "Thu, 6 Apr 2023 17:06:41 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n\n> Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n> limit or cost delay have been changed. If they have, they assert that\n> they don't already hold the AutovacuumLock, take it in shared mode, and\n> do the logging.\n\nAnother idea would be to copy the values to local temp variables while holding\nthe lock, and release the lock before calling elog() to avoid holding the lock\nover potential IO.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Apr 2023 23:45:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Apr 6, 2023 at 5:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> > Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n> > limit or cost delay have been changed. If they have, they assert that\n> > they don't already hold the AutovacuumLock, take it in shared mode, and\n> > do the logging.\n>\n> Another idea would be to copy the values to local temp variables while holding\n> the lock, and release the lock before calling elog() to avoid holding the lock\n> over potential IO.\n\nGood idea. I've done this in attached v19.\nAlso I looked through the docs and everything still looks correct for\nbalancing algo.\n\n- Melanie",
"msg_date": "Thu, 6 Apr 2023 18:12:22 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 7 Apr 2023, at 00:12, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> \n> On Thu, Apr 6, 2023 at 5:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>> \n>>> Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n>>> limit or cost delay have been changed. If they have, they assert that\n>>> they don't already hold the AutovacuumLock, take it in shared mode, and\n>>> do the logging.\n>> \n>> Another idea would be to copy the values to local temp variables while holding\n>> the lock, and release the lock before calling elog() to avoid holding the lock\n>> over potential IO.\n> \n> Good idea. I've done this in attached v19.\n> Also I looked through the docs and everything still looks correct for\n> balancing algo.\n\nI had another read-through and test-through of this version, and have applied\nit with some minor changes to comments and whitespace. Thanks for the quick\nturnaround times on reviews in this thread!\n\nI opted for keeping the three individual commits, squashing them didn't seem\nhelpful enough to future commitlog readers and no other combination of the\nthree made more sense than what has been in the thread.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 01:08:09 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 8:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 Apr 2023, at 00:12, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >\n> > On Thu, Apr 6, 2023 at 5:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> >>\n> >>> Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n> >>> limit or cost delay have been changed. If they have, they assert that\n> >>> they don't already hold the AutovacuumLock, take it in shared mode, and\n> >>> do the logging.\n> >>\n> >> Another idea would be to copy the values to local temp variables while holding\n> >> the lock, and release the lock before calling elog() to avoid holding the lock\n> >> over potential IO.\n> >\n> > Good idea. I've done this in attached v19.\n> > Also I looked through the docs and everything still looks correct for\n> > balancing algo.\n>\n> I had another read-through and test-through of this version, and have applied\n> it with some minor changes to comments and whitespace. Thanks for the quick\n> turnaround times on reviews in this thread!\n\nCool!\n\nRegarding the commit 7d71d3dd08, I have one comment:\n\n+ /* Only log updates to cost-related variables */\n+ if (vacuum_cost_delay == original_cost_delay &&\n+ vacuum_cost_limit == original_cost_limit)\n+ return;\n\nIIUC by default, we log not only before starting the vacuum but also\nwhen changing cost-related variables. Which is good, I think, because\nlogging the initial values would also be helpful for investigation.\nHowever, I think that we don't log the initial vacuum cost values\ndepending on the values. For example, if the\nautovacuum_vacuum_cost_delay storage option is set to 0, we don't log\nthe initial values. I think that instead of comparing old and new\nvalues, we can write the log only if\nmessage_level_is_interesting(DEBUG2) is true. 
That way, we don't need\nto acquire the lwlock unnecessarily. And the code looks cleaner to me.\nI've attached the patch (use_message_level_is_interesting.patch)\n\nAlso, while testing the autovacuum delay with relopt\nautovacuum_vacuum_cost_delay = 0, I realized that even if we set\nautovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\ntrue. wi_dobalance comes from the following expression:\n\n /*\n * If any of the cost delay parameters has been set individually for\n * this table, disable the balancing algorithm.\n */\n tab->at_dobalance =\n !(avopts && (avopts->vacuum_cost_limit > 0 ||\n avopts->vacuum_cost_delay > 0));\n\nThe initial values of both avopts->vacuum_cost_limit and\navopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\nof \"> 0\". Otherwise, we include the autovacuum worker working on a\ntable whose autovacuum_vacuum_cost_delay is 0 to the balancing\nalgorithm. Probably this behavior has existed also on back branches\nbut I haven't checked it yet.\n\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 7 Apr 2023 15:52:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 7 Apr 2023, at 08:52, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Fri, Apr 7, 2023 at 8:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>> I had another read-through and test-through of this version, and have applied\n>> it with some minor changes to comments and whitespace. Thanks for the quick\n>> turnaround times on reviews in this thread!\n> \n> Cool!\n> \n> Regarding the commit 7d71d3dd08, I have one comment:\n> \n> + /* Only log updates to cost-related variables */\n> + if (vacuum_cost_delay == original_cost_delay &&\n> + vacuum_cost_limit == original_cost_limit)\n> + return;\n> \n> IIUC by default, we log not only before starting the vacuum but also\n> when changing cost-related variables. Which is good, I think, because\n> logging the initial values would also be helpful for investigation.\n> However, I think that we don't log the initial vacuum cost values\n> depending on the values. For example, if the\n> autovacuum_vacuum_cost_delay storage option is set to 0, we don't log\n> the initial values. I think that instead of comparing old and new\n> values, we can write the log only if\n> message_level_is_interesting(DEBUG2) is true. That way, we don't need\n> to acquire the lwlock unnecessarily. And the code looks cleaner to me.\n> I've attached the patch (use_message_level_is_interesting.patch)\n\nThat's a good idea, unless Melanie has conflicting opinions I think we should\ngo ahead with this. Avoiding taking a lock here is a good save.\n\n> Also, while testing the autovacuum delay with relopt\n> autovacuum_vacuum_cost_delay = 0, I realized that even if we set\n> autovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\n> true. 
wi_dobalance comes from the following expression:\n> \n> /*\n> * If any of the cost delay parameters has been set individually for\n> * this table, disable the balancing algorithm.\n> */\n> tab->at_dobalance =\n> !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> avopts->vacuum_cost_delay > 0));\n> \n> The initial values of both avopts->vacuum_cost_limit and\n> avopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\n> of \"> 0\". Otherwise, we include the autovacuum worker working on a\n> table whose autovacuum_vacuum_cost_delay is 0 to the balancing\n> algorithm. Probably this behavior has existed also on back branches\n> but I haven't checked it yet.\n\nInteresting, good find. Looking quickly at the back branches I think there is\na variant of this for vacuum_cost_limit even there but needs more investigation.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 13:28:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 2:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 7, 2023 at 8:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 7 Apr 2023, at 00:12, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 6, 2023 at 5:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >>\n> > >>> On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > >>\n> > >>> Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n> > >>> limit or cost delay have been changed. If they have, they assert that\n> > >>> they don't already hold the AutovacuumLock, take it in shared mode, and\n> > >>> do the logging.\n> > >>\n> > >> Another idea would be to copy the values to local temp variables while holding\n> > >> the lock, and release the lock before calling elog() to avoid holding the lock\n> > >> over potential IO.\n> > >\n> > > Good idea. I've done this in attached v19.\n> > > Also I looked through the docs and everything still looks correct for\n> > > balancing algo.\n> >\n> > I had another read-through and test-through of this version, and have applied\n> > it with some minor changes to comments and whitespace. Thanks for the quick\n> > turnaround times on reviews in this thread!\n>\n> Cool!\n>\n> Regarding the commit 7d71d3dd08, I have one comment:\n>\n> + /* Only log updates to cost-related variables */\n> + if (vacuum_cost_delay == original_cost_delay &&\n> + vacuum_cost_limit == original_cost_limit)\n> + return;\n>\n> IIUC by default, we log not only before starting the vacuum but also\n> when changing cost-related variables. Which is good, I think, because\n> logging the initial values would also be helpful for investigation.\n> However, I think that we don't log the initial vacuum cost values\n> depending on the values. For example, if the\n> autovacuum_vacuum_cost_delay storage option is set to 0, we don't log\n> the initial values. 
I think that instead of comparing old and new\n> values, we can write the log only if\n> message_level_is_interesting(DEBUG2) is true. That way, we don't need\n> to acquire the lwlock unnecessarily. And the code looks cleaner to me.\n> I've attached the patch (use_message_level_is_interesting.patch)\n\nThanks for coming up with the case you thought of with storage param for\ncost delay = 0. In that case we wouldn't print the message initially and\nwe should fix that.\n\nI disagree, however, that we should condition it only on\nmessage_level_is_interesting().\n\nActually, outside of printing initial values when the autovacuum worker\nfirst starts (before vacuuming all tables), I don't think we should log\nthese values except when they are being updated. Autovacuum workers\ncould vacuum tons of small tables and having this print out at least\nonce per table (which I know is how it is on master) would be\ndistracting. Also, you could be reloading the config to update some\nother GUCs and be oblivious to an ongoing autovacuum and get these\nmessages printed out, which I would also find distracting.\n\nYou will have to stare very hard at the logs to tell if your changes to\nvacuum cost delay and limit took effect when you reload config. 
I think\nwith our changes to update the values more often, we should take the\nopportunity to make this logging more useful by making it happen only\nwhen the values are changed.\n\nI would be open to elevating the log level to DEBUG1 for logging only\nupdates and, perhaps, having an option if you set log level to DEBUG2,\nfor example, to always log these values in VacuumUpdateCosts().\n\nI'd even argue that, potentially, having the cost-delay related\nparameters printed at the beginning of vacuuming could be interesting to\nregular VACUUM as well (even though it doesn't benefit from config\nreload while in progress).\n\nTo fix the issue you mentioned and ensure the logging is printed when\nautovacuum workers start up before vacuuming tables, we could either\ninitialize vacuum_cost_delay and vacuum_cost_limit to something invalid\nthat will always be different than what they are set to in\nVacuumUpdateCosts() (not sure if this poses a problem for VACUUM using\nthese values since they are set to the defaults for VACUUM). Or, we\ncould duplicate this logging message in do_autovacuum().\n\nFinally, one other point about message_level_is_interesting(). I liked\nthe idea of using it a lot, since log level DEBUG2 will not be the\ncommon case. I thought of it but hesitated because all other users of\nmessage_level_is_interesting() are avoiding some memory allocation or\nstring copying -- not avoiding take a lock. Making this conditioned on\nlog level made me a bit uncomfortable. I can't think of a situation when\nit would be a problem, but it felt a bit off.\n\n> Also, while testing the autovacuum delay with relopt\n> autovacuum_vacuum_cost_delay = 0, I realized that even if we set\n> autovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\n> true. 
wi_dobalance comes from the following expression:\n>\n> /*\n> * If any of the cost delay parameters has been set individually for\n> * this table, disable the balancing algorithm.\n> */\n> tab->at_dobalance =\n> !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> avopts->vacuum_cost_delay > 0));\n>\n> The initial values of both avopts->vacuum_cost_limit and\n> avopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\n> of \"> 0\". Otherwise, we include the autovacuum worker working on a\n> table whose autovacuum_vacuum_cost_delay is 0 to the balancing\n> algorithm. Probably this behavior has existed also on back branches\n> but I haven't checked it yet.\n\nThank you for catching this. Indeed this exists in master since\n1021bd6a89b which was backpatched. I checked and it is true all the way\nback through REL_11_STABLE.\n\nDefinitely seems worth fixing as it kind of defeats the purpose of the\noriginal commit. I wish I had noticed before!\n\nYour fix has:\n !(avopts && (avopts->vacuum_cost_limit >= 0 ||\n avopts->vacuum_cost_delay >= 0));\n\nAnd though delay is required to be >= 0\n avopts->vacuum_cost_delay >= 0\n\nLimit does not. It can just be > 0.\n\npostgres=# create table foo (a int) with (autovacuum_vacuum_cost_limit = 0);\nERROR: value 0 out of bounds for option \"autovacuum_vacuum_cost_limit\"\nDETAIL: Valid values are between \"1\" and \"10000\".\n\nThough >= is also fine, the rest of the code in all versions always\nchecks if limit > 0 and delay >= 0 since 0 is a valid value for delay\nand not for limit. Probably best we keep it consistent (though the whole\nthing is quite confusing).\n\n- Melanie\n\n\n",
"msg_date": "Fri, 7 Apr 2023 09:07:46 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 7 Apr 2023, at 15:07, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> On Fri, Apr 7, 2023 at 2:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n>> + /* Only log updates to cost-related variables */\n>> + if (vacuum_cost_delay == original_cost_delay &&\n>> + vacuum_cost_limit == original_cost_limit)\n>> + return;\n>> \n>> IIUC by default, we log not only before starting the vacuum but also\n>> when changing cost-related variables. Which is good, I think, because\n>> logging the initial values would also be helpful for investigation.\n>> However, I think that we don't log the initial vacuum cost values\n>> depending on the values. For example, if the\n>> autovacuum_vacuum_cost_delay storage option is set to 0, we don't log\n>> the initial values. I think that instead of comparing old and new\n>> values, we can write the log only if\n>> message_level_is_interesting(DEBUG2) is true. That way, we don't need\n>> to acquire the lwlock unnecessarily. And the code looks cleaner to me.\n>> I've attached the patch (use_message_level_is_interesting.patch)\n> \n> Thanks for coming up with the case you thought of with storage param for\n> cost delay = 0. In that case we wouldn't print the message initially and\n> we should fix that.\n> \n> I disagree, however, that we should condition it only on\n> message_level_is_interesting().\n\nI think we should keep the logging frequency as committed, but condition taking\nthe lock on message_level_is_interesting().\n\n> Actually, outside of printing initial values when the autovacuum worker\n> first starts (before vacuuming all tables), I don't think we should log\n> these values except when they are being updated. Autovacuum workers\n> could vacuum tons of small tables and having this print out at least\n> once per table (which I know is how it is on master) would be\n> distracting. 
Also, you could be reloading the config to update some\n> other GUCs and be oblivious to an ongoing autovacuum and get these\n> messages printed out, which I would also find distracting.\n> \n> You will have to stare very hard at the logs to tell if your changes to\n> vacuum cost delay and limit took effect when you reload config. I think\n> with our changes to update the values more often, we should take the\n> opportunity to make this logging more useful by making it happen only\n> when the values are changed.\n> \n> I would be open to elevating the log level to DEBUG1 for logging only\n> updates and, perhaps, having an option if you set log level to DEBUG2,\n> for example, to always log these values in VacuumUpdateCosts().\n> \n> I'd even argue that, potentially, having the cost-delay related\n> parameters printed at the beginning of vacuuming could be interesting to\n> regular VACUUM as well (even though it doesn't benefit from config\n> reload while in progress).\n> \n> To fix the issue you mentioned and ensure the logging is printed when\n> autovacuum workers start up before vacuuming tables, we could either\n> initialize vacuum_cost_delay and vacuum_cost_limit to something invalid\n> that will always be different than what they are set to in\n> VacuumUpdateCosts() (not sure if this poses a problem for VACUUM using\n> these values since they are set to the defaults for VACUUM). Or, we\n> could duplicate this logging message in do_autovacuum().\n\nDuplicating logging, maybe with a slightly tailored message, seem the least\nbad option.\n\n> Finally, one other point about message_level_is_interesting(). I liked\n> the idea of using it a lot, since log level DEBUG2 will not be the\n> common case. I thought of it but hesitated because all other users of\n> message_level_is_interesting() are avoiding some memory allocation or\n> string copying -- not avoiding take a lock. Making this conditioned on\n> log level made me a bit uncomfortable. 
I can't think of a situation when\n> it would be a problem, but it felt a bit off.\n\nConsidering how uncommon DEBUG2 will be in production, I think conditioning\ntaking a lock on it makes sense.\n\n>> Also, while testing the autovacuum delay with relopt\n>> autovacuum_vacuum_cost_delay = 0, I realized that even if we set\n>> autovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\n>> true. wi_dobalance comes from the following expression:\n>> \n>> /*\n>> * If any of the cost delay parameters has been set individually for\n>> * this table, disable the balancing algorithm.\n>> */\n>> tab->at_dobalance =\n>> !(avopts && (avopts->vacuum_cost_limit > 0 ||\n>> avopts->vacuum_cost_delay > 0));\n>> \n>> The initial values of both avopts->vacuum_cost_limit and\n>> avopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\n>> of \"> 0\". Otherwise, we include the autovacuum worker working on a\n>> table whose autovacuum_vacuum_cost_delay is 0 to the balancing\n>> algorithm. Probably this behavior has existed also on back branches\n>> but I haven't checked it yet.\n> \n> Thank you for catching this. Indeed this exists in master since\n> 1021bd6a89b which was backpatched. I checked and it is true all the way\n> back through REL_11_STABLE.\n> \n> Definitely seems worth fixing as it kind of defeats the purpose of the\n> original commit. I wish I had noticed before!\n> \n> Your fix has:\n> !(avopts && (avopts->vacuum_cost_limit >= 0 ||\n> avopts->vacuum_cost_delay >= 0));\n> \n> And though delay is required to be >= 0\n> avopts->vacuum_cost_delay >= 0\n> \n> Limit does not. 
It can just be > 0.\n> \n> postgres=# create table foo (a int) with (autovacuum_vacuum_cost_limit = 0);\n> ERROR: value 0 out of bounds for option \"autovacuum_vacuum_cost_limit\"\n> DETAIL: Valid values are between \"1\" and \"10000\".\n> \n> Though >= is also fine, the rest of the code in all versions always\n> checks if limit > 0 and delay >= 0 since 0 is a valid value for delay\n> and not for limit. Probably best we keep it consistent (though the whole\n> thing is quite confusing).\n\n+1\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 7 Apr 2023 15:23:13 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 9:07 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Apr 7, 2023 at 2:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 7, 2023 at 8:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 7 Apr 2023, at 00:12, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > >\n> > > > On Thu, Apr 6, 2023 at 5:45 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > >>\n> > > >>> On 6 Apr 2023, at 23:06, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > >>\n> > > >>> Autovacuum workers, at the end of VacuumUpdateCosts(), check if cost\n> > > >>> limit or cost delay have been changed. If they have, they assert that\n> > > >>> they don't already hold the AutovacuumLock, take it in shared mode, and\n> > > >>> do the logging.\n> > > >>\n> > > >> Another idea would be to copy the values to local temp variables while holding\n> > > >> the lock, and release the lock before calling elog() to avoid holding the lock\n> > > >> over potential IO.\n> > > >\n> > > > Good idea. I've done this in attached v19.\n> > > > Also I looked through the docs and everything still looks correct for\n> > > > balancing algo.\n> > >\n> > > I had another read-through and test-through of this version, and have applied\n> > > it with some minor changes to comments and whitespace. Thanks for the quick\n> > > turnaround times on reviews in this thread!\n> >\n> > Cool!\n> >\n> > Regarding the commit 7d71d3dd08, I have one comment:\n> >\n> > + /* Only log updates to cost-related variables */\n> > + if (vacuum_cost_delay == original_cost_delay &&\n> > + vacuum_cost_limit == original_cost_limit)\n> > + return;\n> >\n> > IIUC by default, we log not only before starting the vacuum but also\n> > when changing cost-related variables. 
Which is good, I think, because\n> > logging the initial values would also be helpful for investigation.\n> > However, I think that we don't log the initial vacuum cost values\n> > depending on the values. For example, if the\n> > autovacuum_vacuum_cost_delay storage option is set to 0, we don't log\n> > the initial values. I think that instead of comparing old and new\n> > values, we can write the log only if\n> > message_level_is_interesting(DEBUG2) is true. That way, we don't need\n> > to acquire the lwlock unnecessarily. And the code looks cleaner to me.\n> > I've attached the patch (use_message_level_is_interesting.patch)\n>\n> Thanks for coming up with the case you thought of with storage param for\n> cost delay = 0. In that case we wouldn't print the message initially and\n> we should fix that.\n>\n> I disagree, however, that we should condition it only on\n> message_level_is_interesting().\n>\n> Actually, outside of printing initial values when the autovacuum worker\n> first starts (before vacuuming all tables), I don't think we should log\n> these values except when they are being updated. Autovacuum workers\n> could vacuum tons of small tables and having this print out at least\n> once per table (which I know is how it is on master) would be\n> distracting. Also, you could be reloading the config to update some\n> other GUCs and be oblivious to an ongoing autovacuum and get these\n> messages printed out, which I would also find distracting.\n>\n> You will have to stare very hard at the logs to tell if your changes to\n> vacuum cost delay and limit took effect when you reload config. 
I think\n> with our changes to update the values more often, we should take the\n> opportunity to make this logging more useful by making it happen only\n> when the values are changed.\n>\n> I would be open to elevating the log level to DEBUG1 for logging only\n> updates and, perhaps, having an option if you set log level to DEBUG2,\n> for example, to always log these values in VacuumUpdateCosts().\n>\n> I'd even argue that, potentially, having the cost-delay related\n> parameters printed at the beginning of vacuuming could be interesting to\n> regular VACUUM as well (even though it doesn't benefit from config\n> reload while in progress).\n>\n> To fix the issue you mentioned and ensure the logging is printed when\n> autovacuum workers start up before vacuuming tables, we could either\n> initialize vacuum_cost_delay and vacuum_cost_limit to something invalid\n> that will always be different than what they are set to in\n> VacuumUpdateCosts() (not sure if this poses a problem for VACUUM using\n> these values since they are set to the defaults for VACUUM). Or, we\n> could duplicate this logging message in do_autovacuum().\n>\n> Finally, one other point about message_level_is_interesting(). I liked\n> the idea of using it a lot, since log level DEBUG2 will not be the\n> common case. I thought of it but hesitated because all other users of\n> message_level_is_interesting() are avoiding some memory allocation or\n> string copying -- not avoiding take a lock. Making this conditioned on\n> log level made me a bit uncomfortable. I can't think of a situation when\n> it would be a problem, but it felt a bit off.\n>\n> > Also, while testing the autovacuum delay with relopt\n> > autovacuum_vacuum_cost_delay = 0, I realized that even if we set\n> > autovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\n> > true. 
wi_dobalance comes from the following expression:\n> >\n> > /*\n> > * If any of the cost delay parameters has been set individually for\n> > * this table, disable the balancing algorithm.\n> > */\n> > tab->at_dobalance =\n> > !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> > avopts->vacuum_cost_delay > 0));\n> >\n> > The initial values of both avopts->vacuum_cost_limit and\n> > avopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\n> > of \"> 0\". Otherwise, we include the autovacuum worker working on a\n> > table whose autovacuum_vacuum_cost_delay is 0 to the balancing\n> > algorithm. Probably this behavior has existed also on back branches\n> > but I haven't checked it yet.\n>\n> Thank you for catching this. Indeed this exists in master since\n> 1021bd6a89b which was backpatched. I checked and it is true all the way\n> back through REL_11_STABLE.\n>\n> Definitely seems worth fixing as it kind of defeats the purpose of the\n> original commit. I wish I had noticed before!\n>\n> Your fix has:\n> !(avopts && (avopts->vacuum_cost_limit >= 0 ||\n> avopts->vacuum_cost_delay >= 0));\n>\n> And though delay is required to be >= 0\n> avopts->vacuum_cost_delay >= 0\n>\n> Limit does not. It can just be > 0.\n>\n> postgres=# create table foo (a int) with (autovacuum_vacuum_cost_limit = 0);\n> ERROR: value 0 out of bounds for option \"autovacuum_vacuum_cost_limit\"\n> DETAIL: Valid values are between \"1\" and \"10000\".\n>\n> Though >= is also fine, the rest of the code in all versions always\n> checks if limit > 0 and delay >= 0 since 0 is a valid value for delay\n> and not for limit. Probably best we keep it consistent (though the whole\n> thing is quite confusing).\n\nI have created an open item for each of these issues on the wiki\n(one for 16 and one under the section \"affects stable branches\").\n\n- Melanie\n\n\n",
"msg_date": "Mon, 10 Apr 2023 20:16:12 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 10:23 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 7 Apr 2023, at 15:07, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > On Fri, Apr 7, 2023 at 2:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >> + /* Only log updates to cost-related variables */\n> >> + if (vacuum_cost_delay == original_cost_delay &&\n> >> + vacuum_cost_limit == original_cost_limit)\n> >> + return;\n> >>\n> >> IIUC by default, we log not only before starting the vacuum but also\n> >> when changing cost-related variables. Which is good, I think, because\n> >> logging the initial values would also be helpful for investigation.\n> >> However, I think that we don't log the initial vacuum cost values\n> >> depending on the values. For example, if the\n> >> autovacuum_vacuum_cost_delay storage option is set to 0, we don't log\n> >> the initial values. I think that instead of comparing old and new\n> >> values, we can write the log only if\n> >> message_level_is_interesting(DEBUG2) is true. That way, we don't need\n> >> to acquire the lwlock unnecessarily. And the code looks cleaner to me.\n> >> I've attached the patch (use_message_level_is_interesting.patch)\n> >\n> > Thanks for coming up with the case you thought of with storage param for\n> > cost delay = 0. In that case we wouldn't print the message initially and\n> > we should fix that.\n> >\n> > I disagree, however, that we should condition it only on\n> > message_level_is_interesting().\n>\n> I think we should keep the logging frequency as committed, but condition taking\n> the lock on message_level_is_interesting().\n>\n> > Actually, outside of printing initial values when the autovacuum worker\n> > first starts (before vacuuming all tables), I don't think we should log\n> > these values except when they are being updated. 
Autovacuum workers\n> > could vacuum tons of small tables and having this print out at least\n> > once per table (which I know is how it is on master) would be\n> > distracting. Also, you could be reloading the config to update some\n> > other GUCs and be oblivious to an ongoing autovacuum and get these\n> > messages printed out, which I would also find distracting.\n> >\n> > You will have to stare very hard at the logs to tell if your changes to\n> > vacuum cost delay and limit took effect when you reload config. I think\n> > with our changes to update the values more often, we should take the\n> > opportunity to make this logging more useful by making it happen only\n> > when the values are changed.\n> >\n\nFor debugging purposes, I think it could also be important information\nthat the cost values are not changed. Personally, I prefer to log the\ncurrent state rather than deciding for ourselves which events are\nimportant. If always logging these values in DEBUG2 had been\ndistracting, we might want to lower it to DEBUG3.\n\n> > I would be open to elevating the log level to DEBUG1 for logging only\n> > updates and, perhaps, having an option if you set log level to DEBUG2,\n> > for example, to always log these values in VacuumUpdateCosts().\n\nI'm not really sure it's a good idea to change the log messages and\nevents depending on elevel. 
Do you know we have any precedents ?\n\n> >\n> > I'd even argue that, potentially, having the cost-delay related\n> > parameters printed at the beginning of vacuuming could be interesting to\n> > regular VACUUM as well (even though it doesn't benefit from config\n> > reload while in progress).\n> >\n> > To fix the issue you mentioned and ensure the logging is printed when\n> > autovacuum workers start up before vacuuming tables, we could either\n> > initialize vacuum_cost_delay and vacuum_cost_limit to something invalid\n> > that will always be different than what they are set to in\n> > VacuumUpdateCosts() (not sure if this poses a problem for VACUUM using\n> > these values since they are set to the defaults for VACUUM). Or, we\n> > could duplicate this logging message in do_autovacuum().\n>\n> Duplicating logging, maybe with a slightly tailored message, seem the least\n> bad option.\n>\n> > Finally, one other point about message_level_is_interesting(). I liked\n> > the idea of using it a lot, since log level DEBUG2 will not be the\n> > common case. I thought of it but hesitated because all other users of\n> > message_level_is_interesting() are avoiding some memory allocation or\n> > string copying -- not avoiding take a lock. Making this conditioned on\n> > log level made me a bit uncomfortable. I can't think of a situation when\n> > it would be a problem, but it felt a bit off.\n>\n> Considering how uncommon DEBUG2 will be in production, I think conditioning\n> taking a lock on it makes sense.\n\nThe comment of message_level_is_interesting() says:\n\n * This is useful to short-circuit any expensive preparatory work that\n * might be needed for a logging message.\n\nWhich can apply to taking a lwlock, I think.\n\n>\n> >> Also, while testing the autovacuum delay with relopt\n> >> autovacuum_vacuum_cost_delay = 0, I realized that even if we set\n> >> autovacuum_vacuum_cost_delay = 0 to a table, wi_dobalance is set to\n> >> true. 
wi_dobalance comes from the following expression:\n> >>\n> >> /*\n> >> * If any of the cost delay parameters has been set individually for\n> >> * this table, disable the balancing algorithm.\n> >> */\n> >> tab->at_dobalance =\n> >> !(avopts && (avopts->vacuum_cost_limit > 0 ||\n> >> avopts->vacuum_cost_delay > 0));\n> >>\n> >> The initial values of both avopts->vacuum_cost_limit and\n> >> avopts->vacuum_cost_delay are -1. I think we should use \">= 0\" instead\n> >> of \"> 0\". Otherwise, we include the autovacuum worker working on a\n> >> table whose autovacuum_vacuum_cost_delay is 0 to the balancing\n> >> algorithm. Probably this behavior has existed also on back branches\n> >> but I haven't checked it yet.\n> >\n> > Thank you for catching this. Indeed this exists in master since\n> > 1021bd6a89b which was backpatched. I checked and it is true all the way\n> > back through REL_11_STABLE.\n\nThanks for checking!\n\n> >\n> > Definitely seems worth fixing as it kind of defeats the purpose of the\n> > original commit. I wish I had noticed before!\n> >\n> > Your fix has:\n> > !(avopts && (avopts->vacuum_cost_limit >= 0 ||\n> > avopts->vacuum_cost_delay >= 0));\n> >\n> > And though delay is required to be >= 0\n> > avopts->vacuum_cost_delay >= 0\n> >\n> > Limit does not. It can just be > 0.\n> >\n> > postgres=# create table foo (a int) with (autovacuum_vacuum_cost_limit = 0);\n> > ERROR: value 0 out of bounds for option \"autovacuum_vacuum_cost_limit\"\n> > DETAIL: Valid values are between \"1\" and \"10000\".\n> >\n> > Though >= is also fine, the rest of the code in all versions always\n> > checks if limit > 0 and delay >= 0 since 0 is a valid value for delay\n> > and not for limit. Probably best we keep it consistent (though the whole\n> > thing is quite confusing).\n>\n> +1\n\n+1. I misunderstood the initial value of autovacuum_vacuum_cost_limit reloption.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 12 Apr 2023 00:05:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 11 Apr 2023, at 17:05, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> The comment of message_level_is_interesting() says:\n> \n> * This is useful to short-circuit any expensive preparatory work that\n> * might be needed for a logging message.\n> \n> Which can apply to taking a lwlock, I think.\n\nI agree that we can, and should, use message_level_is_interesting to skip\ntaking this lock.  Also, the more I think about it the more I'm convinced that we\nshould not change the current logging frequency of once per table from what we\nship today.  In DEBUG2 the logs should tell the whole story without requiring\nextrapolation based on missing entries.  So I think we should use your patch to\nsolve this open item.  If there is interest in reducing the logging frequency\nwe should discuss that in its own thread, instead of it being hidden in here.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 15 Apr 2023 22:40:08 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Wed, Apr 12, 2023 at 12:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 7, 2023 at 10:23 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 7 Apr 2023, at 15:07, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > On Fri, Apr 7, 2023 at 2:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Definitely seems worth fixing as it kind of defeats the purpose of the\n> > > original commit. I wish I had noticed before!\n> > >\n> > > Your fix has:\n> > > !(avopts && (avopts->vacuum_cost_limit >= 0 ||\n> > > avopts->vacuum_cost_delay >= 0));\n> > >\n> > > And though delay is required to be >= 0\n> > > avopts->vacuum_cost_delay >= 0\n> > >\n> > > Limit does not. It can just be > 0.\n> > >\n> > > postgres=# create table foo (a int) with (autovacuum_vacuum_cost_limit = 0);\n> > > ERROR: value 0 out of bounds for option \"autovacuum_vacuum_cost_limit\"\n> > > DETAIL: Valid values are between \"1\" and \"10000\".\n> > >\n> > > Though >= is also fine, the rest of the code in all versions always\n> > > checks if limit > 0 and delay >= 0 since 0 is a valid value for delay\n> > > and not for limit. Probably best we keep it consistent (though the whole\n> > > thing is quite confusing).\n> >\n> > +1\n>\n> +1. I misunderstood the initial value of autovacuum_vacuum_cost_limit reloption.\n\nI've attached an updated patch for fixing at_dobalance condition.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Apr 2023 11:04:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 17 Apr 2023, at 04:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> I've attached an updated patch for fixing at_dobalance condition.\n\nI revisited this and pushed it to all supported branches after another round of\ntesting and reading.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 25 Apr 2023 14:39:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Apr 25, 2023 at 9:39 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 Apr 2023, at 04:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > I've attached an updated patch for fixing at_dobalance condition.\n>\n> I revisited this and pushed it to all supported branches after another round of\n> testing and reading.\n\nThanks!\n\nCan we mark the open item \"Can't disable autovacuum cost delay through\nstorage parameter\" as resolved?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Apr 2023 22:31:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 25 Apr 2023, at 15:31, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> On Tue, Apr 25, 2023 at 9:39 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 17 Apr 2023, at 04:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> \n>>> I've attached an updated patch for fixing at_dobalance condition.\n>> \n>> I revisited this and pushed it to all supported branches after another round of\n>> testing and reading.\n> \n> Thanks!\n> \n> Can we mark the open item \"Can't disable autovacuum cost delay through\n> storage parameter\" as resolved?\n\nYes, I've gone ahead and done that now.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 25 Apr 2023 15:35:45 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Tue, Apr 25, 2023 at 10:35 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 25 Apr 2023, at 15:31, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Apr 25, 2023 at 9:39 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 17 Apr 2023, at 04:04, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >>> I've attached an updated patch for fixing at_dobalance condition.\n> >>\n> >> I revisited this and pushed it to all supported branches after another round of\n> >> testing and reading.\n> >\n> > Thanks!\n> >\n> > Can we mark the open item \"Can't disable autovacuum cost delay through\n> > storage parameter\" as resolved?\n>\n> Yes, I've gone ahead and done that now.\n\nGreat, thank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 26 Apr 2023 00:02:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 6:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> I had another read-through and test-through of this version, and have\napplied\n> it with some minor changes to comments and whitespace. Thanks for the\nquick\n> turnaround times on reviews in this thread!\n\n- VacuumFailsafeActive = false;\n+ Assert(!VacuumFailsafeActive);\n\nI can trigger this assert added in commit 7d71d3dd08.\n\nFirst build with the patch in [1], then:\n\nsession 1:\n\nCREATE EXTENSION xid_wraparound ;\n\nCREATE TABLE autovacuum_disabled(id serial primary key, data text) WITH\n(autovacuum_enabled=false);\nINSERT INTO autovacuum_disabled(data) SELECT generate_series(1,1000);\n\n-- I can trigger without this, but just make sure it doesn't get vacuumed\nBEGIN;\nDELETE FROM autovacuum_disabled WHERE id % 2 = 0;\n\nsession 2:\n\n-- get to failsafe limit\nSELECT consume_xids(1*1000*1000*1000);\nINSERT INTO autovacuum_disabled(data) SELECT 1;\nSELECT consume_xids(1*1000*1000*1000);\nINSERT INTO autovacuum_disabled(data) SELECT 1;\n\nVACUUM autovacuum_disabled;\n\nWARNING: cutoff for removing and freezing tuples is far in the past\nHINT: Close open transactions soon to avoid wraparound problems.\nYou might also need to commit or roll back old prepared transactions, or\ndrop stale replication slots.\nWARNING: bypassing nonessential maintenance of table\n\"john.public.autovacuum_disabled\" as a failsafe after 0 index scans\nDETAIL: The table's relfrozenxid or relminmxid is too far in the past.\nHINT: Consider increasing configuration parameter \"maintenance_work_mem\"\nor \"autovacuum_work_mem\".\nYou might also need to consider other ways for VACUUM to keep up with the\nallocation of transaction IDs.\nserver closed the connection unexpectedly\n\n#0 0x00007ff31f68ebec in __pthread_kill_implementation ()\n from /lib64/libc.so.6\n#1 0x00007ff31f63e956 in raise () from /lib64/libc.so.6\n#2 0x00007ff31f6287f4 in abort () from /lib64/libc.so.6\n#3 
0x0000000000978032 in ExceptionalCondition (\n    conditionName=conditionName@entry=0xa4e970 \"!VacuumFailsafeActive\",\n    fileName=fileName@entry=0xa4da38\n\"../src/backend/access/heap/vacuumlazy.c\", lineNumber=lineNumber@entry=392)\nat ../src/backend/utils/error/assert.c:66\n#4  0x000000000058c598 in heap_vacuum_rel (rel=0x7ff31d8a97d0,\n    params=<optimized out>, bstrategy=<optimized out>)\n    at ../src/backend/access/heap/vacuumlazy.c:392\n#5  0x000000000069af1f in table_relation_vacuum (bstrategy=0x14ddca8,\n    params=0x7ffec28585f0, rel=0x7ff31d8a97d0)\n    at ../src/include/access/tableam.h:1705\n#6  vacuum_rel (relid=relid@entry=16402, relation=relation@entry=0x0,\n    params=params@entry=0x7ffec28585f0, skip_privs=skip_privs@entry=true,\n    bstrategy=bstrategy@entry=0x14ddca8)\n    at ../src/backend/commands/vacuum.c:2202\n#7  0x000000000069b0e4 in vacuum_rel (relid=16398, relation=<optimized\nout>,\n    params=params@entry=0x7ffec2858850, skip_privs=skip_privs@entry=false,\n    bstrategy=bstrategy@entry=0x14ddca8)\n    at ../src/backend/commands/vacuum.c:2236\n#8  0x000000000069c594 in vacuum (relations=0x14dde38,\n    params=0x7ffec2858850, bstrategy=0x14ddca8, vac_context=0x14ddb90,\n    isTopLevel=<optimized out>) at ../src/backend/commands/vacuum.c:623\n\n[1]\nhttps://www.postgresql.org/message-id/CAD21AoAyYBZOiB1UPCPZJHTLk0-arrq5zqNGj%2BPrsbpdUy%3Dg-g%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 Apr 2023 16:29:49 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 27 Apr 2023, at 11:29, John Naylor <john.naylor@enterprisedb.com> wrote:\n> On Fri, Apr 7, 2023 at 6:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > I had another read-through and test-through of this version, and have applied\n> > it with some minor changes to comments and whitespace. Thanks for the quick\n> > turnaround times on reviews in this thread!\n> \n> - VacuumFailsafeActive = false;\n> + Assert(!VacuumFailsafeActive);\n> \n> I can trigger this assert added in commit 7d71d3dd08.\n> \n> First build with the patch in [1], then:\n\nInteresting, thanks for the report! I'll look into it directly.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 27 Apr 2023 11:32:29 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 6:30 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n>\n> On Fri, Apr 7, 2023 at 6:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > I had another read-through and test-through of this version, and have applied\n> > it with some minor changes to comments and whitespace. Thanks for the quick\n> > turnaround times on reviews in this thread!\n>\n> - VacuumFailsafeActive = false;\n> + Assert(!VacuumFailsafeActive);\n>\n> I can trigger this assert added in commit 7d71d3dd08.\n>\n> First build with the patch in [1], then:\n>\n> session 1:\n>\n> CREATE EXTENSION xid_wraparound ;\n>\n> CREATE TABLE autovacuum_disabled(id serial primary key, data text) WITH (autovacuum_enabled=false);\n> INSERT INTO autovacuum_disabled(data) SELECT generate_series(1,1000);\n>\n> -- I can trigger without this, but just make sure it doesn't get vacuumed\n> BEGIN;\n> DELETE FROM autovacuum_disabled WHERE id % 2 = 0;\n>\n> session 2:\n>\n> -- get to failsafe limit\n> SELECT consume_xids(1*1000*1000*1000);\n> INSERT INTO autovacuum_disabled(data) SELECT 1;\n> SELECT consume_xids(1*1000*1000*1000);\n> INSERT INTO autovacuum_disabled(data) SELECT 1;\n>\n> VACUUM autovacuum_disabled;\n>\n> WARNING: cutoff for removing and freezing tuples is far in the past\n> HINT: Close open transactions soon to avoid wraparound problems.\n> You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\n> WARNING: bypassing nonessential maintenance of table \"john.public.autovacuum_disabled\" as a failsafe after 0 index scans\n> DETAIL: The table's relfrozenxid or relminmxid is too far in the past.\n> HINT: Consider increasing configuration parameter \"maintenance_work_mem\" or \"autovacuum_work_mem\".\n> You might also need to consider other ways for VACUUM to keep up with the allocation of transaction IDs.\n> server closed the connection unexpectedly\n>\n> #0 0x00007ff31f68ebec in __pthread_kill_implementation 
()\n>   from /lib64/libc.so.6\n> #1  0x00007ff31f63e956 in raise () from /lib64/libc.so.6\n> #2  0x00007ff31f6287f4 in abort () from /lib64/libc.so.6\n> #3  0x0000000000978032 in ExceptionalCondition (\n>     conditionName=conditionName@entry=0xa4e970 \"!VacuumFailsafeActive\",\n>     fileName=fileName@entry=0xa4da38 \"../src/backend/access/heap/vacuumlazy.c\", lineNumber=lineNumber@entry=392) at ../src/backend/utils/error/assert.c:66\n> #4  0x000000000058c598 in heap_vacuum_rel (rel=0x7ff31d8a97d0,\n>     params=<optimized out>, bstrategy=<optimized out>)\n>     at ../src/backend/access/heap/vacuumlazy.c:392\n> #5  0x000000000069af1f in table_relation_vacuum (bstrategy=0x14ddca8,\n>     params=0x7ffec28585f0, rel=0x7ff31d8a97d0)\n>     at ../src/include/access/tableam.h:1705\n> #6  vacuum_rel (relid=relid@entry=16402, relation=relation@entry=0x0,\n>     params=params@entry=0x7ffec28585f0, skip_privs=skip_privs@entry=true,\n>     bstrategy=bstrategy@entry=0x14ddca8)\n>     at ../src/backend/commands/vacuum.c:2202\n> #7  0x000000000069b0e4 in vacuum_rel (relid=16398, relation=<optimized out>,\n>     params=params@entry=0x7ffec2858850, skip_privs=skip_privs@entry=false,\n>     bstrategy=bstrategy@entry=0x14ddca8)\n>     at ../src/backend/commands/vacuum.c:2236\n\nGood catch. I think the problem is that vacuum_rel() is called\nrecursively and we don't reset VacuumFailsafeActive before vacuuming\nthe toast table. I think we should reset it in heap_vacuum_rel()\ninstead of Assert(). It's possible that we trigger the failsafe mode\nonly for either one. Please find the attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 27 Apr 2023 21:10:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 27 Apr 2023, at 14:10, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> On Thu, Apr 27, 2023 at 6:30 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n>> \n>> \n>> On Fri, Apr 7, 2023 at 6:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> \n>>> I had another read-through and test-through of this version, and have applied\n>>> it with some minor changes to comments and whitespace. Thanks for the quick\n>>> turnaround times on reviews in this thread!\n>> \n>> - VacuumFailsafeActive = false;\n>> + Assert(!VacuumFailsafeActive);\n>> \n>> I can trigger this assert added in commit 7d71d3dd08.\n>> \n>> First build with the patch in [1], then:\n>> \n>> session 1:\n>> \n>> CREATE EXTENSION xid_wraparound ;\n>> \n>> CREATE TABLE autovacuum_disabled(id serial primary key, data text) WITH (autovacuum_enabled=false);\n>> INSERT INTO autovacuum_disabled(data) SELECT generate_series(1,1000);\n>> \n>> -- I can trigger without this, but just make sure it doesn't get vacuumed\n>> BEGIN;\n>> DELETE FROM autovacuum_disabled WHERE id % 2 = 0;\n>> \n>> session 2:\n>> \n>> -- get to failsafe limit\n>> SELECT consume_xids(1*1000*1000*1000);\n>> INSERT INTO autovacuum_disabled(data) SELECT 1;\n>> SELECT consume_xids(1*1000*1000*1000);\n>> INSERT INTO autovacuum_disabled(data) SELECT 1;\n>> \n>> VACUUM autovacuum_disabled;\n>> \n>> WARNING: cutoff for removing and freezing tuples is far in the past\n>> HINT: Close open transactions soon to avoid wraparound problems.\n>> You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\n>> WARNING: bypassing nonessential maintenance of table \"john.public.autovacuum_disabled\" as a failsafe after 0 index scans\n>> DETAIL: The table's relfrozenxid or relminmxid is too far in the past.\n>> HINT: Consider increasing configuration parameter \"maintenance_work_mem\" or \"autovacuum_work_mem\".\n>> You might also need to consider other ways for VACUUM to keep up with the 
allocation of transaction IDs.\n>> server closed the connection unexpectedly\n>> \n>> #0 0x00007ff31f68ebec in __pthread_kill_implementation ()\n>> from /lib64/libc.so.6\n>> #1 0x00007ff31f63e956 in raise () from /lib64/libc.so.6\n>> #2 0x00007ff31f6287f4 in abort () from /lib64/libc.so.6\n>> #3 0x0000000000978032 in ExceptionalCondition (\n>> conditionName=conditionName@entry=0xa4e970 \"!VacuumFailsafeActive\",\n>> fileName=fileName@entry=0xa4da38 \"../src/backend/access/heap/vacuumlazy.c\", lineNumber=lineNumber@entry=392) at ../src/backend/utils/error/assert.c:66\n>> #4 0x000000000058c598 in heap_vacuum_rel (rel=0x7ff31d8a97d0,\n>> params=<optimized out>, bstrategy=<optimized out>)\n>> at ../src/backend/access/heap/vacuumlazy.c:392\n>> #5 0x000000000069af1f in table_relation_vacuum (bstrategy=0x14ddca8,\n>> params=0x7ffec28585f0, rel=0x7ff31d8a97d0)\n>> at ../src/include/access/tableam.h:1705\n>> #6 vacuum_rel (relid=relid@entry=16402, relation=relation@entry=0x0,\n>> params=params@entry=0x7ffec28585f0, skip_privs=skip_privs@entry=true,\n>> bstrategy=bstrategy@entry=0x14ddca8)\n>> at ../src/backend/commands/vacuum.c:2202\n>> #7 0x000000000069b0e4 in vacuum_rel (relid=16398, relation=<optimized out>,\n>> params=params@entry=0x7ffec2858850, skip_privs=skip_privs@entry=false,\n>> bstrategy=bstrategy@entry=0x14ddca8)\n>> at ../src/backend/commands/vacuum.c:2236\n> \n> Good catch. I think the problem is that vacuum_rel() is called\n> recursively and we don't reset VacuumFailsafeActive before vacuuming\n> the toast table. I think we should reset it in heap_vacuum_rel()\n> instead of Assert(). It's possible that we trigger the failsafe mode\n> only for either one.Please find the attached patch.\n\nAgreed, that matches my research and testing, I have the same diff here and it\npasses testing and works as intended. This was briefly discussed in [0] and\nslightly upthread from there but then missed. 
I will do some more looking and\ntesting but I'm fairly sure this is the right fix, so unless I find something\nelse I will go ahead with this.\n\nxid_wraparound is a really nifty testing tool. Very cool.\n\n--\nDaniel Gustafsson\n\n[0] CAAKRu_b1HjGCTsFpUnmwLNS8NeXJ+JnrDLhT1osP+Gq9HCU+Rw@mail.gmail.com\n\n",
"msg_date": "Thu, 27 Apr 2023 14:54:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "On Thu, Apr 27, 2023 at 8:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 27 Apr 2023, at 14:10, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 27, 2023 at 6:30 PM John Naylor\n> > <john.naylor@enterprisedb.com> wrote:\n> >>\n> >>\n> >> On Fri, Apr 7, 2023 at 6:08 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>>\n> >>> I had another read-through and test-through of this version, and have applied\n> >>> it with some minor changes to comments and whitespace. Thanks for the quick\n> >>> turnaround times on reviews in this thread!\n> >>\n> >> - VacuumFailsafeActive = false;\n> >> + Assert(!VacuumFailsafeActive);\n> >>\n> >> I can trigger this assert added in commit 7d71d3dd08.\n> >>\n> >> First build with the patch in [1], then:\n> >>\n> >> session 1:\n> >>\n> >> CREATE EXTENSION xid_wraparound ;\n> >>\n> >> CREATE TABLE autovacuum_disabled(id serial primary key, data text) WITH (autovacuum_enabled=false);\n> >> INSERT INTO autovacuum_disabled(data) SELECT generate_series(1,1000);\n> >>\n> >> -- I can trigger without this, but just make sure it doesn't get vacuumed\n> >> BEGIN;\n> >> DELETE FROM autovacuum_disabled WHERE id % 2 = 0;\n> >>\n> >> session 2:\n> >>\n> >> -- get to failsafe limit\n> >> SELECT consume_xids(1*1000*1000*1000);\n> >> INSERT INTO autovacuum_disabled(data) SELECT 1;\n> >> SELECT consume_xids(1*1000*1000*1000);\n> >> INSERT INTO autovacuum_disabled(data) SELECT 1;\n> >>\n> >> VACUUM autovacuum_disabled;\n> >>\n> >> WARNING: cutoff for removing and freezing tuples is far in the past\n> >> HINT: Close open transactions soon to avoid wraparound problems.\n> >> You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\n> >> WARNING: bypassing nonessential maintenance of table \"john.public.autovacuum_disabled\" as a failsafe after 0 index scans\n> >> DETAIL: The table's relfrozenxid or relminmxid is too far in the past.\n> >> HINT: Consider increasing 
configuration parameter \"maintenance_work_mem\" or \"autovacuum_work_mem\".\n> >> You might also need to consider other ways for VACUUM to keep up with the allocation of transaction IDs.\n> >> server closed the connection unexpectedly\n> >>\n> >> #0 0x00007ff31f68ebec in __pthread_kill_implementation ()\n> >> from /lib64/libc.so.6\n> >> #1 0x00007ff31f63e956 in raise () from /lib64/libc.so.6\n> >> #2 0x00007ff31f6287f4 in abort () from /lib64/libc.so.6\n> >> #3 0x0000000000978032 in ExceptionalCondition (\n> >> conditionName=conditionName@entry=0xa4e970 \"!VacuumFailsafeActive\",\n> >> fileName=fileName@entry=0xa4da38 \"../src/backend/access/heap/vacuumlazy.c\", lineNumber=lineNumber@entry=392) at ../src/backend/utils/error/assert.c:66\n> >> #4 0x000000000058c598 in heap_vacuum_rel (rel=0x7ff31d8a97d0,\n> >> params=<optimized out>, bstrategy=<optimized out>)\n> >> at ../src/backend/access/heap/vacuumlazy.c:392\n> >> #5 0x000000000069af1f in table_relation_vacuum (bstrategy=0x14ddca8,\n> >> params=0x7ffec28585f0, rel=0x7ff31d8a97d0)\n> >> at ../src/include/access/tableam.h:1705\n> >> #6 vacuum_rel (relid=relid@entry=16402, relation=relation@entry=0x0,\n> >> params=params@entry=0x7ffec28585f0, skip_privs=skip_privs@entry=true,\n> >> bstrategy=bstrategy@entry=0x14ddca8)\n> >> at ../src/backend/commands/vacuum.c:2202\n> >> #7 0x000000000069b0e4 in vacuum_rel (relid=16398, relation=<optimized out>,\n> >> params=params@entry=0x7ffec2858850, skip_privs=skip_privs@entry=false,\n> >> bstrategy=bstrategy@entry=0x14ddca8)\n> >> at ../src/backend/commands/vacuum.c:2236\n> >\n> > Good catch. I think the problem is that vacuum_rel() is called\n> > recursively and we don't reset VacuumFailsafeActive before vacuuming\n> > the toast table. I think we should reset it in heap_vacuum_rel()\n> > instead of Assert(). 
It's possible that we trigger the failsafe mode\n> > only for either one.Please find the attached patch.\n>\n> Agreed, that matches my research and testing, I have the same diff here and it\n> passes testing and works as intended.  This was briefly discussed in [0] and\n> slightly upthread from there but then missed.  I will do some more looking and\n> testing but I'm fairly sure this is the right fix, so unless I find something\n> else I will go ahead with this.\n>\n> xid_wraparound is a really nifty testing tool. Very cool.\n\nMakes sense to me too.\n\nFix LGTM.\nThough we previously set it to false before this series of patches,\nperhaps it is\nworth adding a comment about why VacuumFailsafeActive must be reset here\neven though we reset it before vacuuming each table?\n\n- Melanie\n\n\n",
"msg_date": "Thu, 27 Apr 2023 10:53:01 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 27 Apr 2023, at 16:53, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> On Thu, Apr 27, 2023 at 8:55 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 27 Apr 2023, at 14:10, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n>>> Good catch. I think the problem is that vacuum_rel() is called\n>>> recursively and we don't reset VacuumFailsafeActive before vacuuming\n>>> the toast table. I think we should reset it in heap_vacuum_rel()\n>>> instead of Assert(). It's possible that we trigger the failsafe mode\n>>> only for either one.Please find the attached patch.\n>> \n>> Agreed, that matches my research and testing, I have the same diff here and it\n>> passes testing and works as intended.  This was briefly discussed in [0] and\n>> slightly upthread from there but then missed.  I will do some more looking and\n>> testing but I'm fairly sure this is the right fix, so unless I find something\n>> else I will go ahead with this.\n>> \n>> xid_wraparound is a really nifty testing tool. Very cool.\n> \n> Makes sense to me too.\n> \n> Fix LGTM.\n\nThanks for review.  I plan to push this in the morning.\n\n> Though we previously set it to false before this series of patches,\n> perhaps it is\n> worth adding a comment about why VacuumFailsafeActive must be reset here\n> even though we reset it before vacuuming each table?\n\nAgreed.  \n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 27 Apr 2023 23:25:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
},
{
"msg_contents": "> On 27 Apr 2023, at 23:25, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 27 Apr 2023, at 16:53, Melanie Plageman <melanieplageman@gmail.com> wrote:\n\n>> Fix LGTM.\n> \n> Thanks for review. I plan to push this in the morning.\n\nDone, thanks.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 28 Apr 2023 12:52:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Should vacuum process config file reload more often"
}
] |
[
{
"msg_contents": "On Wed, Feb 22, 2023 at 09:48:10PM +1300, Thomas Munro wrote:\n> On Tue, Feb 21, 2023 at 5:50 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I'm happy to create a new thread if needed, but I can't tell if there is\n>> any interest in this stopgap/back-branch fix. Perhaps we should just jump\n>> straight to the long-term fix that Thomas is looking into.\n> \n> Unfortunately the latch-friendly subprocess module proposal I was\n> talking about would be for 17. I may post a thread fairly soon with\n> design ideas + list of problems and decision points as I see them, and\n> hopefully some sketch code, but it won't be a proposal for [/me checks\n> calendar] next week's commitfest and probably wouldn't be appropriate\n> in a final commitfest anyway, and I also have some other existing\n> stuff to clear first. So please do continue with the stopgap ideas.\n\nOkay, here is a new thread...\n\nSince v8.4, the startup process will proc_exit() immediately within its\nSIGTERM handler while the restore_command executes via system(). Some\nrecent changes added unsafe code to the section where this behavior is\nenabled [0]. The long-term fix likely includes moving away from system()\ncompletely, but we may want to have a stopgap/back-branch fix while that is\nunder development.\n\nI've attached a patch set for a proposed stopgap fix. 0001 simply moves\nthe extra code outside of the Pre/PostRestoreCommand() block so that only\nsystem() is executed while the SIGTERM handler might proc_exit(). This\nrestores the behavior that was in place from v8.4 to v14, so I don't expect\nit to be too controversial. 0002 adds code to startup's SIGTERM handler to\ncall _exit() instead of proc_exit() if we are in a forked process from\nsystem(), etc. It also adds assertions to ensure proc_exit(), ProcKill(),\nand AuxiliaryProcKill() are not called within such forked processes.\n\nThoughts?\n\n[0] https://postgr.es/m/20230201105514.rsjl4bnhb65giyvo%40alap3.anarazel.de\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Feb 2023 15:15:03 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 12:15 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> Thoughts?\n\nI think you should have a trailing \\n when writing to stderr.\n\nHere's that reproducer I speculated about (sorry I confused SIGQUIT\nand SIGTERM in my earlier email, ENOCOFFEE). Seems to do the job, and\nI tested on a Linux box for good measure. If you comment out the\nkill(), \"check PROVE_TESTS=t/002_archiving.pl\" works fine\n(demonstrating that that definition of system() works fine). With the\nkill(), it reliably reaches 'TRAP: failed Assert(\"latch->owner_pid ==\nMyProcPid\")' without your patch, and with your patch it avoids it. (I\nbelieve glibc's system() could reach it too with the right timing, but\nI didn't try, my point being that the use of the OpenBSD system() here\nis only because it's easier to grok and to wrangle.)",
"msg_date": "Fri, 24 Feb 2023 13:25:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 1:25 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ENOCOFFEE\n\nErm, I realised after sending that I'd accidentally sent a version\nthat uses fork() anyway, and now if I change it back to vfork() it\ndoesn't fail the way I wanted to demonstrate, at least on Linux. I\ndon't have time or desire to dig into how Linux vfork() really works\nso I'll leave it at that... but the patch as posted does seem to be a\nuseful tool for understanding this failure... please just ignore the\nconfused comments about fork() vs vfork() therein.\n\n\n",
"msg_date": "Fri, 24 Feb 2023 14:15:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 01:25:01PM +1300, Thomas Munro wrote:\n> I think you should have a trailing \\n when writing to stderr.\n\nOops. I added that in v7.\n\n> Here's that reproducer I speculated about (sorry I confused SIGQUIT\n> and SIGTERM in my earlier email, ENOCOFFEE). Seems to do the job, and\n> I tested on a Linux box for good measure. If you comment out the\n> kill(), \"check PROVE_TESTS=t/002_archiving.pl\" works fine\n> (demonstrating that that definition of system() works fine). With the\n> kill(), it reliably reaches 'TRAP: failed Assert(\"latch->owner_pid ==\n> MyProcPid\")' without your patch, and with your patch it avoids it. (I\n> believe glibc's system() could reach it too with the right timing, but\n> I didn't try, my point being that the use of the OpenBSD system() here\n> is only because it's easier to grok and to wrangle.)\n\nThanks for providing the reproducer! I am seeing the behavior that you\ndescribed on my Linux machine.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Feb 2023 20:33:23 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-23 20:33:23 -0800, Nathan Bossart wrote:> \n> \tif (in_restore_command)\n> -\t\tproc_exit(1);\n> +\t{\n> +\t\t/*\n> +\t\t * If we are in a child process (e.g., forked by system() in\n> +\t\t * RestoreArchivedFile()), we don't want to call any exit callbacks.\n> +\t\t * The parent will take care of that.\n> +\t\t */\n> +\t\tif (MyProcPid == (int) getpid())\n> +\t\t\tproc_exit(1);\n> +\t\telse\n> +\t\t{\n> +\t\t\tconst char\tmsg[] = \"StartupProcShutdownHandler() called in child process\\n\";\n> +\t\t\tint\t\t\trc pg_attribute_unused();\n> +\n> +\t\t\trc = write(STDERR_FILENO, msg, sizeof(msg));\n> +\t\t\t_exit(1);\n> +\t\t}\n> +\t}\n\nWhy do we need that rc variable? Don't we normally get away with (void)\nwrite(...)?\n\n\n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index 22b4278610..e3da0622d7 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -805,6 +805,7 @@ ProcKill(int code, Datum arg)\n> \tdlist_head *procgloballist;\n> \n> \tAssert(MyProc != NULL);\n> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n> \n> \t/* Make sure we're out of the sync rep lists */\n> \tSyncRepCleanupAtProcExit();\n> @@ -925,6 +926,7 @@ AuxiliaryProcKill(int code, Datum arg)\n> \tPGPROC\t *proc;\n> \n> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n> \n> \tauxproc = &AuxiliaryProcs[proctype];\n> \n> -- \n> 2.25.1\n\nI think the much more interesting assertion here would be to check that\nMyProc->pid equals the current pid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:07:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 11:07:42AM -0800, Andres Freund wrote:\n> On 2023-02-23 20:33:23 -0800, Nathan Bossart wrote:> \n>> \tif (in_restore_command)\n>> -\t\tproc_exit(1);\n>> +\t{\n>> +\t\t/*\n>> +\t\t * If we are in a child process (e.g., forked by system() in\n>> +\t\t * RestoreArchivedFile()), we don't want to call any exit callbacks.\n>> +\t\t * The parent will take care of that.\n>> +\t\t */\n>> +\t\tif (MyProcPid == (int) getpid())\n>> +\t\t\tproc_exit(1);\n>> +\t\telse\n>> +\t\t{\n>> +\t\t\tconst char\tmsg[] = \"StartupProcShutdownHandler() called in child process\\n\";\n>> +\t\t\tint\t\t\trc pg_attribute_unused();\n>> +\n>> +\t\t\trc = write(STDERR_FILENO, msg, sizeof(msg));\n>> +\t\t\t_exit(1);\n>> +\t\t}\n>> +\t}\n> \n> Why do we need that rc variable? Don't we normally get away with (void)\n> write(...)?\n\nMy compiler complains about that. :/\n\n\t../postgresql/src/backend/postmaster/startup.c: In function ‘StartupProcShutdownHandler’:\n\t../postgresql/src/backend/postmaster/startup.c:139:11: error: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Werror=unused-result]\n\t 139 | (void) write(STDERR_FILENO, msg, sizeof(msg));\n\t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\tcc1: all warnings being treated as errors\n\n>> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n>> index 22b4278610..e3da0622d7 100644\n>> --- a/src/backend/storage/lmgr/proc.c\n>> +++ b/src/backend/storage/lmgr/proc.c\n>> @@ -805,6 +805,7 @@ ProcKill(int code, Datum arg)\n>> \tdlist_head *procgloballist;\n>> \n>> \tAssert(MyProc != NULL);\n>> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n>> \n>> \t/* Make sure we're out of the sync rep lists */\n>> \tSyncRepCleanupAtProcExit();\n>> @@ -925,6 +926,7 @@ AuxiliaryProcKill(int code, Datum arg)\n>> \tPGPROC\t *proc;\n>> \n>> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n>> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n>> \n>> \tauxproc = &AuxiliaryProcs[proctype];\n>> \n>> -- \n>> 2.25.1\n> \n> I think the much more interesting assertion here would be to check that\n> MyProc->pid equals the current pid.\n\nI don't mind changing this, but why is this a more interesting assertion?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:28:25 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 11:28:25AM -0800, Nathan Bossart wrote:\n> On Sat, Feb 25, 2023 at 11:07:42AM -0800, Andres Freund wrote:\n>> I think the much more interesting assertion here would be to check that\n>> MyProc->pid equals the current pid.\n> \n> I don't mind changing this, but why is this a more interesting assertion?\n\nHere is a new patch set with this change.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 25 Feb 2023 11:39:29 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-25 11:28:25 -0800, Nathan Bossart wrote:\n> On Sat, Feb 25, 2023 at 11:07:42AM -0800, Andres Freund wrote:\n> > Why do we need that rc variable? Don't we normally get away with (void)\n> > write(...)?\n> \n> My compiler complains about that. :/\n> \n> \t../postgresql/src/backend/postmaster/startup.c: In function ‘StartupProcShutdownHandler’:\n> \t../postgresql/src/backend/postmaster/startup.c:139:11: error: ignoring return value of ‘write’, declared with attribute warn_unused_result [-Werror=unused-result]\n> \t 139 | (void) write(STDERR_FILENO, msg, sizeof(msg));\n> \t | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n> \tcc1: all warnings being treated as errors\n\nIck. I guess we've already encountered this, because we've apparently removed\nall the (void) write cases. Which I am certain we had at some point. We still\ndo it for a bunch of other functions though. Ah, yes: aa90e148ca7,\n27314d32a88, 6c72a28e5ce etc.\n\nI think I opined on this before, but we really ought to have a function to do\nsome minimal signal safe output. Implemented centrally, instead of being open\ncoded in a bunch of places.\n\n\n> >> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> >> index 22b4278610..e3da0622d7 100644\n> >> --- a/src/backend/storage/lmgr/proc.c\n> >> +++ b/src/backend/storage/lmgr/proc.c\n> >> @@ -805,6 +805,7 @@ ProcKill(int code, Datum arg)\n> >> \tdlist_head *procgloballist;\n> >> \n> >> \tAssert(MyProc != NULL);\n> >> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n> >> \n> >> \t/* Make sure we're out of the sync rep lists */\n> >> \tSyncRepCleanupAtProcExit();\n> >> @@ -925,6 +926,7 @@ AuxiliaryProcKill(int code, Datum arg)\n> >> \tPGPROC\t *proc;\n> >> \n> >> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n> >> +\tAssert(MyProcPid == (int) getpid()); /* not safe if forked by system(), etc. */\n> >> \n> >> \tauxproc = &AuxiliaryProcs[proctype];\n> >> \n> >> -- \n> >> 2.25.1\n> > \n> > I think the much more interesting assertion here would be to check that\n> > MyProc->pid equals the current pid.\n> \n> I don't mind changing this, but why is this a more interesting assertion?\n\nBecause we so far have little to no protection against some state corruption\nleading to releasing PGPROC that's not ours. I didn't actually mean that we\nshouldn't check that MyProcPid == (int) getpid(), just that the much more\ninteresting case to check is that MyProc->pid matches, because that protect\nagainst multiple releases, releasing the wrong slot, etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:52:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 11:52:53AM -0800, Andres Freund wrote:\n> I think I opined on this before, but we really ought to have a function to do\n> some minimal signal safe output. Implemented centrally, instead of being open\n> coded in a bunch of places.\n\nWhile looking around for the right place to put this, I noticed that\nthere's a write_stderr() function in elog.c that we might be able to use.\nI used that in v9. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 25 Feb 2023 14:06:29 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-25 14:06:29 -0800, Nathan Bossart wrote:\n> On Sat, Feb 25, 2023 at 11:52:53AM -0800, Andres Freund wrote:\n> > I think I opined on this before, but we really ought to have a function to do\n> > some minimal signal safe output. Implemented centrally, instead of being open\n> > coded in a bunch of places.\n> \n> While looking around for the right place to put this, I noticed that\n> there's a write_stderr() function in elog.c that we might be able to use.\n> I used that in v9. WDYT?\n\nwrite_stderr() isn't signal safe, from what I can tell.\n\n\n",
"msg_date": "Sun, 26 Feb 2023 10:00:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 10:00:29AM -0800, Andres Freund wrote:\n> On 2023-02-25 14:06:29 -0800, Nathan Bossart wrote:\n>> On Sat, Feb 25, 2023 at 11:52:53AM -0800, Andres Freund wrote:\n>> > I think I opined on this before, but we really ought to have a function to do\n>> > some minimal signal safe output. Implemented centrally, instead of being open\n>> > coded in a bunch of places.\n>> \n>> While looking around for the right place to put this, I noticed that\n>> there's a write_stderr() function in elog.c that we might be able to use.\n>> I used that in v9. WDYT?\n> \n> write_stderr() isn't signal safe, from what I can tell.\n\n*facepalm* Sorry.\n\nWhat precisely did you have in mind? AFAICT you are asking for a wrapper\naround write().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 26 Feb 2023 11:39:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 11:39:00 -0800, Nathan Bossart wrote:\n> On Sun, Feb 26, 2023 at 10:00:29AM -0800, Andres Freund wrote:\n> > On 2023-02-25 14:06:29 -0800, Nathan Bossart wrote:\n> >> On Sat, Feb 25, 2023 at 11:52:53AM -0800, Andres Freund wrote:\n> >> > I think I opined on this before, but we really ought to have a function to do\n> >> > some minimal signal safe output. Implemented centrally, instead of being open\n> >> > coded in a bunch of places.\n> >> \n> >> While looking around for the right place to put this, I noticed that\n> >> there's a write_stderr() function in elog.c that we might be able to use.\n> >> I used that in v9. WDYT?\n> > \n> > write_stderr() isn't signal safe, from what I can tell.\n> \n> *facepalm* Sorry.\n> \n> What precisely did you have in mind? AFAICT you are asking for a wrapper\n> around write().\n\nPartially I just want something that can easily be searched for, that can have\ncomments attached to it documenting why what it is doing is safe.\n\nIt'd not be a huge amount of work to have a slow and restricted string\ninterpolation support, to make it easier to write messages. Converting floats\nis probably too hard to do safely, and I'm not sure %m can safely be\nsupported. But basic things like %d would be pretty simple.\n\nBasically a loop around the format string that directly writes to stderr using\nwrite(), and only supports a signal safe subset of normal format strings.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 12:12:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 12:12:27PM -0800, Andres Freund wrote:\n> On 2023-02-26 11:39:00 -0800, Nathan Bossart wrote:\n>> What precisely did you have in mind? AFAICT you are asking for a wrapper\n>> around write().\n> \n> Partially I just want something that can easily be searched for, that can have\n> comments attached to it documenting why what it is doing is safe.\n> \n> It'd not be a huge amount of work to have a slow and restricted string\n> interpolation support, to make it easier to write messages. Converting floats\n> is probably too hard to do safely, and I'm not sure %m can safely be\n> supported. But basic things like %d would be pretty simple.\n> \n> Basically a loop around the format string that directly writes to stderr using\n> write(), and only supports a signal safe subset of normal format strings.\n\nGot it, thanks. I will try to put something together along these lines,\nalthough I don't know if I'll pick up the interpolation support in this\nthread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 20:36:03 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 08:36:03PM -0800, Nathan Bossart wrote:\n> On Sun, Feb 26, 2023 at 12:12:27PM -0800, Andres Freund wrote:\n>> Partially I just want something that can easily be searched for, that can have\n>> comments attached to it documenting why what it is doing is safe.\n>> \n>> It'd not be a huge amount of work to have a slow and restricted string\n>> interpolation support, to make it easier to write messages. Converting floats\n>> is probably too hard to do safely, and I'm not sure %m can safely be\n>> supported. But basic things like %d would be pretty simple.\n>> \n>> Basically a loop around the format string that directly writes to stderr using\n>> write(), and only supports a signal safe subset of normal format strings.\n> \n> Got it, thanks. I will try to put something together along these lines,\n> although I don't know if I'll pick up the interpolation support in this\n> thread.\n\nHere is an attempt at adding a signal safe function for writing to STDERR.\n\nI didn't add support for format strings, but looking ahead, I think one\nchallenge will be avoiding va_start() and friends. In any case, IMO format\nstring support probably deserves its own thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 14:47:51 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-01 14:47:51 -0800, Nathan Bossart wrote:\n> On Tue, Feb 28, 2023 at 08:36:03PM -0800, Nathan Bossart wrote:\n> > On Sun, Feb 26, 2023 at 12:12:27PM -0800, Andres Freund wrote:\n> >> Partially I just want something that can easily be searched for, that can have\n> >> comments attached to it documenting why what it is doing is safe.\n> >> \n> >> It'd not be a huge amount of work to have a slow and restricted string\n> >> interpolation support, to make it easier to write messages. Converting floats\n> >> is probably too hard to do safely, and I'm not sure %m can safely be\n> >> supported. But basic things like %d would be pretty simple.\n> >> \n> >> Basically a loop around the format string that directly writes to stderr using\n> >> write(), and only supports a signal safe subset of normal format strings.\n> > \n> > Got it, thanks. I will try to put something together along these lines,\n> > although I don't know if I'll pick up the interpolation support in this\n> > thread.\n> \n> Here is an attempt at adding a signal safe function for writing to STDERR.\n\nCool.\n\n> I didn't add support for format strings, but looking ahead, I think one\n> challenge will be avoiding va_start() and friends. In any case, IMO format\n> string support probably deserves its own thread.\n\nMakes sense to split that off.\n\nFWIW, I think we could rely on va_start() et al to be signal safe. The\nstandardese isn't super clear about this, because they aren't functions, and\nposix only talks about functions being async signal safe...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:13:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 03:13:04PM -0800, Andres Freund wrote:\n> FWIW, I think we could rely on va_start() et al to be signal safe. The\n> standardese isn't super clear about this, because they aren't functions, and\n> posix only talks about functions being async signal safe...\n\nGood to know. I couldn't tell whether that was a safe assumption from\nbriefly reading around.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:26:33 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 03:13:04PM -0800, Andres Freund wrote:\n> On 2023-03-01 14:47:51 -0800, Nathan Bossart wrote:\n>> Here is an attempt at adding a signal safe function for writing to STDERR.\n> \n> Cool.\n\nI'm gently bumping this thread to see if anyone had additional feedback on\nthe proposed patches [0]. The intent was to back-patch these as needed and\nto pursue a long-term fix in v17. Are there any concerns with that?\n\n[0] https://postgr.es/m/20230301224751.GA1823946%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 16:07:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On 22.04.23 01:07, Nathan Bossart wrote:\n> On Wed, Mar 01, 2023 at 03:13:04PM -0800, Andres Freund wrote:\n>> On 2023-03-01 14:47:51 -0800, Nathan Bossart wrote:\n>>> Here is an attempt at adding a signal safe function for writing to STDERR.\n>>\n>> Cool.\n> \n> I'm gently bumping this thread to see if anyone had additional feedback on\n> the proposed patches [0]. The intent was to back-patch these as needed and\n> to pursue a long-term fix in v17. Are there any concerns with that?\n> \n> [0] https://postgr.es/m/20230301224751.GA1823946%40nathanxps13\n\nIs this still being contemplated? What is the status of this?\n\n\n",
"msg_date": "Sun, 1 Oct 2023 20:50:15 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Sun, Oct 01, 2023 at 08:50:15PM +0200, Peter Eisentraut wrote:\n> Is this still being contemplated? What is the status of this?\n\nI'll plan on committing this in the next couple of days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 09:52:11 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On 2023-03-01 14:47:51 -0800, Nathan Bossart wrote:\n> Subject: [PATCH v10 1/2] Move extra code out of the Pre/PostRestoreCommand()\n> block.\n\nLGTM\n\n\n> From fb6957da01f11b75d1a1966f32b00e2e77c789a0 Mon Sep 17 00:00:00 2001\n> From: Nathan Bossart <nathandbossart@gmail.com>\n> Date: Tue, 14 Feb 2023 09:44:53 -0800\n> Subject: [PATCH v10 2/2] Don't proc_exit() in startup's SIGTERM handler if\n> forked by system().\n> \n> Instead, emit a message to STDERR and _exit() in this case. This\n> change also adds assertions to proc_exit(), ProcKill(), and\n> AuxiliaryProcKill() to verify that these functions are not called\n> by a process forked by system(), etc.\n> ---\n> src/backend/postmaster/startup.c | 17 ++++++++++++++++-\n> src/backend/storage/ipc/ipc.c | 3 +++\n> src/backend/storage/lmgr/proc.c | 2 ++\n> src/backend/utils/error/elog.c | 28 ++++++++++++++++++++++++++++\n> src/include/utils/elog.h | 6 +-----\n> 5 files changed, 50 insertions(+), 6 deletions(-)\n\n\n\n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index 22b4278610..b9e2c3aafe 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -805,6 +805,7 @@ ProcKill(int code, Datum arg)\n> \tdlist_head *procgloballist;\n> \n> \tAssert(MyProc != NULL);\n> +\tAssert(MyProc->pid == (int) getpid()); /* not safe if forked by system(), etc. */\n> \n> \t/* Make sure we're out of the sync rep lists */\n> \tSyncRepCleanupAtProcExit();\n> @@ -925,6 +926,7 @@ AuxiliaryProcKill(int code, Datum arg)\n> \tPGPROC\t *proc;\n> \n> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n> +\tAssert(MyProc->pid == (int) getpid()); /* not safe if forked by system(), etc. */\n> \n> \tauxproc = &AuxiliaryProcs[proctype];\n> \n\nI'd make these elog(PANIC), I think. The paths are not performance critical\nenough that a single branch hurts, so the overhead of the check is irrelevant,\nand the consequences of calling ProcKill() twice for the same process are very\nsevere.\n\n\n> +/*\n> + * Write a message to STDERR using only async-signal-safe functions. This can\n> + * be used to safely emit a message from a signal handler.\n> + *\n> + * TODO: It is likely possible to safely do a limited amount of string\n> + * interpolation (e.g., %s and %d), but that is not presently supported.\n> + */\n> +void\n> +write_stderr_signal_safe(const char *fmt)\n\nAs is, this isn't a format, so I'd probably just name it s or str :)\n\n\n> -/*\n> - * Write errors to stderr (or by equal means when stderr is\n> - * not available). Used before ereport/elog can be used\n> - * safely (memory context, GUC load etc)\n> - */\n> extern void write_stderr(const char *fmt,...) pg_attribute_printf(1, 2);\n> +extern void write_stderr_signal_safe(const char *fmt);\n\nNot sure why you removed the comment?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Oct 2023 16:40:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 04:40:28PM -0700, Andres Freund wrote:\n> On 2023-03-01 14:47:51 -0800, Nathan Bossart wrote:\n>> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n>> index 22b4278610..b9e2c3aafe 100644\n>> --- a/src/backend/storage/lmgr/proc.c\n>> +++ b/src/backend/storage/lmgr/proc.c\n>> @@ -805,6 +805,7 @@ ProcKill(int code, Datum arg)\n>> \tdlist_head *procgloballist;\n>> \n>> \tAssert(MyProc != NULL);\n>> +\tAssert(MyProc->pid == (int) getpid()); /* not safe if forked by system(), etc. */\n>> \n>> \t/* Make sure we're out of the sync rep lists */\n>> \tSyncRepCleanupAtProcExit();\n>> @@ -925,6 +926,7 @@ AuxiliaryProcKill(int code, Datum arg)\n>> \tPGPROC\t *proc;\n>> \n>> \tAssert(proctype >= 0 && proctype < NUM_AUXILIARY_PROCS);\n>> +\tAssert(MyProc->pid == (int) getpid()); /* not safe if forked by system(), etc. */\n>> \n>> \tauxproc = &AuxiliaryProcs[proctype];\n>> \n> \n> I'd make these elog(PANIC), I think. The paths are not performance critical\n> enough that a single branch hurts, so the overhead of the check is irrelevant,\n> and the consequences of calling ProcKill() twice for the same process are very\n> severe.\n\nRight. Should we write_stderr_signal_safe() and then abort() to keep these\npaths async-signal-safe?\n\n>> +/*\n>> + * Write a message to STDERR using only async-signal-safe functions. This can\n>> + * be used to safely emit a message from a signal handler.\n>> + *\n>> + * TODO: It is likely possible to safely do a limited amount of string\n>> + * interpolation (e.g., %s and %d), but that is not presently supported.\n>> + */\n>> +void\n>> +write_stderr_signal_safe(const char *fmt)\n> \n> As is, this isn't a format, so I'd probably just name it s or str :)\n\nYup.\n\n>> -/*\n>> - * Write errors to stderr (or by equal means when stderr is\n>> - * not available). Used before ereport/elog can be used\n>> - * safely (memory context, GUC load etc)\n>> - */\n>> extern void write_stderr(const char *fmt,...) pg_attribute_printf(1, 2);\n>> +extern void write_stderr_signal_safe(const char *fmt);\n> \n> Not sure why you removed the comment?\n\nI think it was because it's an exact copy of the comment above the function\nin elog.c, and I didn't want to give the impression that it applied to the\nsignal-safe one, too. I added it back along with a new comment for\nwrite_stderr_signal_safe().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 10 Oct 2023 21:54:18 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 09:54:18PM -0500, Nathan Bossart wrote:\n> On Tue, Oct 10, 2023 at 04:40:28PM -0700, Andres Freund wrote:\n>> I'd make these elog(PANIC), I think. The paths are not performance critical\n>> enough that a single branch hurts, so the overhead of the check is irrelevant,\n>> and the consequences of calling ProcKill() twice for the same process are very\n>> severe.\n> \n> Right. Should we write_stderr_signal_safe() and then abort() to keep these\n> paths async-signal-safe?\n\nHm. I see that elog() is called elsewhere in proc_exit(), and it does not\nappear to be async-signal-safe. Am I missing something?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 22:29:34 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-10 22:29:34 -0500, Nathan Bossart wrote:\n> On Tue, Oct 10, 2023 at 09:54:18PM -0500, Nathan Bossart wrote:\n> > On Tue, Oct 10, 2023 at 04:40:28PM -0700, Andres Freund wrote:\n> >> I'd make these elog(PANIC), I think. The paths are not performance critical\n> >> enough that a single branch hurts, so the overhead of the check is irrelevant,\n> >> and the consequences of calling ProcKill() twice for the same process are very\n> >> severe.\n> > \n> > Right. Should we write_stderr_signal_safe() and then abort() to keep these\n> > paths async-signal-safe?\n> \n> Hm. I see that elog() is called elsewhere in proc_exit(), and it does not\n> appear to be async-signal-safe. Am I missing something?\n\nWe shouldn't call proc_exit() in a signal handler. We perhaps have a few\nremaining calls left, but we should (and I think in some cases are) working on\nremoving those.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 10 Oct 2023 20:39:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 08:39:29PM -0700, Andres Freund wrote:\n> We shouldn't call proc_exit() in a signal handler. We perhaps have a few\n> remaining calls left, but we should (and I think in some cases are) working on\n> removing those.\n\nHmm. I don't recall anything remaining, even after a quick check.\nFWIW, I was under the impression that Thomas' work done in\n0da096d78e1e4 has cleaned up the last bits of that.\n--\nMichael",
"msg_date": "Wed, 11 Oct 2023 13:02:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Wed, Oct 11, 2023 at 01:02:14PM +0900, Michael Paquier wrote:\n> On Tue, Oct 10, 2023 at 08:39:29PM -0700, Andres Freund wrote:\n>> We shouldn't call proc_exit() in a signal handler. We perhaps have a few\n>> remaining calls left, but we should (and I think in some cases are) working on\n>> removing those.\n\nGot it.\n\n> Hmm. I don't recall anything remaining, even after a quick check.\n> FWIW, I was under the impression that Thomas' work done in\n> 0da096d78e1e4 has cleaned up the last bits of that.\n\nStartupProcShutdownHandler() remains, at least. Of the other items in\nTom's list from 2020 [0], bgworker_die() and FloatExceptionHandler() are\nalso still unsafe. RecoveryConflictInterrupt() should be fixed by 0da096d,\nand StandbyDeadLockHandler() and StandbyTimeoutHandler() should be fixed by\n8900b5a and 8f1537d, respectively.\n\n[0] https://postgr.es/m/148145.1599703626%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 11 Oct 2023 14:00:00 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Committed and back-patched.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 10:46:47 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 10:46:47AM -0500, Nathan Bossart wrote:\n> Committed and back-patched.\n\n... and it looks like some of the back-branches are failing for Windows.\nI'm assuming this is because c290e79 was only back-patched to v15. My\nfirst instinct is just to back-patch that one all the way to v11, but maybe\nthere's an alternative involving #ifdef WIN32. Are there any concerns with\nback-patching c290e79?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 11:45:17 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> ... and it looks like some of the back-branches are failing for Windows.\n> I'm assuming this is because c290e79 was only back-patched to v15. My\n> first instinct is just to back-patch that one all the way to v11, but maybe\n> there's an alternative involving #ifdef WIN32. Are there any concerns with\n> back-patching c290e79?\n\nSounds fine to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Oct 2023 12:47:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
},
{
"msg_contents": "On Tue, Oct 17, 2023 at 12:47:29PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> ... and it looks like some of the back-branches are failing for Windows.\n>> I'm assuming this is because c290e79 was only back-patched to v15. My\n>> first instinct is just to back-patch that one all the way to v11, but maybe\n>> there's an alternative involving #ifdef WIN32. Are there any concerns with\n>> back-patching c290e79?\n> \n> Sounds fine to me.\n\nThanks, done.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Oct 2023 16:17:46 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: stopgap fix for signal handling during restore_command"
}
] |
[
{
"msg_contents": "Attached is a patch fixing a few doc omissions for MERGE.\n\nI don't think that it's necessary to update every place that could\npossibly apply to MERGE, but there are a few places where we give a\nlist of commands that may be used in a particular context, and I think\nthose should mention MERGE, if it applies.\n\nAlso, there were a couple of other places where it seemed worth giving\nslightly more detail about what happens for MERGE.\n\nRegards,\nDean",
"msg_date": "Fri, 24 Feb 2023 06:08:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Doc updates for MERGE"
},
{
"msg_contents": "On 2023-Feb-24, Dean Rasheed wrote:\n\n> Attached is a patch fixing a few doc omissions for MERGE.\n> \n> I don't think that it's necessary to update every place that could\n> possibly apply to MERGE, but there are a few places where we give a\n> list of commands that may be used in a particular context, and I think\n> those should mention MERGE, if it applies.\n\nAgreed. Your patch looks good to me.\n\nI was confused for a bit about arch-dev.sgml talking about ModifyTable\nwhen perform.sgml talks about Insert/Update et al; I thought at first\nthat one or the other was in error, so I checked. It turns out that\nthey are both correct, because arch-dev is talking about code-level\nexecutor nodes while perform.sgml is talking about how it looks under\nEXPLAIN. So, it all looks good.\n\nI assume you're proposing to back-patch this.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n",
"msg_date": "Fri, 24 Feb 2023 09:56:22 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc updates for MERGE"
},
{
"msg_contents": "On Fri, 24 Feb 2023 at 08:56, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Agreed. Your patch looks good to me.\n>\n> I was confused for a bit about arch-dev.sgml talking about ModifyTable\n> when perform.sgml talks about Insert/Update et al; I thought at first\n> that one or the other was in error, so I checked. It turns out that\n> they are both correct, because arch-dev is talking about code-level\n> executor nodes while perform.sgml is talking about how it looks under\n> EXPLAIN. So, it all looks good.\n>\n\nCool. Thanks for checking.\n\n> I assume you're proposing to back-patch this.\n>\n\nYes. Will do.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 24 Feb 2023 09:28:03 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc updates for MERGE"
},
{
"msg_contents": "On Fri, 24 Feb 2023 at 09:28, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Fri, 24 Feb 2023 at 08:56, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > I assume you're proposing to back-patch this.\n>\n> Yes. Will do.\n>\n\nDone.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 26 Feb 2023 09:11:47 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc updates for MERGE"
}
] |
[
{
"msg_contents": "I noticed that the commit e9960732a9 introduced the following message.\n\n+\tif (EndCompressFileHandle(ctx->dataFH) != 0)\n+\t\tpg_fatal(\"could not close blob data file: %m\");\n\nIt seems that we have removed the terminology \"blob(s)\" from\nuser-facing messages by the commit 35ce24c333 (discussion is [1]).\nShouldn't we use \"large object\" instead of \"blob\" in the message?\n\n\n[1] https://www.postgresql.org/message-id/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 24 Feb 2023 16:31:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "New \"blob\" re-introduced?"
},
{
"msg_contents": "> On 24 Feb 2023, at 08:31, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Shouldn't we use \"large object\" instead of \"blob\" in the message?\n\nNice catch, it should be \"large object\" as per the linked discussion. There\nare also a few more like:\n\n- if (cfclose(ctx->LOsTocFH) != 0)\n- pg_fatal(\"could not close LOs TOC file: %m\");\n+ if (EndCompressFileHandle(ctx->LOsTocFH) != 0)\n+ pg_fatal(\"could not close blobs TOC file: %m\");\n\nI'll go ahead and fix them, thanks for the report!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 08:38:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New \"blob\" re-introduced?"
},
{
"msg_contents": "At Fri, 24 Feb 2023 16:31:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I noticed that the commit e9960732a9 introduced the following message.\n> \n> +\tif (EndCompressFileHandle(ctx->dataFH) != 0)\n> +\t\tpg_fatal(\"could not close blob data file: %m\");\n> \n> It seems that we have removed the terminology \"blob(s)\" from\n> user-facing messages by the commit 35ce24c333 (discussion is [1]).\n> Shouldn't we use \"large object\" instead of \"blob\" in the message?\n> \n> \n> [1] https://www.postgresql.org/message-id/868a381f-4650-9460-1726-1ffd39a270b4%40enterprisedb.com\n\nMmm. The following changes of e9960732a9 seem like reverting the\nprevious commit 35ce24c333...\n\ne9960732a9 @ 2023/2/23:\n-\tif (cfclose(ctx->dataFH) != 0)\n-\t\tpg_fatal(\"could not close LO data file: %m\");\n+\t/* Close the BLOB data file itself */\n+\tif (EndCompressFileHandle(ctx->dataFH) != 0)\n+\t\tpg_fatal(\"could not close blob data file: %m\");\n-\tif (cfwrite(buf, len, ctx->LOsTocFH) != len)\n-\t\tpg_fatal(\"could not write to LOs TOC file\");\n+\tif (CFH->write_func(buf, len, CFH) != len)\n+\t\tpg_fatal(\"could not write to blobs TOC file\");\n..\n-\tif (cfclose(ctx->LOsTocFH) != 0)\n-\t\tpg_fatal(\"could not close LOs TOC file: %m\");\n+\tif (EndCompressFileHandle(ctx->LOsTocFH) != 0)\n+\t\tpg_fatal(\"could not close blobs TOC file: %m\");\n\n35ce24c333 @ 2022/12/5:\n-\t\tpg_fatal(\"could not close blob data file: %m\");\n+\t\tpg_fatal(\"could not close LO data file: %m\");\n...\n-\tif (cfwrite(buf, len, ctx->blobsTocFH) != len)\n-\t\tpg_fatal(\"could not write to blobs TOC file\");\n+\tif (cfwrite(buf, len, ctx->LOsTocFH) != len)\n+\t\tpg_fatal(\"could not write to LOs TOC file\");\n...\n-\t\tpg_fatal(\"could not close blob data file: %m\");\n+\t\tpg_fatal(\"could not close LO data file: %m\");\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 24 Feb 2023 16:40:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: New \"blob\" re-introduced?"
},
{
"msg_contents": "> On 24 Feb 2023, at 08:40, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Mmm. The following changes of e9960732a9 seem like reverting the\n> previous commit 35ce24c333...\n\nFixed in 94851e4b90, thanks for the report!\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Feb 2023 08:58:20 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: New \"blob\" re-introduced?"
}
] |
[
{
"msg_contents": "I happened to notice that there were a few references to guc.c regarding\nvariables, which with the recent refactoring in 0a20ff54f have become stale.\nAttached is a trivial patch to instead point to guc_tables.c.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 24 Feb 2023 14:15:55 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Stale references to guc.c in comments/tests"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I happened to notice that there were a few references to guc.c regarding\n> variables, which with the recent refactoring in 0a20ff54f have become stale.\n> Attached is a trivial patch to instead point to guc_tables.c.\n\nHmm, I think you may have done an overenthusiastic replacement here.\nI agree with changes like this:\n\n-extern char *role_string;\t\t/* in guc.c */\n+extern char *role_string;\t\t/* in guc_tables.c */\n\nbecause clearly that variable is now declared in guc_tables.c\nand guc.c knows nothing of it explicitly. However, a lot of\nthese places are really talking about the behavior of the GUC\nmechanisms as a whole, and so a pointer to guc_tables.c doesn't\nseem very on-point to me --- I find it hard to attribute behavior\nto a static table. Here for instance:\n\n@@ -3041,7 +3041,7 @@ pg_get_functiondef(PG_FUNCTION_ARGS)\n *\n * Variables that are not so marked should just be emitted as\n * simple string literals. If the variable is not known to\n- * guc.c, we'll do that; this makes it unsafe to use\n+ * guc_tables.c, we'll do that; this makes it unsafe to use\n * GUC_LIST_QUOTE for extension variables.\n */\n if (GetConfigOptionFlags(configitem, true) & GUC_LIST_QUOTE)\n\nAn extension's GUC is by definition not known in guc_tables.c, so I think\nthis change is losing the point of the text. What it's really describing\nis a variable that hasn't been entered into the dynamic tables maintained\nby guc.c.\n\nPerhaps you could use \"the GUC mechanisms\" in these places, but it's a bit\nlonger than \"guc.c\". Leaving such references alone seems OK too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Feb 2023 10:19:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stale references to guc.c in comments/tests"
},
{
"msg_contents": "> On 24 Feb 2023, at 16:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I happened to notice that there were a few references to guc.c regarding\n>> variables, which with the recent refactoring in 0a20ff54f have become stale.\n>> Attached is a trivial patch to instead point to guc_tables.c.\n> \n> Hmm, I think you may have done an overenthusiastic replacement here.\n\nFair enough, I only changed those places I felt referenced variables, or their\ndefinition, in guc_tables.c but I agree that there is a lot of greyzone in the\ninterpretation.\n\n> Perhaps you could use \"the GUC mechanisms\" in these places, but it's a bit\n> longer than \"guc.c\". Leaving such references alone seems OK too.\n\nI've opted for mostly leaving them in the attached v2.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 27 Feb 2023 14:35:52 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Stale references to guc.c in comments/tests"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 24 Feb 2023, at 16:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps you could use \"the GUC mechanisms\" in these places, but it's a bit\n>> longer than \"guc.c\". Leaving such references alone seems OK too.\n\n> I've opted for mostly leaving them in the attached v2.\n\nThis version seems OK to me except for this bit:\n\n * This is a straightforward one-to-one mapping, but doing it this way makes\n- * guc.c independent of OpenSSL availability and version.\n+ * GUC definition independent of OpenSSL availability and version.\n\nThe grammar is a bit off (\"the GUC definition\" would read better),\nbut really I think the wording was vague already and we should tighten\nit up. Can we specify exactly which GUC variable(s) we're talking about?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:59:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stale references to guc.c in comments/tests"
},
{
"msg_contents": "> On 27 Feb 2023, at 17:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The grammar is a bit off (\"the GUC definition\" would read better),\n> but really I think the wording was vague already and we should tighten\n> it up. Can we specify exactly which GUC variable(s) we're talking about?\n\nSpecifying the GUCs in question is a good idea, done in the attached. I'm not\nsure the phrasing is spot-on though, but I can't think of a better one. If you\ncan think of a better one I'm all ears.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 28 Feb 2023 23:52:46 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Stale references to guc.c in comments/tests"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Specifying the GUCs in question is a good idea, done in the attached. I'm not\n> sure the phrasing is spot-on though, but I can't think of a better one. If you\n> can think of a better one I'm all ears.\n\nI'd just change \"the definition of\" to \"the definitions of\".\nLGTM otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Feb 2023 18:00:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stale references to guc.c in comments/tests"
}
] |
[
{
"msg_contents": "Hi\n\nHacker from another open-source DB here (h2database.com).\n\nHow does postgresql handle the following situation?\n\n(1) a table containing a LOB column\n(2) a query that does\n ResultSet rs = query(\"select lob_column from table_foo\");\n while (rs.next())\n {\n retrieve_lob_data(rs.getLob(1));\n .... very long running stuff here......\n }\n\nIn the face of concurrent updates that might overwrite the existing LOB\ndata, how does PostgresQL handle this?\n\nDoes it keep the LOB data around until the ResultSet/Connection is closed?\nOr does it impose some extra constraint on the client side? e.g..\nexplicitly opening and closing a transaction, and only wipe the \"old\" LOB\ndata when the transaction is closed?\n\nI ask because I have implemented two of the four LOB implementations that\nH2 has used, and we are still having trouble :-(\n\nRegards, Noel.\n\nHiHacker from another open-source DB here (h2database.com).How does postgresql handle the following situation?(1) a table containing a LOB column (2) a query that does ResultSet rs = query(\"select lob_column from table_foo\"); while (rs.next()) { retrieve_lob_data(rs.getLob(1)); .... very long running stuff here...... }In the face of concurrent updates that might overwrite the existing LOB data, how does PostgresQL handle this?Does it keep the LOB data around until the ResultSet/Connection is closed?Or does it impose some extra constraint on the client side? e.g.. explicitly opening and closing a transaction, and only wipe the \"old\" LOB data when the transaction is closed?I ask because I have implemented two of the four LOB implementations that H2 has used, and we are still having trouble :-(Regards, Noel.",
"msg_date": "Fri, 24 Feb 2023 15:31:39 +0200",
"msg_from": "Noel Grandin <noelgrandin@gmail.com>",
"msg_from_op": true,
"msg_subject": "how does postgresql handle LOB/CLOB/BLOB column data that dies before\n the query ends"
},
{
"msg_contents": "Noel Grandin <noelgrandin@gmail.com> writes:\n> Hacker from another open-source DB here (h2database.com).\n\n> How does postgresql handle the following situation?\n\n> (1) a table containing a LOB column\n\nPostgres doesn't really do LOB in the same sense that some other DBs\nhave, so you'd need to specify what you have in mind in Postgres\nterms to get a useful answer.\n\nWe do have a concept of \"large objects\" named by OIDs, but they're\nmuch more of a manually-managed, nontransparent feature than typical\nLOB implementations. I don't think our JDBC driver implements the\nsort of syntax you sketch (I could be wrong though, not much of a\nJDBC guy).\n\nHaving said that ...\n\n> In the face of concurrent updates that might overwrite the existing LOB\n> data, how does PostgresQL handle this?\n\n... reading from a large object follows the same MVCC rules we use\nfor all other data. We allow multiple versions of a tuple to exist\non-disk, and we don't clean out old versions until no live transaction\ncan \"see\" them anymore. So data consistency is just a matter of using\nthe same \"snapshot\" (which selects appropriate tuple versions) across\nhowever many queries you want consistent results from. If somebody\nwrites new data meanwhile, it doesn't matter because that tuple version\nis invisible to your snapshot.\n\n> Or does it impose some extra constraint on the client side? e.g..\n> explicitly opening and closing a transaction, and only wipe the \"old\" LOB\n> data when the transaction is closed?\n\n From a client's perspective, the two options are \"snapshots last for\none query\" and \"snapshots last for one transaction\". You signify which\none you want by selecting a transaction isolation mode when you begin\nthe transaction.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Feb 2023 10:39:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
},
{
"msg_contents": "Thanks for the answers.\n\nSo, H2, like PostgreSQL, also internally has (a) an MVCC engine and (b)\nLOBs existing as a on-the-side extra thing.\n\nOn Fri, 24 Feb 2023 at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Postgres doesn't really do LOB in the same sense that some other DBs\n> have, so you'd need to specify what you have in mind in Postgres\n> terms to get a useful answer.\n>\n\nSo, specifically, the primary problem we have is this:\n\n(1) A typical small query returns all of its data in a stream to the client\n(2) which means that, from the server's perspective, the transaction is\nclosed the moment the last record in the stream is pushed to the client.\n(3) which means that, in the face of concurrent updates, the underlying\nMVCC data in the query might be long-gone from the server by the time the\nclient has finished reading the result set.\n(4) However, with LOBs, the client doesn't get the LOB in the result set\ndata stream, it gets a special identifier (a hash), which it uses to fetch\nLOB data from the server in chunks\n(5) Which means that the lifetime of an individual LOB is just horrible\nAt the moment the implementation I have satisfies the needs of clients in\nterms of correctness (crosses fingers), but is horrible in terms of\nperformance because of how long it has to keep LOB data around.\n\nThanks for the answers.So, H2, like PostgreSQL, also internally has (a) an MVCC engine and (b) LOBs existing as a on-the-side extra thing.On Fri, 24 Feb 2023 at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:Postgres doesn't really do LOB in the same sense that some other DBs\nhave, so you'd need to specify what you have in mind in Postgres\nterms to get a useful answer.So, specifically, the primary problem we have is this:(1) A typical small query returns all of its data in a stream to the client(2) which means that, from the server's perspective, the transaction is closed the moment the last record in the stream is pushed to the 
client.(3) which means that, in the face of concurrent updates, the underlying MVCC data in the query might be long-gone from the server by the time the client has finished reading the result set.(4) However, with LOBs, the client doesn't get the LOB in the result set data stream, it gets a special identifier (a hash), which it uses to fetch LOB data from the server in chunks(5) Which means that the lifetime of an individual LOB is just horribleAt the moment the implementation I have satisfies the needs of clients in terms of correctness (crosses fingers), but is horrible in terms of performance because of how long it has to keep LOB data around.",
"msg_date": "Sat, 25 Feb 2023 08:19:39 +0200",
"msg_from": "Noel Grandin <noelgrandin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
},
{
"msg_contents": "Noel Grandin <noelgrandin@gmail.com> writes:\n> On Fri, 24 Feb 2023 at 17:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Postgres doesn't really do LOB in the same sense that some other DBs\n>> have, so you'd need to specify what you have in mind in Postgres\n>> terms to get a useful answer.\n\n> So, specifically, the primary problem we have is this:\n\n> (1) A typical small query returns all of its data in a stream to the client\n> (2) which means that, from the server's perspective, the transaction is\n> closed the moment the last record in the stream is pushed to the client.\n> (3) which means that, in the face of concurrent updates, the underlying\n> MVCC data in the query might be long-gone from the server by the time the\n> client has finished reading the result set.\n> (4) However, with LOBs, the client doesn't get the LOB in the result set\n> data stream, it gets a special identifier (a hash), which it uses to fetch\n> LOB data from the server in chunks\n> (5) Which means that the lifetime of an individual LOB is just horrible\n> At the moment the implementation I have satisfies the needs of clients in\n> terms of correctness (crosses fingers), but is horrible in terms of\n> performance because of how long it has to keep LOB data around.\n\nYeah, Postgres has an analogous kind of problem. Our standard way to\nuse \"large objects\" is to store their identifying OIDs in tables,\nfetch the desired OID with a regular SQL query, and then open and read\n(or write) the large object using its OID. So you have a hazard of\ntime skew between what you saw in the table and what you see in the\nlarge object. We pretty much lay that problem off on the clients: if\nthey want consistency of those views they need to make sure that the\nsame snapshot is used for both the SQL query and the large-object\nread. 
That's not hard to do, but it isn't the default behavior,\nand in particular they can *not* close the transaction that read the\nOID if they'd like to read a matching state of the large object.\nSo far there's not been a lot of complaints about that ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 01:33:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
},
{
"msg_contents": "On Sat, 25 Feb 2023 at 08:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yeah, Postgres has an analogous kind of problem. Our standard way to\n> use \"large objects\" is to store their identifying OIDs in tables,\n>\n...\n\n> and in particular they can *not* close the transaction that read the\n> OID if they'd like to read a matching state of the large object.\n> So far there's not been a lot of complaints about that ...\n>\n>\nOK, so it seems like so far my design is not far off the PostgreSQL design\n(which is very comforting).\n\nI wonder if the difference is in the client<->server protocol.\n\nDoes PostgreSQL hold the transaction open until the client side has closed\nthe resultset (or the query object possibly, not sure about the PostgreSQL\nAPI here).\nH2 has a very simple client-server protocol, which means the client simply\nsends a query and gets back a result-set stream, and there is no explicit\nacknowledgement of when the client closes the resultset, which means that\nthe MVCC transaction is typically closed by the time the client even starts\nreading the resultset.\n\nOn Sat, 25 Feb 2023 at 08:33, Tom Lane <tgl@sss.pgh.pa.us> wrote:Yeah, Postgres has an analogous kind of problem. Our standard way to\nuse \"large objects\" is to store their identifying OIDs in tables,\n... \nand in particular they can *not* close the transaction that read the\nOID if they'd like to read a matching state of the large object.\nSo far there's not been a lot of complaints about that ...OK, so it seems like so far my design is not far off the PostgreSQL design (which is very comforting). 
I wonder if the difference is in the client<->server protocol.Does PostgreSQL hold the transaction open until the client side has closed the resultset (or the query object possibly, not sure about the PostgreSQL API here).H2 has a very simple client-server protocol, which means the client simply sends a query and gets back a result-set stream, and there is no explicit acknowledgement of when the client closes the resultset, which means that the MVCC transaction is typically closed by the time the client even starts reading the resultset.",
"msg_date": "Sat, 25 Feb 2023 12:05:52 +0200",
"msg_from": "Noel Grandin <noelgrandin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
},
{
"msg_contents": "Noel Grandin <noelgrandin@gmail.com> writes:\n> OK, so it seems like so far my design is not far off the PostgreSQL design\n> (which is very comforting).\n\n> I wonder if the difference is in the client<->server protocol.\n\nThat could be a piece of the puzzle, yeah.\n\n> Does PostgreSQL hold the transaction open until the client side has closed\n> the resultset (or the query object possibly, not sure about the PostgreSQL\n> API here).\n\nWe use single-threaded server processes, so we couldn't close the\ntransaction (or more to the point, drop the query's snapshot) until\nwe've computed and sent the whole resultset. I should think that\nthere's a similar requirement even if multi-threaded: if you do MVCC\nat all then you have to hold your snapshot (or whatever mechanism\nyou use) until the resultset is all computed, or else later rows\nin the query result might be wrong.\n\nIn the scenario I'm describing with a query fetching some large\nobject OID(s) followed by separate queries retrieving those large\nobjects, we put it on the client to create an explicit transaction\nblock around those queries (ie send BEGIN and COMMIT commands),\nand to select a transaction mode that causes the same snapshot to\nbe used across the whole transaction. If the client fails to do\nthis, there could be concurrency anomalies. Any one of those\nqueries will still deliver self-consistent results, but they\nmight not match up with earlier or later queries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 12:06:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
},
{
"msg_contents": "On Sat, 25 Feb 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> That could be a piece of the puzzle, yeah.\n>\n>\nThank you very much, this conversion has been a great help.\n\nRegards, Noel Grandin\n\nOn Sat, 25 Feb 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:That could be a piece of the puzzle, yeah.\nThank you very much, this conversion has been a great help.Regards, Noel Grandin",
"msg_date": "Mon, 27 Feb 2023 14:14:10 +0200",
"msg_from": "Noel Grandin <noelgrandin@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how does postgresql handle LOB/CLOB/BLOB column data that dies\n before the query ends"
}
] |
[
{
"msg_contents": "Hi all,\n\nI noticed a very minor inconsistency in some ACL error messages. When\nyou are try and alter a role, it just says \"permission denied\":\n\n postgres=> ALTER ROLE bar NOCREATEDB;\n ERROR: permission denied\n postgres=> ALTER ROLE bar SET search_path TO 'foo';\n ERROR: permission denied\n\nFor almost all other ACL error, we include what the action was. For\nexample:\n\n postgres=> CREATE ROLE r;\n ERROR: permission denied to create role\n postgres=> DROP ROLE postgres;\n ERROR: permission denied to drop role\n postgres=> CREATE DATABASE foo;\n ERROR: permission denied to create database\n\n\nIt's not a huge deal, but it's easy enough to fix that I thought I'd\ngenerate a patch (attached). Let me know if people think that it's\nworth merging.\n\n- Joe Koshakow",
"msg_date": "Fri, 24 Feb 2023 12:23:27 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconsistency in ACL error message"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 12:23:27PM -0500, Joseph Koshakow wrote:\n> I noticed a very minor inconsistency in some ACL error messages. When\n> you are try and alter a role, it just says \"permission denied\":\n\nYou might be interested in\n\n\thttps://commitfest.postgresql.org/42/4145/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 Feb 2023 10:31:09 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in ACL error message"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 1:31 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> You might be interested in\n>\n> https://commitfest.postgresql.org/42/4145/\n\nAh, perfect. In that case ignore my patch!\n\n- Joe Koshakow\n\nOn Fri, Feb 24, 2023 at 1:31 PM Nathan Bossart <nathandbossart@gmail.com> wrote:> You might be interested in>> https://commitfest.postgresql.org/42/4145/Ah, perfect. In that case ignore my patch!- Joe Koshakow",
"msg_date": "Fri, 24 Feb 2023 14:44:01 -0500",
"msg_from": "Joseph Koshakow <koshy44@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in ACL error message"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI have a question on the code below:\n\nDatum\nnumeric_cmp(PG_FUNCTION_ARGS)\n{\n Numeric num1 = PG_GETARG_NUMERIC(0);\n Numeric num2 = PG_GETARG_NUMERIC(1);\n int result;\n\n result = cmp_numerics(num1, num2);\n\n PG_FREE_IF_COPY(num1, 0);\n PG_FREE_IF_COPY(num2, 1);\n\n PG_RETURN_INT32(result);\n}\n\nIt seems to me that num1 is a copy of fcinfo->arg[0]. It is passed to\nthe function cmp_numerics(), It's value remains the same after the\ncall. Also, cmp_numerics() does not have a handle to fcinfo, so it\ncan't modify fcinfo->arg[0].\n\nIsn't it true that pfree() will never be called by PG_FREE_IF_COPY?\n\nCheers,\n-cktan\n\n\n",
"msg_date": "Fri, 24 Feb 2023 10:51:12 -0800",
"msg_from": "CK Tan <cktan@vitessedata.com>",
"msg_from_op": true,
"msg_subject": "PG_FREE_IF_COPY extraneous in numeric_cmp?"
},
{
"msg_contents": "CK Tan <cktan@vitessedata.com> writes:\n> Isn't it true that pfree() will never be called by PG_FREE_IF_COPY?\n\nNo. You're forgetting the possibility that PG_GETARG_NUMERIC will\nhave to de-toast a toasted input. Granted, numerics are seldom\ngoing to be long enough to get compressed or pushed out-of-line;\nbut that's possible, and what's very possible is that they'll have\na short header.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Feb 2023 17:16:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PG_FREE_IF_COPY extraneous in numeric_cmp?"
},
{
"msg_contents": "Thanks!\n\nOn Fri, Feb 24, 2023 at 2:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> CK Tan <cktan@vitessedata.com> writes:\n> > Isn't it true that pfree() will never be called by PG_FREE_IF_COPY?\n>\n> No. You're forgetting the possibility that PG_GETARG_NUMERIC will\n> have to de-toast a toasted input. Granted, numerics are seldom\n> going to be long enough to get compressed or pushed out-of-line;\n> but that's possible, and what's very possible is that they'll have\n> a short header.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 04:19:20 -0800",
"msg_from": "CK Tan <cktan@vitessedata.com>",
"msg_from_op": true,
"msg_subject": "Re: PG_FREE_IF_COPY extraneous in numeric_cmp?"
}
] |
[
{
"msg_contents": "This is a draft patch - review is welcome and would help to get this\nready to be considererd for v16, if desired.\n\nI'm going to add this thread to the old CF entry.\nhttps://commitfest.postgresql.org/31/2888/\n\n-- \nJustin",
"msg_date": "Fri, 24 Feb 2023 13:18:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 01:18:40PM -0600, Justin Pryzby wrote:\n> This is a draft patch - review is welcome and would help to get this\n> ready to be considererd for v16, if desired.\n> \n> I'm going to add this thread to the old CF entry.\n> https://commitfest.postgresql.org/31/2888/\n\nPatch 0003 adds support for the --long option of zstd, meaning that it\n\"enables long distance matching with #windowLog\". What's the benefit\nof that when it is applied to dumps and base backup contents?\n--\nMichael",
"msg_date": "Sat, 25 Feb 2023 13:44:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 01:44:36PM +0900, Michael Paquier wrote:\n> On Fri, Feb 24, 2023 at 01:18:40PM -0600, Justin Pryzby wrote:\n> > This is a draft patch - review is welcome and would help to get this\n> > ready to be considererd for v16, if desired.\n> > \n> > I'm going to add this thread to the old CF entry.\n> > https://commitfest.postgresql.org/31/2888/\n> \n> Patch 0003 adds support for the --long option of zstd, meaning that it\n> \"enables long distance matching with #windowLog\". What's the benefit\n> of that when it is applied to dumps and base backup contents?\n\nIt (can) makes it smaller.\n\n+ The <literal>long</literal> keyword enables long-distance matching\n+ mode, for improved compression ratio, at the expense of higher memory\n+ use. Long-distance mode is supported only for \n\n+ With zstd compression, <literal>long</literal> mode may allow dumps\n+ to be significantly smaller, but it might not reduce the size of\n+ custom or directory format dumps, whose fields are separately compressed.\n\nNote that I included that here as 003, but I also have an pre-existing\npatch for adding that just to basebackup.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 24 Feb 2023 22:50:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 2/24/23 20:18, Justin Pryzby wrote:\n> This is a draft patch - review is welcome and would help to get this\n> ready to be considererd for v16, if desired.\n> \n> I'm going to add this thread to the old CF entry.\n> https://commitfest.postgresql.org/31/2888/\n> \n\nThanks. Sadly cfbot is unhappy - the windows and cplusplus builds failed\nbecause of some issue in pg_backup_archiver.h. But it's a bit bizarre\nbecause the patch does not modify that file at all ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:31:28 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Feb 25, 2023, at 7:31 AM, Tomas Vondra wrote:\n> On 2/24/23 20:18, Justin Pryzby wrote:\n> > This is a draft patch - review is welcome and would help to get this\n> > ready to be considererd for v16, if desired.\n> > \n> > I'm going to add this thread to the old CF entry.\n> > https://commitfest.postgresql.org/31/2888/\n> > \n> \n> Thanks. Sadly cfbot is unhappy - the windows and cplusplus builds failed\n> because of some issue in pg_backup_archiver.h. But it's a bit bizarre\n> because the patch does not modify that file at all ...\ncpluspluscheck says\n\n # pg_dump is not C++-clean because it uses \"public\" and \"namespace\"\n # as field names, which is unfortunate but we won't change it now.\n\nHence, the patch should exclude the new header file from it.\n\n--- a/src/tools/pginclude/cpluspluscheck\n+++ b/src/tools/pginclude/cpluspluscheck\n@@ -153,6 +153,7 @@ do\n test \"$f\" = src/bin/pg_dump/compress_gzip.h && continue\n test \"$f\" = src/bin/pg_dump/compress_io.h && continue\n test \"$f\" = src/bin/pg_dump/compress_lz4.h && continue\n+ test \"$f\" = src/bin/pg_dump/compress_zstd.h && continue\n test \"$f\" = src/bin/pg_dump/compress_none.h && continue\n test \"$f\" = src/bin/pg_dump/parallel.h && continue\n test \"$f\" = src/bin/pg_dump/pg_backup_archiver.h && continue\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Sat, Feb 25, 2023, at 7:31 AM, Tomas Vondra wrote:On 2/24/23 20:18, Justin Pryzby wrote:> This is a draft patch - review is welcome and would help to get this> ready to be considererd for v16, if desired.> > I'm going to add this thread to the old CF entry.> https://commitfest.postgresql.org/31/2888/> Thanks. Sadly cfbot is unhappy - the windows and cplusplus builds failedbecause of some issue in pg_backup_archiver.h. 
But it's a bit bizarrebecause the patch does not modify that file at all ...cpluspluscheck says # pg_dump is not C++-clean because it uses \"public\" and \"namespace\" # as field names, which is unfortunate but we won't change it now.Hence, the patch should exclude the new header file from it.--- a/src/tools/pginclude/cpluspluscheck+++ b/src/tools/pginclude/cpluspluscheck@@ -153,6 +153,7 @@ do test \"$f\" = src/bin/pg_dump/compress_gzip.h && continue test \"$f\" = src/bin/pg_dump/compress_io.h && continue test \"$f\" = src/bin/pg_dump/compress_lz4.h && continue+ test \"$f\" = src/bin/pg_dump/compress_zstd.h && continue test \"$f\" = src/bin/pg_dump/compress_none.h && continue test \"$f\" = src/bin/pg_dump/parallel.h && continue test \"$f\" = src/bin/pg_dump/pg_backup_archiver.h && continue--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Sat, 25 Feb 2023 11:47:26 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 01:18:40PM -0600, Justin Pryzby wrote:\n> This is a draft patch - review is welcome and would help to get this\n> ready to be considererd for v16, if desired.\n> \n> I'm going to add this thread to the old CF entry.\n> https://commitfest.postgresql.org/31/2888/\n\nThis resolves cfbot warnings: windows and cppcheck.\nAnd refactors zstd routines.\nAnd updates docs.\nAnd includes some fixes for earlier patches that these patches conflicts\nwith/depends on.",
"msg_date": "Sat, 25 Feb 2023 19:22:27 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 5:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> This resolves cfbot warnings: windows and cppcheck.\n> And refactors zstd routines.\n> And updates docs.\n> And includes some fixes for earlier patches that these patches conflicts\n> with/depends on.\n\nThis'll need a rebase (cfbot took a while to catch up). The patchset\nincludes basebackup modifications, which are part of a different CF\nentry; was that intended?\n\nI tried this on a local, 3.5GB, mostly-text table (from the UK Price\nPaid dataset [1]) and the comparison against the other methods was\nimpressive. (I'm no good at constructing compression benchmarks, so\nthis is a super naive setup. Client's on the same laptop as the\nserver.)\n\n $ time ./src/bin/pg_dump/pg_dump -d postgres -t pp_complete -Z\nzstd > /tmp/zstd.dump\n real 1m17.632s\n user 0m35.521s\n sys 0m2.683s\n\n $ time ./\\src/bin/pg_dump/pg_dump -d postgres -t pp_complete -Z\nlz4 > /tmp/lz4.dump\n real 1m13.125s\n user 0m19.795s\n sys 0m3.370s\n\n $ time ./\\src/bin/pg_dump/pg_dump -d postgres -t pp_complete -Z\ngzip > /tmp/gzip.dump\n real 2m24.523s\n user 2m22.114s\n sys 0m1.848s\n\n $ ls -l /tmp/*.dump\n -rw-rw-r-- 1 jacob jacob 1331493925 Mar 3 09:45 /tmp/gzip.dump\n -rw-rw-r-- 1 jacob jacob 2125998939 Mar 3 09:42 /tmp/lz4.dump\n -rw-rw-r-- 1 jacob jacob 1215834718 Mar 3 09:40 /tmp/zstd.dump\n\nDefault gzip was the only method that bottlenecked on pg_dump rather\nthan the server, and default zstd outcompressed it at a fraction of\nthe CPU time. So, naively, this looks really good.\n\nWith this particular dataset, I don't see much improvement with\nzstd:long. (At nearly double the CPU time, I get a <1% improvement in\ncompression size.) I assume it's heavily data dependent, but from the\nnotes on --long [2] it seems like they expect you to play around with\nthe window size to further tailor it to your data. 
Does it make sense\nto provide the long option without the windowLog parameter?\n\nThanks,\n--Jacob\n\n[1] https://landregistry.data.gov.uk/\n[2] https://github.com/facebook/zstd/releases/tag/v1.3.2\n\n\n",
"msg_date": "Fri, 3 Mar 2023 10:32:53 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 10:32:53AM -0800, Jacob Champion wrote:\n> On Sat, Feb 25, 2023 at 5:22 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > This resolves cfbot warnings: windows and cppcheck.\n> > And refactors zstd routines.\n> > And updates docs.\n> > And includes some fixes for earlier patches that these patches conflicts\n> > with/depends on.\n> \n> This'll need a rebase (cfbot took a while to catch up).\n\nSoon.\n\n> The patchset includes basebackup modifications, which are part of a\n> different CF entry; was that intended?\n\nYes, it's intentional - if zstd:long mode were to be merged first, then\nthis patch should include long mode from the start.\nOr, if pgdump+zstd were merged first, then long mode could be added to\nboth places.\n\n> I tried this on a local, 3.5GB, mostly-text table (from the UK Price\n\nThanks for looking. If your zstd library is compiled with thread\nsupport, could you also try with :workers=N ? I believe this is working\ncorrectly, but I'm going to ask for help verifying that...\n\nIt'd be especially useful to test under windows, where pgdump/restore\nuse threads instead of forking... If you have a windows environment but\nnot set up for development, I think it's possible to get cirrusci to\ncompile a patch for you and then retrieve the binaries provided as an\n\"artifact\" (credit/blame for this idea should be directed to Thomas\nMunro).\n\n> With this particular dataset, I don't see much improvement with\n> zstd:long.\n\nYeah. I this could be because either 1) you already got very good\ncomprssion without looking at more data; and/or 2) the neighboring data\nis already very similar, maybe equally or more similar, than the further\ndata, from which there's nothing to gain.\n\n> (At nearly double the CPU time, I get a <1% improvement in\n> compression size.) 
I assume it's heavily data dependent, but from the\n> notes on --long [2] it seems like they expect you to play around with\n> the window size to further tailor it to your data. Does it make sense\n> to provide the long option without the windowLog parameter?\n\nI don't want to start exposing lots of fine-granined parameters at this\npoint. In the immediate case, it looks like it may require more than\njust adding another parameter:\n\n Note: If windowLog is set to larger than 27,\n--long=windowLog or --memory=windowSize needs to be passed to the\ndecompressor.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 3 Mar 2023 12:55:46 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 10:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Thanks for looking. If your zstd library is compiled with thread\n> support, could you also try with :workers=N ? I believe this is working\n> correctly, but I'm going to ask for help verifying that...\n\nUnfortunately not (Ubuntu 20.04):\n\n pg_dump: error: could not set compression parameter: Unsupported parameter\n\nBut that lets me review the error! I think these error messages should\nsay which options caused them.\n\n> It'd be especially useful to test under windows, where pgdump/restore\n> use threads instead of forking... If you have a windows environment but\n> not set up for development, I think it's possible to get cirrusci to\n> compile a patch for you and then retrieve the binaries provided as an\n> \"artifact\" (credit/blame for this idea should be directed to Thomas\n> Munro).\n\nI should be able to do that next week.\n\n> > With this particular dataset, I don't see much improvement with\n> > zstd:long.\n>\n> Yeah. I this could be because either 1) you already got very good\n> comprssion without looking at more data; and/or 2) the neighboring data\n> is already very similar, maybe equally or more similar, than the further\n> data, from which there's nothing to gain.\n\nWhat kinds of improvements do you see with your setup? I'm wondering\nwhen we would suggest that people use it.\n\n> I don't want to start exposing lots of fine-granined parameters at this\n> point. In the immediate case, it looks like it may require more than\n> just adding another parameter:\n>\n> Note: If windowLog is set to larger than 27,\n> --long=windowLog or --memory=windowSize needs to be passed to the\n> decompressor.\n\nHm. That would complicate things.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 3 Mar 2023 13:38:05 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 01:38:05PM -0800, Jacob Champion wrote:\n> > > With this particular dataset, I don't see much improvement with\n> > > zstd:long.\n> >\n> > Yeah. I this could be because either 1) you already got very good\n> > comprssion without looking at more data; and/or 2) the neighboring data\n> > is already very similar, maybe equally or more similar, than the further\n> > data, from which there's nothing to gain.\n> \n> What kinds of improvements do you see with your setup? I'm wondering\n> when we would suggest that people use it.\n\nOn customer data, I see small improvements - below 10%.\n\nBut on my first two tries, I made synthetic data sets where it's a lot:\n\n$ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fp -Z zstd:long |wc -c\n286107\n$ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fp -Z zstd:long=0 |wc -c\n1709695\n\nThat's just 6 identical tables like:\npryzbyj=# CREATE TABLE t1 AS SELECT generate_series(1,999999);\n\nIn this case, \"custom\" format doesn't see that benefit, because the\ngreatest similarity is across tables, which don't share compressor\nstate. But I think the note that I wrote in the docs about that should\nbe removed - custom format could see a big benefit, as long as the table\nis big enough, and there's more similarity/repetition at longer\ndistances.\n\nHere's one where custom format *does* benefit, due to long-distance\nrepetition within a single table. The data is contrived, but the schema\nof ID => data is not. What's notable isn't how compressible the data\nis, but how much *more* compressible it is with long-distance matching.\n\npryzbyj=# CREATE TABLE t1 AS SELECT i,array_agg(j) FROM generate_series(1,444)i,generate_series(1,99999)j GROUP BY 1;\n$ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fc -Z zstd:long=1 |wc -c\n82023\n$ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fc -Z zstd:long=0 |wc -c\n1048267\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 4 Mar 2023 10:57:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 07:22:27PM -0600, Justin Pryzby wrote:\n> On Fri, Feb 24, 2023 at 01:18:40PM -0600, Justin Pryzby wrote:\n> > This is a draft patch - review is welcome and would help to get this\n> > ready to be considererd for v16, if desired.\n> > \n> > I'm going to add this thread to the old CF entry.\n> > https://commitfest.postgresql.org/31/2888/\n> \n> This resolves cfbot warnings: windows and cppcheck.\n> And refactors zstd routines.\n> And updates docs.\n> And includes some fixes for earlier patches that these patches conflicts\n> with/depends on.\n\nThis rebases over the TAP and doc fixes to LZ4.\nAnd adds necessary ENV to makefile and meson.\nAnd adds an annoying boilerplate header.\nAnd removes supports_compression(), which is what I think Tomas meant\nwhen referring to \"annoying unsupported cases\".\nAnd updates zstd.c: fix an off-by-one, allocate in init depending on\nreadF/writeF, do not reset the input buffer on each iteration, and show\nparameter name in errors.\n\nI'd appreciate help checking that this is doing the right things and\nworks correctly with zstd threaded workers. The zstd API says: \"use one\ndifferent context per thread for parallel execution\" and \"For parallel\nexecution, use one separate ZSTD_CStream per thread\".\nhttps://github.com/facebook/zstd/blob/dev/lib/zstd.h\n\nI understand that to mean that, if pg_dump *itself* were using threads,\nthen each thread would need to call ZSTD_createCStream(). pg_dump isn't\nthreaded, so there's nothing special needed, right?\n\nExcept that, under windows, pg_dump -Fd -j actually uses threads instead\nof forking. I *think* that's still safe, since the pgdump threads are\ncreated *before* calling zstd functions (see _PrintTocData and\n_StartData of the custom and directory formats), so it happens naturally\nthat there's a separate zstd stream for each thread of pgdump.\n\n-- \nJustin",
"msg_date": "Sun, 5 Mar 2023 11:47:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 8:57 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> pryzbyj=# CREATE TABLE t1 AS SELECT i,array_agg(j) FROM generate_series(1,444)i,generate_series(1,99999)j GROUP BY 1;\n> $ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fc -Z zstd:long=1 |wc -c\n> 82023\n> $ ./src/bin/pg_dump/pg_dump -d pryzbyj -Fc -Z zstd:long=0 |wc -c\n> 1048267\n\nNice!\n\nI did some smoke testing against zstd's GitHub release on Windows. To\nbuild against it, I had to construct an import library, and put that\nand the DLL into the `lib` folder expected by the MSVC scripts...\nwhich makes me wonder if I've chosen a harder way than necessary?\n\nParallel zstd dumps seem to work as expected, in that the resulting\npg_restore output is identical to uncompressed dumps and nothing\nexplodes. I haven't inspected the threading implementation for safety\nyet, as you mentioned. And I still wasn't able to test :workers, since\nit looks like the official libzstd for Windows isn't built for\nmultithreading. That'll be another day's project.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 10:59:23 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "Hi,\n\nThis'll need another rebase over the meson ICU changes.\n\nOn Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com>\nwrote:\n> I did some smoke testing against zstd's GitHub release on Windows. To\n> build against it, I had to construct an import library, and put that\n> and the DLL into the `lib` folder expected by the MSVC scripts...\n> which makes me wonder if I've chosen a harder way than necessary?\n\nA meson wrap made this much easier! It looks like pg_dump's meson.build\nis missing dependencies on zstd (meson couldn't find the headers in the\nsubproject without them).\n\n> Parallel zstd dumps seem to work as expected, in that the resulting\n> pg_restore output is identical to uncompressed dumps and nothing\n> explodes. I haven't inspected the threading implementation for safety\n> yet, as you mentioned.\n\nHm. Best I can tell, the CloneArchive() machinery is supposed to be\nhandling safety for this, by isolating each thread's state. I don't feel\ncomfortable pronouncing this new addition safe or not, because I'm not\nsure I understand what the comments in the format-specific _Clone()\ncallbacks are saying yet.\n\n> And I still wasn't able to test :workers, since\n> it looks like the official libzstd for Windows isn't built for\n> multithreading. That'll be another day's project.\n\nThe wrapped installation enabled threading too, so I was able to try\nwith :workers=8. Everything seems to work, but I didn't have a dataset\nthat showed speed improvements at the time. 
It did seem to affect the\noverall compressibility negatively -- which makes sense, I think,\nassuming each thread is looking at a separate and/or smaller window.\n\nOn to code (not a complete review):\n\n> if (hasSuffix(fname, \".gz\"))\n> compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> else\n> {\n> bool exists;\n> \n> exists = (stat(path, &st) == 0);\n> /* avoid unused warning if it is not built with compression */\n> if (exists)\n> compression_spec.algorithm = PG_COMPRESSION_NONE;\n> -#ifdef HAVE_LIBZ\n> - if (!exists)\n> - {\n> - free_keep_errno(fname);\n> - fname = psprintf(\"%s.gz\", path);\n> - exists = (stat(fname, &st) == 0);\n> -\n> - if (exists)\n> - compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> - }\n> -#endif\n> -#ifdef USE_LZ4\n> - if (!exists)\n> - {\n> - free_keep_errno(fname);\n> - fname = psprintf(\"%s.lz4\", path);\n> - exists = (stat(fname, &st) == 0);\n> -\n> - if (exists)\n> - compression_spec.algorithm = PG_COMPRESSION_LZ4;\n> - }\n> -#endif\n> + else if (check_compressed_file(path, &fname, \"gz\"))\n> + compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> + else if (check_compressed_file(path, &fname, \"lz4\"))\n> + compression_spec.algorithm = PG_COMPRESSION_LZ4;\n> + else if (check_compressed_file(path, &fname, \"zst\"))\n> + compression_spec.algorithm = PG_COMPRESSION_ZSTD;\n> }\n\nThis function lost some coherence, I think. Should there be a hasSuffix\ncheck at the top for \".zstd\" (and, for that matter, \".lz4\")? And the\ncomment references an unused warning, which is only possible with the\n#ifdef blocks that were removed.\n\nI'm a little suspicious of the replacement of supports_compression()\nwith parse_compress_specification(). 
For example:\n\n> - errmsg = supports_compression(AH->compression_spec);\n> - if (errmsg)\n> + parse_compress_specification(AH->compression_spec.algorithm,\n> + NULL, &compress_spec);\n> + if (compress_spec.parse_error != NULL)\n> {\n> pg_log_warning(\"archive is compressed, but this installation does not support compression (%s\n> - errmsg);\n> - pg_free(errmsg);\n> + compress_spec.parse_error);\n> + pg_free(compress_spec.parse_error);\n> }\n\nThe top-level error here is \"does not support compression\", but wouldn't\na bad specification option with a supported compression method trip this\npath too?\n\n> +static void\n> +ZSTD_CCtx_setParam_or_die(ZSTD_CStream *cstream,\n> + ZSTD_cParameter param, int value, char *paramname)\n\nIMO we should avoid stepping on the ZSTD_ namespace with our own\ninternal function names.\n\n> + if (cs->readF != NULL)\n> + {\n> + zstdcs->dstream = ZSTD_createDStream();\n> + if (zstdcs->dstream == NULL)\n> + pg_fatal(\"could not initialize compression library\");\n> +\n> + zstdcs->input.size = ZSTD_DStreamInSize();\n> + zstdcs->input.src = pg_malloc(zstdcs->input.size);\n> +\n> + zstdcs->output.size = ZSTD_DStreamOutSize();\n> + zstdcs->output.dst = pg_malloc(zstdcs->output.size + 1);\n> + }\n> +\n> + if (cs->writeF != NULL)\n> + {\n> + zstdcs->cstream = ZstdCStreamParams(cs->compression_spec);\n> +\n> + zstdcs->output.size = ZSTD_CStreamOutSize();\n> + zstdcs->output.dst = pg_malloc(zstdcs->output.size);\n> + zstdcs->output.pos = 0;\n> + }\n\nThis seems to suggest that both cs->readF and cs->writeF could be set,\nbut in that case, the output buffer gets reallocated.\n\nI was curious about the extra byte allocated in the decompression case.\nI see that ReadDataFromArchiveZstd() is null-terminating the buffer\nbefore handing it to ahwrite(), but why does it need to do that?\n\n> +static const char *\n> +Zstd_get_error(CompressFileHandle *CFH)\n> +{\n> + return strerror(errno);\n> +}\n\nSeems like this should be using the zstderror stored 
in the handle?\n\nIn ReadDataFromArchiveZstd():\n\n> + size_t input_allocated_size = ZSTD_DStreamInSize();\n> + size_t res;\n> +\n> + for (;;)\n> + {\n> + size_t cnt;\n> +\n> + /*\n> + * Read compressed data. Note that readF can resize the buffer; the\n> + * new size is tracked and used for future loops.\n> + */\n> + input->size = input_allocated_size;\n> + cnt = cs->readF(AH, (char **) unconstify(void **, &input->src), &input->size);\n> + input_allocated_size = input->size;\n> + input->size = cnt;\nThis is pretty complex for what it's doing. I'm a little worried that we\nlet the reallocated buffer escape to the caller while losing track of\nhow big it is. I think that works today, since there's only ever one\ncall per handle, but any future refactoring that allowed cs->readData()\nto be called more than once would subtly break this code.\n\nIn ZstdWriteCommon():\n\n> + /*\n> + * Extra paranoia: avoid zero-length chunks, since a zero length chunk\n> + * is the EOF marker in the custom format. This should never happen\n> + * but...\n> + */\n> + if (output->pos > 0)\n> + cs->writeF(AH, output->dst, output->pos);\n> +\n> + output->pos = 0;\n\nElsewhere, output->pos is set to zero before compressing, but here we do\nit after, which I think leads to subtle differences in the function\npreconditions. If that's an intentional difference, can the reason be\ncalled out in a comment?\n\n--Jacob\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:48:13 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > I did some smoke testing against zstd's GitHub release on Windows. To\n> > build against it, I had to construct an import library, and put that\n> > and the DLL into the `lib` folder expected by the MSVC scripts...\n> > which makes me wonder if I've chosen a harder way than necessary?\n> \n> It looks like pg_dump's meson.build is missing dependencies on zstd\n> (meson couldn't find the headers in the subproject without them).\n\nI saw that this was added for LZ4, but I hadn't added it for zstd since\nI didn't run into an issue without it. Could you check that what I've\nadded works for your case ?\n\n> > Parallel zstd dumps seem to work as expected, in that the resulting\n> > pg_restore output is identical to uncompressed dumps and nothing\n> > explodes. I haven't inspected the threading implementation for safety\n> > yet, as you mentioned.\n> \n> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> handling safety for this, by isolating each thread's state. I don't feel\n> comfortable pronouncing this new addition safe or not, because I'm not\n> sure I understand what the comments in the format-specific _Clone()\n> callbacks are saying yet.\n\nMy line of reasoning for unix is that pg_dump forks before any calls to\nzstd. Nothing zstd does ought to affect the pg_dump layer. But that\ndoesn't apply to pg_dump under windows. This is an opened question. 
If\nthere's no solid answer, I could disable/ignore the option (maybe only\nunder windows).\n\n> On to code (not a complete review):\n> \n> > if (hasSuffix(fname, \".gz\"))\n> > compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> > else\n> > {\n> > bool exists;\n> > \n> > exists = (stat(path, &st) == 0);\n> > /* avoid unused warning if it is not built with compression */\n> > if (exists)\n> > compression_spec.algorithm = PG_COMPRESSION_NONE;\n> > -#ifdef HAVE_LIBZ\n> > - if (!exists)\n> > - {\n> > - free_keep_errno(fname);\n> > - fname = psprintf(\"%s.gz\", path);\n> > - exists = (stat(fname, &st) == 0);\n> > -\n> > - if (exists)\n> > - compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> > - }\n> > -#endif\n> > -#ifdef USE_LZ4\n> > - if (!exists)\n> > - {\n> > - free_keep_errno(fname);\n> > - fname = psprintf(\"%s.lz4\", path);\n> > - exists = (stat(fname, &st) == 0);\n> > -\n> > - if (exists)\n> > - compression_spec.algorithm = PG_COMPRESSION_LZ4;\n> > - }\n> > -#endif\n> > + else if (check_compressed_file(path, &fname, \"gz\"))\n> > + compression_spec.algorithm = PG_COMPRESSION_GZIP;\n> > + else if (check_compressed_file(path, &fname, \"lz4\"))\n> > + compression_spec.algorithm = PG_COMPRESSION_LZ4;\n> > + else if (check_compressed_file(path, &fname, \"zst\"))\n> > + compression_spec.algorithm = PG_COMPRESSION_ZSTD;\n> > }\n> \n> This function lost some coherence, I think. Should there be a hasSuffix\n> check at the top for \".zstd\" (and, for that matter, \".lz4\")?\n\nThe function is first checking if it was passed a filename which already\nhas a suffix. And if not, it searches through a list of suffixes,\ntesting for an existing file with each suffix. The search with stat()\ndoesn't happen if it has a suffix. I'm having trouble seeing how the\nhasSuffix() branch isn't dead code. Another opened question.\n\n> I'm a little suspicious of the replacement of supports_compression()\n> with parse_compress_specification(). 
For example:\n> \n> > - errmsg = supports_compression(AH->compression_spec);\n> > - if (errmsg)\n> > + parse_compress_specification(AH->compression_spec.algorithm,\n> > + NULL, &compress_spec);\n> > + if (compress_spec.parse_error != NULL)\n> > {\n> > pg_log_warning(\"archive is compressed, but this installation does not support compression (%s\n> > - errmsg);\n> > - pg_free(errmsg);\n> > + compress_spec.parse_error);\n> > + pg_free(compress_spec.parse_error);\n> > }\n> \n> The top-level error here is \"does not support compression\", but wouldn't\n> a bad specification option with a supported compression method trip this\n> path too?\n\nNo - since the 2nd argument is passed as NULL, it just checks whether\nthe compression is supported. Maybe there ought to be a more\ndirect/clean way to do it. But up to now evidently nobody needed to do\nthat.\n\n> > +static void\n> > +ZSTD_CCtx_setParam_or_die(ZSTD_CStream *cstream,\n> > + ZSTD_cParameter param, int value, char *paramname)\n> \n> IMO we should avoid stepping on the ZSTD_ namespace with our own\n> internal function names.\n\ndone\n\n> > + if (cs->readF != NULL)\n> > +\n> > + if (cs->writeF != NULL)\n> \n> This seems to suggest that both cs->readF and cs->writeF could be set,\n> but in that case, the output buffer gets reallocated.\n\nI put back an assertion that exactly one of them was set, since that's\ntrue of how it currently works.\n\n> I was curious about the extra byte allocated in the decompression case.\n> I see that ReadDataFromArchiveZstd() is null-terminating the buffer\n> before handing it to ahwrite(), but why does it need to do that?\n\nI was trying to figure that out, too. I think the unterminated case\nmight be for ExecuteSqlCommandBuf(), and that may only (have) been\nneeded to allow pg_restore to handle ancient/development versions of\npg_dump... 
It's not currently hit.\nhttps://coverage.postgresql.org/src/bin/pg_dump/pg_backup_db.c.gcov.html#470\n\nI found that the terminator for the uncompressed case was added at\ne8f69be05 and removed in bf9aa490d.\n\n> > +Zstd_get_error(CompressFileHandle *CFH)\n> \n> Seems like this should be using the zstderror stored in the handle?\n\nYes - I'd already addressed that locally.\n\n> In ReadDataFromArchiveZstd():\n> \n> > + * Read compressed data. Note that readF can resize the buffer; the\n> > + * new size is tracked and used for future loops.\n> This is pretty complex for what it's doing. I'm a little worried that we\n> let the reallocated buffer escape to the caller while losing track of\n> how big it is. I think that works today, since there's only ever one\n> call per handle, but any future refactoring that allowed cs->readData()\n> to be called more than once would subtly break this code.\n\nNote that nothing bad happens if we lose track of how big it is (well,\nassuming that readF doesn't *shrink* the buffer).\n\nThe previous patch version didn't keep track of its new size, and the only\nconsequence is that readF() might re-resize it again on a future iteration,\neven if it was already sufficiently large.\n\nWhen I originally wrote it (and up until that patch version), I left\nthis as an XXX comment about reusing the resized buffer. But it seemed\neasy enough to fix so I did.\n\n> In ZstdWriteCommon():\n> \n> Elsewhere, output->pos is set to zero before compressing, but here we do\n> it after, which I think leads to subtle differences in the function\n> preconditions. If that's an intentional difference, can the reason be\n> called out in a comment?\n\nIt's not deliberate. I think it had no effect, but changed - thanks.\n\n-- \nJustin",
"msg_date": "Wed, 15 Mar 2023 23:50:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 3/16/23 05:50, Justin Pryzby wrote:\n> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n>>> I did some smoke testing against zstd's GitHub release on Windows. To\n>>> build against it, I had to construct an import library, and put that\n>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n>>> which makes me wonder if I've chosen a harder way than necessary?\n>>\n>> It looks like pg_dump's meson.build is missing dependencies on zstd\n>> (meson couldn't find the headers in the subproject without them).\n> \n> I saw that this was added for LZ4, but I hadn't added it for zstd since\n> I didn't run into an issue without it. Could you check that what I've\n> added works for your case ?\n> \n>>> Parallel zstd dumps seem to work as expected, in that the resulting\n>>> pg_restore output is identical to uncompressed dumps and nothing\n>>> explodes. I haven't inspected the threading implementation for safety\n>>> yet, as you mentioned.\n>>\n>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n>> handling safety for this, by isolating each thread's state. I don't feel\n>> comfortable pronouncing this new addition safe or not, because I'm not\n>> sure I understand what the comments in the format-specific _Clone()\n>> callbacks are saying yet.\n> \n> My line of reasoning for unix is that pg_dump forks before any calls to\n> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n> doesn't apply to pg_dump under windows. This is an opened question. If\n> there's no solid answer, I could disable/ignore the option (maybe only\n> under windows).\n> \n\nI may be missing something, but why would the patch affect this? Why\nwould it even affect safety of the parallel dump? 
And I don't see any\nchanges to the clone stuff ...\n\n>> On to code (not a complete review):\n>>\n>>> if (hasSuffix(fname, \".gz\"))\n>>> compression_spec.algorithm = PG_COMPRESSION_GZIP;\n>>> else\n>>> {\n>>> bool exists;\n>>>\n>>> exists = (stat(path, &st) == 0);\n>>> /* avoid unused warning if it is not built with compression */\n>>> if (exists)\n>>> compression_spec.algorithm = PG_COMPRESSION_NONE;\n>>> -#ifdef HAVE_LIBZ\n>>> - if (!exists)\n>>> - {\n>>> - free_keep_errno(fname);\n>>> - fname = psprintf(\"%s.gz\", path);\n>>> - exists = (stat(fname, &st) == 0);\n>>> -\n>>> - if (exists)\n>>> - compression_spec.algorithm = PG_COMPRESSION_GZIP;\n>>> - }\n>>> -#endif\n>>> -#ifdef USE_LZ4\n>>> - if (!exists)\n>>> - {\n>>> - free_keep_errno(fname);\n>>> - fname = psprintf(\"%s.lz4\", path);\n>>> - exists = (stat(fname, &st) == 0);\n>>> -\n>>> - if (exists)\n>>> - compression_spec.algorithm = PG_COMPRESSION_LZ4;\n>>> - }\n>>> -#endif\n>>> + else if (check_compressed_file(path, &fname, \"gz\"))\n>>> + compression_spec.algorithm = PG_COMPRESSION_GZIP;\n>>> + else if (check_compressed_file(path, &fname, \"lz4\"))\n>>> + compression_spec.algorithm = PG_COMPRESSION_LZ4;\n>>> + else if (check_compressed_file(path, &fname, \"zst\"))\n>>> + compression_spec.algorithm = PG_COMPRESSION_ZSTD;\n>>> }\n>>\n>> This function lost some coherence, I think. Should there be a hasSuffix\n>> check at the top for \".zstd\" (and, for that matter, \".lz4\")?\n> \n\nThis was discussed in the lz4 thread a couple days, and I think there\nshould be hasSuffix() cases for lz4/zstd too, not just for .gz.\n\n> The function is first checking if it was passed a filename which already\n> has a suffix. And if not, it searches through a list of suffixes,\n> testing for an existing file with each suffix. The search with stat()\n> doesn't happen if it has a suffix. I'm having trouble seeing how the\n> hasSuffix() branch isn't dead code. 
Another opened question.\n> \n\nAFAICS it's done this way because of this comment in pg_backup_directory\n\n * ...\n * \".gz\" suffix is added to the filenames. The TOC files are never\n * compressed by pg_dump, however they are accepted with the .gz suffix\n * too, in case the user has manually compressed them with 'gzip'.\n\nI haven't tried, but I believe that if you manually compress the\ndirectory, it may hit this branch. And IMO if we support that for gzip,\nthe other compression methods should do that too for consistency.\n\nIn any case, it's a tiny amount of code and I don't feel like ripping\nthat out when it might break some currently supported use case.\n\n>> I'm a little suspicious of the replacement of supports_compression()\n>> with parse_compress_specification(). For example:\n>>\n>>> - errmsg = supports_compression(AH->compression_spec);\n>>> - if (errmsg)\n>>> + parse_compress_specification(AH->compression_spec.algorithm,\n>>> + NULL, &compress_spec);\n>>> + if (compress_spec.parse_error != NULL)\n>>> {\n>>> pg_log_warning(\"archive is compressed, but this installation does not support compression (%s\n>>> - errmsg);\n>>> - pg_free(errmsg);\n>>> + compress_spec.parse_error);\n>>> + pg_free(compress_spec.parse_error);\n>>> }\n>>\n>> The top-level error here is \"does not support compression\", but wouldn't\n>> a bad specification option with a supported compression method trip this\n>> path too?\n> \n> No - since the 2nd argument is passed as NULL, it just checks whether\n> the compression is supported. Maybe there ought to be a more\n> direct/clean way to do it. But up to now evidently nobody needed to do\n> that.\n> \n\nI don't think the patch can use parse_compress_specification() to\nreplace supports_compression(). The parsing simply determines if the\nbuild has the library, it doesn't say if a particular tool was modified\nto support the algorithm. 
I might build --with-zstd and yet pg_dump does\nnot support that algorithm yet.\n\nEven after we add zstd to pg_dump, it's quite likely other compression\nalgorithms may not be supported by pg_dump from day 1.\n\n\nI haven't looked at / tested the patch yet, but I wonder if you have any\nthoughts regarding the size_t / int tweaks. I don't know what types zstd\nlibrary uses, how it reports errors etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Mar 2023 03:43:31 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "Hi,\n\nOn 3/17/23 03:43, Tomas Vondra wrote:\n> \n> ...\n>\n>>> I'm a little suspicious of the replacement of supports_compression()\n>>> with parse_compress_specification(). For example:\n>>>\n>>>> - errmsg = supports_compression(AH->compression_spec);\n>>>> - if (errmsg)\n>>>> + parse_compress_specification(AH->compression_spec.algorithm,\n>>>> + NULL, &compress_spec);\n>>>> + if (compress_spec.parse_error != NULL)\n>>>> {\n>>>> pg_log_warning(\"archive is compressed, but this installation does not support compression (%s\n>>>> - errmsg);\n>>>> - pg_free(errmsg);\n>>>> + compress_spec.parse_error);\n>>>> + pg_free(compress_spec.parse_error);\n>>>> }\n>>>\n>>> The top-level error here is \"does not support compression\", but wouldn't\n>>> a bad specification option with a supported compression method trip this\n>>> path too?\n>>\n>> No - since the 2nd argument is passed as NULL, it just checks whether\n>> the compression is supported. Maybe there ought to be a more\n>> direct/clean way to do it. But up to now evidently nobody needed to do\n>> that.\n>>\n> \n> I don't think the patch can use parse_compress_specification() instead\n> of replace supports_compression(). The parsing simply determines if the\n> build has the library, it doesn't say if a particular tool was modified\n> to support the algorithm. I might build --with-zstd and yet pg_dump does\n> not support that algorithm yet.\n> \n> Even after we add zstd to pg_dump, it's quite likely other compression\n> algorithms may not be supported by pg_dump from day 1.\n> \n> \n> I haven't looked at / tested the patch yet, but I wonder if you have any\n> thoughts regarding the size_t / int tweaks. I don't know what types zstd\n> library uses, how it reports errors etc.\n> \n\nAny thoughts regarding my comments on removing supports_compression()?\n\nAlso, this patch needs a rebase to adapt it to the API changes from last\nweek. 
The sooner the better, considering we're getting fairly close to\nthe end of the CF and code freeze.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Mar 2023 18:20:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> On 3/16/23 05:50, Justin Pryzby wrote:\n> > On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >>> I did some smoke testing against zstd's GitHub release on Windows. To\n> >>> build against it, I had to construct an import library, and put that\n> >>> and the DLL into the `lib` folder expected by the MSVC scripts...\n> >>> which makes me wonder if I've chosen a harder way than necessary?\n> >>\n> >> It looks like pg_dump's meson.build is missing dependencies on zstd\n> >> (meson couldn't find the headers in the subproject without them).\n> > \n> > I saw that this was added for LZ4, but I hadn't added it for zstd since\n> > I didn't run into an issue without it. Could you check that what I've\n> > added works for your case ?\n> > \n> >>> Parallel zstd dumps seem to work as expected, in that the resulting\n> >>> pg_restore output is identical to uncompressed dumps and nothing\n> >>> explodes. I haven't inspected the threading implementation for safety\n> >>> yet, as you mentioned.\n> >>\n> >> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> >> handling safety for this, by isolating each thread's state. I don't feel\n> >> comfortable pronouncing this new addition safe or not, because I'm not\n> >> sure I understand what the comments in the format-specific _Clone()\n> >> callbacks are saying yet.\n> > \n> > My line of reasoning for unix is that pg_dump forks before any calls to\n> > zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n> > doesn't apply to pg_dump under windows. This is an opened question. If\n> > there's no solid answer, I could disable/ignore the option (maybe only\n> > under windows).\n> \n> I may be missing something, but why would the patch affect this? Why\n> would it even affect safety of the parallel dump? 
And I don't see any\n> changes to the clone stuff ...\n\nzstd supports using threads during compression, with -Z zstd:workers=N.\nWhen unix forks, the child processes can't do anything to mess up the\nstate of the parent processes. \n\nBut windows pg_dump uses threads instead of forking, so it seems\npossible that the pg_dump -j threads that then spawn zstd threads could\n\"leak threads\" and break the main thread. I suspect there's no issue,\nbut we still ought to verify that before declaring it safe.\n\n> > The function is first checking if it was passed a filename which already\n> > has a suffix. And if not, it searches through a list of suffixes,\n> > testing for an existing file with each suffix. The search with stat()\n> > doesn't happen if it has a suffix. I'm having trouble seeing how the\n> > hasSuffix() branch isn't dead code. Another opened question.\n> \n> AFAICS it's done this way because of this comment in pg_backup_directory\n> \n> * ...\n> * \".gz\" suffix is added to the filenames. The TOC files are never\n> * compressed by pg_dump, however they are accepted with the .gz suffix\n> * too, in case the user has manually compressed them with 'gzip'.\n> \n> I haven't tried, but I believe that if you manually compress the\n> directory, it may hit this branch.\n\nThat would make sense, but when I tried, it didn't work like that.\nThe filenames were all uncompressed names. Maybe it worked differently\nin an older release. Or maybe it changed during development of the\nparallel-directory-dump patch and it's actually dead code.\n\nThis is rebased over the updated compression API.\n\nIt seems like I misunderstood something you said before, so now I put\nback \"supports_compression()\".\n\n-- \nJustin",
"msg_date": "Mon, 27 Mar 2023 12:28:42 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 3/27/23 19:28, Justin Pryzby wrote:\n> On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n>> On 3/16/23 05:50, Justin Pryzby wrote:\n>>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n>>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n>>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n>>>>> build against it, I had to construct an import library, and put that\n>>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n>>>>> which makes me wonder if I've chosen a harder way than necessary?\n>>>>\n>>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n>>>> (meson couldn't find the headers in the subproject without them).\n>>>\n>>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n>>> I didn't run into an issue without it. Could you check that what I've\n>>> added works for your case ?\n>>>\n>>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n>>>>> pg_restore output is identical to uncompressed dumps and nothing\n>>>>> explodes. I haven't inspected the threading implementation for safety\n>>>>> yet, as you mentioned.\n>>>>\n>>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n>>>> handling safety for this, by isolating each thread's state. I don't feel\n>>>> comfortable pronouncing this new addition safe or not, because I'm not\n>>>> sure I understand what the comments in the format-specific _Clone()\n>>>> callbacks are saying yet.\n>>>\n>>> My line of reasoning for unix is that pg_dump forks before any calls to\n>>> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n>>> doesn't apply to pg_dump under windows. This is an opened question. If\n>>> there's no solid answer, I could disable/ignore the option (maybe only\n>>> under windows).\n>>\n>> I may be missing something, but why would the patch affect this? 
Why\n>> would it even affect safety of the parallel dump? And I don't see any\n>> changes to the clone stuff ...\n> \n> zstd supports using threads during compression, with -Z zstd:workers=N.\n> When unix forks, the child processes can't do anything to mess up the\n> state of the parent processes. \n> \n> But windows pg_dump uses threads instead of forking, so it seems\n> possible that the pg_dump -j threads that then spawn zstd threads could\n> \"leak threads\" and break the main thread. I suspect there's no issue,\n> but we still ought to verify that before declaring it safe.\n> \n\nOK. I don't have access to a Windows machine so I can't test that. Is it\npossible to disable the zstd threading, until we figure this out?\n\n>>> The function is first checking if it was passed a filename which already\n>>> has a suffix. And if not, it searches through a list of suffixes,\n>>> testing for an existing file with each suffix. The search with stat()\n>>> doesn't happen if it has a suffix. I'm having trouble seeing how the\n>>> hasSuffix() branch isn't dead code. Another opened question.\n>>\n>> AFAICS it's done this way because of this comment in pg_backup_directory\n>>\n>> * ...\n>> * \".gz\" suffix is added to the filenames. The TOC files are never\n>> * compressed by pg_dump, however they are accepted with the .gz suffix\n>> * too, in case the user has manually compressed them with 'gzip'.\n>>\n>> I haven't tried, but I believe that if you manually compress the\n>> directory, it may hit this branch.\n> \n> That would make sense, but when I tried, it didn't work like that.\n> The filenames were all uncompressed names. Maybe it worked differently\n> in an older release. Or maybe it changed during development of the\n> parallel-directory-dump patch and it's actually dead code.\n> \n\nInteresting. Would be good to find out. I wonder if a little bit of\ngit-log digging could tell us more. 
Anyway, until we confirm it's dead\ncode, we should probably do what .gz does and have the same check for\n.lz4 and .zst files.\n\n> This is rebased over the updated compression API.\n> \n> It seems like I misunderstood something you said before, so now I put\n> back \"supports_compression()\".\n> \n\nThanks! I need to do a bit more testing and review, but it seems pretty\nmuch RFC to me, assuming we can figure out what to do about threading.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Mar 2023 18:23:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 12:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 3/27/23 19:28, Justin Pryzby wrote:\n> > On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> >> On 3/16/23 05:50, Justin Pryzby wrote:\n> >>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <\n> jchampion@timescale.com> wrote:\n> >>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n> ...\n> OK. I don't have access to a Windows machine so I can't test that. Is it\n> possible to disable the zstd threading, until we figure this out?\n>\n> Thomas since I appear to be one of the few windows users (I use both), can\nI help?\nI can test pg_dump... for you, easy to do. I do about 5-10 pg_dumps a day\non windows while developing.\n\nAlso, I have an AWS instance I created to build PG/Win with readline back\nin November.\nI could give you access to that... (you are not the only person who has\nmade this statement here).\nI've made such instances available for other Open Source developers, to\nsupport them.\n\n Obvi I would share connection credentials privately.\n\nRegards, Kirk",
"msg_date": "Tue, 28 Mar 2023 14:03:49 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 3/28/23 20:03, Kirk Wolak wrote:\n> On Tue, Mar 28, 2023 at 12:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> On 3/27/23 19:28, Justin Pryzby wrote:\n> > On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> >> On 3/16/23 05:50, Justin Pryzby wrote:\n> >>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion\n> <jchampion@timescale.com <mailto:jchampion@timescale.com>> wrote:\n> >>>>> I did some smoke testing against zstd's GitHub release on\n> Windows. To\n> ...\n> OK. I don't have access to a Windows machine so I can't test that. Is it\n> possible to disable the zstd threading, until we figure this out?\n> \n> Thomas since I appear to be one of the few windows users (I use both),\n> can I help?\n> I can test pg_dump... for you, easy to do. I do about 5-10 pg_dumps a\n> day on windows while developing.\n> \n\nPerhaps. But I'll leave the details up to Justin - it's his patch, and\nI'm not sure how to verify the threading is OK.\n\nI'd try applying this patch, build with --with-zstd and then run the\npg_dump TAP tests, and perhaps do some manual tests.\n\nAnd perhaps do the same for --with-lz4 - there's a thread [1] suggesting\nwe don't detect lz4 stuff on Windows, so the TAP tests do nothing.\n\nhttps://www.postgresql.org/message-id/ZAjL96N9ZW84U59p@msg.df7cb.de\n\n> Also, I have an AWS instance I created to build PG/Win with readline\n> back in November.\n> I could give you access to that... (you are not the only person who has\n> made this statement here).\n> I've made such instances available for other Open Source developers, to\n> support them.\n> \n> Obvi I would share connection credentials privately.\n> \n\nI'd rather leave the Windows stuff up to someone with more experience\nwith that platform. 
I have plenty of other stuff on my plate atm.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Mar 2023 20:45:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 9:50 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> > It looks like pg_dump's meson.build is missing dependencies on zstd\n> > (meson couldn't find the headers in the subproject without them).\n>\n> I saw that this was added for LZ4, but I hadn't added it for zstd since\n> I didn't run into an issue without it. Could you check that what I've\n> added works for your case ?\n\nI thought I replied to this, sorry -- your newest patchset builds\ncorrectly with subprojects, so the new dependency looks good to me.\nThanks!\n\n> > Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> > handling safety for this, by isolating each thread's state. I don't feel\n> > comfortable pronouncing this new addition safe or not, because I'm not\n> > sure I understand what the comments in the format-specific _Clone()\n> > callbacks are saying yet.\n>\n> My line of reasoning for unix is that pg_dump forks before any calls to\n> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n> doesn't apply to pg_dump under windows. This is an opened question. If\n> there's no solid answer, I could disable/ignore the option (maybe only\n> under windows).\n\nTo (maybe?) move this forward a bit, note that pg_backup_custom's\n_Clone() function makes sure that there is no active compressor state\nat the beginning of the new thread. pg_backup_directory's\nimplementation has no such provision. And I don't think it can,\nbecause the parent thread might have concurrently set one up -- see\nthe directory-specific implementation of _CloseArchive(). Perhaps we\nshould just NULL out those fields after the copy, instead?\n\nTo illustrate why I think this is tough to characterize: if I've read\nthe code correctly, the _Clone() and CloneArchive() implementations\nare running concurrently with code that is actively modifying the\nArchiveHandle and the lclContext. 
So safety is only ensured to the\nextent that we keep track of which fields threads are allowed to\ntouch, and I don't have that mental model.\n\n--Jacob\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:33:25 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 02:03:49PM -0400, Kirk Wolak wrote:\n> On Tue, Mar 28, 2023 at 12:23 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > On 3/27/23 19:28, Justin Pryzby wrote:\n> > > On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> > >> On 3/16/23 05:50, Justin Pryzby wrote:\n> > >>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> > >>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> > >>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n> > ...\n> > OK. I don't have access to a Windows machine so I can't test that. Is it\n> > possible to disable the zstd threading, until we figure this out?\n>\n> Thomas since I appear to be one of the few windows users (I use both), can I help?\n> I can test pg_dump... for you, easy to do. I do about 5-10 pg_dumps a day\n> on windows while developing.\n\nIt'd be great if you'd exercise this and other changes to\npg_dump/restore. Tomas just pushed a bugfix, so be sure to \"git pull\"\nbefore testing, or else you might rediscover the bug.\n\nIf you have a zstd library with thread support, you could test with\n-Z zstd:workers=3. But I think threads aren't enabled in the common\nlibzstd packages. Jacob figured out how to compile libzstd easily using\n\"meson wraps\" - but I don't know the details.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 29 Mar 2023 08:35:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 6:35 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> If you have a zstd library with thread support, you could test with\n> -Z zstd:workers=3. But I think threads aren't enabled in the common\n> libzstd packages. Jacob figured out how to compile libzstd easily using\n> \"meson wraps\" - but I don't know the details.\n\nFrom the source root,\n\n $ mkdir subprojects\n $ meson wrap install zstd\n\nFrom then on, Meson was pretty automagical about it during the ninja\nbuild. The subproject's settings are themselves inspectable and\nsettable via `meson configure`:\n\n $ meson configure -Dzstd:<option>=<value>\n\n--Jacob\n\n\n",
"msg_date": "Wed, 29 Mar 2023 08:10:18 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 06:23:26PM +0200, Tomas Vondra wrote:\n> On 3/27/23 19:28, Justin Pryzby wrote:\n> > On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> >> On 3/16/23 05:50, Justin Pryzby wrote:\n> >>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n> >>>>> build against it, I had to construct an import library, and put that\n> >>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n> >>>>> which makes me wonder if I've chosen a harder way than necessary?\n> >>>>\n> >>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n> >>>> (meson couldn't find the headers in the subproject without them).\n> >>>\n> >>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n> >>> I didn't run into an issue without it. Could you check that what I've\n> >>> added works for your case ?\n> >>>\n> >>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n> >>>>> pg_restore output is identical to uncompressed dumps and nothing\n> >>>>> explodes. I haven't inspected the threading implementation for safety\n> >>>>> yet, as you mentioned.\n> >>>>\n> >>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> >>>> handling safety for this, by isolating each thread's state. I don't feel\n> >>>> comfortable pronouncing this new addition safe or not, because I'm not\n> >>>> sure I understand what the comments in the format-specific _Clone()\n> >>>> callbacks are saying yet.\n> >>>\n> >>> My line of reasoning for unix is that pg_dump forks before any calls to\n> >>> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n> >>> doesn't apply to pg_dump under windows. This is an opened question. 
If\n> >>> there's no solid answer, I could disable/ignore the option (maybe only\n> >>> under windows).\n> >>\n> >> I may be missing something, but why would the patch affect this? Why\n> >> would it even affect safety of the parallel dump? And I don't see any\n> >> changes to the clone stuff ...\n> > \n> > zstd supports using threads during compression, with -Z zstd:workers=N.\n> > When unix forks, the child processes can't do anything to mess up the\n> > state of the parent processes. \n> > \n> > But windows pg_dump uses threads instead of forking, so it seems\n> > possible that the pg_dump -j threads that then spawn zstd threads could\n> > \"leak threads\" and break the main thread. I suspect there's no issue,\n> > but we still ought to verify that before declaring it safe.\n> \n> OK. I don't have access to a Windows machine so I can't test that. Is it\n> possible to disable the zstd threading, until we figure this out?\n\nI think that's what's best. I made it issue a warning if \"workers\" was\nspecified. It could also be an error, or just ignored.\n\nI considered disabling workers only for windows, but realized that I\nhaven't tested with threads myself - my local zstd package is compiled\nwithout threading, and I remember having some issue recompiling it with\nthreading. Jacob's recipe for using meson wraps works well, but it\nstill seems better to leave it as a future feature. I used that recipe\nto enabled zstd with threading on CI (except for linux/autoconf).\n\n> >>> The function is first checking if it was passed a filename which already\n> >>> has a suffix. And if not, it searches through a list of suffixes,\n> >>> testing for an existing file with each suffix. The search with stat()\n> >>> doesn't happen if it has a suffix. I'm having trouble seeing how the\n> >>> hasSuffix() branch isn't dead code. 
Another opened question.\n> >>\n> >> AFAICS it's done this way because of this comment in pg_backup_directory\n> >>\n> >> * ...\n> >> * \".gz\" suffix is added to the filenames. The TOC files are never\n> >> * compressed by pg_dump, however they are accepted with the .gz suffix\n> >> * too, in case the user has manually compressed them with 'gzip'.\n> >>\n> >> I haven't tried, but I believe that if you manually compress the\n> >> directory, it may hit this branch.\n> > \n> > That would make sense, but when I tried, it didn't work like that.\n> > The filenames were all uncompressed names. Maybe it worked differently\n> > in an older release. Or maybe it changed during development of the\n> > parallel-directory-dump patch and it's actually dead code.\n> \n> Interesting. Would be good to find out. I wonder if a little bit of\n> git-log digging could tell us more. Anyway, until we confirm it's dead\n> code, we should probably do what .gz does and have the same check for\n> .lz4 and .zst files.\n\nI found that hasSuffix() and cfopen() originated in the refactored patch\nHeikki's sent here; there's no history beyond that.\n\nhttps://www.postgresql.org/message-id/4D3954C7.9060503%40enterprisedb.com\n\nThe patch published there appends the .gz within cfopen(), and the\ncaller writes into the TOC the filename without \".gz\". It seems like\nmaybe a few hours prior, Heikki may have been appending the \".gz\" suffix\nin the caller, and then writing the TOC with filename.gz.\n\nThe only way I've been able to get a \"filename.gz\" passed to hasSuffix\nis to write a directory-format dump, with LOs, and without compression,\nand then compress the blobs with \"gzip\", and *also* edit the blobs.toc\nfile to say \".gz\" (which isn't necessary since, if the original file\nisn't found, the restore would search for files with compressed\nsuffixes).\n\nSo .. 
it's not *technically* unreachable, but I can't see why it'd be\nuseful to support editing the *content* of the blob TOC (other than\ncompressing it). I might give some weight to the idea if it were also\npossible to edit the non-blob TOC; but, it's a binary file, so no.\n\nFor now, I made the change to make zstd and lz4 behave the same here\nas .gz, unless Heikki has a memory or a git reflog going back far enough\nto further support the idea that the code path isn't useful.\n\nI'm going to set the patch as RFC as a hint to anyone who would want to\nmake a final review.\n\n-- \nJustin",
"msg_date": "Fri, 31 Mar 2023 18:16:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 4/1/23 01:16, Justin Pryzby wrote:\n> On Tue, Mar 28, 2023 at 06:23:26PM +0200, Tomas Vondra wrote:\n>> On 3/27/23 19:28, Justin Pryzby wrote:\n>>> On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n>>>> On 3/16/23 05:50, Justin Pryzby wrote:\n>>>>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n>>>>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n>>>>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n>>>>>>> build against it, I had to construct an import library, and put that\n>>>>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n>>>>>>> which makes me wonder if I've chosen a harder way than necessary?\n>>>>>>\n>>>>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n>>>>>> (meson couldn't find the headers in the subproject without them).\n>>>>>\n>>>>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n>>>>> I didn't run into an issue without it. Could you check that what I've\n>>>>> added works for your case ?\n>>>>>\n>>>>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n>>>>>>> pg_restore output is identical to uncompressed dumps and nothing\n>>>>>>> explodes. I haven't inspected the threading implementation for safety\n>>>>>>> yet, as you mentioned.\n>>>>>>\n>>>>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n>>>>>> handling safety for this, by isolating each thread's state. I don't feel\n>>>>>> comfortable pronouncing this new addition safe or not, because I'm not\n>>>>>> sure I understand what the comments in the format-specific _Clone()\n>>>>>> callbacks are saying yet.\n>>>>>\n>>>>> My line of reasoning for unix is that pg_dump forks before any calls to\n>>>>> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n>>>>> doesn't apply to pg_dump under windows. This is an opened question. 
If\n>>>>> there's no solid answer, I could disable/ignore the option (maybe only\n>>>>> under windows).\n>>>>\n>>>> I may be missing something, but why would the patch affect this? Why\n>>>> would it even affect safety of the parallel dump? And I don't see any\n>>>> changes to the clone stuff ...\n>>>\n>>> zstd supports using threads during compression, with -Z zstd:workers=N.\n>>> When unix forks, the child processes can't do anything to mess up the\n>>> state of the parent processes. \n>>>\n>>> But windows pg_dump uses threads instead of forking, so it seems\n>>> possible that the pg_dump -j threads that then spawn zstd threads could\n>>> \"leak threads\" and break the main thread. I suspect there's no issue,\n>>> but we still ought to verify that before declaring it safe.\n>>\n>> OK. I don't have access to a Windows machine so I can't test that. Is it\n>> possible to disable the zstd threading, until we figure this out?\n> \n> I think that's what's best. I made it issue a warning if \"workers\" was\n> specified. It could also be an error, or just ignored.\n> \n> I considered disabling workers only for windows, but realized that I\n> haven't tested with threads myself - my local zstd package is compiled\n> without threading, and I remember having some issue recompiling it with\n> threading. Jacob's recipe for using meson wraps works well, but it\n> still seems better to leave it as a future feature. I used that recipe\n> to enabled zstd with threading on CI (except for linux/autoconf).\n> \n\n+1 to disable this if we're unsure it works correctly. I agree it's\nbetter to just error out if workers are requested - I rather dislike\nwhen a tool just ignores an explicit parameter. And AFAICS it's what\nzstd does too, when someone requests workers on incompatible build.\n\nFWIW I've been thinking about this a bit more and I don't quite see why\nwould the threading cause issues (except for Windows). 
I forgot\npg_basebackup already supports zstd, including the worker threading, so\nwhy would it work there and not in pg_dump? Sure, pg_basebackup is not\nparallel, but with separate pg_dump processes that shouldn't be an issue\n(although I'm not sure when zstd creates threads).\n\nThe one thing I'm wondering about is at which point are the worker\nthreads spawned - but presumably not before the pg_dump processes fork.\n\nI'll try building zstd with threading enabled, and do some tests over\nthe weekend.\n\n>>>>> The function is first checking if it was passed a filename which already\n>>>>> has a suffix. And if not, it searches through a list of suffixes,\n>>>>> testing for an existing file with each suffix. The search with stat()\n>>>>> doesn't happen if it has a suffix. I'm having trouble seeing how the\n>>>>> hasSuffix() branch isn't dead code. Another opened question.\n>>>>\n>>>> AFAICS it's done this way because of this comment in pg_backup_directory\n>>>>\n>>>> * ...\n>>>> * \".gz\" suffix is added to the filenames. The TOC files are never\n>>>> * compressed by pg_dump, however they are accepted with the .gz suffix\n>>>> * too, in case the user has manually compressed them with 'gzip'.\n>>>>\n>>>> I haven't tried, but I believe that if you manually compress the\n>>>> directory, it may hit this branch.\n>>>\n>>> That would make sense, but when I tried, it didn't work like that.\n>>> The filenames were all uncompressed names. Maybe it worked differently\n>>> in an older release. Or maybe it changed during development of the\n>>> parallel-directory-dump patch and it's actually dead code.\n>>\n>> Interesting. Would be good to find out. I wonder if a little bit of\n>> git-log digging could tell us more. 
Anyway, until we confirm it's dead\n>> code, we should probably do what .gz does and have the same check for\n>> .lz4 and .zst files.\n> \n> I found that hasSuffix() and cfopen() originated in the refactored patch\n> Heikki's sent here; there's no history beyond that.\n> \n> https://www.postgresql.org/message-id/4D3954C7.9060503%40enterprisedb.com\n> \n> The patch published there appends the .gz within cfopen(), and the\n> caller writes into the TOC the filename without \".gz\". It seems like\n> maybe a few hours prior, Heikki may have been appending the \".gz\" suffix\n> in the caller, and then writing the TOC with filename.gz.\n> \n> The only way I've been able to get a \"filename.gz\" passed to hasSuffix\n> is to write a directory-format dump, with LOs, and without compression,\n> and then compress the blobs with \"gzip\", and *also* edit the blobs.toc\n> file to say \".gz\" (which isn't necessary since, if the original file\n> isn't found, the restore would search for files with compressed\n> suffixes).\n> \n> So .. it's not *technically* unreachable, but I can't see why it'd be\n> useful to support editing the *content* of the blob TOC (other than\n> compressing it). I might give some weight to the idea if it were also\n> possible to edit the non-blob TOC; but, it's a binary file, so no.\n> \n> For now, I made the change to make zstd and lz4 behave the same here\n> as .gz, unless Heikki has a memory or a git reflog going back far enough\n> to further support the idea that the code path isn't useful.\n> \n\nMakes sense. Let's keep the same behavior for all compression methods,\nand if it's unreachable we shall remove it from all. It's a trivial\namount of code, we can live with that.\n\n> I'm going to set the patch as RFC as a hint to anyone who would want to\n> make a final review.\n> \n\nOK.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Apr 2023 02:11:12 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Apr 01, 2023 at 02:11:12AM +0200, Tomas Vondra wrote:\n> On 4/1/23 01:16, Justin Pryzby wrote:\n> > On Tue, Mar 28, 2023 at 06:23:26PM +0200, Tomas Vondra wrote:\n> >> On 3/27/23 19:28, Justin Pryzby wrote:\n> >>> On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> >>>> On 3/16/23 05:50, Justin Pryzby wrote:\n> >>>>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >>>>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >>>>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n> >>>>>>> build against it, I had to construct an import library, and put that\n> >>>>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n> >>>>>>> which makes me wonder if I've chosen a harder way than necessary?\n> >>>>>>\n> >>>>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n> >>>>>> (meson couldn't find the headers in the subproject without them).\n> >>>>>\n> >>>>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n> >>>>> I didn't run into an issue without it. Could you check that what I've\n> >>>>> added works for your case ?\n> >>>>>\n> >>>>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n> >>>>>>> pg_restore output is identical to uncompressed dumps and nothing\n> >>>>>>> explodes. I haven't inspected the threading implementation for safety\n> >>>>>>> yet, as you mentioned.\n> >>>>>>\n> >>>>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> >>>>>> handling safety for this, by isolating each thread's state. I don't feel\n> >>>>>> comfortable pronouncing this new addition safe or not, because I'm not\n> >>>>>> sure I understand what the comments in the format-specific _Clone()\n> >>>>>> callbacks are saying yet.\n> >>>>>\n> >>>>> My line of reasoning for unix is that pg_dump forks before any calls to\n> >>>>> zstd. Nothing zstd does ought to affect the pg_dump layer. 
But that\n> >>>>> doesn't apply to pg_dump under windows. This is an opened question. If\n> >>>>> there's no solid answer, I could disable/ignore the option (maybe only\n> >>>>> under windows).\n> >>>>\n> >>>> I may be missing something, but why would the patch affect this? Why\n> >>>> would it even affect safety of the parallel dump? And I don't see any\n> >>>> changes to the clone stuff ...\n> >>>\n> >>> zstd supports using threads during compression, with -Z zstd:workers=N.\n> >>> When unix forks, the child processes can't do anything to mess up the\n> >>> state of the parent processes. \n> >>>\n> >>> But windows pg_dump uses threads instead of forking, so it seems\n> >>> possible that the pg_dump -j threads that then spawn zstd threads could\n> >>> \"leak threads\" and break the main thread. I suspect there's no issue,\n> >>> but we still ought to verify that before declaring it safe.\n> >>\n> >> OK. I don't have access to a Windows machine so I can't test that. Is it\n> >> possible to disable the zstd threading, until we figure this out?\n> > \n> > I think that's what's best. I made it issue a warning if \"workers\" was\n> > specified. It could also be an error, or just ignored.\n> > \n> > I considered disabling workers only for windows, but realized that I\n> > haven't tested with threads myself - my local zstd package is compiled\n> > without threading, and I remember having some issue recompiling it with\n> > threading. Jacob's recipe for using meson wraps works well, but it\n> > still seems better to leave it as a future feature. I used that recipe\n> > to enabled zstd with threading on CI (except for linux/autoconf).\n> \n> +1 to disable this if we're unsure it works correctly. I agree it's\n> better to just error out if workers are requested - I rather dislike\n> when a tool just ignores an explicit parameter. 
And AFAICS it's what\n> zstd does too, when someone requests workers on incompatible build.\n> \n> FWIW I've been thinking about this a bit more and I don't quite see why\n> would the threading cause issues (except for Windows). I forgot\n> pg_basebackup already supports zstd, including the worker threading, so\n> why would it work there and not in pg_dump? Sure, pg_basebackup is not\n> parallel, but with separate pg_dump processes that shouldn't be an issue\n> (although I'm not sure when zstd creates threads).\n\nThere's no concern at all except under windows (because on windows\npg_dump -j is implemented using threads rather than forking).\nEspecially since zstd:workers is already allowed in the basebackup\nbackend process.\n\n> I'll try building zstd with threading enabled, and do some tests over\n> the weekend.\n\nFeel free to wait until v17 :)\n\nI used \"meson wraps\" to get a local version with threading. Note that\nif you want to use a zstd subproject, you may have to specify -D\nzstd=enabled, or else meson may not enable the library at all.\n\nAlso, in order to introspect its settings, I had to do it like this:\n\nmkdir subprojects\nmeson wrap install zstd\nmeson subprojects download\nmkdir build.meson\nmeson setup -C build.meson --force-fallback-for=zstd\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 31 Mar 2023 19:28:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 4/1/23 02:28, Justin Pryzby wrote:\n> On Sat, Apr 01, 2023 at 02:11:12AM +0200, Tomas Vondra wrote:\n>> On 4/1/23 01:16, Justin Pryzby wrote:\n>>> On Tue, Mar 28, 2023 at 06:23:26PM +0200, Tomas Vondra wrote:\n>>>> On 3/27/23 19:28, Justin Pryzby wrote:\n>>>>> On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n>>>>>> On 3/16/23 05:50, Justin Pryzby wrote:\n>>>>>>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n>>>>>>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n>>>>>>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n>>>>>>>>> build against it, I had to construct an import library, and put that\n>>>>>>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n>>>>>>>>> which makes me wonder if I've chosen a harder way than necessary?\n>>>>>>>>\n>>>>>>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n>>>>>>>> (meson couldn't find the headers in the subproject without them).\n>>>>>>>\n>>>>>>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n>>>>>>> I didn't run into an issue without it. Could you check that what I've\n>>>>>>> added works for your case ?\n>>>>>>>\n>>>>>>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n>>>>>>>>> pg_restore output is identical to uncompressed dumps and nothing\n>>>>>>>>> explodes. I haven't inspected the threading implementation for safety\n>>>>>>>>> yet, as you mentioned.\n>>>>>>>>\n>>>>>>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n>>>>>>>> handling safety for this, by isolating each thread's state. I don't feel\n>>>>>>>> comfortable pronouncing this new addition safe or not, because I'm not\n>>>>>>>> sure I understand what the comments in the format-specific _Clone()\n>>>>>>>> callbacks are saying yet.\n>>>>>>>\n>>>>>>> My line of reasoning for unix is that pg_dump forks before any calls to\n>>>>>>> zstd. 
Nothing zstd does ought to affect the pg_dump layer. But that\n>>>>>>> doesn't apply to pg_dump under windows. This is an opened question. If\n>>>>>>> there's no solid answer, I could disable/ignore the option (maybe only\n>>>>>>> under windows).\n>>>>>>\n>>>>>> I may be missing something, but why would the patch affect this? Why\n>>>>>> would it even affect safety of the parallel dump? And I don't see any\n>>>>>> changes to the clone stuff ...\n>>>>>\n>>>>> zstd supports using threads during compression, with -Z zstd:workers=N.\n>>>>> When unix forks, the child processes can't do anything to mess up the\n>>>>> state of the parent processes. \n>>>>>\n>>>>> But windows pg_dump uses threads instead of forking, so it seems\n>>>>> possible that the pg_dump -j threads that then spawn zstd threads could\n>>>>> \"leak threads\" and break the main thread. I suspect there's no issue,\n>>>>> but we still ought to verify that before declaring it safe.\n>>>>\n>>>> OK. I don't have access to a Windows machine so I can't test that. Is it\n>>>> possible to disable the zstd threading, until we figure this out?\n>>>\n>>> I think that's what's best. I made it issue a warning if \"workers\" was\n>>> specified. It could also be an error, or just ignored.\n>>>\n>>> I considered disabling workers only for windows, but realized that I\n>>> haven't tested with threads myself - my local zstd package is compiled\n>>> without threading, and I remember having some issue recompiling it with\n>>> threading. Jacob's recipe for using meson wraps works well, but it\n>>> still seems better to leave it as a future feature. I used that recipe\n>>> to enabled zstd with threading on CI (except for linux/autoconf).\n>>\n>> +1 to disable this if we're unsure it works correctly. I agree it's\n>> better to just error out if workers are requested - I rather dislike\n>> when a tool just ignores an explicit parameter. 
And AFAICS it's what\n>> zstd does too, when someone requests workers on incompatible build.\n>>\n>> FWIW I've been thinking about this a bit more and I don't quite see why\n>> would the threading cause issues (except for Windows). I forgot\n>> pg_basebackup already supports zstd, including the worker threading, so\n>> why would it work there and not in pg_dump? Sure, pg_basebackup is not\n>> parallel, but with separate pg_dump processes that shouldn't be an issue\n>> (although I'm not sure when zstd creates threads).\n> \n> There's no concern at all except under windows (because on windows\n> pg_dump -j is implemented using threads rather than forking).\n> Especially since zstd:workers is already allowed in the basebackup\n> backend process.\n> \n\nIf there are no concerns, why disable it outside Windows? I don't have a\ngood idea how beneficial the multi-threaded compression is, so I can't\nquite judge the risk/benefits tradeoff.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Apr 2023 14:49:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Apr 01, 2023 at 02:49:44PM +0200, Tomas Vondra wrote:\n> On 4/1/23 02:28, Justin Pryzby wrote:\n> > On Sat, Apr 01, 2023 at 02:11:12AM +0200, Tomas Vondra wrote:\n> >> On 4/1/23 01:16, Justin Pryzby wrote:\n> >>> On Tue, Mar 28, 2023 at 06:23:26PM +0200, Tomas Vondra wrote:\n> >>>> On 3/27/23 19:28, Justin Pryzby wrote:\n> >>>>> On Fri, Mar 17, 2023 at 03:43:31AM +0100, Tomas Vondra wrote:\n> >>>>>> On 3/16/23 05:50, Justin Pryzby wrote:\n> >>>>>>> On Fri, Mar 10, 2023 at 12:48:13PM -0800, Jacob Champion wrote:\n> >>>>>>>> On Wed, Mar 8, 2023 at 10:59 AM Jacob Champion <jchampion@timescale.com> wrote:\n> >>>>>>>>> I did some smoke testing against zstd's GitHub release on Windows. To\n> >>>>>>>>> build against it, I had to construct an import library, and put that\n> >>>>>>>>> and the DLL into the `lib` folder expected by the MSVC scripts...\n> >>>>>>>>> which makes me wonder if I've chosen a harder way than necessary?\n> >>>>>>>>\n> >>>>>>>> It looks like pg_dump's meson.build is missing dependencies on zstd\n> >>>>>>>> (meson couldn't find the headers in the subproject without them).\n> >>>>>>>\n> >>>>>>> I saw that this was added for LZ4, but I hadn't added it for zstd since\n> >>>>>>> I didn't run into an issue without it. Could you check that what I've\n> >>>>>>> added works for your case ?\n> >>>>>>>\n> >>>>>>>>> Parallel zstd dumps seem to work as expected, in that the resulting\n> >>>>>>>>> pg_restore output is identical to uncompressed dumps and nothing\n> >>>>>>>>> explodes. I haven't inspected the threading implementation for safety\n> >>>>>>>>> yet, as you mentioned.\n> >>>>>>>>\n> >>>>>>>> Hm. Best I can tell, the CloneArchive() machinery is supposed to be\n> >>>>>>>> handling safety for this, by isolating each thread's state. 
I don't feel\n> >>>>>>>> comfortable pronouncing this new addition safe or not, because I'm not\n> >>>>>>>> sure I understand what the comments in the format-specific _Clone()\n> >>>>>>>> callbacks are saying yet.\n> >>>>>>>\n> >>>>>>> My line of reasoning for unix is that pg_dump forks before any calls to\n> >>>>>>> zstd. Nothing zstd does ought to affect the pg_dump layer. But that\n> >>>>>>> doesn't apply to pg_dump under windows. This is an opened question. If\n> >>>>>>> there's no solid answer, I could disable/ignore the option (maybe only\n> >>>>>>> under windows).\n> >>>>>>\n> >>>>>> I may be missing something, but why would the patch affect this? Why\n> >>>>>> would it even affect safety of the parallel dump? And I don't see any\n> >>>>>> changes to the clone stuff ...\n> >>>>>\n> >>>>> zstd supports using threads during compression, with -Z zstd:workers=N.\n> >>>>> When unix forks, the child processes can't do anything to mess up the\n> >>>>> state of the parent processes. \n> >>>>>\n> >>>>> But windows pg_dump uses threads instead of forking, so it seems\n> >>>>> possible that the pg_dump -j threads that then spawn zstd threads could\n> >>>>> \"leak threads\" and break the main thread. I suspect there's no issue,\n> >>>>> but we still ought to verify that before declaring it safe.\n> >>>>\n> >>>> OK. I don't have access to a Windows machine so I can't test that. Is it\n> >>>> possible to disable the zstd threading, until we figure this out?\n> >>>\n> >>> I think that's what's best. I made it issue a warning if \"workers\" was\n> >>> specified. It could also be an error, or just ignored.\n> >>>\n> >>> I considered disabling workers only for windows, but realized that I\n> >>> haven't tested with threads myself - my local zstd package is compiled\n> >>> without threading, and I remember having some issue recompiling it with\n> >>> threading. Jacob's recipe for using meson wraps works well, but it\n> >>> still seems better to leave it as a future feature. 
I used that recipe\n> >>> to enabled zstd with threading on CI (except for linux/autoconf).\n> >>\n> >> +1 to disable this if we're unsure it works correctly. I agree it's\n> >> better to just error out if workers are requested - I rather dislike\n> >> when a tool just ignores an explicit parameter. And AFAICS it's what\n> >> zstd does too, when someone requests workers on incompatible build.\n> >>\n> >> FWIW I've been thinking about this a bit more and I don't quite see why\n> >> would the threading cause issues (except for Windows). I forgot\n> >> pg_basebackup already supports zstd, including the worker threading, so\n> >> why would it work there and not in pg_dump? Sure, pg_basebackup is not\n> >> parallel, but with separate pg_dump processes that shouldn't be an issue\n> >> (although I'm not sure when zstd creates threads).\n> > \n> > There's no concern at all except under windows (because on windows\n> > pg_dump -j is implemented using threads rather than forking).\n> > Especially since zstd:workers is already allowed in the basebackup\n> > backend process.\n> \n> If there are no concerns, why disable it outside Windows? I don't have a\n> good idea how beneficial the multi-threaded compression is, so I can't\n> quite judge the risk/benefits tradeoff.\n\nBecause it's a minor/fringe feature, and it's annoying to have platform\ndifferences (would we plan on relaxing the restriction in v17, or is it\nmore likely we'd forget ?).\n\nI realized how little I've tested with zstd workers myself. And I think\non cirrusci, the macos and freebsd tasks have zstd libraries with\nthreading support, but it wasn't being exercised (because using :workers\nwould cause the patch to fail unless it's supported everywhere). 
So I\nupdated the \"for CI only\" patch to 1) use meson wraps to compile zstd\nlibrary with threading on linux and windows; and, 2) use zstd:workers=3\n\"opportunistically\" (but avoid failing if threads are not supported,\nsince the autoconf task still doesn't have access to a library with\nthread support). That's a great step, but it still seems bad that the\nthread stuff has been little exercised until now. (Also, the windows\ntask failed; I think that's due to a transient network issue).\n\nFeel free to mess around with threads (but I'd much rather see the patch\nprogress for zstd:long).\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 1 Apr 2023 08:36:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 4/1/23 15:36, Justin Pryzby wrote:\n>\n> ...\n>\n>> If there are no concerns, why disable it outside Windows? I don't have a\n>> good idea how beneficial the multi-threaded compression is, so I can't\n>> quite judge the risk/benefits tradeoff.\n> \n> Because it's a minor/fringe feature, and it's annoying to have platform\n> differences (would we plan on relaxing the restriction in v17, or is it\n> more likely we'd forget ?).\n> \n> I realized how little I've tested with zstd workers myself. And I think\n> on cirrusci, the macos and freebsd tasks have zstd libraries with\n> threading support, but it wasn't being exercised (because using :workers\n> would cause the patch to fail unless it's supported everywhere). So I\n> updated the \"for CI only\" patch to 1) use meson wraps to compile zstd\n> library with threading on linux and windows; and, 2) use zstd:workers=3\n> \"opportunistically\" (but avoid failing if threads are not supported,\n> since the autoconf task still doesn't have access to a library with\n> thread support). That's a great step, but it still seems bad that the\n> thread stuff has been little exercised until now. (Also, the windows\n> task failed; I think that's due to a transient network issue).\n> \n\nAgreed, let's leave the threading for PG17, depending on how beneficial\nit turns out to be for pg_dump.\n\n> Feel free to mess around with threads (but I'd much rather see the patch\n> progress for zstd:long).\n\nOK, understood. The long mode patch is pretty simple. IIUC it does not\nchange the format, i.e. in the worst case we could leave it for PG17\ntoo. Correct?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 1 Apr 2023 22:26:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Sat, Apr 01, 2023 at 10:26:01PM +0200, Tomas Vondra wrote:\n> > Feel free to mess around with threads (but I'd much rather see the patch\n> > progress for zstd:long).\n> \n> OK, understood. The long mode patch is pretty simple. IIUC it does not\n> change the format, i.e. in the worst case we could leave it for PG17\n> too. Correct?\n\nRight, libzstd only has one \"format\", which is the same as what's used\nby the commandline tool. zstd:long doesn't change the format of the\noutput: the library just uses a larger memory buffer to allow better\ncompression. There's no format change for zstd:workers, either.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 3 Apr 2023 14:17:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 4/3/23 21:17, Justin Pryzby wrote:\n> On Sat, Apr 01, 2023 at 10:26:01PM +0200, Tomas Vondra wrote:\n>>> Feel free to mess around with threads (but I'd much rather see the patch\n>>> progress for zstd:long).\n>>\n>> OK, understood. The long mode patch is pretty simple. IIUC it does not\n>> change the format, i.e. in the worst case we could leave it for PG17\n>> too. Correct?\n> \n> Right, libzstd only has one \"format\", which is the same as what's used\n> by the commandline tool. zstd:long doesn't change the format of the\n> output: the library just uses a larger memory buffer to allow better\n> compression. There's no format change for zstd:workers, either.\n> \n\nOK. I plan to do a bit more review/testing on this, and get it committed\nover the next day or two, likely including the long mode. One thing I\nnoticed today is that maybe long_distance should be a bool, not int.\nYes, ZSTD_c_enableLongDistanceMatching() accepts int, but it'd be\ncleaner to cast the value during a call and keep it bool otherwise.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Apr 2023 23:26:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Mon, Apr 03, 2023 at 11:26:09PM +0200, Tomas Vondra wrote:\n> On 4/3/23 21:17, Justin Pryzby wrote:\n> > On Sat, Apr 01, 2023 at 10:26:01PM +0200, Tomas Vondra wrote:\n> >>> Feel free to mess around with threads (but I'd much rather see the patch\n> >>> progress for zstd:long).\n> >>\n> >> OK, understood. The long mode patch is pretty simple. IIUC it does not\n> >> change the format, i.e. in the worst case we could leave it for PG17\n> >> too. Correct?\n> > \n> > Right, libzstd only has one \"format\", which is the same as what's used\n> > by the commandline tool. zstd:long doesn't change the format of the\n> > output: the library just uses a larger memory buffer to allow better\n> > compression. There's no format change for zstd:workers, either.\n> \n> OK. I plan to do a bit more review/testing on this, and get it committed\n> over the next day or two, likely including the long mode. One thing I\n> noticed today is that maybe long_distance should be a bool, not int.\n> Yes, ZSTD_c_enableLongDistanceMatching() accepts int, but it'd be\n> cleaner to cast the value during a call and keep it bool otherwise.\n\nThanks for noticing. Evidently I wrote it using \"int\" to get the\nfeature working, and then later wrote the bool parsing bits but never\nchanged the data structure.\n\nThis also updates a few comments, indentation, removes a useless\nassertion, and updates the warning about zstd:workers.\n\n-- \nJustin",
"msg_date": "Mon, 3 Apr 2023 22:04:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On 4/4/23 05:04, Justin Pryzby wrote:\n> On Mon, Apr 03, 2023 at 11:26:09PM +0200, Tomas Vondra wrote:\n>> On 4/3/23 21:17, Justin Pryzby wrote:\n>>> On Sat, Apr 01, 2023 at 10:26:01PM +0200, Tomas Vondra wrote:\n>>>>> Feel free to mess around with threads (but I'd much rather see the patch\n>>>>> progress for zstd:long).\n>>>>\n>>>> OK, understood. The long mode patch is pretty simple. IIUC it does not\n>>>> change the format, i.e. in the worst case we could leave it for PG17\n>>>> too. Correct?\n>>>\n>>> Right, libzstd only has one \"format\", which is the same as what's used\n>>> by the commandline tool. zstd:long doesn't change the format of the\n>>> output: the library just uses a larger memory buffer to allow better\n>>> compression. There's no format change for zstd:workers, either.\n>>\n>> OK. I plan to do a bit more review/testing on this, and get it committed\n>> over the next day or two, likely including the long mode. One thing I\n>> noticed today is that maybe long_distance should be a bool, not int.\n>> Yes, ZSTD_c_enableLongDistanceMatching() accepts int, but it'd be\n>> cleaner to cast the value during a call and keep it bool otherwise.\n> \n> Thanks for noticing. Evidently I wrote it using \"int\" to get the\n> feature working, and then later wrote the bool parsing bits but never\n> changed the data structure.\n> \n> This also updates a few comments, indentation, removes a useless\n> assertion, and updates the warning about zstd:workers.\n> \n\nThanks. I've cleaned up the 0001 a little bit (a couple comment\nimprovements), updated the commit message and pushed it. I plan to take\ncare of the 0002 (long distance mode) tomorrow, and that'll be it for\nPG16 I think.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 5 Apr 2023 21:42:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "\n\nOn 4/5/23 21:42, Tomas Vondra wrote:\n> On 4/4/23 05:04, Justin Pryzby wrote:\n>> On Mon, Apr 03, 2023 at 11:26:09PM +0200, Tomas Vondra wrote:\n>>> On 4/3/23 21:17, Justin Pryzby wrote:\n>>>> On Sat, Apr 01, 2023 at 10:26:01PM +0200, Tomas Vondra wrote:\n>>>>>> Feel free to mess around with threads (but I'd much rather see the patch\n>>>>>> progress for zstd:long).\n>>>>>\n>>>>> OK, understood. The long mode patch is pretty simple. IIUC it does not\n>>>>> change the format, i.e. in the worst case we could leave it for PG17\n>>>>> too. Correct?\n>>>>\n>>>> Right, libzstd only has one \"format\", which is the same as what's used\n>>>> by the commandline tool. zstd:long doesn't change the format of the\n>>>> output: the library just uses a larger memory buffer to allow better\n>>>> compression. There's no format change for zstd:workers, either.\n>>>\n>>> OK. I plan to do a bit more review/testing on this, and get it committed\n>>> over the next day or two, likely including the long mode. One thing I\n>>> noticed today is that maybe long_distance should be a bool, not int.\n>>> Yes, ZSTD_c_enableLongDistanceMatching() accepts int, but it'd be\n>>> cleaner to cast the value during a call and keep it bool otherwise.\n>>\n>> Thanks for noticing. Evidently I wrote it using \"int\" to get the\n>> feature working, and then later wrote the bool parsing bits but never\n>> changed the data structure.\n>>\n>> This also updates a few comments, indentation, removes a useless\n>> assertion, and updates the warning about zstd:workers.\n>>\n> \n> Thanks. I've cleaned up the 0001 a little bit (a couple comment\n> improvements), updated the commit message and pushed it. I plan to take\n> care of the 0002 (long distance mode) tomorrow, and that'll be it for\n> PG16 I think.\n\nI looked at the long mode patch again, updated the commit message and\npushed it. 
I was wondering if long_mode should really be bool -\nlogically it is, but ZSTD_CCtx_setParameter() expects int. But I think\nthat's fine.\n\nI think that's all for PG16 in this patch series. If there's more we\nwant to do, it'll have to wait for PG17 - Justin, can you update and\nsubmit the patches that you think are relevant for the next CF?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 6 Apr 2023 17:34:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "On Thu, Apr 06, 2023 at 05:34:30PM +0200, Tomas Vondra wrote:\n> I looked at the long mode patch again, updated the commit message and\n> pushed it. I was wondering if long_mode should really be bool -\n> logically it is, but ZSTD_CCtx_setParameter() expects int. But I think\n> that's fine.\n\nThanks!\n\n> I think that's all for PG16 in this patch series. If there's more we want to\n> do, it'll have to wait for PG17 - \n\nYes\n\n> Justin, can you update and submit the patches that you think are relevant for\n> the next CF?\n\nYeah.\n\nIt sounds like a shiny new feature, but it's not totally clear if it's safe\nhere or even how useful it is. (It might be like my patch for\nwal_compression=zstd:level, and Michael's for toast_compression=zstd, neither\nof which saw any support).\n\nLast year's basebackup thread had some interesting comments about safety of\nthreads, although pg_dump's considerations may be different.\n\nThe patch itself is trivial, so it'd be fine to wait until PG16 is released to\nget some experience. If someone else wanted to do that, it'd be fine with me.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 Apr 2023 11:10:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: zstd compression for pg_dump"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Apr 06, 2023 at 05:34:30PM +0200, Tomas Vondra wrote:\n>> I think that's all for PG16 in this patch series. If there's more we want to\n>> do, it'll have to wait for PG17 - \n\n> Yes\n\nShouldn't the CF entry be closed as committed? It's certainly\nmaking the cfbot unhappy because the patch-of-record doesn't apply.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 06 Apr 2023 22:09:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: zstd compression for pg_dump"
}
] |
[
{
"msg_contents": "Replacing constants in pg_stat_statements is on a best effort basis.\r\nIt is not unlikely that on a busy workload with heavy entry deallocation,\r\nthe user may observe the query with the constants in pg_stat_statements.\r\n\r\nFrom what I can see, this is because the only time an entry is normalized is\r\nduring post_parse_analyze, and the entry may be deallocated by the time query\r\nexecution ends. At that point, the original form ( with constants ) of the query\r\nis used.\r\n\r\nIt is not clear how prevalent this is in real-world workloads, but it's easily reproducible\r\non a workload with high entry deallocation. Attached are the repro steps on the latest\r\nbranch.\r\n\r\nI think the only thing to do here is to call this out in docs with a suggestion to increase\r\npg_stat_statements.max to reduce the likelihood. I also attached the suggested\r\ndoc enhancement as well.\r\n\r\nAny thoughts?\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services",
"msg_date": "Fri, 24 Feb 2023 20:54:00 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": true,
"msg_subject": "Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 08:54:00PM +0000, Imseih (AWS), Sami wrote:\n> I think the only thing to do here is to call this out in docs with a\n> suggestion to increase pg_stat_statements.max to reduce the\n> likelihood. I also attached the suggested doc enhancement as well.\n\nImproving the docs about that sounds like a good idea. This would be\nless surprising for users, if we had some details about that.\n\n> Any thoughts?\n\nThe risk of deallocation of an entry between the post-analyze hook and\nthe planner/utility hook represented with two calls of pgss_store()\nmeans that this will never be able to work correctly as long as we\ndon't know the shape of the normalized query in the second code path\n(planner, utility execution) updating the entries with the call\ninformation, etc. And we only want to pay the cost of normalization\nonce, after the post-analyze where we do the query jumbling.\n\nCould things be done in a more stable way? For example, imagine that\nwe have an extra Query field called void *private_data that extensions\ncan use to store custom data associated to a query ID, then we could\ndo something like that:\n- In the post-analyze hook, check if an entry with the query ID\ncalculated exists.\n-- If the entry exists, grab a copy of the existing query string,\nwhich may be normalized or not, and save it into Query->private_data.\n-- If the entry does not exist, normalize the query, store it in\nQuery->private_data but do not yet create an entry in the hash table.\n- In the planner/utility hook, fetch the normalized query from\nprivate_data, then use it if an entry needs to be created in the hash\ntable. The entry may have been deallocated since the post-analyze\nhook, in which case it is re-created with the normalized copy saved in\nthe first phase.\n--\nMichael",
"msg_date": "Sat, 25 Feb 2023 14:58:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 02:58:36PM +0900, Michael Paquier wrote:\n> On Fri, Feb 24, 2023 at 08:54:00PM +0000, Imseih (AWS), Sami wrote:\n> > I think the only thing to do here is to call this out in docs with a\n> > suggestion to increase pg_stat_statements.max to reduce the\n> > likelihood. I also attached the suggested doc enhancement as well.\n> \n> Improving the docs about that sounds like a good idea. This would be\n> less surprising for users, if we had some details about that.\n> \n> > Any thoughts?\n> \n> The risk of deallocation of an entry between the post-analyze hook and\n> the planner/utility hook represented with two calls of pgss_store()\n> means that this will never be able to work correctly as long as we\n> don't know the shape of the normalized query in the second code path\n> (planner, utility execution) updating the entries with the call\n> information, etc. And we only want to pay the cost of normalization\n> once, after the post-analyze where we do the query jumbling.\n\nNote also that this is a somewhat wanted behavior (to evict queries that didn't\nhave any planning or execution stats record), per the\nSTICKY_DECREASE_FACTOR and related stuff.\n\n> Could things be done in a more stable way? For example, imagine that\n> we have an extra Query field called void *private_data that extensions\n> can use to store custom data associated to a query ID, then we could\n> do something like that:\n> - In the post-analyze hook, check if an entry with the query ID\n> calculated exists.\n> -- If the entry exists, grab a copy of the existing query string,\n> which may be normalized or not, and save it into Query->private_data.\n> -- If the entry does not exist, normalize the query, store it in\n> Query->private_data but do not yet create an entry in the hash table.\n> - In the planner/utility hook, fetch the normalized query from\n> private_data, then use it if an entry needs to be created in the hash\n> table. 
The entry may have been deallocated since the post-analyze\n> hook, in which case it is re-created with the normalized copy saved in\n> the first phase.\n\nI think the idea of a \"private_data\" like thing has been discussed before and\nrejected IIRC, as it could be quite expensive and would also need to\naccommodate for multiple extensions and so on.\n\nOverall, I think that if the pgss eviction rate is high enough that it's\nproblematic for doing performance analysis, the performance overhead will be so\nbad that simply removing pg_stat_statements will give you a ~ x2 performance\nincrease. I don't see much point trying to make such a performance killer\nscenario more usable.\n\n\n",
"msg_date": "Sat, 25 Feb 2023 16:06:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "> > Could things be done in a more stable way? For example, imagine that\r\n> > we have an extra Query field called void *private_data that extensions\r\n> > can use to store custom data associated to a query ID, then we could\r\n> > do something like that:\r\n> > - In the post-analyze hook, check if an entry with the query ID\r\n> > calculated exists.\r\n> > -- If the entry exists, grab a copy of the existing query string,\r\n> > which may be normalized or not, and save it into Query->private_data.\r\n> > -- If the entry does not exist, normalize the query, store it in\r\n> > Query->private_data but do not yet create an entry in the hash table.\r\n> > - In the planner/utility hook, fetch the normalized query from\r\n> > private_data, then use it if an entry needs to be created in the hash\r\n> > table. The entry may have been deallocated since the post-analyze\r\n> > hook, in which case it is re-created with the normalized copy saved in\r\n> > the first phase.\r\n\r\n> I think the idea of a \"private_data\" like thing has been discussed before and\r\n> rejected IIRC, as it could be quite expensive and would also need to\r\n> accommodate for multiple extensions and so on.\r\n\r\nThe overhead of storing this additional private data for the life of the query\r\nexecution may not be desirable. I think we also will need to copy the\r\nprivate data to QueryDesc as well to make it available to planner/utility/exec\r\nhooks.\r\n\r\n> Overall, I think that if the pgss eviction rate is high enough that it's\r\n> problematic for doing performance analysis, the performance overhead will be so\r\n> bad that simply removing pg_stat_statements will give you a ~ x2 performance\r\n> increase. 
I don't see much point trying to make such a performance killer\r\n> scenario more usable.\r\n\r\nIn v14, we added a dealloc metric to pg_stat_statements_info, which is helpful.\r\nHowever, this only deals with the pgss_hash entry deallocation.\r\nI think we should also add a metric for the text file garbage collection.\r\n\r\nRegards\r\n\r\n-- \r\nSami Imseih\r\nAmazon Web Services\r\n\r\n",
"msg_date": "Sat, 25 Feb 2023 13:59:04 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 01:59:04PM +0000, Imseih (AWS), Sami wrote:\n> The overhead of storing this additional private data for the life of the query\n> execution may not be desirable.\n\nOkay, but why?\n\n> I think we also will need to copy the\n> private data to QueryDesc as well to make it available to planner/utility/exec\n> hooks.\n\nThis seems like the key point to me here. If we copy more information\ninto the Query structures, then we basically have no need for sticky\nentries, which could be an advantage on its own as it simplifies the\ndeallocation and lookup logic.\n\nFor a DML or a SELECT, the manipulation of the hash table would still\nbe a three-step process (post-analyze, planner and execution end), but\nthe first step would have no need to use an exclusive lock on the hash\ntable because we could just read and copy over the Query the\nnormalized query if an entry exists, meaning that we could actually\nrelax things a bit? This relaxation has as cost the extra memory used\nto store more data to allow the insertion to use a proper state of the\nQuery[Desc] coming from the JumbleState (this extra data has no need\nto be JumbleState, just the results we generate from it aka the\nnormalized query).\n\n> In v14, we added a dealloc metric to pg_stat_statements_info, which is helpful.\n> However, this only deals with the pgss_hash entry deallocation.\n> I think we should also add a metric for the text file garbage collection.\n\nThis sounds like a good idea on its own.\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 09:36:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "> On Sat, Feb 25, 2023 at 01:59:04PM +0000, Imseih (AWS), Sami wrote:>\r\n> > The overhead of storing this additional private data for the life of the query\r\n> > execution may not be desirable.\r\n\r\n> Okay, but why?\r\n\r\nAdditional memory to maintain the JumbleState data in other structs, and\r\nthe additional copy operations.\r\n\r\n> This seems like the key point to me here. If we copy more information\r\n> into the Query structures, then we basically have no need for sticky\r\n> entries, which could be an advantage on its own as it simplifies the\r\n> deallocation and lookup logic.\r\n\r\nRemoving the sticky entry logic will be a big plus, and of course we can\r\neliminate a query not showing up properly normalized.\r\n\r\n> For a DML or a SELECT, the manipulation of the hash table would still\r\n> be a three-step process (post-analyze, planner and execution end), but\r\n> the first step would have no need to use an exclusive lock on the hash\r\n> table because we could just read and copy over the Query the\r\n> normalized query if an entry exists, meaning that we could actually\r\n> relax things a bit? 
\r\n\r\nNo lock is held while normalizing, and a shared lock is held while storing,\r\nso there is no apparent benefit from that aspect.\r\n\r\n> This relaxation has as cost the extra memory used\r\n> to store more data to allow the insertion to use a proper state of the\r\n> Query[Desc] coming from the JumbleState (this extra data has no need\r\n> to be JumbleState, just the results we generate from it aka the\r\n> normalized query).\r\n\r\nWouldn't be less in terms of memory usage to just store the required\r\nJumbleState fields in Query[Desc]?\r\n\r\nclocations_count,\r\nhighest_extern_param_id,\r\nclocations_locations,\r\nclocations_length\r\n\r\n> > In v14, we added a dealloc metric to pg_stat_statements_info, which is helpful.\r\n> > However, this only deals with the pgss_hash entry deallocation.\r\n> > I think we should also add a metric for the text file garbage collection.\r\n\r\n> This sounds like a good idea on its own.\r\n\r\nI can create a separate patch for this.\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web services\r\n\r\n",
"msg_date": "Mon, 27 Feb 2023 22:53:26 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 10:53:26PM +0000, Imseih (AWS), Sami wrote:\n> Wouldn't be less in terms of memory usage to just store the required\n> JumbleState fields in Query[Desc]?\n> \n> clocations_count,\n> highest_extern_param_id,\n> clocations_locations,\n> clocations_length\n\nYes, these would be enough to ensure a proper rebuild of the\nnormalized query in either the 2nd or 3rd call of pgss_store(). With\na high deallocation rate, perhaps we just don't care about bearing\nthe extra computation to build a normalized qury more than once, still\nit could be noticeable?\n\nAnything that gets changed is going to need some serious benchmarking\n(based on deallocation rate, string length, etc.) to check that the\ncost of this extra memory is worth the correctness gained when storing\nthe normalization. FWIW, I am all in if it means code simplifications\nwith better performance and better correctness of the results.\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 09:09:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 08:54:00PM +0000, Imseih (AWS), Sami wrote:\n> I think the only thing to do here is to call this out in docs with a suggestion to increase\n> pg_stat_statements.max to reduce the likelihood. I also attached the suggested\n> doc enhancement as well.\n\n+ <para>\n+ A query text may be observed with constants in\n+ <structname>pg_stat_statements</structname>, especially when there is a high\n+ rate of entry deallocations. To reduce the likelihood of this occuring, consider\n+ increasing <varname>pg_stat_statements.max</varname>.\n+ The <structname>pg_stat_statements_info</structname> view provides entry\n+ deallocation statistics.\n+ </para>\n\nI am OK with an addition to the documentation to warn that one may\nhave to increase the maximum number of entries that can be stored if\nseeing a non-normalized entry that should have been normalized.\n\nShouldn't this text make it clear that this concerns only query\nstrings that can be normalized? Utility queries can have constants,\nfor one (well, not for long for most of them with the work I have been\ndoing lately, but there will still be some exceptions like CREATE\nVIEW or utilities with A_Const nodes).\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 13:56:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "> I am OK with an addition to the documentation to warn that one may\r\n> have to increase the maximum number of entries that can be stored if\r\n> seeing a non-normalized entry that should have been normalized.\r\n\r\nI agree. We introduce the concept of a plannable statement in a \r\nprevious section and we can then make this distinction in the new\r\nparagraph. \r\n\r\nI also added a link to pg_stat_statements_info since that is introduced\r\nlater on int the doc.\r\n\r\n\r\nRegards,\r\n\r\n-- \r\nSami Imseih\r\nAmazon Web Services",
"msg_date": "Tue, 28 Feb 2023 23:11:30 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 11:11:30PM +0000, Imseih (AWS), Sami wrote:\n> I agree. We introduce the concept of a plannable statement in a \n> previous section and we can then make this distinction in the new\n> paragraph. \n> \n> I also added a link to pg_stat_statements_info since that is introduced\n> later on int the doc.\n\nI have reworded the paragraph a bit to be more general so as it would\nnot need an update once more normalization is applied to utility\nqueries (I am going to fix the part where we mention that we use the\nstrings for utilities, which is not the case anymore now):\n+ <para>\n+ Queries on which normalization can be applied may be observed with constant\n+ values in <structname>pg_stat_statements</structname>, especially when there\n+ is a high rate of entry deallocations. To reduce the likelihood of this\n+ happening, consider increasing <varname>pg_stat_statements.max</varname>.\n+ The <structname>pg_stat_statements_info</structname> view, discussed below\n+ in <xref linkend=\"pgstatstatements-pg-stat-statements-info\"/>,\n+ provides statistics about entry deallocations.\n+ </para>\n\nAre you OK with this text?\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 09:09:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "> + <para>\r\n> + Queries on which normalization can be applied may be observed with constant\r\n> + values in <structname>pg_stat_statements</structname>, especially when there\r\n> + is a high rate of entry deallocations. To reduce the likelihood of this\r\n> + happening, consider increasing <varname>pg_stat_statements.max</varname>.\r\n> + The <structname>pg_stat_statements_info</structname> view, discussed below\r\n> + in <xref linkend=\"pgstatstatements-pg-stat-statements-info\"/>,\r\n> + provides statistics about entry deallocations.\r\n> + </para>\r\n\r\n> Are you OK with this text?\r\n\r\nYes, that makes sense.\r\n\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 1 Mar 2023 00:43:40 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 12:43:40AM +0000, Imseih (AWS), Sami wrote:\n> Yes, that makes sense.\n\nOkay, thanks. Done that now on HEAD.\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 10:48:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Doc update for pg_stat_statements normalization"
}
] |
[
{
"msg_contents": "Hi\n\ndiff --git a/src/backend/utils/adt/numeric.c\nb/src/backend/utils/adt/numeric.c\nindex a83feea396..12c6548675 100644\n--- a/src/backend/utils/adt/numeric.c\n+++ b/src/backend/utils/adt/numeric.c\n@@ -1233,7 +1233,7 @@ numeric_support(PG_FUNCTION_ARGS)\n * scale of the attribute have to be applied on the value.\n */\n Datum\n-numeric (PG_FUNCTION_ARGS)\n+numeric(PG_FUNCTION_ARGS)\n {\n Numeric num = PG_GETARG_NUMERIC(0);\n int32 typmod = PG_GETARG_INT32(1);\n\nRegards\n\nPavel\n\nHidiff --git a/src/backend/utils/adt/numeric.c b/src/backend/utils/adt/numeric.cindex a83feea396..12c6548675 100644--- a/src/backend/utils/adt/numeric.c+++ b/src/backend/utils/adt/numeric.c@@ -1233,7 +1233,7 @@ numeric_support(PG_FUNCTION_ARGS) * scale of the attribute have to be applied on the value. */ Datum-numeric (PG_FUNCTION_ARGS)+numeric(PG_FUNCTION_ARGS) { Numeric num = PG_GETARG_NUMERIC(0); int32 typmod = PG_GETARG_INT32(1);RegardsPavel",
"msg_date": "Sat, 25 Feb 2023 13:56:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "broken formatting?"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> -numeric (PG_FUNCTION_ARGS)\n> +numeric(PG_FUNCTION_ARGS)\n\nSadly, pgindent will just put that back, because it knows that \"numeric\"\nis a typedef.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 11:57:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: broken formatting?"
},
{
"msg_contents": "so 25. 2. 2023 v 17:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > -numeric (PG_FUNCTION_ARGS)\n> > +numeric(PG_FUNCTION_ARGS)\n>\n> Sadly, pgindent will just put that back, because it knows that \"numeric\"\n> is a typedef.\n>\n\nIs it possible to rename this function?\n\n\n\n>\n> regards, tom lane\n>\n\nso 25. 2. 2023 v 17:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> -numeric (PG_FUNCTION_ARGS)\n> +numeric(PG_FUNCTION_ARGS)\n\nSadly, pgindent will just put that back, because it knows that \"numeric\"\nis a typedef.Is it possible to rename this function? \n\n regards, tom lane",
"msg_date": "Sat, 25 Feb 2023 18:03:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken formatting?"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> so 25. 2. 2023 v 17:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>>> -numeric (PG_FUNCTION_ARGS)\n>>> +numeric(PG_FUNCTION_ARGS)\n\n>> Sadly, pgindent will just put that back, because it knows that \"numeric\"\n>> is a typedef.\n\n> Is it possible to rename this function?\n\nThat would be a way out, but it never seemed worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 12:13:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: broken formatting?"
},
{
"msg_contents": "so 25. 2. 2023 v 18:13 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > so 25. 2. 2023 v 17:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n> >> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> >>> -numeric (PG_FUNCTION_ARGS)\n> >>> +numeric(PG_FUNCTION_ARGS)\n>\n> >> Sadly, pgindent will just put that back, because it knows that \"numeric\"\n> >> is a typedef.\n>\n> > Is it possible to rename this function?\n>\n> That would be a way out, but it never seemed worth the trouble.\n>\n\nook\n\nRegards\n\nPavel\n\n\n>\n> regards, tom lane\n>\n\nso 25. 2. 2023 v 18:13 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> so 25. 2. 2023 v 17:57 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>>> -numeric (PG_FUNCTION_ARGS)\n>>> +numeric(PG_FUNCTION_ARGS)\n\n>> Sadly, pgindent will just put that back, because it knows that \"numeric\"\n>> is a typedef.\n\n> Is it possible to rename this function?\n\nThat would be a way out, but it never seemed worth the trouble.ookRegardsPavel \n\n regards, tom lane",
"msg_date": "Sat, 25 Feb 2023 18:14:13 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: broken formatting?"
}
] |
[
{
"msg_contents": "vcregress's installcheck_internal has \"--encoding=SQL_ASCII --no-locale\" \nhardcoded. It's been like that for a long time, for no good reason that \nI can see. The practical effect is to make it well nigh impossible to \nrun the regular regression tests against any other encoding/locale. This \nin turn has apparently masked an issue with the collate.windows.win1252 \ntest, which only runs on a WIN1252-encoded database.\n\nI propose simply to remove those settings for the installcheck target. \nWe already run the regression tests under these conditions in \n'vcregress.pl check', so we wouldn't be giving up anything important. \nAlthough this partcular test is only present in HEAD, I think we should \nbackpatch the change to all live branches.\n\n(Yes, I know we are trying to get rid of these tools, but we haven't \ndone so yet. I'm working on it for the buildfarm, which is how I \ndiscovered this issue.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\nvcregress's installcheck_internal has\n \"--encoding=SQL_ASCII --no-locale\" hardcoded. It's been like\n that for a long time, for no good reason that I can see. The\n practical effect is to make it well nigh impossible to run the\n regular regression tests against any other encoding/locale. This\n in turn has apparently masked an issue with the\n collate.windows.win1252 test, which only runs on a\n WIN1252-encoded database.\n\nI propose simply to remove those settings\n for the installcheck target. We already run the regression tests\n under these conditions in 'vcregress.pl check', so we wouldn't\n be giving up anything important. Although this partcular test is\n only present in HEAD, I think we should backpatch the change to\n all live branches.\n\n(Yes, I know we are trying to get rid of\n these tools, but we haven't done so yet. 
I'm working on it for\n the buildfarm, which is how I discovered this issue.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 25 Feb 2023 12:13:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "locale/encoding vs vcregress.pl installcheck"
},
{
"msg_contents": "On 2023-02-25 Sa 12:13, Andrew Dunstan wrote:\n>\n> vcregress's installcheck_internal has \"--encoding=SQL_ASCII \n> --no-locale\" hardcoded. It's been like that for a long time, for no \n> good reason that I can see. The practical effect is to make it well \n> nigh impossible to run the regular regression tests against any other \n> encoding/locale. This in turn has apparently masked an issue with the \n> collate.windows.win1252 test, which only runs on a WIN1252-encoded \n> database.\n>\n> I propose simply to remove those settings for the installcheck target. \n> We already run the regression tests under these conditions in \n> 'vcregress.pl check', so we wouldn't be giving up anything important. \n> Although this partcular test is only present in HEAD, I think we \n> should backpatch the change to all live branches.\n>\n> (Yes, I know we are trying to get rid of these tools, but we haven't \n> done so yet. I'm working on it for the buildfarm, which is how I \n> discovered this issue.)\n>\n>\n\nDone.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-25 Sa 12:13, Andrew Dunstan\n wrote:\n\n\n\nvcregress's installcheck_internal has\n \"--encoding=SQL_ASCII --no-locale\" hardcoded. It's been like\n that for a long time, for no good reason that I can see. The\n practical effect is to make it well nigh impossible to run the\n regular regression tests against any other encoding/locale.\n This in turn has apparently masked an issue with the\n collate.windows.win1252 test, which only runs on a\n WIN1252-encoded database.\n\nI propose simply to remove those\n settings for the installcheck target. 
We already run the\n regression tests under these conditions in 'vcregress.pl\n check', so we wouldn't be giving up anything important.\n Although this partcular test is only present in HEAD, I think\n we should backpatch the change to all live branches.\n\n(Yes, I know we are trying to get rid of\n these tools, but we haven't done so yet. I'm working on it for\n the buildfarm, which is how I discovered this issue.)\n\n\n\n\nDone.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 26 Feb 2023 07:03:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: locale/encoding vs vcregress.pl installcheck"
}
] |
[
{
"msg_contents": "hi community\n\nThis is the first time for me to submit a patch to Postgres community.\n\ninstead of using for loop to find the most significant bit set. we could\nuse __builtin_clz function to first find the number of leading zeros for\nthe mask and then we can find the index by 32 - __builtin_clz(mask).\n\ndiff --git a/src/port/fls.c b/src/port/fls.c\nindex 19b4221826..4f4c412732 100644\n--- a/src/port/fls.c\n+++ b/src/port/fls.c\n@@ -54,11 +54,7 @@\n int\n fls(int mask)\n {\n- int bit;\n-\n if (mask == 0)\n return (0);\n- for (bit = 1; mask != 1; bit++)\n- mask = (unsigned int) mask >> 1;\n- return (bit);\n+ return (sizeof(int) << 3) - __builtin_clz(mask);\n }\n\nBest Regards,\n\nJoseph",
"msg_date": "Sun, 26 Feb 2023 03:13:53 +0800",
"msg_from": "Joseph Yu <kiddo831007@gmail.com>",
"msg_from_op": true,
"msg_subject": "use __builtin_clz to compute most significant bit set"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 9:32 PM Joseph Yu <kiddo831007@gmail.com> wrote:\n\n> hi community\n>\n> This is the first time for me to submit a patch to Postgres community.\n>\n> instead of using for loop to find the most significant bit set. we could\n> use __builtin_clz function to first find the number of leading zeros for\n> the mask and then we can find the index by 32 - __builtin_clz(mask).\n>\n\nHi!\n\nThis file has already been removed, as of 4f1f5a7f85. Which already uses\n__builtin_clz if it's available.\n\nWere you perhaps looking at an old version instead of the master branch?\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Sat, 25 Feb 2023 21:47:16 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: use __builtin_clz to compute most significant bit set"
}
] |
[
{
"msg_contents": "Hi,\n\nAs suggested in [1], the attached patch adds shared buffer hits to\npg_stat_io.\n\nI remember at some point having this in the view and then removing it\nbut I can't quite remember what the issue was -- nor do I see a\nrationale mentioned in the thread [2].\n\nIt might have had something to do with the interaction between the\nnow-removed \"rejected\" buffers column.\n\nI am looking for input as to the order of this column in the view. I\nthink it should go after op_bytes since it is not relevant for\nnon-block-oriented IO. However, I'm not sure what the order of hits,\nevictions, and reuses should be (all have to do with buffers).\n\nWhile adding this, I noticed that I had made all of the IOOP columns\nint8 in the view, and I was wondering if this is sufficient for hits (I\nimagine you could end up with quite a lot of those).\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20230209050319.chyyup4vtq4jzobq%40awork3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/20230209050319.chyyup4vtq4jzobq%40awork3.anarazel.de#63ff7a97b7a5bb7b86c1a250065be7f9",
"msg_date": "Sat, 25 Feb 2023 15:16:40 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2/25/23 9:16 PM, Melanie Plageman wrote:\n> Hi,\n> \n> As suggested in [1], the attached patch adds shared buffer hits to\n> pg_stat_io.\n> \n\nThanks for the patch!\n\n BufferDesc *\n LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n- bool *foundPtr, IOContext *io_context)\n+ bool *foundPtr, IOContext io_context)\n {\n BufferTag newTag; /* identity of requested block */\n LocalBufferLookupEnt *hresult;\n@@ -128,14 +128,6 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n hresult = (LocalBufferLookupEnt *)\n hash_search(LocalBufHash, &newTag, HASH_FIND, NULL);\n\n- /*\n- * IO Operations on local buffers are only done in IOCONTEXT_NORMAL. Set\n- * io_context here (instead of after a buffer hit would have returned) for\n- * convenience since we don't have to worry about the overhead of calling\n- * IOContextForStrategy().\n- */\n- *io_context = IOCONTEXT_NORMAL;\n\n\nIt looks like that io_context is not used in LocalBufferAlloc() anymore and then can be removed as an argument.\n\n> \n> I am looking for input as to the order of this column in the view. I\n> think it should go after op_bytes since it is not relevant for\n> non-block-oriented IO. \n\nAgree.\n\n> However, I'm not sure what the order of hits,\n> evictions, and reuses should be (all have to do with buffers).\n> \n\nI'm not sure there is a strong \"correct\" ordering but the proposed one looks natural to me.\n\n> While adding this, I noticed that I had made all of the IOOP columns\n> int8 in the view, and I was wondering if this is sufficient for hits (I\n> imagine you could end up with quite a lot of those).\n> \n\nI think that's ok and bigint is what is already used for pg_statio_user_tables.heap_blks_hit for example.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 28 Feb 2023 13:36:24 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Thanks for the review!\n\nOn Tue, Feb 28, 2023 at 7:36 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> BufferDesc *\n> LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> - bool *foundPtr, IOContext *io_context)\n> + bool *foundPtr, IOContext io_context)\n> {\n> BufferTag newTag; /* identity of requested block */\n> LocalBufferLookupEnt *hresult;\n> @@ -128,14 +128,6 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> hresult = (LocalBufferLookupEnt *)\n> hash_search(LocalBufHash, &newTag, HASH_FIND, NULL);\n>\n> - /*\n> - * IO Operations on local buffers are only done in IOCONTEXT_NORMAL. Set\n> - * io_context here (instead of after a buffer hit would have returned) for\n> - * convenience since we don't have to worry about the overhead of calling\n> - * IOContextForStrategy().\n> - */\n> - *io_context = IOCONTEXT_NORMAL;\n>\n>\n> It looks like that io_context is not used in LocalBufferAlloc() anymore and then can be removed as an argument.\n\nGood catch. Updated patchset attached.\n\n> > While adding this, I noticed that I had made all of the IOOP columns\n> > int8 in the view, and I was wondering if this is sufficient for hits (I\n> > imagine you could end up with quite a lot of those).\n> >\n>\n> I think that's ok and bigint is what is already used for pg_statio_user_tables.heap_blks_hit for example.\n\nAh, I was silly and didn't understand that the SQL type int8 is eight\nbytes and not 1. That makes a lot of things make more sense :)\n\nhttps://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-TYPE-TABLE\n\n- Melanie",
"msg_date": "Mon, 6 Mar 2023 10:38:13 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 3/6/23 4:38 PM, Melanie Plageman wrote:\n> Thanks for the review!\n> \n> On Tue, Feb 28, 2023 at 7:36 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>> BufferDesc *\n>> LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n>> - bool *foundPtr, IOContext *io_context)\n>> + bool *foundPtr, IOContext io_context)\n>> {\n>> BufferTag newTag; /* identity of requested block */\n>> LocalBufferLookupEnt *hresult;\n>> @@ -128,14 +128,6 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n>> hresult = (LocalBufferLookupEnt *)\n>> hash_search(LocalBufHash, &newTag, HASH_FIND, NULL);\n>>\n>> - /*\n>> - * IO Operations on local buffers are only done in IOCONTEXT_NORMAL. Set\n>> - * io_context here (instead of after a buffer hit would have returned) for\n>> - * convenience since we don't have to worry about the overhead of calling\n>> - * IOContextForStrategy().\n>> - */\n>> - *io_context = IOCONTEXT_NORMAL;\n>>\n>>\n>> It looks like that io_context is not used in LocalBufferAlloc() anymore and then can be removed as an argument.\n> \n> Good catch. Updated patchset attached.\n\nThanks for the update!\n\n> \n>>> While adding this, I noticed that I had made all of the IOOP columns\n>>> int8 in the view, and I was wondering if this is sufficient for hits (I\n>>> imagine you could end up with quite a lot of those).\n>>>\n>>\n>> I think that's ok and bigint is what is already used for pg_statio_user_tables.heap_blks_hit for example.\n> \n> Ah, I was silly and didn't understand that the SQL type int8 is eight\n> bytes and not 1. That makes a lot of things make more sense :)\n\nOh, I see ;-)\n\nI may give it another review but currently V2 looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 16:10:07 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Hi,\n\nLGTM. The only comment I have is that a small test wouldn't hurt... Compared\nto the other things it should be fairly easy...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 11:47:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 2:47 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> LGTM. The only comment I have is that a small test wouldn't hurt... Compared\n> to the other things it should be fairly easy...\n\nSo, I have attached an updated patchset which adds a test for hits. Since\nthere is only one call site where we count hits, I think this single\ntest is sufficient to protect against regressions.\n\nHowever, I am concerned that, while unlikely, this could be flakey.\nSomething could happen to force all of those blocks out of shared\nbuffers (even though they were just read in) before we hit them.\n\nWe could simply check if hits are greater at the end of all of the\npg_stat_io tests than at the beginning and rely on the fact that it is\nhighly unlikely that every single buffer access will be a miss for all\nof the tests. However, is it not technically also possible to have zero\nhits?\n\n- Melanie",
"msg_date": "Wed, 8 Mar 2023 13:44:32 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "On 2023-03-08 13:44:32 -0500, Melanie Plageman wrote:\n> However, I am concerned that, while unlikely, this could be flakey.\n> Something could happen to force all of those blocks out of shared\n> buffers (even though they were just read in) before we hit them.\n\nYou could make the test query a simple nested loop self-join, that'll prevent\nthe page being evicted, because it'll still be pinned on the outer side, while\ngenerating hits on the inner side.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Mar 2023 11:23:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-03-08 13:44:32 -0500, Melanie Plageman wrote:\n> > However, I am concerned that, while unlikely, this could be flakey.\n> > Something could happen to force all of those blocks out of shared\n> > buffers (even though they were just read in) before we hit them.\n>\n> You could make the test query a simple nested loop self-join, that'll prevent\n> the page being evicted, because it'll still be pinned on the outer side, while\n> generating hits on the inner side.\n\nGood idea. v3 attached.",
"msg_date": "Thu, 9 Mar 2023 08:23:46 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 3/9/23 2:23 PM, Melanie Plageman wrote:\n> On Wed, Mar 8, 2023 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> On 2023-03-08 13:44:32 -0500, Melanie Plageman wrote:\n>>> However, I am concerned that, while unlikely, this could be flakey.\n>>> Something could happen to force all of those blocks out of shared\n>>> buffers (even though they were just read in) before we hit them.\n>>\n>> You could make the test query a simple nested loop self-join, that'll prevent\n>> the page being evicted, because it'll still be pinned on the outer side, while\n>> generating hits on the inner side.\n> \n> Good idea. v3 attached.\n\nThanks! The added test looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:03:13 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 08:23:46 -0500, Melanie Plageman wrote:\n> Good idea. v3 attached.\n\nI committed this, after some small regression test changes. I was worried that\nthe query for testing buffer hits might silently change in the future, so I\nadded an EXPLAIN for the query. Also removed the need for the explicit RESETs\nby using BEGIN; SET LOCAL ...; query; COMMIT;.\n\nThanks for the patch Melanie and the review Bertrand. I'm excited about\nfinally being able to compute meaningful cache hit ratios :)\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 30 Mar 2023 19:39:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add shared buffer hits to pg_stat_io"
}
] |
[
{
"msg_contents": "Hi,\n\nAround\nhttps://www.postgresql.org/message-id/20230224015417.75yimxbksejpffh3%40awork3.anarazel.de\nI suggested that we should evaluate the arguments of correlated SubPlans as\npart of the expression referencing the subplan.\n\nHere's a patch for that.\n\nEnded up simpler than I'd thought. I see small, consistent, speedups and\nreductions in memory usage.\n\nI think individual arguments are mainly (always?) Var nodes. By evaluating\nthem as part of the containing expression we avoid the increased memory usage,\nand the increased dispatch of going through another layer of\nExprState. Because the arguments are a single Var, which end up with a\nslot_getattr() via ExecJust*Var, we also elide redundant slot_getattr()\nchecks. I think we already avoided redundant tuple deforming, because the\nparent ExprState will have done that already.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 25 Feb 2023 13:44:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Around\n> https://www.postgresql.org/message-id/20230224015417.75yimxbksejpffh3%40awork3.anarazel.de\n> I suggested that we should evaluate the arguments of correlated SubPlans as\n> part of the expression referencing the subplan.\n\n> Here's a patch for that.\n\nI looked through this, and there is one point that is making me really\nuncomfortable. This bit is assuming that we can bind the address of\nthe es_param_exec_vals array right into the compiled expression:\n\n+\t\tParamExecData *prm = &estate->es_param_exec_vals[paramid];\n+\n+\t\tExecInitExprRec(lfirst(pvar), state, &prm->value, &prm->isnull);\n\nEven if that works today, it'd kill the ability to use the compiled\nexpression across more than one executor instance, which seems like\na pretty high price. Also, I think it probably fails already in\nEvalPlanQual contexts, because EvalPlanQualStart allocates a separate\nes_param_exec_vals array for EPQ execution.\n\nI think we'd be better off inventing an EEOP_SET_PARAM_EXEC step type\nthat is essentially the inverse of EEOP_PARAM_EXEC/ExecEvalParamExec,\nand then evaluating each parameter value into the expression's\nscratch Datum/isnull fields and emitting SET_PARAM_EXEC to copy those\nto the correct ParamExecData slot.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 14:33:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 14:33:35 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Around\n> > https://www.postgresql.org/message-id/20230224015417.75yimxbksejpffh3%40awork3.anarazel.de\n> > I suggested that we should evaluate the arguments of correlated SubPlans as\n> > part of the expression referencing the subplan.\n> \n> > Here's a patch for that.\n> \n> I looked through this, and there is one point that is making me really\n> uncomfortable. This bit is assuming that we can bind the address of\n> the es_param_exec_vals array right into the compiled expression:\n> \n> +\t\tParamExecData *prm = &estate->es_param_exec_vals[paramid];\n> +\n> +\t\tExecInitExprRec(lfirst(pvar), state, &prm->value, &prm->isnull);\n> \n> Even if that works today, it'd kill the ability to use the compiled\n> expression across more than one executor instance, which seems like\n> a pretty high price. Also, I think it probably fails already in\n> EvalPlanQual contexts, because EvalPlanQualStart allocates a separate\n> es_param_exec_vals array for EPQ execution.\n\nYea, I wasn't super comfortable with that either. I concluded it's ok\nbecause we already cache pointers to the array inside each ExprContext.\n\n\n> I think we'd be better off inventing an EEOP_SET_PARAM_EXEC step type\n> that is essentially the inverse of EEOP_PARAM_EXEC/ExecEvalParamExec,\n> and then evaluating each parameter value into the expression's\n> scratch Datum/isnull fields and emitting SET_PARAM_EXEC to copy those\n> to the correct ParamExecData slot.\n\nAgreed, that'd make sense. If we can build the infrastructure to figure\nout what param to use, that'd also provide a nice basis for using params\nfor CaseTest etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Mar 2023 12:05:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-03-02 14:33:35 -0500, Tom Lane wrote:\n>> I looked through this, and there is one point that is making me really\n>> uncomfortable. This bit is assuming that we can bind the address of\n>> the es_param_exec_vals array right into the compiled expression:\n\n> Yea, I wasn't super comfortable with that either. I concluded it's ok\n> because we already cache pointers to the array inside each ExprContext.\n\nExprContext, sure, but compiled expressions? Considering what it\ncosts to JIT those, I think we ought to be trying to make them\nfairly long-lived.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 15:10:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 15:10:31 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-03-02 14:33:35 -0500, Tom Lane wrote:\n> >> I looked through this, and there is one point that is making me really\n> >> uncomfortable. This bit is assuming that we can bind the address of\n> >> the es_param_exec_vals array right into the compiled expression:\n> \n> > Yea, I wasn't super comfortable with that either. I concluded it's ok\n> > because we already cache pointers to the array inside each ExprContext.\n> \n> ExprContext, sure, but compiled expressions? Considering what it\n> costs to JIT those, I think we ought to be trying to make them\n> fairly long-lived.\n\nI'm not opposed to EXPR_PARAM_SET, to be clear. I'll send an updated\nversion later. I was just thinking about the correctness in the current\nworld.\n\nI think it's not just JIT that could benefit, fwiw. I think making\nexpressions longer lived could also help plpgsql tremendously, for\nexample.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:00:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-02 13:00:31 -0800, Andres Freund wrote:\n> I'm not opposed to EXPR_PARAM_SET, to be clear. I'll send an updated\n> version later. I was just thinking about the correctness in the current\n> world.\n\nAttached.\n\nI named the set EEOP_PARAM_SET EEOP_PARAM_EXEC_SET or such, because I\nwas wondering if there cases it could also be useful in conjunction with\nPARAM_EXTERN, and because nothing really depends on the kind of param.\n\nGreetings,\n\nAndres",
"msg_date": "Thu, 2 Mar 2023 16:19:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-03-02 13:00:31 -0800, Andres Freund wrote:\n>> I'm not opposed to EXPR_PARAM_SET, to be clear. I'll send an updated\n>> version later. I was just thinking about the correctness in the current\n>> world.\n\n> Attached.\n\nI've looked through this, and it looks basically OK so I marked it RfC.\nI do have a few nitpicks that you might or might not choose to adopt:\n\nIt'd be good to have a header comment for ExecInitExprRec documenting\nthe arguments, particularly that resv/resnull are where to put the\nsubplan's eventual result.\n\nYou could avoid having to assume ExprState's resvalue/resnull being\nsafe to use by instead using the target resv/resnull. This would\nrequire putting those into the EEOP_PARAM_SET step so that\nExecEvalParamSet knows where to fetch from, so maybe it's not an\nimprovement, but perhaps worth considering.\n\n+\t\t/* type isn't needed, but an old value could be confusing */\n+\t\tscratch.d.param.paramtype = InvalidOid;\nI'd just store the param's type, rather than justifying why you didn't.\nIt's cheap enough and even less confusing.\n\nI think that ExecEvalParamSet should either set prm->execPlan to NULL,\nor maybe better Assert that it is already NULL.\n\nIt's a bit weird to keep this in ExecScanSubPlan, when the code there\nno longer depends on it:\n+\tAssert(list_length(subplan->parParam) == list_length(subplan->args));\nI'd put that before the forboth() in ExecInitSubPlanExpr instead,\nwhere it does matter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 15:09:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-03 15:09:18 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-03-02 13:00:31 -0800, Andres Freund wrote:\n> >> I'm not opposed to EXPR_PARAM_SET, to be clear. I'll send an updated\n> >> version later. I was just thinking about the correctness in the current\n> >> world.\n>\n> > Attached.\n>\n> I've looked through this, and it looks basically OK so I marked it RfC.\n\nThanks!\n\n\n> I do have a few nitpicks that you might or might not choose to adopt:\n>\n> It'd be good to have a header comment for ExecInitExprRec documenting\n> the arguments, particularly that resv/resnull are where to put the\n> subplan's eventual result.\n\nDid you mean ExecInitSubPlanExpr()?\n\n\n> You could avoid having to assume ExprState's resvalue/resnull being\n> safe to use by instead using the target resv/resnull. This would\n> require putting those into the EEOP_PARAM_SET step so that\n> ExecEvalParamSet knows where to fetch from, so maybe it's not an\n> improvement, but perhaps worth considering.\n\nI think that'd be a bit worse - we'd have more pointers that can't be handled\nin a generic way in JIT.\n\n\n> I think that ExecEvalParamSet should either set prm->execPlan to NULL,\n> or maybe better Assert that it is already NULL.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:28:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-03-03 15:09:18 -0500, Tom Lane wrote:\n>> It'd be good to have a header comment for ExecInitExprRec documenting\n>> the arguments, particularly that resv/resnull are where to put the\n>> subplan's eventual result.\n\n> Did you mean ExecInitSubPlanExpr()?\n\nRight, copy-and-pasteo, sorry.\n\n>> You could avoid having to assume ExprState's resvalue/resnull being\n>> safe to use by instead using the target resv/resnull. This would\n>> require putting those into the EEOP_PARAM_SET step so that\n>> ExecEvalParamSet knows where to fetch from, so maybe it's not an\n>> improvement, but perhaps worth considering.\n\n> I think that'd be a bit worse - we'd have more pointers that can't be handled\n> in a generic way in JIT.\n\nOK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Mar 2023 19:51:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Is this patch still being worked on?\n\nOn 07.03.23 01:51, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2023-03-03 15:09:18 -0500, Tom Lane wrote:\n>>> It'd be good to have a header comment for ExecInitExprRec documenting\n>>> the arguments, particularly that resv/resnull are where to put the\n>>> subplan's eventual result.\n> \n>> Did you mean ExecInitSubPlanExpr()?\n> \n> Right, copy-and-pasteo, sorry.\n> \n>>> You could avoid having to assume ExprState's resvalue/resnull being\n>>> safe to use by instead using the target resv/resnull. This would\n>>> require putting those into the EEOP_PARAM_SET step so that\n>>> ExecEvalParamSet knows where to fetch from, so maybe it's not an\n>>> improvement, but perhaps worth considering.\n> \n>> I think that'd be a bit worse - we'd have more pointers that can't be handled\n>> in a generic way in JIT.\n> \n> OK.\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n\n",
"msg_date": "Sun, 1 Oct 2023 20:41:53 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Peter Eisentraut <peter@eisentraut.org> writes:\n> Is this patch still being worked on?\n\nI thought Andres simply hadn't gotten back to it yet.\nIt still seems like a worthwhile improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Oct 2023 14:53:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-01 14:53:23 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter@eisentraut.org> writes:\n> > Is this patch still being worked on?\n> \n> I thought Andres simply hadn't gotten back to it yet.\n> It still seems like a worthwhile improvement.\n\nIndeed - I do plan to commit it. I haven't quite shifted into v17 mode yet...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Oct 2023 20:00:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi!\n\nI looked through your patch and noticed that it was not applied to the \ncurrent version of the master. I rebased it and attached a version. I \ndidn't see any problems and, honestly, no big changes were needed, all \nregression tests were passed.\n\nI think it's better to add a test, but to be honest, I haven't been able \nto come up with something yet.\n\n-- \nRegards,\nAlena Rybakina",
"msg_date": "Mon, 23 Oct 2023 23:14:02 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 10:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-10-01 14:53:23 -0400, Tom Lane wrote:\n> > Peter Eisentraut <peter@eisentraut.org> writes:\n> > > Is this patch still being worked on?\n> >\n> > I thought Andres simply hadn't gotten back to it yet.\n> > It still seems like a worthwhile improvement.\n>\n> Indeed - I do plan to commit it. I haven't quite shifted into v17 mode yet...\n\nAny shift yet? ;-)\n\n\n",
"msg_date": "Wed, 22 Nov 2023 15:37:16 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch has a CF status of \"Ready for Committer\", but it is\ncurrently failing some CFbot tests [1]. Please have a look and post an\nupdated version.\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/4209\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 10:30:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-22 10:30:22 +1100, Peter Smith wrote:\n> 2024-01 Commitfest.\n> \n> Hi, This patch has a CF status of \"Ready for Committer\", but it is\n> currently failing some CFbot tests [1]. Please have a look and post an\n> updated version..\n\nI think this failure is independent of this patch - by coincidence I just\nsent an email about the issue\nhttps://www.postgresql.org/message-id/20240122204117.swton324xcoodnyi%40awork3.anarazel.de\na few minutes ago.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:47:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "On Tue, 24 Oct 2023 at 01:47, Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>\n> Hi!\n>\n> I looked through your patch and noticed that it was not applied to the current version of the master. I rebased it and attached a version. I didn't see any problems and, honestly, no big changes were needed, all regression tests were passed.\n>\n> I think it's better to add a test, but to be honest, I haven't been able to come up with something yet.\n\nThe patch does not apply anymore as in CFBot at [1]:\n\n=== Applying patches on top of PostgreSQL commit ID\n7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n=== applying patch\n./v2-0001-WIP-Evaluate-arguments-of-correlated-SubPlans-in-the.patch\n....\npatching file src/include/executor/execExpr.h\nHunk #1 succeeded at 160 (offset 1 line).\nHunk #2 succeeded at 382 (offset 2 lines).\nHunk #3 FAILED at 778.\n1 out of 3 hunks FAILED -- saving rejects to file\nsrc/include/executor/execExpr.h.rej\npatching file src/include/nodes/execnodes.h\nHunk #1 succeeded at 959 (offset 7 lines).\n\nPlease have a look and post an updated version.\n\n[1] - http://cfbot.cputube.org/patch_46_4209.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 08:07:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "On 26.01.2024 05:37, vignesh C wrote:\n> On Tue, 24 Oct 2023 at 01:47, Alena Rybakina <lena.ribackina@yandex.ru> wrote:\n>> Hi!\n>>\n>> I looked through your patch and noticed that it was not applied to the current version of the master. I rebased it and attached a version. I didn't see any problems and, honestly, no big changes were needed, all regression tests were passed.\n>>\n>> I think it's better to add a test, but to be honest, I haven't been able to come up with something yet.\n> The patch does not apply anymore as in CFBot at [1]:\n>\n> === Applying patches on top of PostgreSQL commit ID\n> 7014c9a4bba2d1b67d60687afb5b2091c1d07f73 ===\n> === applying patch\n> ./v2-0001-WIP-Evaluate-arguments-of-correlated-SubPlans-in-the.patch\n> ....\n> patching file src/include/executor/execExpr.h\n> Hunk #1 succeeded at 160 (offset 1 line).\n> Hunk #2 succeeded at 382 (offset 2 lines).\n> Hunk #3 FAILED at 778.\n> 1 out of 3 hunks FAILED -- saving rejects to file\n> src/include/executor/execExpr.h.rej\n> patching file src/include/nodes/execnodes.h\n> Hunk #1 succeeded at 959 (offset 7 lines).\n>\n> Please have a look and post an updated version.\n>\n> [1] - http://cfbot.cputube.org/patch_46_4209.log\n>\n> Regards,\n> Vignesh\n\nThank you!\n\nI fixed it. The code remains the same.\n\n-- \nRegards,\nAlena Rybakina\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 28 Jan 2024 12:07:46 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Alena Rybakina <lena.ribackina@yandex.ru> writes:\n> I fixed it. The code remains the same.\n\nI see the cfbot is again complaining that this patch doesn't apply.\n\nIn hopes of pushing this over the finish line, I fixed up the (minor)\npatch conflict and also addressed the cosmetic complaints I had\nupthread [1]. I think the attached v4 is committable. If Andres is\ntoo busy, I can push it, but really it's his patch ...\n\n(BTW, I see no need for additional test cases. Coverage checks show\nthat all this code is reached during the core regression tests.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2618533.1677874158%40sss.pgh.pa.us",
"msg_date": "Thu, 18 Jul 2024 16:01:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "On 18.07.2024 23:01, Tom Lane wrote:\n> Alena Rybakina <lena.ribackina@yandex.ru> writes:\n>> I fixed it. The code remains the same.\n> I see the cfbot is again complaining that this patch doesn't apply.\n>\n> In hopes of pushing this over the finish line, I fixed up the (minor)\n> patch conflict and also addressed the cosmetic complaints I had\n> upthread [1]. I think the attached v4 is committable. If Andres is\n> too busy, I can push it, but really it's his patch ...\n>\n> (BTW, I see no need for additional test cases. Coverage checks show\n> that all this code is reached during the core regression tests.)\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/2618533.1677874158%40sss.pgh.pa.us\n>\nThank you for your contribution! I looked at the patch again and I agree \nthat it is ready to be pushed.\n\n\n\n",
"msg_date": "Fri, 19 Jul 2024 22:19:55 +0300",
"msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "On 2024-07-18 16:01:19 -0400, Tom Lane wrote:\n> Alena Rybakina <lena.ribackina@yandex.ru> writes:\n> > I fixed it. The code remains the same.\n> \n> I see the cfbot is again complaining that this patch doesn't apply.\n> \n> In hopes of pushing this over the finish line, I fixed up the (minor)\n> patch conflict and also addressed the cosmetic complaints I had\n> upthread [1]. I think the attached v4 is committable. If Andres is\n> too busy, I can push it, but really it's his patch ...\n\nThanks for the rebase - I'll try to get it pushed in the next few days!\n\n\n",
"msg_date": "Fri, 19 Jul 2024 21:17:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-25 13:44:01 -0800, Andres Freund wrote:\n> Ended up simpler than I'd thought. I see small, consistent, speedups and\n> reductions in memory usage.\n\nFor the sake of person following the link from the commit message to this\nthread in a few years, I thought it'd be useful to have an example for the\ndifferences due to the patch.\n\n\nConsider e.g. the query used for psql's \\d pg_class, just because that's the\nfirst thing using a subplan that I got my hand on:\n\nMemory usage in ExecutorState changes from\n Grand total: 131072 bytes in 12 blocks; 88696 free (2 chunks); 42376 used\nto\n Grand total: 131072 bytes in 12 blocks; 93656 free (4 chunks); 37416 used\n\n\nWhat's more interesting is that if I - just to show the effect - force JITing,\nEXPLAIN ANALYZE's jit section changes from:\n\nJIT:\n Functions: 31\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 2.656 ms (Deform 1.496 ms), Inlining 25.147 ms, Optimization 112.853 ms, Emission 81.585 ms, Total 222.241 ms\n\nto\n\n JIT:\n Functions: 21\n Options: Inlining true, Optimization true, Expressions true, Deforming true\n Timing: Generation 1.883 ms (Deform 0.990 ms), Inlining 23.821 ms, Optimization 85.150 ms, Emission 64.303 ms, Total 175.157 ms\n\nI.e. noticeably reduced overhead, mostly due to the reduction in emitted\nfunctions.\n\nThe difference obviously gets bigger the more parameters the subplan has, in\nartificial cases it can be very large.\n\n\nI also see some small performance gains during execution, but for realistic\nqueries that's in the ~1-3% range.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Jul 2024 19:12:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
},
{
"msg_contents": "Hi,\n\nOn 2024-07-19 21:17:12 -0700, Andres Freund wrote:\n> On 2024-07-18 16:01:19 -0400, Tom Lane wrote:\n> > Alena Rybakina <lena.ribackina@yandex.ru> writes:\n> > > I fixed it. The code remains the same.\n> > \n> > I see the cfbot is again complaining that this patch doesn't apply.\n> > \n> > In hopes of pushing this over the finish line, I fixed up the (minor)\n> > patch conflict and also addressed the cosmetic complaints I had\n> > upthread [1]. I think the attached v4 is committable. If Andres is\n> > too busy, I can push it, but really it's his patch ...\n> \n> Thanks for the rebase - I'll try to get it pushed in the next few days!\n\nAnd finally done. No code changes. I did spend some more time evaluating the\nresource usage benefits actually do exist (see mail upthread).\n\nThanks for the reviews, rebasing and the reminders!\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Wed, 31 Jul 2024 20:24:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Evaluate arguments of correlated SubPlans in the referencing\n ExprState"
}
] |
[
{
"msg_contents": "here are the source codes from src/include/access/htup_details.h.\r\n/*\r\n * information stored in t_infomask:\r\n */\r\n#define HEAP_HASNULL 0x0001 /* has null attribute(s) */\r\n#define HEAP_HASVARWIDTH 0x0002 /* has variable-width attribute(s) */\r\n#define HEAP_HASEXTERNAL 0x0004 /* has external stored attribute(s) */\r\n#define HEAP_HASOID_OLD 0x0008 /* has an object-id field */\r\n#define HEAP_XMAX_KEYSHR_LOCK 0x0010 /* xmax is a key-shared locker */\r\n#define HEAP_COMBOCID 0x0020 /* t_cid is a combo CID */\r\n#define HEAP_XMAX_EXCL_LOCK 0x0040 /* xmax is exclusive locker */\r\n#define HEAP_XMAX_LOCK_ONLY 0x0080 /* xmax, if valid, is only a locker */\r\n\r\nAnd I can't understand these attrs:\r\n1. external stored attribute(s), what is this? can you give a create statement to show me?\r\n2. xmax is a key-shared locker/exclusive locker/only a locker, so how you use this? can you give me a scenario?\r\nlet me try to explain it:\r\n if there is a txn is trying to read this heaptuple, the HEAP_XMAX_KEYSHR_LOCK bit will be set to 1.\r\n if there is a txn is trying to delete/update this heaptuple, the HEAP_XMAX_EXCL_LOCK bit will be set to 1.\r\n but for HEAP_XMAX_LOCK_ONLY, I can't understand.\r\nAnd another thought is that these three bit can have only one to be set 1 at most.\r\n3. t_cid is a combo CID? what's a CID? give me an example please.\r\n--------------------------------------\r\njacktby@gmail.com\r\n\n\nhere are the source codes from src/include/access/htup_details.h./* * information stored in t_infomask: */#define HEAP_HASNULL 0x0001 /* has null attribute(s) */#define HEAP_HASVARWIDTH 0x0002 /* has variable-width attribute(s) */#define HEAP_HASEXTERNAL 0x0004 /* has external stored attribute(s) */#define HEAP_HASOID_OLD 0x0008 /* has an object-id field */#define HEAP_XMAX_KEYSHR_LOCK 0x0010 /* xmax is a key-shared locker */#define HEAP_COMBOCID 0x0020 /* t_cid is a combo CID */#define HEAP_XMAX_EXCL_LOCK 0x0040 /* xmax is exclusive locker */#define HEAP_XMAX_LOCK_ONLY 0x0080 /* xmax, if valid, is only a locker */And I can't understand these attrs:1. external stored attribute(s), what is this? can you give a create statement to show me?2. xmax is a key-shared locker/exclusive locker/only a locker, so how you use this? can you give me a scenario?let me try to explain it: if there is a txn is trying to read this heaptuple, the HEAP_XMAX_KEYSHR_LOCK bit will be set to 1. if there is a txn is trying to delete/update this heaptuple, the HEAP_XMAX_EXCL_LOCK bit will be set to 1. but for HEAP_XMAX_LOCK_ONLY, I can't understand.And another thought is that these three bit can have only one to be set 1 at most.3. t_cid is a combo CID? what's a CID? give me an example please.--------------------------------------jacktby@gmail.com",
"msg_date": "Sun, 26 Feb 2023 22:30:56 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Give me more details of some bits in infomask!!"
},
{
"msg_contents": "On 2/26/23 15:30, jacktby@gmail.com wrote:\n> here are the source codes from src/include/access/htup_details.h.\n> /*\n> * information stored in t_infomask:\n> */\n> #define HEAP_HASNULL0x0001/* has null attribute(s) */\n> #define HEAP_HASVARWIDTH0x0002/* has variable-width attribute(s) */\n> #define HEAP_HASEXTERNAL0x0004/* has external stored attribute(s) */\n> #define HEAP_HASOID_OLD0x0008/* has an object-id field */\n> #define HEAP_XMAX_KEYSHR_LOCK0x0010/* xmax is a key-shared locker */\n> #define HEAP_COMBOCID0x0020/* t_cid is a combo CID */\n> #define HEAP_XMAX_EXCL_LOCK0x0040/* xmax is exclusive locker */\n> #define HEAP_XMAX_LOCK_ONLY0x0080/* xmax, if valid, is only a locker */\n> \n> And I can't understand these attrs:\n\nI suggest you try something like 'git grep HEAP_HASEXTERNAL' which shows\nyou where the flag is used, which should tell you what it means. These\nshort descriptions generally assume you know enough about the internals.\n\n> 1. external stored attribute(s), what is this? can you give a create\n> statement to show me?\n\nexternal = value stored in a TOAST table\n\n> 2. xmax is a key-shared locker/exclusive locker/only a locker, so how\n> you use this? can you give me a scenario?\n> let me try to explain it:\n> if there is a txn is trying to read this heaptuple,\n> the HEAP_XMAX_KEYSHR_LOCK bit will be set to 1.\n> if there is a txn is trying to delete/update this heaptuple,\n> the HEAP_XMAX_EXCL_LOCK bit will be set to 1.\n> but for HEAP_XMAX_LOCK_ONLY, I can't understand.\n> And another thought is that these three bit can have only one to be set\n> 1 at most.\n\nI believe HEAP_XMAX_LOCK_ONLY means the xmax transaction only locked the\ntuple, without deleting/updating it.\n\n> 3. t_cid is a combo CID? what's a CID? give me an example please.\n\nCID means \"command ID\" i.e. sequential ID assigned to commands in a\nsingle session (for visibility checks, so that a query doesn't see data\ndeleted by earlier commands in the same session). See\nsrc/backend/utils/time/combocid.c for basic explanation of what \"combo\nCID\" is.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:23:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Give me more details of some bits in infomask!!"
},
{
"msg_contents": "From: Tomas Vondra\r\nDate: 2023-02-26 23:23\r\nTo: jacktby@gmail.com; pgsql-hackers\r\nSubject: Re: Give me more details of some bits in infomask!!\r\nOn 2/26/23 15:30, jacktby@gmail.com wrote:\r\n> here are the source codes from src/include/access/htup_details.h.\r\n> /*\r\n> * information stored in t_infomask:\r\n> */\r\n> #define HEAP_HASNULL0x0001/* has null attribute(s) */\r\n> #define HEAP_HASVARWIDTH0x0002/* has variable-width attribute(s) */\r\n> #define HEAP_HASEXTERNAL0x0004/* has external stored attribute(s) */\r\n> #define HEAP_HASOID_OLD0x0008/* has an object-id field */\r\n> #define HEAP_XMAX_KEYSHR_LOCK0x0010/* xmax is a key-shared locker */\r\n> #define HEAP_COMBOCID0x0020/* t_cid is a combo CID */\r\n> #define HEAP_XMAX_EXCL_LOCK0x0040/* xmax is exclusive locker */\r\n> #define HEAP_XMAX_LOCK_ONLY0x0080/* xmax, if valid, is only a locker */\r\n> \r\n> And I can't understand these attrs:\r\n \r\nI suggest you try something like 'git grep HEAP_HASEXTERNAL' which shows\r\nyou where the flag is used, which should tell you what it means. These\r\nshort descriptions generally assume you know enough about the internals.\r\n \r\n> 1. external stored attribute(s), what is this? can you give a create\r\n> statement to show me?\r\n \r\nexternal = value stored in a TOAST table\r\n \r\n> 2. xmax is a key-shared locker/exclusive locker/only a locker, so how\r\n> you use this? can you give me a scenario?\r\n> let me try to explain it:\r\n> if there is a txn is trying to read this heaptuple,\r\n> the HEAP_XMAX_KEYSHR_LOCK bit will be set to 1.\r\n> if there is a txn is trying to delete/update this heaptuple,\r\n> the HEAP_XMAX_EXCL_LOCK bit will be set to 1.\r\n> but for HEAP_XMAX_LOCK_ONLY, I can't understand.\r\n> And another thought is that these three bit can have only one to be set\r\n> 1 at most.\r\n \r\nI believe HEAP_XMAX_LOCK_ONLY means the xmax transaction only locked the\r\ntuple, without deleting/updating it.\r\n \r\n> 3. t_cid is a combo CID? what's a CID? give me an example please.\r\n \r\nCID means \"command ID\" i.e. sequential ID assigned to commands in a\r\nsingle session (for visibility checks, so that a query doesn't see data\r\ndeleted by earlier commands in the same session). See\r\nsrc/backend/utils/time/combocid.c for basic explanation of what \"combo\r\nCID\" is.\r\n \r\n \r\nregards\r\n \r\n-- \r\nTomas Vondra\r\nEnterpriseDB: http://www.enterprisedb.com\r\nThe Enterprise PostgreSQL Company\r\n\r\n\r\n> I believe HEAP_XMAX_LOCK_ONLY means the xmax transaction only locked the\r\n> tuple, without deleting/updating it.\r\nif so, you mean when I read this tuple, this bit will be set 1, but I think this is duplicat with HEAP_XMAX_KEYSHR_LOCK.\r\n\r\n> CID means \"command ID\" i.e. sequential ID assigned to commands in a\r\n> single session (for visibility checks, so that a query doesn't see data\r\n> deleted by earlier commands in the same session). See\r\n> src/backend/utils/time/combocid.c for basic explanation of what \"combo\r\n> CID\" is.\r\nI think if cid is used for visibility checks in one session, that's meaingless, beacause we can use the t_xmin and t_xmax to \r\nget this goal. Is tis \r\n\n\n From: Tomas VondraDate: 2023-02-26 23:23To: jacktby@gmail.com; pgsql-hackersSubject: Re: Give me more details of some bits in infomask!!On 2/26/23 15:30, jacktby@gmail.com wrote:\n> here are the source codes from src/include/access/htup_details.h.\n> /*\n> * information stored in t_infomask:\n> */\n> #define HEAP_HASNULL0x0001/* has null attribute(s) */\n> #define HEAP_HASVARWIDTH0x0002/* has variable-width attribute(s) */\n> #define HEAP_HASEXTERNAL0x0004/* has external stored attribute(s) */\n> #define HEAP_HASOID_OLD0x0008/* has an object-id field */\n> #define HEAP_XMAX_KEYSHR_LOCK0x0010/* xmax is a key-shared locker */\n> #define HEAP_COMBOCID0x0020/* t_cid is a combo CID */\n> #define HEAP_XMAX_EXCL_LOCK0x0040/* xmax is exclusive locker */\n> #define HEAP_XMAX_LOCK_ONLY0x0080/* xmax, if valid, is only a locker */\n> \n> And I can't understand these attrs:\n \nI suggest you try something like 'git grep HEAP_HASEXTERNAL' which shows\nyou where the flag is used, which should tell you what it means. These\nshort descriptions generally assume you know enough about the internals.\n \n> 1. external stored attribute(s), what is this? can you give a create\n> statement to show me?\n \nexternal = value stored in a TOAST table\n \n> 2. xmax is a key-shared locker/exclusive locker/only a locker, so how\n> you use this? can you give me a scenario?\n> let me try to explain it:\n> if there is a txn is trying to read this heaptuple,\n> the HEAP_XMAX_KEYSHR_LOCK bit will be set to 1.\n> if there is a txn is trying to delete/update this heaptuple,\n> the HEAP_XMAX_EXCL_LOCK bit will be set to 1.\n> but for HEAP_XMAX_LOCK_ONLY, I can't understand.\n> And another thought is that these three bit can have only one to be set\n> 1 at most.\n \nI believe HEAP_XMAX_LOCK_ONLY means the xmax transaction only locked the\ntuple, without deleting/updating it.\n \n> 3. t_cid is a combo CID? what's a CID? give me an example please.\n \nCID means \"command ID\" i.e. sequential ID assigned to commands in a\nsingle session (for visibility checks, so that a query doesn't see data\ndeleted by earlier commands in the same session). See\nsrc/backend/utils/time/combocid.c for basic explanation of what \"combo\nCID\" is.\n \n \nregards\n \n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company> I believe HEAP_XMAX_LOCK_ONLY means the xmax transaction only locked the> tuple, without deleting/updating it.if so, you mean when I read this tuple, this bit will be set 1, but I think this is duplicat with HEAP_XMAX_KEYSHR_LOCK.> CID means \"command ID\" i.e. sequential ID assigned to commands in a> single session (for visibility checks, so that a query doesn't see data> deleted by earlier commands in the same session). See> src/backend/utils/time/combocid.c for basic explanation of what \"combo> CID\" is.I think if cid is used for visibility checks in one session, that's meaingless, beacause we can use the t_xmin and t_xmax to get this goal. Is tis",
"msg_date": "Sun, 26 Feb 2023 23:36:18 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: Give me more details of some bits in infomask!!"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 8:36 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n\n> > CID means \"command ID\" i.e. sequential ID assigned to commands in a\n> > single session (for visibility checks, so that a query doesn't see data\n> > deleted by earlier commands in the same session). See\n> > src/backend/utils/time/combocid.c for basic explanation of what \"combo\n> > CID\" is.\n> I think if cid is used for visibility checks in one session, that's\n> meaingless, beacause we can use the t_xmin and t_xmax to\n> get this goal. Is tis\n>\n>\nI think the word \"session\" is wrong. It should be \"transaction\".\n\nIIUC, it is what is changed when one issues CommandCounterIncrement within\na transaction. And you need somewhere to save which CCI step deletes rows,\nin particular due to the savepoint feature.\n\nDavid J.\n\nOn Sun, Feb 26, 2023 at 8:36 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n> CID means \"command ID\" i.e. sequential ID assigned to commands in a> single session (for visibility checks, so that a query doesn't see data> deleted by earlier commands in the same session). See> src/backend/utils/time/combocid.c for basic explanation of what \"combo> CID\" is.I think if cid is used for visibility checks in one session, that's meaingless, beacause we can use the t_xmin and t_xmax to get this goal. Is tis I think the word \"session\" is wrong. It should be \"transaction\".IIUC, it is what is changed when one issues CommandCounterIncrement within a transaction. And you need somewhere to save which CCI step deletes rows, in particular due to the savepoint feature.David J.",
"msg_date": "Sun, 26 Feb 2023 08:44:45 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Give me more details of some bits in infomask!!"
}
] |
[
{
"msg_contents": "use these sqls below:\r\ncreate table t(a int);\r\ninsert into t values(1);\r\nselect lp,lp_off,lp_len,t_data from heap_page_items(get_raw_page('t',0));\r\n lp | lp_off | lp_len | t_data \r\n----+--------+--------+------------\r\n 1 | 8160 | 28 | \\x01000000\r\n--------------------------------------------------------------------------------\r\njacktby@gmail.com\r\n\n\nuse these sqls below:\ncreate table t(a int);insert into t values(1);select lp,lp_off,lp_len,t_data from heap_page_items(get_raw_page('t',0)); lp | lp_off | lp_len | t_data ----+--------+--------+------------ 1 | 8160 | 28 | \\x01000000--------------------------------------------------------------------------------jacktby@gmail.com",
"msg_date": "Sun, 26 Feb 2023 22:35:07 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why the lp_len is 28 not 32?"
},
{
"msg_contents": "On 2/26/23 15:35, jacktby@gmail.com wrote:\n> use these sqls below:\n> create table t(a int);\n> insert into t values(1);\n> select lp,lp_off,lp_len,t_data from heap_page_items(get_raw_page('t',0));\n> lp | lp_off | lp_len | t_data \n> ----+--------+--------+------------\n> 1 | 8160 | 28 | \\x01000000\n> --------------------------------------------------------------------------------\n\nPretty sure this is because we align the data to MAXALIGN, and on x86_64\nthat's 8 bytes. 28 is not a multiple of 8 while 32 is.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:07:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why the lp_len is 28 not 32?"
},
{
"msg_contents": "From: Tomas Vondra\r\nDate: 2023-02-26 23:07\r\nTo: jacktby@gmail.com; pgsql-hackers\r\nSubject: Re: Why the lp_len is 28 not 32?\r\nOn 2/26/23 15:35, jacktby@gmail.com wrote:\r\n> use these sqls below:\r\n> create table t(a int);\r\n> insert into t values(1);\r\n> select lp,lp_off,lp_len,t_data from heap_page_items(get_raw_page('t',0));\r\n> lp | lp_off | lp_len | t_data \r\n> ----+--------+--------+------------\r\n> 1 | 8160 | 28 | \\x01000000\r\n> --------------------------------------------------------------------------------\r\n \r\nPretty sure this is because we align the data to MAXALIGN, and on x86_64\r\nthat's 8 bytes. 28 is not a multiple of 8 while 32 is.\r\n \r\nregards\r\n \r\n-- \r\nTomas Vondra\r\nEnterpriseDB: http://www.enterprisedb.com\r\nThe Enterprise PostgreSQL Company\r\n\r\nyes, So it should be 32 bytes not 28bytes, but the sql result is 28 !!!!!! that's false!!!!\r\n-------------------------------------------------\r\njacktby@gmail.com;\r\n\n\n From: Tomas VondraDate: 2023-02-26 23:07To: jacktby@gmail.com; pgsql-hackersSubject: Re: Why the lp_len is 28 not 32?On 2/26/23 15:35, jacktby@gmail.com wrote:\n> use these sqls below:\n> create table t(a int);\n> insert into t values(1);\n> select lp,lp_off,lp_len,t_data from heap_page_items(get_raw_page('t',0));\n> lp | lp_off | lp_len | t_data \n> ----+--------+--------+------------\n> 1 | 8160 | 28 | \\x01000000\n> --------------------------------------------------------------------------------\n \nPretty sure this is because we align the data to MAXALIGN, and on x86_64\nthat's 8 bytes. 28 is not a multiple of 8 while 32 is.\n \nregards\n \n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Companyyes, So it should be 32 bytes not 28bytes, but the sql result is 28 !!!!!! that's false!!!!-------------------------------------------------jacktby@gmail.com;",
"msg_date": "Sun, 26 Feb 2023 23:11:36 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: Why the lp_len is 28 not 32?"
},
{
"msg_contents": "On 2/26/23 16:11, jacktby@gmail.com wrote:\n\n> \n> yes, So it should be 32 bytes not 28bytes, but the sql result is 28\n> !!!!!! that's false!!!!\n\nNo. The tuple is 28 bytes long, and that's what's stored in lp_len. But\nwe align the start of the tuple to a multiple of 8 bytes. So it's at\noffset 8160 because that's the closest multiple of 8. Then there's 28\nbytes of data and then 4 empty bytes.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:35:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Why the lp_len is 28 not 32?"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 8:11 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n\n>\n> *From:* Tomas Vondra <tomas.vondra@enterprisedb.com>\n>\n> > ----+--------+--------+------------\n> > 1 | 8160 | 28 | \\x01000000\n> >\n> --------------------------------------------------------------------------------\n>\n> Pretty sure this is because we align the data to MAXALIGN, and on x86_64\n> that's 8 bytes. 28 is not a multiple of 8 while 32 is.\n>\n> >> yes, So it should be 32 bytes not 28bytes, but the sql result is 28\n> !!!!!! that's false!!!!\n>\n>\nNo, that is a definition not matching your expectation. Are you trying to\ndemonstrate a bug here or just observing that your intuition of this didn't\nwork here?\n\nThe content doesn't include alignment padding. The claim isn't that the\nsize is \"the number of bytes consumed in some place within the page\" but\nrather the size is \"the number of bytes needed to get the content required\nto be passed into the input function for the datatype\". Nicely, it is\ntrivial to then align the value to figure out the consumed width. If you\njust have the aligned size you would never know how many bytes you need for\nthe data value.\n\nDavid J.\n\nOn Sun, Feb 26, 2023 at 8:11 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n From: Tomas Vondra\n> ----+--------+--------+------------\n> 1 | 8160 | 28 | \\x01000000\n> --------------------------------------------------------------------------------\n \nPretty sure this is because we align the data to MAXALIGN, and on x86_64\nthat's 8 bytes. 28 is not a multiple of 8 while 32 is.\n \n>> yes, So it should be 32 bytes not 28bytes, but the sql result is 28 !!!!!! that's false!!!!No, that is a definition not matching your expectation. Are you trying to demonstrate a bug here or just observing that your intuition of this didn't work here?The content doesn't include alignment padding. The claim isn't that the size is \"the number of bytes consumed in some place within the page\" but rather the size is \"the number of bytes needed to get the content required to be passed into the input function for the datatype\". Nicely, it is trivial to then align the value to figure out the consumed width. If you just have the aligned size you would never know how many bytes you need for the data value.David J.",
"msg_date": "Sun, 26 Feb 2023 08:40:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Re: Why the lp_len is 28 not 32?"
}
] |
[
{
"msg_contents": "Hi,\n\nAs suggested in [1], the attached patch adds IO times to pg_stat_io;\n\nI added docs but haven't added any tests. The timings will only be\nnon-zero when track_io_timing is on, and I only see tests with track IO\ntiming on in explain.sql and the IO timings I added to pg_stat_io would\nnot be visible there.\n\nI didn't split it up into two patches (one with the changes to track IO\ntiming and 1 with the view additions and docs), because I figured the\noverall diff is pretty small.\n\nThere is one minor question (in the code as a TODO) which is whether or\nnot it is worth cross-checking that IO counts and times are either both\nzero or neither zero in the validation function\npgstat_bktype_io_stats_valid().\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/20230209050319.chyyup4vtq4jzobq%40awork3.anarazel.de",
"msg_date": "Sun, 26 Feb 2023 11:03:43 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2/26/23 5:03 PM, Melanie Plageman wrote:\n> Hi,\n> \n> As suggested in [1], the attached patch adds IO times to pg_stat_io;\n\nThanks for the patch!\n\nI started to have a look at it and figured out that a tiny rebase was needed (due to\n728560db7d and b9f0e54bc9), so please find the rebase (aka V2) attached.\n\n> The timings will only be non-zero when track_io_timing is on\n\nThat could lead to incorrect interpretation if one wants to divide the timing per operations, say:\n\n- track_io_timing is set to on while there is already operations\n- or set to off while it was on (and the number of operations keeps growing)\n\nMight be worth to warn/highlight in the \"track_io_timing\" doc?\n\n\n+ if (track_io_timing)\n+ {\n+ INSTR_TIME_SET_CURRENT(io_time);\n+ INSTR_TIME_SUBTRACT(io_time, io_start);\n+ pgstat_count_io_time(io_object, io_context, IOOP_EXTEND, io_time);\n+ }\n+\n+\n pgstat_count_io_op(io_object, io_context, IOOP_EXTEND);\n\nvs\n\n@@ -1042,6 +1059,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n INSTR_TIME_SUBTRACT(io_time, io_start);\n pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n INSTR_TIME_ADD(pgBufferUsage.blk_read_time, io_time);\n+ pgstat_count_io_time(io_object, io_context, IOOP_READ, io_time);\n }\n\nThat leads to pgstat_count_io_time() to be called before pgstat_count_io_op() (for the IOOP_EXTEND case) and\nafter pgstat_count_io_op() (for the IOOP_READ case).\n\nWhat about calling them in the same order and so that pgstat_count_io_time() is called before pgstat_count_io_op()?\n\nIf so, the ordering would also need to be changed in:\n\n- FlushRelationBuffers()\n- register_dirty_segment()\n\n> \n> There is one minor question (in the code as a TODO) which is whether or\n> not it is worth cross-checking that IO counts and times are either both\n> zero or neither zero in the validation function\n> pgstat_bktype_io_stats_valid().\n> \n\nAs pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\nto also check that if counts are not Zero then times are not Zero.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Feb 2023 10:49:13 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Thanks for the review!\n\nOn Tue, Feb 28, 2023 at 4:49 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> On 2/26/23 5:03 PM, Melanie Plageman wrote:\n> > As suggested in [1], the attached patch adds IO times to pg_stat_io;\n>\n> Thanks for the patch!\n>\n> I started to have a look at it and figured out that a tiny rebase was needed (due to\n> 728560db7d and b9f0e54bc9), so please find the rebase (aka V2) attached.\n\nThanks for doing that!\n\n> > The timings will only be non-zero when track_io_timing is on\n>\n> That could lead to incorrect interpretation if one wants to divide the timing per operations, say:\n>\n> - track_io_timing is set to on while there is already operations\n> - or set to off while it was on (and the number of operations keeps growing)\n>\n> Might be worth to warn/highlight in the \"track_io_timing\" doc?\n\nThis is a good point. I've added a note to the docs for pg_stat_io.\n\n> + if (track_io_timing)\n> + {\n> + INSTR_TIME_SET_CURRENT(io_time);\n> + INSTR_TIME_SUBTRACT(io_time, io_start);\n> + pgstat_count_io_time(io_object, io_context, IOOP_EXTEND, io_time);\n> + }\n> +\n> +\n> pgstat_count_io_op(io_object, io_context, IOOP_EXTEND);\n>\n> vs\n>\n> @@ -1042,6 +1059,7 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> INSTR_TIME_SUBTRACT(io_time, io_start);\n> pgstat_count_buffer_read_time(INSTR_TIME_GET_MICROSEC(io_time));\n> INSTR_TIME_ADD(pgBufferUsage.blk_read_time, io_time);\n> + pgstat_count_io_time(io_object, io_context, IOOP_READ, io_time);\n> }\n>\n> That leads to pgstat_count_io_time() to be called before pgstat_count_io_op() (for the IOOP_EXTEND case) and\n> after pgstat_count_io_op() (for the IOOP_READ case).\n>\n> What about calling them in the same order and so that pgstat_count_io_time() is called before pgstat_count_io_op()?\n>\n> If so, the ordering would also need to be changed in:\n>\n> - FlushRelationBuffers()\n> - register_dirty_segment()\n\nYes, good point. 
I've updated the code to use this suggested ordering in\nattached v3.\n\n> > There is one minor question (in the code as a TODO) which is whether or\n> > not it is worth cross-checking that IO counts and times are either both\n> > zero or neither zero in the validation function\n> > pgstat_bktype_io_stats_valid().\n> >\n>\n> As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n> to also check that if counts are not Zero then times are not Zero.\n\nYes, I think adding some validation around the relationship between\ncounts and timing should help prevent developers from forgetting to call\npg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n\nHowever, I think that we cannot check that if IO counts are non-zero\nthat IO times are non-zero, because the user may not have\ntrack_io_timing enabled. We can check that if IO times are not zero, IO\ncounts are not zero. I've done this in the attached v3.\n\n- Melanie",
"msg_date": "Mon, 6 Mar 2023 11:30:13 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 3/6/23 5:30 PM, Melanie Plageman wrote:\n> Thanks for the review!\n> \n> On Tue, Feb 28, 2023 at 4:49 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>> On 2/26/23 5:03 PM, Melanie Plageman wrote:\n>>> The timings will only be non-zero when track_io_timing is on\n>>\n>> That could lead to incorrect interpretation if one wants to divide the timing per operations, say:\n>>\n>> - track_io_timing is set to on while there is already operations\n>> - or set to off while it was on (and the number of operations keeps growing)\n>>\n>> Might be worth to warn/highlight in the \"track_io_timing\" doc?\n> \n> This is a good point. I've added a note to the docs for pg_stat_io.\n\nThanks!\n\nNow I've a second thought: what do you think about resetting the related number\nof operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\n\nThat way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\n\nThinking that way as we'd lose some (most?) benefits of the new *_time columns\nif one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\nwhere the track_io_timing changes occur).\n\nBut well, resetting the operations could also lead to bad interpretation about the operations...\n\nNot sure about which approach I like the most yet, what do you think?\n\n>> That leads to pgstat_count_io_time() to be called before pgstat_count_io_op() (for the IOOP_EXTEND case) and\n>> after pgstat_count_io_op() (for the IOOP_READ case).\n>>\n>> What about calling them in the same order and so that pgstat_count_io_time() is called before pgstat_count_io_op()?\n>>\n>> If so, the ordering would also need to be changed in:\n>>\n>> - FlushRelationBuffers()\n>> - register_dirty_segment()\n> \n> Yes, good point. 
I've updated the code to use this suggested ordering in\n> attached v3.\n> \n\nThanks, this looks good to me.\n\n>> As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n>> to also check that if counts are not Zero then times are not Zero.\n> \n> Yes, I think adding some validation around the relationship between\n> counts and timing should help prevent developers from forgetting to call\n> pg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n> \n> However, I think that we cannot check that if IO counts are non-zero\n> that IO times are non-zero, because the user may not have\n> track_io_timing enabled.\n\nYeah, right.\n\n> We can check that if IO times are not zero, IO\n> counts are not zero. I've done this in the attached v3.\n> \n\nThanks, looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 16:52:37 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-06 11:30:13 -0500, Melanie Plageman wrote:\n> > As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n> > to also check that if counts are not Zero then times are not Zero.\n> \n> Yes, I think adding some validation around the relationship between\n> counts and timing should help prevent developers from forgetting to call\n> pg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n> \n> However, I think that we cannot check that if IO counts are non-zero\n> that IO times are non-zero, because the user may not have\n> track_io_timing enabled. We can check that if IO times are not zero, IO\n> counts are not zero. I've done this in the attached v3.\n\nAnd even if track_io_timing is enabled, the timer granularity might be so low\nthat we *still* get zeroes.\n\nI wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n\n\n> @@ -1000,11 +1000,27 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> \n> \tif (isExtend)\n> \t{\n> +\t\tinstr_time\tio_start,\n> +\t\t\t\t\tio_time;\n> +\n> \t\t/* new buffers are zero-filled */\n> \t\tMemSet((char *) bufBlock, 0, BLCKSZ);\n> +\n> +\t\tif (track_io_timing)\n> +\t\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\t\telse\n> +\t\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n\nI wonder if there's an argument for tracking this in the existing IO stats as\nwell. 
But I guess we've lived with this for a long time...\n\n\n> @@ -2981,16 +2998,16 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln, IOObject io_object,\n> \t * When a strategy is not in use, the write can only be a \"regular\" write\n> \t * of a dirty shared buffer (IOCONTEXT_NORMAL IOOP_WRITE).\n> \t */\n> -\tpgstat_count_io_op(IOOBJECT_RELATION, io_context, IOOP_WRITE);\n> -\n> \tif (track_io_timing)\n> \t{\n> \t\tINSTR_TIME_SET_CURRENT(io_time);\n> \t\tINSTR_TIME_SUBTRACT(io_time, io_start);\n> \t\tpgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> \t\tINSTR_TIME_ADD(pgBufferUsage.blk_write_time, io_time);\n> +\t\tpgstat_count_io_time(IOOBJECT_RELATION, io_context, IOOP_WRITE, io_time);\n> \t}\n\nI think this needs a bit of cleanup - pgstat_count_buffer_write_time(),\npgBufferUsage.blk_write_time++, pgstat_count_io_time() is a bit excessive. We\nmight not be able to reduce the whole duplication at this point, but at least\nit should be a bit more centralized.\n\n\n\n> +\tpgstat_count_io_op(IOOBJECT_RELATION, io_context, IOOP_WRITE);\n> \tpgBufferUsage.shared_blks_written++;\n> \n> \t/*\n> @@ -3594,6 +3611,9 @@ FlushRelationBuffers(Relation rel)\n> \n> \tif (RelationUsesLocalBuffers(rel))\n> \t{\n> +\t\tinstr_time\tio_start,\n> +\t\t\t\t\tio_time;\n> +\n> \t\tfor (i = 0; i < NLocBuffer; i++)\n> \t\t{\n> \t\t\tuint32\t\tbuf_state;\n> @@ -3616,6 +3636,11 @@ FlushRelationBuffers(Relation rel)\n> \n> \t\t\t\tPageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n> \n> +\t\t\t\tif (track_io_timing)\n> +\t\t\t\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\t\t\t\telse\n> +\t\t\t\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n> \t\t\t\tsmgrwrite(RelationGetSmgr(rel),\n> \t\t\t\t\t\t BufTagGetForkNum(&bufHdr->tag),\n> \t\t\t\t\t\t bufHdr->tag.blockNum,\n\nI don't think you need the INSTR_TIME_SET_ZERO() in the body of the loop, to\nsilence the compiler warnings you can do it one level up.\n\n\n\n> @@ -228,6 +230,11 @@ LocalBufferAlloc(SMgrRelation smgr, 
ForkNumber forkNum, BlockNumber blockNum,\n> \n> \t\tPageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n> \n> +\t\tif (track_io_timing)\n> +\t\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\t\telse\n> +\t\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n> \t\t/* And write... */\n> \t\tsmgrwrite(oreln,\n> \t\t\t\t BufTagGetForkNum(&bufHdr->tag),\n> @@ -239,6 +246,13 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> \t\tbuf_state &= ~BM_DIRTY;\n> \t\tpg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> \n> +\t\tif (track_io_timing)\n> +\t\t{\n> +\t\t\tINSTR_TIME_SET_CURRENT(io_time);\n> +\t\t\tINSTR_TIME_SUBTRACT(io_time, io_start);\n> +\t\t\tpgstat_count_io_time(IOOBJECT_TEMP_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE, io_time);\n> +\t\t}\n> +\n> \t\tpgstat_count_io_op(IOOBJECT_TEMP_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE);\n> \t\tpgBufferUsage.local_blks_written++;\n> \t}\n\nPerhaps we can instead introduce a FlushLocalBuffer()? Then we don't need this\nin multiple write paths.\n\n\n> diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> index 352958e1fe..052875d86a 100644\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -1030,6 +1030,30 @@ register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)\n> \n> \tif (!RegisterSyncRequest(&tag, SYNC_REQUEST, false /* retryOnError */ ))\n> \t{\n> +\t\tinstr_time\tio_start,\n> +\t\t\t\t\tio_time;\n> +\n> +\t\tif (track_io_timing)\n> +\t\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\t\telse\n> +\t\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n> +\t\tereport(DEBUG1,\n> +\t\t\t\t(errmsg_internal(\"could not forward fsync request because request queue is full\")));\n> +\n> +\t\tif (FileSync(seg->mdfd_vfd, WAIT_EVENT_DATA_FILE_SYNC) < 0)\n> +\t\t\tereport(data_sync_elevel(ERROR),\n> +\t\t\t\t\t(errcode_for_file_access(),\n> +\t\t\t\t\t errmsg(\"could not fsync file \\\"%s\\\": %m\",\n> +\t\t\t\t\t\t\tFilePathName(seg->mdfd_vfd))));\n> 
+\n> +\t\tif (track_io_timing)\n> +\t\t{\n> +\t\t\tINSTR_TIME_SET_CURRENT(io_time);\n> +\t\t\tINSTR_TIME_SUBTRACT(io_time, io_start);\n> +\t\t\tpgstat_count_io_time(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC, io_time);\n> +\t\t}\n> +\n> \t\t/*\n> \t\t * We have no way of knowing if the current IOContext is\n> \t\t * IOCONTEXT_NORMAL or IOCONTEXT_[BULKREAD, BULKWRITE, VACUUM] at this\n> @@ -1042,15 +1066,6 @@ register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)\n> \t\t * backend fsyncs.\n> \t\t */\n> \t\tpgstat_count_io_op(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC);\n> -\n> -\t\tereport(DEBUG1,\n> -\t\t\t\t(errmsg_internal(\"could not forward fsync request because request queue is full\")));\n> -\n> -\t\tif (FileSync(seg->mdfd_vfd, WAIT_EVENT_DATA_FILE_SYNC) < 0)\n> -\t\t\tereport(data_sync_elevel(ERROR),\n> -\t\t\t\t\t(errcode_for_file_access(),\n> -\t\t\t\t\t errmsg(\"could not fsync file \\\"%s\\\": %m\",\n> -\t\t\t\t\t\t\tFilePathName(seg->mdfd_vfd))));\n> \t}\n> }\n> \n> @@ -1399,6 +1414,8 @@ int\n> mdsyncfiletag(const FileTag *ftag, char *path)\n> {\n> \tSMgrRelation reln = smgropen(ftag->rlocator, InvalidBackendId);\n> +\tinstr_time\tio_start,\n> +\t\t\t\tio_time;\n> \tFile\t\tfile;\n> \tbool\t\tneed_to_close;\n> \tint\t\t\tresult,\n> @@ -1425,10 +1442,22 @@ mdsyncfiletag(const FileTag *ftag, char *path)\n> \t\tneed_to_close = true;\n> \t}\n> \n> +\tif (track_io_timing)\n> +\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\telse\n> +\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n> \t/* Sync the file. 
*/\n> \tresult = FileSync(file, WAIT_EVENT_DATA_FILE_SYNC);\n> \tsave_errno = errno;\n> \n> +\tif (track_io_timing)\n> +\t{\n> +\t\tINSTR_TIME_SET_CURRENT(io_time);\n> +\t\tINSTR_TIME_SUBTRACT(io_time, io_start);\n> +\t\tpgstat_count_io_time(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC, io_time);\n> +\t}\n> +\n> \tif (need_to_close)\n> \t\tFileClose(file);\n\nPerhaps we could have mdsyncfd(), used by both mdsyncfiletag() and\nregister_dirty_segment()?\n\n\n\n> @@ -1359,20 +1378,31 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n> \n> \t\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> \t\t\t\t{\n> -\t\t\t\t\tint\t\t\tcol_idx = pgstat_get_io_op_index(io_op);\n> +\t\t\t\t\tint\t\t\ti = pgstat_get_io_op_index(io_op);\n> \n> \t\t\t\t\t/*\n> \t\t\t\t\t * Some combinations of BackendType and IOOp, of IOContext\n> \t\t\t\t\t * and IOOp, and of IOObject and IOOp are not tracked. Set\n> \t\t\t\t\t * these cells in the view NULL.\n> \t\t\t\t\t */\n> -\t\t\t\t\tnulls[col_idx] = !pgstat_tracks_io_op(bktype, io_obj, io_context, io_op);\n> +\t\t\t\t\tif (pgstat_tracks_io_op(bktype, io_obj, io_context, io_op))\n> +\t\t\t\t\t\tvalues[i] = Int64GetDatum(bktype_stats->counts[io_obj][io_context][io_op]);\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tnulls[i] = true;\n> +\t\t\t\t}\n\nThese lines were already too long, and it's getting worse with this change.\n\n\n> typedef struct PgStat_BktypeIO\n> {\n> -\tPgStat_Counter data[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> +\tPgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> +\tinstr_time\ttimes[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> } PgStat_BktypeIO;\n\nAh, you're going to hate me. We can't store instr_time on disk. There's\nanother patch that gets substantial peformance gains by varying the frequency\nat which instr_time keeps track of time based on the CPU frequency... It also\njust doesn't have enough range to keep track of system wide time on a larger\nsystem. 
A single backend won't run for 293 years, but with a few thousand\nbackends that's a whole different story.\n\nI think we need to accumulate in instr_time, but convert to floating point\nwhen flushing stats.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:39:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Thanks for taking another look!\n\nOn Tue, Mar 7, 2023 at 10:52 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> On 3/6/23 5:30 PM, Melanie Plageman wrote:\n> > Thanks for the review!\n> >\n> > On Tue, Feb 28, 2023 at 4:49 AM Drouvot, Bertrand\n> > <bertranddrouvot.pg@gmail.com> wrote:\n> >> On 2/26/23 5:03 PM, Melanie Plageman wrote:\n> >>> The timings will only be non-zero when track_io_timing is on\n> >>\n> >> That could lead to incorrect interpretation if one wants to divide the timing per operations, say:\n> >>\n> >> - track_io_timing is set to on while there is already operations\n> >> - or set to off while it was on (and the number of operations keeps growing)\n> >>\n> >> Might be worth to warn/highlight in the \"track_io_timing\" doc?\n> >\n> > This is a good point. I've added a note to the docs for pg_stat_io.\n>\n> Thanks!\n>\n> Now I've a second thought: what do you think about resetting the related number\n> of operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\n>\n> That way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\n>\n> Thinking that way as we'd lose some (most?) benefits of the new *_time columns\n> if one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\n> where the track_io_timing changes occur).\n>\n> But well, resetting the operations could also lead to bad interpretation about the operations...\n>\n> Not sure about which approach I like the most yet, what do you think?\n\nOh, this is an interesting idea. I think you are right about the\nsynchronization issues making the statistics untrustworthy and, thus,\nunusable.\n\nBuilding on your idea, what if we had the times be NULL instead of zero\nwhen track_io_timing is disabled? Then as you suggested, when you enable\ntrack_io_timing, it resets the IOOp counts and starts the times off at\nzero. 
However, disabling track_io_timing would only NULL out the times\nand not zero out the counts.\n\nWe could also, as you say, log these events.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 7 Mar 2023 13:43:28 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "On 2023-03-07 13:43:28 -0500, Melanie Plageman wrote:\n> > Now I've a second thought: what do you think about resetting the related number\n> > of operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\n> >\n> > That way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\n> >\n> > Thinking that way as we'd lose some (most?) benefits of the new *_time columns\n> > if one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\n> > where the track_io_timing changes occur).\n> >\n> > But well, resetting the operations could also lead to bad interpretation about the operations...\n> >\n> > Not sure about which approach I like the most yet, what do you think?\n> \n> Oh, this is an interesting idea. I think you are right about the\n> synchronization issues making the statistics untrustworthy and, thus,\n> unusable.\n\nNo, I don't think we can do that. It can be enabled on a per-session basis.\n\nI think we simply shouldn't do anything here. This is a pre-existing issue. I\nalso think that losing stats when turning track_io_timing on/off would not be\nhelpful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:47:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 3/7/23 7:47 PM, Andres Freund wrote:\n> On 2023-03-07 13:43:28 -0500, Melanie Plageman wrote:\n>>> Now I've a second thought: what do you think about resetting the related number\n>>> of operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\n>>>\n>>> That way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\n>>>\n>>> Thinking that way as we'd lose some (most?) benefits of the new *_time columns\n>>> if one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\n>>> where the track_io_timing changes occur).\n>>>\n>>> But well, resetting the operations could also lead to bad interpretation about the operations...\n>>>\n>>> Not sure about which approach I like the most yet, what do you think?\n>>\n>> Oh, this is an interesting idea. I think you are right about the\n>> synchronization issues making the statistics untrustworthy and, thus,\n>> unusable.\n> \n> No, I don't think we can do that. It can be enabled on a per-session basis.\n\nOh right. So it's even less clear to me to get how one would make use of those new *_time fields, given that:\n\n- pg_stat_io is \"global\" across all sessions. So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\nis even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\n\n- There is the risk mentioned above of bad interpretations for the \"time per operation\" metrics.\n\n- Even if there is frequent enough sampling of it pg_stat_io, one does not know which samples contain track_io_timing changes (at the cluster or session level).\n\n> I think we simply shouldn't do anything here. This is a pre-existing issue.\n\nOh, never thought about it. 
You mean like for pg_stat_database.blks_read and pg_stat_database.blk_read_time for example?\n\n> I also think that losing stats when turning track_io_timing on/off would not be\n> helpful.\n> \n\nYeah not 100% sure too as that would lead to other possible bad interpretations.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 12:55:34 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-08 12:55:34 +0100, Drouvot, Bertrand wrote:\n> On 3/7/23 7:47 PM, Andres Freund wrote:\n> > On 2023-03-07 13:43:28 -0500, Melanie Plageman wrote:\n> > > > Now I've a second thought: what do you think about resetting the related number\n> > > > of operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\n> > > > \n> > > > That way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\n> > > > \n> > > > Thinking that way as we'd lose some (most?) benefits of the new *_time columns\n> > > > if one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\n> > > > where the track_io_timing changes occur).\n> > > > \n> > > > But well, resetting the operations could also lead to bad interpretation about the operations...\n> > > > \n> > > > Not sure about which approach I like the most yet, what do you think?\n> > > \n> > > Oh, this is an interesting idea. I think you are right about the\n> > > synchronization issues making the statistics untrustworthy and, thus,\n> > > unusable.\n> > \n> > No, I don't think we can do that. It can be enabled on a per-session basis.\n> \n> Oh right. So it's even less clear to me to get how one would make use of those new *_time fields, given that:\n> \n> - pg_stat_io is \"global\" across all sessions. So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\n> is even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\n\nI think for 17 we should provide access to per-existing-connection pg_stat_io\nstats, and also provide a database aggregated version. 
Neither should be\nparticularly hard.\n\n\n> - There is the risk mentioned above of bad interpretations for the \"time per operation\" metrics.\n> \n> - Even if there is frequent enough sampling of it pg_stat_io, one does not know which samples contain track_io_timing changes (at the cluster or session level).\n\nYou'd just make the same use of them you do with pg_stat_database.blks_read\netc today.\n\nI don't think it's particularly useful to use the time to calculate \"per IO\"\ncosts - they can vary *drastically* due to kernel level buffering. The point\nof having the time available is that it provides information that the number\nof operations doesn't provide.\n\n\n> > I think we simply shouldn't do anything here. This is a pre-existing issue.\n> \n> Oh, never thought about it. You mean like for pg_stat_database.blks_read and pg_stat_database.blk_read_time for example?\n\nYes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:34:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 3/9/23 1:34 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-08 12:55:34 +0100, Drouvot, Bertrand wrote:\n>> On 3/7/23 7:47 PM, Andres Freund wrote:\n>>> On 2023-03-07 13:43:28 -0500, Melanie Plageman wrote:\n>>> No, I don't think we can do that. It can be enabled on a per-session basis.\n>>\n>> Oh right. So it's even less clear to me to get how one would make use of those new *_time fields, given that:\n>>\n>> - pg_stat_io is \"global\" across all sessions. So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\n>> is even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\n> \n> I think for 17 we should provide access to per-existing-connection pg_stat_io\n> stats, and also provide a database aggregated version. Neither should be\n> particularly hard.\n> \n\n+1 that would be great.\n> \n> I don't think it's particularly useful to use the time to calculate \"per IO\"\n> costs - they can vary *drastically* due to kernel level buffering.\n\nExactly and I think that's the reason why it could be useful. I think that could help (with frequent enough sampling)\nto try to identify when the IOs are served by the page cache or not (if one knows his infra well enough).\n\nOne could say (for example, depending on his environment) that if the read_time > 4ms then the IO is served by spindle disks (if any)\nand if <<< ms then by the page cache.\n\nWhat I mean is that one could try to characterized their IOs based on threshold that they could define.\n\nAdding/reporting histograms in the game would be even better: something we could look for for 17?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 09:20:39 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "> >>> Now I've a second thought: what do you think about resetting the related number\r\n> >>> of operations and *_time fields when enabling/disabling track_io_timing? (And mention it in the doc).\r\n> >>>\r\n> >>> That way it'd prevent bad interpretation (at least as far the time per operation metrics are concerned).\r\n> >>>\r\n> >>> Thinking that way as we'd lose some (most?) benefits of the new *_time columns\r\n> >>> if one can't \"trust\" their related operations and/or one is not sampling pg_stat_io frequently enough (to discard the samples\r\n> >>> where the track_io_timing changes occur).\r\n> >>>\r\n> >>> But well, resetting the operations could also lead to bad interpretation about the operations...\r\n> >>>\r\n> >>> Not sure about which approach I like the most yet, what do you think?\r\n> >>\r\n> >> Oh, this is an interesting idea. I think you are right about the\r\n> >> synchronization issues making the statistics untrustworthy and, thus,\r\n> >> unusable.\r\n> > \r\n> > No, I don't think we can do that. It can be enabled on a per-session basis.\r\n\r\n> Oh right. So it's even less clear to me to get how one would make use of those new *_time fields, given that:\r\n\r\n> - pg_stat_io is \"global\" across all sessions. 
So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\r\n> is even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\r\n\r\n> - There is the risk mentioned above of bad interpretations for the \"time per operation\" metrics.\r\n\r\n> - Even if there is frequent enough sampling of it pg_stat_io, one does not know which samples contain track_io_timing changes (at the cluster or session level).\r\n\r\nAs long as track_io_timing can be toggled, blk_write_time could lead to wrong conclusions.\r\nI think it may be helpful to track the blks_read when track_io_timing is enabled\r\nseparately.\r\n\r\nblks_read will be as is and give the overall blks_read, while a new column\r\nblks_read_with_timing will only report on blks_read with track_io_timing enabled.\r\n\r\nblks_read_with_timing should never be larger than blks_read.\r\n\r\nThis will then make the blks_read_time valuable if it's looked at with\r\nthe blks_read_with_timing column.\r\n\r\n\r\nRegards,\r\n\r\n-- \r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n",
"msg_date": "Thu, 9 Mar 2023 14:28:38 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi, v4 attached addresses these review comments.\n\nOn Tue, Mar 7, 2023 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-06 11:30:13 -0500, Melanie Plageman wrote:\n> > > As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n> > > to also check that if counts are not Zero then times are not Zero.\n> >\n> > Yes, I think adding some validation around the relationship between\n> > counts and timing should help prevent developers from forgetting to call\n> > pg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n> >\n> > However, I think that we cannot check that if IO counts are non-zero\n> > that IO times are non-zero, because the user may not have\n> > track_io_timing enabled. We can check that if IO times are not zero, IO\n> > counts are not zero. I've done this in the attached v3.\n>\n> And even if track_io_timing is enabled, the timer granularity might be so low\n> that we *still* get zeroes.\n>\n> I wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n\nAnd then have pg_stat_reset_shared('io') reset pg_stat_database IO\nstats?\n\n> > @@ -1000,11 +1000,27 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> >\n> > if (isExtend)\n> > {\n> > + instr_time io_start,\n> > + io_time;\n> > +\n> > /* new buffers are zero-filled */\n> > MemSet((char *) bufBlock, 0, BLCKSZ);\n> > +\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n>\n> I wonder if there's an argument for tracking this in the existing IO stats as\n> well. 
But I guess we've lived with this for a long time...\n\nNot sure I want to include that in this patchset.\n\n> > @@ -2981,16 +2998,16 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln, IOObject io_object,\n> > * When a strategy is not in use, the write can only be a \"regular\" write\n> > * of a dirty shared buffer (IOCONTEXT_NORMAL IOOP_WRITE).\n> > */\n> > - pgstat_count_io_op(IOOBJECT_RELATION, io_context, IOOP_WRITE);\n> > -\n> > if (track_io_timing)\n> > {\n> > INSTR_TIME_SET_CURRENT(io_time);\n> > INSTR_TIME_SUBTRACT(io_time, io_start);\n> > pgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> > INSTR_TIME_ADD(pgBufferUsage.blk_write_time, io_time);\n> > + pgstat_count_io_time(IOOBJECT_RELATION, io_context, IOOP_WRITE, io_time);\n> > }\n>\n> I think this needs a bit of cleanup - pgstat_count_buffer_write_time(),\n> pgBufferUsage.blk_write_time++, pgstat_count_io_time() is a bit excessive. We\n> might not be able to reduce the whole duplication at this point, but at least\n> it should be a bit more centralized.\n\nSo, in the attached v4, I've introduced pgstat_io_start() and\npgstat_io_end(...). The end IO function takes the IOObject, IOOp, and\nIOContext, in addition to the start_time, so that we know which\npgBufferUsage field to increment and which pgstat_count_buffer_*_time()\nto call.\n\nI will note that calling this function now causes pgBufferUsage and\npgStatBlock*Time to be incremented in a couple of places that they were\nnot before. I think those might have been accidental omissions, so I\nthink it is okay.\n\nThe exception is pgstat_count_write_time() being only called for\nrelations in shared buffers and not temporary relations while\npgstat_count_buffer_read_time() is called for temporary relations and\nrelations in shared buffers. 
I left that behavior as is, though it seems\nlike it is wrong.\n\nI added pgstat_io_start() to pgstat.c -- not sure if it is best there.\n\nI could separate it into a commit that does this refactoring of the\nexisting counting (without adding pgstat_count_io_time()) and then\nanother that adds pgstat_count_io_time(). I hesitated to do that until I\nknew that the new functions were viable.\n\n> > + pgstat_count_io_op(IOOBJECT_RELATION, io_context, IOOP_WRITE);\n> > pgBufferUsage.shared_blks_written++;\n> >\n> > /*\n> > @@ -3594,6 +3611,9 @@ FlushRelationBuffers(Relation rel)\n> >\n> > if (RelationUsesLocalBuffers(rel))\n> > {\n> > + instr_time io_start,\n> > + io_time;\n> > +\n> > for (i = 0; i < NLocBuffer; i++)\n> > {\n> > uint32 buf_state;\n> > @@ -3616,6 +3636,11 @@ FlushRelationBuffers(Relation rel)\n> >\n> > PageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n> >\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n> > smgrwrite(RelationGetSmgr(rel),\n> > BufTagGetForkNum(&bufHdr->tag),\n> > bufHdr->tag.blockNum,\n>\n> I don't think you need the INSTR_TIME_SET_ZERO() in the body of the loop, to\n> silence the compiler warnings you can do it one level up.\n\nSo, I didn't move it out because I am using pgstat_io_start() which does\nset zero. However, I could eschew the pgstat_io_start() helper function\nand just do what is in the function inline. Do you think the overhead of\nset zero is worth it?\n\n> > @@ -228,6 +230,11 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> >\n> > PageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n> >\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n> > /* And write... 
*/\n> > smgrwrite(oreln,\n> > BufTagGetForkNum(&bufHdr->tag),\n> > @@ -239,6 +246,13 @@ LocalBufferAlloc(SMgrRelation smgr, ForkNumber forkNum, BlockNumber blockNum,\n> > buf_state &= ~BM_DIRTY;\n> > pg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n> >\n> > + if (track_io_timing)\n> > + {\n> > + INSTR_TIME_SET_CURRENT(io_time);\n> > + INSTR_TIME_SUBTRACT(io_time, io_start);\n> > + pgstat_count_io_time(IOOBJECT_TEMP_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE, io_time);\n> > + }\n> > +\n> > pgstat_count_io_op(IOOBJECT_TEMP_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE);\n> > pgBufferUsage.local_blks_written++;\n> > }\n>\n> Perhaps we can instead introduce a FlushLocalBuffer()? Then we don't need this\n> in multiple write paths.\n\nFlushLocalBuffer() is a good idea. It would be nice to have it contain\nmore than just\n pgstat_io_start()\n smgrwrite()\n pgstat_io_end()\ne.g. to have it include checksumming and marking dirty (more like\nnormal FlushBuffer()). I noticed that LocalBufferAlloc() does not set up\nerror traceback support for ereport and FlushRelationBuffers() does. 
Is\nthis intentional?\n\n> > diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> > index 352958e1fe..052875d86a 100644\n> > --- a/src/backend/storage/smgr/md.c\n> > +++ b/src/backend/storage/smgr/md.c\n> > @@ -1030,6 +1030,30 @@ register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)\n> >\n> > if (!RegisterSyncRequest(&tag, SYNC_REQUEST, false /* retryOnError */ ))\n> > {\n> > + instr_time io_start,\n> > + io_time;\n> > +\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n> > + ereport(DEBUG1,\n> > + (errmsg_internal(\"could not forward fsync request because request queue is full\")));\n> > +\n> > + if (FileSync(seg->mdfd_vfd, WAIT_EVENT_DATA_FILE_SYNC) < 0)\n> > + ereport(data_sync_elevel(ERROR),\n> > + (errcode_for_file_access(),\n> > + errmsg(\"could not fsync file \\\"%s\\\": %m\",\n> > + FilePathName(seg->mdfd_vfd))));\n> > +\n> > + if (track_io_timing)\n> > + {\n> > + INSTR_TIME_SET_CURRENT(io_time);\n> > + INSTR_TIME_SUBTRACT(io_time, io_start);\n> > + pgstat_count_io_time(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC, io_time);\n> > + }\n> > +\n> > /*\n> > * We have no way of knowing if the current IOContext is\n> > * IOCONTEXT_NORMAL or IOCONTEXT_[BULKREAD, BULKWRITE, VACUUM] at this\n> > @@ -1042,15 +1066,6 @@ register_dirty_segment(SMgrRelation reln, ForkNumber forknum, MdfdVec *seg)\n> > * backend fsyncs.\n> > */\n> > pgstat_count_io_op(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC);\n> > -\n> > - ereport(DEBUG1,\n> > - (errmsg_internal(\"could not forward fsync request because request queue is full\")));\n> > -\n> > - if (FileSync(seg->mdfd_vfd, WAIT_EVENT_DATA_FILE_SYNC) < 0)\n> > - ereport(data_sync_elevel(ERROR),\n> > - (errcode_for_file_access(),\n> > - errmsg(\"could not fsync file \\\"%s\\\": %m\",\n> > - FilePathName(seg->mdfd_vfd))));\n> > }\n> > }\n> >\n> > @@ -1399,6 +1414,8 @@ int\n> > mdsyncfiletag(const FileTag 
*ftag, char *path)\n> > {\n> > SMgrRelation reln = smgropen(ftag->rlocator, InvalidBackendId);\n> > + instr_time io_start,\n> > + io_time;\n> > File file;\n> > bool need_to_close;\n> > int result,\n> > @@ -1425,10 +1442,22 @@ mdsyncfiletag(const FileTag *ftag, char *path)\n> > need_to_close = true;\n> > }\n> >\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n> > /* Sync the file. */\n> > result = FileSync(file, WAIT_EVENT_DATA_FILE_SYNC);\n> > save_errno = errno;\n> >\n> > + if (track_io_timing)\n> > + {\n> > + INSTR_TIME_SET_CURRENT(io_time);\n> > + INSTR_TIME_SUBTRACT(io_time, io_start);\n> > + pgstat_count_io_time(IOOBJECT_RELATION, IOCONTEXT_NORMAL, IOOP_FSYNC, io_time);\n> > + }\n> > +\n> > if (need_to_close)\n> > FileClose(file);\n>\n> Perhaps we could have mdsyncfd(), used by both mdsyncfiletag() and\n> register_dirty_segment()?\n\nI agree it would be nice, but it seems like it would take a little bit\nof work and might not be worth doing that in this patchset.\n\n> > @@ -1359,20 +1378,31 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n> >\n> > for (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> > {\n> > - int col_idx = pgstat_get_io_op_index(io_op);\n> > + int i = pgstat_get_io_op_index(io_op);\n> >\n> > /*\n> > * Some combinations of BackendType and IOOp, of IOContext\n> > * and IOOp, and of IOObject and IOOp are not tracked. 
Set\n> > * these cells in the view NULL.\n> > */\n> > - nulls[col_idx] = !pgstat_tracks_io_op(bktype, io_obj, io_context, io_op);\n> > + if (pgstat_tracks_io_op(bktype, io_obj, io_context, io_op))\n> > + values[i] = Int64GetDatum(bktype_stats->counts[io_obj][io_context][io_op]);\n> > + else\n> > + nulls[i] = true;\n> > + }\n>\n> These lines were already too long, and it's getting worse with this change.\n\nI've started using local variables.\n\n> > typedef struct PgStat_BktypeIO\n> > {\n> > - PgStat_Counter data[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > + PgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > + instr_time times[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > } PgStat_BktypeIO;\n>\n> Ah, you're going to hate me. We can't store instr_time on disk. There's\n> another patch that gets substantial peformance gains by varying the frequency\n> at which instr_time keeps track of time based on the CPU frequency...\n\nWhat does that have to do with what we can store on disk?\n\nIf so, would it not be enough to do this when reading/writing the stats\nfile?\n\nvoid\ninstr_time_deserialize(instr_time *dest, int64 *src, int length)\n{\n for (size_t i = 0; i < length; i++)\n {\n INSTR_TIME_SET_ZERO(dest[i]);\n dest[i].ticks = src[i];\n }\n}\n\nvoid\ninstr_time_serialize(int64 *dest, instr_time *src, int length)\n{\n for (size_t i = 0; i < length; i++)\n dest[i] = INSTR_TIME_GET_NANOSEC(src[i]);\n\n}\n\n> It also just doesn't have enough range to keep track of system wide\n> time on a larger system. A single backend won't run for 293 years, but\n> with a few thousand backends that's a whole different story.\n>\n> I think we need to accumulate in instr_time, but convert to floating point\n> when flushing stats.\n\nHmmm. So, are you saying that we need to read from disk when we query\nthe view and add that to what is in shared memory? 
That we only store\nthe delta since the last restart in the instr_time array?\n\nBut, I don't see how that avoids the problem of backend total runtime\nbeing 293 years. We would have to reset and write out the delta whenever\nwe thought the times would overflow.\n\nBut, maybe I am misunderstanding something.\n\n- Melanie",
"msg_date": "Thu, 9 Mar 2023 11:50:38 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 11:50:38 -0500, Melanie Plageman wrote:\n> On Tue, Mar 7, 2023 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-03-06 11:30:13 -0500, Melanie Plageman wrote:\n> > > > As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n> > > > to also check that if counts are not Zero then times are not Zero.\n> > >\n> > > Yes, I think adding some validation around the relationship between\n> > > counts and timing should help prevent developers from forgetting to call\n> > > pg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n> > >\n> > > However, I think that we cannot check that if IO counts are non-zero\n> > > that IO times are non-zero, because the user may not have\n> > > track_io_timing enabled. We can check that if IO times are not zero, IO\n> > > counts are not zero. I've done this in the attached v3.\n> >\n> > And even if track_io_timing is enabled, the timer granularity might be so low\n> > that we *still* get zeroes.\n> >\n> > I wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n>\n> And then have pg_stat_reset_shared('io') reset pg_stat_database IO\n> stats?\n\nYes.\n\n\n> > > @@ -1000,11 +1000,27 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,\n> > >\n> > > if (isExtend)\n> > > {\n> > > + instr_time io_start,\n> > > + io_time;\n> > > +\n> > > /* new buffers are zero-filled */\n> > > MemSet((char *) bufBlock, 0, BLCKSZ);\n> > > +\n> > > + if (track_io_timing)\n> > > + INSTR_TIME_SET_CURRENT(io_start);\n> > > + else\n> > > + INSTR_TIME_SET_ZERO(io_start);\n> > > +\n> >\n> > I wonder if there's an argument for tracking this in the existing IO stats as\n> > well. 
But I guess we've lived with this for a long time...\n>\n> Not sure I want to include that in this patchset.\n\nNo, probably not.\n\n\n> > > typedef struct PgStat_BktypeIO\n> > > {\n> > > - PgStat_Counter data[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > + PgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > + instr_time times[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > } PgStat_BktypeIO;\n> >\n> > Ah, you're going to hate me. We can't store instr_time on disk. There's\n> > another patch that gets substantial peformance gains by varying the frequency\n> > at which instr_time keeps track of time based on the CPU frequency...\n>\n> What does that have to do with what we can store on disk?\n\nThe frequency can change.\n\n\n> If so, would it not be enough to do this when reading/writing the stats\n> file?\n\nTheoretically yes. But to me it seems cleaner to do it when flushing to shared\nstats. See also the overflow issue below.\n\n\n\n> void\n> instr_time_deserialize(instr_time *dest, int64 *src, int length)\n> {\n> for (size_t i = 0; i < length; i++)\n> {\n> INSTR_TIME_SET_ZERO(dest[i]);\n> dest[i].ticks = src[i];\n> }\n> }\n\nThat wouldn't be correct, because what ticks means will at some point change\nbetween postgres stopping and starting.\n\n\n\n> > It also just doesn't have enough range to keep track of system wide\n> > time on a larger system. A single backend won't run for 293 years, but\n> > with a few thousand backends that's a whole different story.\n> >\n> > I think we need to accumulate in instr_time, but convert to floating point\n> > when flushing stats.\n>\n> Hmmm. So, are you saying that we need to read from disk when we query\n> the view and add that to what is in shared memory? That we only store\n> the delta since the last restart in the instr_time array?\n\nNo, I don't think I am suggesting that. 
What I am trying to suggest is that\nPendingIOStats should contain instr_time, but that PgStat_IO should contain\nPgStat_Counter as microseconds, as before.\n\n\n> But, I don't see how that avoids the problem of backend total runtime\n> being 293 years. We would have to reset and write out the delta whenever\n> we thought the times would overflow.\n\nThe overflow risk is due to storing nanoseconds (which is what instr_time\nstores internally on linux now) - which we don't need to do once\naccumulated. Right now we store them as microseconds.\n\nnanosecond range:\n((2**63) - 1)/(10**9*60*60*24*365) -> 292 years\nmicrosecond range:\n((2**63) - 1)/(10**6*60*60*24*365) -> 292471 years\n\nIf you assume 5k connections continually doing IO, a range of 292 years would\nlast 21 days at nanosecond resolution. At microsecond resolution it's 58\nyears.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Mar 2023 15:56:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "v5 attached mostly addresses instr_time persistence issues.\n\nOn Tue, Mar 14, 2023 at 6:56 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-03-09 11:50:38 -0500, Melanie Plageman wrote:\n> > On Tue, Mar 7, 2023 at 1:39 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-03-06 11:30:13 -0500, Melanie Plageman wrote:\n> > > > > As pgstat_bktype_io_stats_valid() is called only in Assert(), I think that would be a good idea\n> > > > > to also check that if counts are not Zero then times are not Zero.\n> > > >\n> > > > Yes, I think adding some validation around the relationship between\n> > > > counts and timing should help prevent developers from forgetting to call\n> > > > pg_stat_count_io_op() when calling pgstat_count_io_time() (as relevant).\n> > > >\n> > > > However, I think that we cannot check that if IO counts are non-zero\n> > > > that IO times are non-zero, because the user may not have\n> > > > track_io_timing enabled. We can check that if IO times are not zero, IO\n> > > > counts are not zero. I've done this in the attached v3.\n> > >\n> > > And even if track_io_timing is enabled, the timer granularity might be so low\n> > > that we *still* get zeroes.\n> > >\n> > > I wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n> >\n> > And then have pg_stat_reset_shared('io') reset pg_stat_database IO\n> > stats?\n>\n> Yes.\n\nI think this makes sense but I am hesitant to do it in this patchset,\nbecause it feels a bit hidden...maybe?\n\nBut, if you feel strongly, I will make the change.\n\n> > > > typedef struct PgStat_BktypeIO\n> > > > {\n> > > > - PgStat_Counter data[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > > + PgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > > + instr_time times[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > > > } PgStat_BktypeIO;\n> > >\n> > > Ah, you're going to hate me. We can't store instr_time on disk. 
There's\n> > > another patch that gets substantial peformance gains by varying the frequency\n> > > at which instr_time keeps track of time based on the CPU frequency...\n> >\n> > What does that have to do with what we can store on disk?\n>\n> The frequency can change.\n\nAh, I see.\n\n> > If so, would it not be enough to do this when reading/writing the stats\n> > file?\n>\n> Theoretically yes. But to me it seems cleaner to do it when flushing to shared\n> stats. See also the overflow issue below.\n>\n> > > It also just doesn't have enough range to keep track of system wide\n> > > time on a larger system. A single backend won't run for 293 years, but\n> > > with a few thousand backends that's a whole different story.\n> > >\n> > > I think we need to accumulate in instr_time, but convert to floating point\n> > > when flushing stats.\n> >\n> > Hmmm. So, are you saying that we need to read from disk when we query\n> > the view and add that to what is in shared memory? That we only store\n> > the delta since the last restart in the instr_time array?\n>\n> No, I don't think I am suggesting that. What I am trying to suggest is that\n> PendingIOStats should contain instr_time, but that PgStat_IO should contain\n> PgStat_Counter as microseconds, as before.\n\nSo, I've modified the code to make a union of instr_time and\nPgStat_Counter in PgStat_BktypeIO. I am not quite sure if this is okay.\nI store in microsec and then in pg_stat_io, I multiply to get\nmilliseconds for display.\n\nI considered refactoring pgstat_io_end() to use INSTR_TIME_ACCUM_DIFF()\nlike [1], but, in the end I actually think I would end up with more\noperations because of the various different counters needing to be\nupdated. As it is now, I do a single subtract and a few adds (one for\neach of the different statistics objects tracking IO times\n(pgBufferUsage, pgStatBlockWrite/ReadTime). 
Whereas, I would need to do\nan accum diff for every one of those.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/1feedb83-7aa9-cb4b-5086-598349d3f555%40gmail.com",
"msg_date": "Thu, 16 Mar 2023 17:19:16 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-16 17:19:16 -0400, Melanie Plageman wrote:\n> > > > I wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n> > >\n> > > And then have pg_stat_reset_shared('io') reset pg_stat_database IO\n> > > stats?\n> >\n> > Yes.\n> \n> I think this makes sense but I am hesitant to do it in this patchset,\n> because it feels a bit hidden...maybe?\n\nI'd not do it in the same commit, but I don't see a problem with doing it in\nthe same patchset.\n\nNow that I think about it again, this wouldn't make pg_stat_reset_shared('io')\naffect pg_stat_database - I was thinking we should use pgstat_io.c stats to\nprovide the information for pgstat_database.c, using its own pending counter.\n\n\n> > No, I don't think I am suggesting that. What I am trying to suggest is that\n> > PendingIOStats should contain instr_time, but that PgStat_IO should contain\n> > PgStat_Counter as microseconds, as before.\n> \n> So, I've modified the code to make a union of instr_time and\n> PgStat_Counter in PgStat_BktypeIO. I am not quite sure if this is okay.\n> I store in microsec and then in pg_stat_io, I multiply to get\n> milliseconds for display.\n\nNot a fan - what do we gain by having this union? It seems considerably\ncleaner to have a struct local to pgstat_io.c that uses instr_time and have a\nclean type in PgStat_BktypeIO. In fact, the code worked after just changing\nthat.\n\nI don't think it makes sense to have pgstat_io_start()/end() as well as\npgstat_count_io*. For one, the name seems in a \"too general namespace\" - why\nnot a pgstat_count*?\n\n\n> I considered refactoring pgstat_io_end() to use INSTR_TIME_ACCUM_DIFF()\n> like [1], but, in the end I actually think I would end up with more\n> operations because of the various different counters needing to be\n> updated. As it is now, I do a single subtract and a few adds (one for\n> each of the different statistics objects tracking IO times\n> (pgBufferUsage, pgStatBlockWrite/ReadTime). 
Whereas, I would need to do\n> an accum diff for every one of those.\n\nRight - INSTR_TIME_ACCUM_DIFF() only makes sense if there's just a\nsingle counter to update.\n\n\nWRT:\n\t\t\t\t/* TODO: AFAICT, pgstat_count_buffer_write_time is only called */\n\t\t\t\t/* for shared buffers whereas pgstat_count_buffer_read_time is */\n\t\t\t\t/* called for temp relations and shared buffers. */\n\t\t\t\t/*\n\t\t\t\t * is this intentional and should I match current behavior or\n\t\t\t\t * not?\n\t\t\t\t */\n\nIt's hard to see how that behaviour could be intentional. Probably worth\nfixing in a separate patch. I don't think we're going to backpatch, but it\nwould make this clearer nonetheless.\n\n\nIncremental patch with some of the above changed attached.\n\n\n\nBtw, it's quite nice how one now can attribute time more easily:\n\n20 connections copying an 8MB file 50 times each:\nSELECT reuses, evictions, writes, write_time, extends, extend_time FROM pg_stat_io WHERE backend_type = 'client backend' AND io_object = 'relation' AND io_context='bulkwrite';\n┌────────┬───────────┬────────┬────────────┬─────────┬─────────────┐\n│ reuses │ evictions │ writes │ write_time │ extends │ extend_time │\n├────────┼───────────┼────────┼────────────┼─────────┼─────────────┤\n│ 36112 │ 0 │ 36112 │ 141 │ 1523176 │ 8676 │\n└────────┴───────────┴────────┴────────────┴─────────┴─────────────┘\n\n20 connections copying an 80MB file 5 times each:\n┌─────────┬───────────┬─────────┬────────────┬─────────┬─────────────┐\n│ reuses │ evictions │ writes │ write_time │ extends │ extend_time │\n├─────────┼───────────┼─────────┼────────────┼─────────┼─────────────┤\n│ 1318539 │ 0 │ 1318539 │ 5013 │ 1523339 │ 7873 │\n└─────────┴───────────┴─────────┴────────────┴─────────┴─────────────┘\n(1 row)\n\n\nGreetings,\n\nAndres",
"msg_date": "Mon, 20 Mar 2023 19:34:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 10:34 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-03-16 17:19:16 -0400, Melanie Plageman wrote:\n> > > > > I wonder if we should get rid of pgStatBlockReadTime, pgStatBlockWriteTime,\n> > > >\n> > > > And then have pg_stat_reset_shared('io') reset pg_stat_database IO\n> > > > stats?\n> > >\n> > > Yes.\n> >\n> > I think this makes sense but I am hesitant to do it in this patchset,\n> > because it feels a bit hidden...maybe?\n>\n> I'd not do it in the same commit, but I don't see a problem with doing it in\n> the same patchset.\n>\n> Now that I think about it again, this wouldn't make pg_stat_reset_shared('io')\n> affect pg_stat_database - I was thinking we should use pgstat_io.c stats to\n> provide the information for pgstat_database.c, using its own pending counter.\n\nSo, I've done this in the attached. But, won't resetting pgstat_database\nbe a bit weird if you have built up some IO timing in pending counters\nand right after you reset a flush happens and then suddenly the values\nare way above 0 again?\n\n> > I considered refactoring pgstat_io_end() to use INSTR_TIME_ACCUM_DIFF()\n> > like [1], but, in the end I actually think I would end up with more\n> > operations because of the various different counters needing to be\n> > updated. As it is now, I do a single subtract and a few adds (one for\n> > each of the different statistics objects tracking IO times\n> > (pgBufferUsage, pgStatBlockWrite/ReadTime). Whereas, I would need to do\n> > an accum diff for every one of those.\n>\n> Right - that only INSTR_TIME_ACCUM_DIFF() only makes sense if there's just a\n> single counter to update.\n>\n>\n> WRT:\n> /* TODO: AFAICT, pgstat_count_buffer_write_time is only called */\n> /* for shared buffers whereas pgstat_count_buffer_read_time is */\n> /* called for temp relations and shared buffers. 
*/\n> /*\n> * is this intentional and should I match current behavior or\n> * not?\n> */\n>\n> It's hard to see how that behaviour could be intentional. Probably worth\n> fixing in a separate patch. I don't think we're going to backpatch, but it\n> would make this clearer nonetheless.\n\n\nAttached v7 does this in separate commits.\n\nRemaining feedback is about FlushLocalBuffers(). Is the idea simply to\nget it into bufmgr.c because that is cleaner from an API perspective?\n\n- Melanie",
"msg_date": "Tue, 21 Mar 2023 20:52:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Attached is a rebased version in light of 8aaa04b32d\n\n- Melanie",
"msg_date": "Fri, 31 Mar 2023 15:44:58 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-31 15:44:58 -0400, Melanie Plageman wrote:\n> From 789d4bf1fb749a26523dbcd2c69795916b711c68 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Tue, 21 Mar 2023 16:00:55 -0400\n> Subject: [PATCH v8 1/4] Count IO time for temp relation writes\n>\n> Both pgstat_database and pgBufferUsage write times failed to count\n> timing for flushes of dirty local buffers when acquiring a new local\n> buffer for a temporary relation block.\n\nI think it'd be worth mentioning here that we do count read time? Otherwise\nit'd not be as clear that adding tracking increases consistency...\n\n\n\n> From f4e0db5c833f33b30d4c0b4bebec1096a1745d81 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Tue, 21 Mar 2023 18:20:44 -0400\n> Subject: [PATCH v8 2/4] FlushRelationBuffers() counts temp relation IO timing\n>\n> Add pgstat_database and pgBufferUsage IO timing counting to\n> FlushRelationBuffers() for writes of temporary relations.\n> ---\n> src/backend/storage/buffer/bufmgr.c | 18 ++++++++++++++++++\n> 1 file changed, 18 insertions(+)\n>\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index b3adbbe7d2..05e98d5994 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -3571,6 +3571,8 @@ FlushRelationBuffers(Relation rel)\n> {\n> \tint\t\t\ti;\n> \tBufferDesc *bufHdr;\n> +\tinstr_time\tio_start,\n> +\t\t\t\tio_time;\n>\n> \tif (RelationUsesLocalBuffers(rel))\n> \t{\n> @@ -3596,17 +3598,33 @@ FlushRelationBuffers(Relation rel)\n>\n> \t\t\t\tPageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n>\n> +\t\t\t\tif (track_io_timing)\n> +\t\t\t\t\tINSTR_TIME_SET_CURRENT(io_start);\n> +\t\t\t\telse\n> +\t\t\t\t\tINSTR_TIME_SET_ZERO(io_start);\n> +\n> \t\t\t\tsmgrwrite(RelationGetSmgr(rel),\n> \t\t\t\t\t\t BufTagGetForkNum(&bufHdr->tag),\n> \t\t\t\t\t\t bufHdr->tag.blockNum,\n> \t\t\t\t\t\t 
localpage,\n> \t\t\t\t\t\t false);\n>\n> +\n\nSpurious newline.\n\n\n> \t\t\t\tbuf_state &= ~(BM_DIRTY | BM_JUST_DIRTIED);\n> \t\t\t\tpg_atomic_unlocked_write_u32(&bufHdr->state, buf_state);\n>\n> \t\t\t\tpgstat_count_io_op(IOOBJECT_TEMP_RELATION, IOCONTEXT_NORMAL, IOOP_WRITE);\n>\n> +\t\t\t\tif (track_io_timing)\n> +\t\t\t\t{\n> +\t\t\t\t\tINSTR_TIME_SET_CURRENT(io_time);\n> +\t\t\t\t\tINSTR_TIME_SUBTRACT(io_time, io_start);\n> +\t\t\t\t\tpgstat_count_buffer_write_time(INSTR_TIME_GET_MICROSEC(io_time));\n> +\t\t\t\t\tINSTR_TIME_ADD(pgBufferUsage.blk_write_time, io_time);\n> +\t\t\t\t}\n> +\n> +\t\t\t\tpgBufferUsage.local_blks_written++;\n> +\n> \t\t\t\t/* Pop the error context stack */\n> \t\t\t\terror_context_stack = errcallback.previous;\n> \t\t\t}\n> --\n> 2.37.2\n>\n\n\n> From 2bdad725133395ded199ecc726096e052d6e654b Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Fri, 31 Mar 2023 15:32:36 -0400\n> Subject: [PATCH v8 3/4] Track IO times in pg_stat_io\n>\n> Add IO timing for reads, writes, extends, and fsyncs to pg_stat_io.\n>\n> Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> Reviewed-by: Andres Freund <andres@anarazel.de>\n> Discussion: https://www.postgresql.org/message-id/flat/CAAKRu_ay5iKmnbXZ3DsauViF3eMxu4m1oNnJXqV_HyqYeg55Ww%40mail.gmail.com\n> ---\n\n> -static PgStat_BktypeIO PendingIOStats;\n> +typedef struct PgStat_PendingIO\n> +{\n> +\tPgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> +\tinstr_time\tpending_times[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> +}\t\t\tPgStat_PendingIO;\n\nProbably will look less awful after adding the typedef to typedefs.list.\n\n\n> +\t\t\t\t/* we do track it */\n> +\t\t\t\tif (pgstat_tracks_io_op(bktype, io_object, io_context, io_op))\n> +\t\t\t\t{\n> +\t\t\t\t\t/* ensure that if IO times are non-zero, counts are > 0 */\n> +\t\t\t\t\tif (backend_io->times[io_object][io_context][io_op] != 0 &&\n> 
+\t\t\t\t\t\tbackend_io->counts[io_object][io_context][io_op] <= 0)\n> +\t\t\t\t\t\treturn false;\n> +\n> \t\t\t\t\tcontinue;\n> +\t\t\t\t}\n>\n> -\t\t\t\t/* There are stats and there shouldn't be */\n> -\t\t\t\tif (!bktype_tracked ||\n> -\t\t\t\t\t!pgstat_tracks_io_op(bktype, io_object, io_context, io_op))\n> +\t\t\t\t/* we don't track it, and it is not 0 */\n> +\t\t\t\tif (backend_io->counts[io_object][io_context][io_op] != 0)\n> \t\t\t\t\treturn false;\n> +\n> +\t\t\t\t/* we don't track this IOOp, so make sure its IO time is zero */\n> +\t\t\t\tif (pgstat_tracks_io_time(io_op) > -1)\n> +\t\t\t\t{\n> +\t\t\t\t\tif (backend_io->times[io_object][io_context][io_op] != 0)\n> +\t\t\t\t\t\treturn false;\n> +\t\t\t\t}\n\nI'm somehow doubtful it's worth having pgstat_tracks_io_time, what kind of\nerror would be caught by this check?\n\n\n> +/*\n> + * Get the number of the column containing IO times for the specified IOOp. If\n> + * the specified IOOp is one for which IO time is not tracked, return -1. Note\n> + * that this function assumes that IO time for an IOOp is displayed in the view\n> + * in the column directly after the IOOp counts.\n> + */\n> +static io_stat_col\n> +pgstat_get_io_time_index(IOOp io_op)\n> +{\n> +\tif (pgstat_tracks_io_time(io_op) == -1)\n> +\t\treturn -1;\n\nThat seems dangerous - won't it just lead to accessing something from before\nthe start of the array? Probably should just assert.\n\n\n\n> @@ -1363,20 +1389,32 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n>\n> \t\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> \t\t\t\t{\n> -\t\t\t\t\tint\t\t\tcol_idx = pgstat_get_io_op_index(io_op);\n> +\t\t\t\t\tPgStat_Counter count = bktype_stats->counts[io_obj][io_context][io_op];\n> +\t\t\t\t\tint\t\t\ti = pgstat_get_io_op_index(io_op);\n>\n> \t\t\t\t\t/*\n> \t\t\t\t\t * Some combinations of BackendType and IOOp, of IOContext\n> \t\t\t\t\t * and IOOp, and of IOObject and IOOp are not tracked. 
Set\n> \t\t\t\t\t * these cells in the view NULL.\n> \t\t\t\t\t */\n> -\t\t\t\t\tnulls[col_idx] = !pgstat_tracks_io_op(bktype, io_obj, io_context, io_op);\n> +\t\t\t\t\tif (pgstat_tracks_io_op(bktype, io_obj, io_context, io_op))\n> +\t\t\t\t\t\tvalues[i] = Int64GetDatum(count);\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tnulls[i] = true;\n> +\t\t\t\t}\n>\n> -\t\t\t\t\tif (nulls[col_idx])\n> +\t\t\t\tfor (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> +\t\t\t\t{\n> +\t\t\t\t\tPgStat_Counter time = bktype_stats->times[io_obj][io_context][io_op];\n> +\t\t\t\t\tint\t\t\ti = pgstat_get_io_time_index(io_op);\n> +\n> +\t\t\t\t\tif (i == -1)\n> \t\t\t\t\t\tcontinue;\n>\n> -\t\t\t\t\tvalues[col_idx] =\n> -\t\t\t\t\t\tInt64GetDatum(bktype_stats->data[io_obj][io_context][io_op]);\n> +\t\t\t\t\tif (!nulls[pgstat_get_io_op_index(io_op)])\n> +\t\t\t\t\t\tvalues[i] = Float8GetDatum(pg_stat_micro_to_millisecs(time));\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\tnulls[i] = true;\n> \t\t\t\t}\n\nWhy two loops?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 Apr 2023 17:59:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Attached v9 addresses review feedback as well as resolving merge\nconflicts with recent relation extension patchset.\n\nI've changed pgstat_count_io_op_time() to take a count and call\npgstat_count_io_op_n() so it can be used with smgrzeroextend(). I do\nwish that the parameter to pgstat_count_io_op_n() was called \"count\" and\nnot \"cnt\"...\n\nI've also reordered the call site of pgstat_count_io_op_time() in a few\nlocations, but I have some questions about this.\n\nBefore, I didn't think it mattered much that we didn't finish counting\nIO time until after setting BM_VALID or BM_DIRTY and unsetting\nBM_IO_IN_PROGRESS. With the relation extension code doing this for many\nbuffers at once, though, I wondered if this will make the IO timing too\ninaccurate.\n\nAs such, I've moved pgstat_count_io_op_time() to before we set those\nflags in all locations. I did wonder if it is bad to prolong having the\nbuffer pinned and not having those flags set, though.\n\nOn Tue, Apr 4, 2023 at 8:59 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-03-31 15:44:58 -0400, Melanie Plageman wrote:\n> > From 789d4bf1fb749a26523dbcd2c69795916b711c68 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Tue, 21 Mar 2023 16:00:55 -0400\n> > Subject: [PATCH v8 1/4] Count IO time for temp relation writes\n> >\n> > Both pgstat_database and pgBufferUsage write times failed to count\n> > timing for flushes of dirty local buffers when acquiring a new local\n> > buffer for a temporary relation block.\n>\n> I think it'd be worth mentioning here that we do count read time? 
Otherwise\n> it'd not be as clear that adding tracking increases consistency...\n\nDone\n\n> > From f4e0db5c833f33b30d4c0b4bebec1096a1745d81 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Tue, 21 Mar 2023 18:20:44 -0400\n> > Subject: [PATCH v8 2/4] FlushRelationBuffers() counts temp relation IO timing\n> >\n> > Add pgstat_database and pgBufferUsage IO timing counting to\n> > FlushRelationBuffers() for writes of temporary relations.\n> > ---\n> > src/backend/storage/buffer/bufmgr.c | 18 ++++++++++++++++++\n> > 1 file changed, 18 insertions(+)\n> >\n> > diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> > index b3adbbe7d2..05e98d5994 100644\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n> > @@ -3571,6 +3571,8 @@ FlushRelationBuffers(Relation rel)\n> > {\n> > int i;\n> > BufferDesc *bufHdr;\n> > + instr_time io_start,\n> > + io_time;\n> >\n> > if (RelationUsesLocalBuffers(rel))\n> > {\n> > @@ -3596,17 +3598,33 @@ FlushRelationBuffers(Relation rel)\n> >\n> > PageSetChecksumInplace(localpage, bufHdr->tag.blockNum);\n> >\n> > + if (track_io_timing)\n> > + INSTR_TIME_SET_CURRENT(io_start);\n> > + else\n> > + INSTR_TIME_SET_ZERO(io_start);\n> > +\n> > smgrwrite(RelationGetSmgr(rel),\n> > BufTagGetForkNum(&bufHdr->tag),\n> > bufHdr->tag.blockNum,\n> > localpage,\n> > false);\n> >\n> > +\n>\n> Spurious newline.\n\nFixed.\n\n> > From 2bdad725133395ded199ecc726096e052d6e654b Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Fri, 31 Mar 2023 15:32:36 -0400\n> > Subject: [PATCH v8 3/4] Track IO times in pg_stat_io\n> >\n> > Add IO timing for reads, writes, extends, and fsyncs to pg_stat_io.\n> >\n> > Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>\n> > Reviewed-by: Andres Freund <andres@anarazel.de>\n> > Discussion: 
https://www.postgresql.org/message-id/flat/CAAKRu_ay5iKmnbXZ3DsauViF3eMxu4m1oNnJXqV_HyqYeg55Ww%40mail.gmail.com\n> > ---\n>\n> > -static PgStat_BktypeIO PendingIOStats;\n> > +typedef struct PgStat_PendingIO\n> > +{\n> > + PgStat_Counter counts[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > + instr_time pending_times[IOOBJECT_NUM_TYPES][IOCONTEXT_NUM_TYPES][IOOP_NUM_TYPES];\n> > +} PgStat_PendingIO;\n>\n> Probably will look less awful after adding the typedef to typedefs.list.\n\nDone.\nOne day I will remember to add things to typedefs.list.\n\n> > + /* we do track it */\n> > + if (pgstat_tracks_io_op(bktype, io_object, io_context, io_op))\n> > + {\n> > + /* ensure that if IO times are non-zero, counts are > 0 */\n> > + if (backend_io->times[io_object][io_context][io_op] != 0 &&\n> > + backend_io->counts[io_object][io_context][io_op] <= 0)\n> > + return false;\n> > +\n> > continue;\n> > + }\n> >\n> > - /* There are stats and there shouldn't be */\n> > - if (!bktype_tracked ||\n> > - !pgstat_tracks_io_op(bktype, io_object, io_context, io_op))\n> > + /* we don't track it, and it is not 0 */\n> > + if (backend_io->counts[io_object][io_context][io_op] != 0)\n> > return false;\n> > +\n> > + /* we don't track this IOOp, so make sure its IO time is zero */\n> > + if (pgstat_tracks_io_time(io_op) > -1)\n> > + {\n> > + if (backend_io->times[io_object][io_context][io_op] != 0)\n> > + return false;\n> > + }\n>\n> I'm somehow doubtful it's worth having pgstat_tracks_io_time, what kind of\n> error would be caught by this check?\n\nYea, now that the function to count IO timing also increments the count,\nI don't think this can happen.\n\nHowever, pgstat_tracks_io_time() is useful in its other call site in\npgstatfuncs which lets us continue in the loop if we don't need to fill\nin that IO time. Perhaps it could be replaced with a if (io_op ==\nIOOP_EVICT || io_op == IOOP_REUSE ... 
but I kind of like the function?\nBut, maybe it is overkill...\n\nFor now, I've moved pgstat_tracks_io_time() into pgstatfuncs.c as a\nhelper.\n\n> > +/*\n> > + * Get the number of the column containing IO times for the specified IOOp. If\n> > + * the specified IOOp is one for which IO time is not tracked, return -1. Note\n> > + * that this function assumes that IO time for an IOOp is displayed in the view\n> > + * in the column directly after the IOOp counts.\n> > + */\n> > +static io_stat_col\n> > +pgstat_get_io_time_index(IOOp io_op)\n> > +{\n> > + if (pgstat_tracks_io_time(io_op) == -1)\n> > + return -1;\n>\n> That seems dangerous - won't it just lead to accessing something from before\n> the start of the array? Probably should just assert.\n\nYea. I've removed it entirely as the passed in io_op can't be negative\n(unless we change the enum values) and we add one to it before\nreturning.\n\n> > @@ -1363,20 +1389,32 @@ pg_stat_get_io(PG_FUNCTION_ARGS)\n> >\n> > for (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> > {\n> > - int col_idx = pgstat_get_io_op_index(io_op);\n> > + PgStat_Counter count = bktype_stats->counts[io_obj][io_context][io_op];\n> > + int i = pgstat_get_io_op_index(io_op);\n> >\n> > /*\n> > * Some combinations of BackendType and IOOp, of IOContext\n> > * and IOOp, and of IOObject and IOOp are not tracked. 
Set\n> > * these cells in the view NULL.\n> > */\n> > - nulls[col_idx] = !pgstat_tracks_io_op(bktype, io_obj, io_context, io_op);\n> > + if (pgstat_tracks_io_op(bktype, io_obj, io_context, io_op))\n> > + values[i] = Int64GetDatum(count);\n> > + else\n> > + nulls[i] = true;\n> > + }\n> >\n> > - if (nulls[col_idx])\n> > + for (int io_op = 0; io_op < IOOP_NUM_TYPES; io_op++)\n> > + {\n> > + PgStat_Counter time = bktype_stats->times[io_obj][io_context][io_op];\n> > + int i = pgstat_get_io_time_index(io_op);\n> > +\n> > + if (i == -1)\n> > continue;\n> >\n> > - values[col_idx] =\n> > - Int64GetDatum(bktype_stats->data[io_obj][io_context][io_op]);\n> > + if (!nulls[pgstat_get_io_op_index(io_op)])\n> > + values[i] = Float8GetDatum(pg_stat_micro_to_millisecs(time));\n> > + else\n> > + nulls[i] = true;\n> > }\n>\n> Why two loops?\n\nWell, it was a stylistic choice that I now realize is actually\nconfusing.\nI consolidated them.\n\n- Melanie",
"msg_date": "Fri, 7 Apr 2023 12:17:38 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 12:17:38 -0400, Melanie Plageman wrote:\n> Attached v9 addresses review feedback as well as resolving merge\n> conflicts with recent relation extension patchset.\n\nI've edited it a bit more:\n\n- removed pgstat_tracks_io_time() and replaced it by returning the new\n IO_COL_INVALID = -1 from pgstat_get_io_time_index() when there's no time\n\n- moved PgStat_Counter count, time into the respective branches. It feels\n somewhat wrong to access the time when we then decide there is no time.\n\n- s/io_object/io_obj/ in pgstat_count_io_op_time(), combined with added\n linebreaks, got the code to under 80 chars\n\n- renamed pg_stat_microseconds_to_milliseconds to pg_stat_us_to_ms\n\n- removed a spurious newline\n\n- the times reported by pg_stat_io had their fractional part removed, due to\n pg_stat_us_to_ms returning an integer\n\n\nVerifying this, I saw that the write time visible in pg_stat_io didn't quite\nmatch what I saw in log_checkpoints. But not always. Eventually I figured out\nthat that's not pg_stat_io's fault - log_checkpoint's write includes a lot of\nthings, including several other CheckPoint* routines, flushing WAL, asking the\nkernel to flush things to disk... The biggest portion in my case were the\nsmgrwriteback() calls - which pg_stat_io doesn't track - oops.\n\nPushed up to and including 0003.\n\n\n> I've changed pgstat_count_io_op_time() to take a count and call\n> pgstat_count_io_op_n() so it can be used with smgrzeroextend(). I do\n> wish that the parameter to pgstat_count_io_op_n() was called \"count\" and\n> not \"cnt\"...\n\nHeh.\n\n\n> I've also reordered the call site of pgstat_count_io_op_time() in a few\n> locations, but I have some questions about this.\n> \n> Before, I didn't think it mattered much that we didn't finish counting\n> IO time until after setting BM_VALID or BM_DIRTY and unsetting\n> BM_IO_IN_PROGRESS. 
With the relation extension code doing this for many\n> buffers at once, though, I wondered if this will make the IO timing too\n> inaccurate.\n\n> As such, I've moved pgstat_count_io_op_time() to before we set those\n> flags in all locations. I did wonder if it is bad to prolong having the\n> buffer pinned and not having those flags set, though.\n\nI went back and forth about this before. I think it's ok the way you did it.\n\n\nI think 0004 needs a bit more work. At the very least we would have to swap\nthe order of pgstat_flush_pending_entries() and pgstat_flush_io() - entirely\ndoable. Unlike 0003, this doesn't make pg_stat_io more complete, or such, so\nI'm inclined to leave it for 17. I think there might be some more\nopportunities for having counts \"flow down\", like the patch does.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 17:09:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 08, 2023 at 04:34:38PM -0800, Andres Freund wrote:\n> On 2023-03-08 12:55:34 +0100, Drouvot, Bertrand wrote:\n> > - pg_stat_io is \"global\" across all sessions. So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\n> > is even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\n> \n> I think for 17 we should provide access to per-existing-connection pg_stat_io\n> stats, and also provide a database aggregated version. Neither should be\n> particularly hard.\n\nFWIW, I think that would be great and plan to have a look at this (unless someone\nbeats me to it).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 23 Aug 2024 07:32:16 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "Hi,\n\nOn Fri, Aug 23, 2024 at 07:32:16AM +0000, Bertrand Drouvot wrote:\n> Hi,\n> \n> On Wed, Mar 08, 2023 at 04:34:38PM -0800, Andres Freund wrote:\n> > On 2023-03-08 12:55:34 +0100, Drouvot, Bertrand wrote:\n> > > - pg_stat_io is \"global\" across all sessions. So, even if one session is doing some \"testing\" and needs to turn track_io_timing on, then it\n> > > is even not sure it's only reflecting its own testing (as other sessions may have turned it on too).\n> > \n> > I think for 17 we should provide access to per-existing-connection pg_stat_io\n> > stats, and also provide a database aggregated version. Neither should be\n> > particularly hard.\n> \n> FWIW, I think that would be great and plan to have a look at this (unless someone\n> beats me to it).\n\nFWIW, here is the patch proposal for per backend I/O statistics [1].\n\n[1]: https://www.postgresql.org/message-id/ZtXR%2BCtkEVVE/LHF%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 2 Sep 2024 15:00:32 +0000",
"msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
},
{
"msg_contents": "On Mon, Sep 02, 2024 at 03:00:32PM +0000, Bertrand Drouvot wrote:\n> On Fri, Aug 23, 2024 at 07:32:16AM +0000, Bertrand Drouvot wrote:\n>> FWIW, I think that would be great and plan to have a look at this (unless someone\n>> beats me to it).\n> \n> FWIW, here is the patch proposal for per backend I/O statistics [1].\n> \n> [1]: https://www.postgresql.org/message-id/ZtXR%2BCtkEVVE/LHF%40ip-10-97-1-34.eu-west-3.compute.internal\n\nCool, thanks!\n--\nMichael",
"msg_date": "Tue, 3 Sep 2024 08:19:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Track IO times in pg_stat_io"
}
] |
[
{
"msg_contents": "use these sqls:\r\ncreate table t(a text);\r\ninsert into t values('a');\r\nselect lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));\r\nlp | lp_len | t_data \r\n----+--------+--------\r\n 1 | 26 | \\x0561\r\nas you can see, the 61 is 'a', so what's the 05??? strange.\r\n\r\n\r\njacktby@gmail.com\r\n\n\nuse these sqls:\ncreate table t(a text);insert into t values('a');select lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));lp | lp_len | t_data ----+--------+-------- 1 | 26 | \\x0561as you can see, the 61 is 'a', so what's the 05??? strange.\njacktby@gmail.com",
"msg_date": "Mon, 27 Feb 2023 00:16:44 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "What's the prefix?"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 9:16 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\n\n> use these sqls:\n> create table t(a text);\n> insert into t values('a');\n> select lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));\n> lp | lp_len | t_data\n> ----+--------+--------\n> 1 | 26 | \\x0561\n> as you can see, the 61 is 'a', so what's the 05??? strange.\n>\n\ntext is variable length so there is header information built into the\ndatatype representation that indicates how long the content is.\n\nDavid J.\n\nOn Sun, Feb 26, 2023 at 9:16 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\nuse these sqls:\ncreate table t(a text);insert into t values('a');select lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));lp | lp_len | t_data ----+--------+-------- 1 | 26 | \\x0561as you can see, the 61 is 'a', so what's the 05??? strange.text is variable length so there is header information built into the datatype representation that indicates how long the content is.David J.",
"msg_date": "Sun, 26 Feb 2023 09:27:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the prefix?"
},
{
"msg_contents": "From: David G. Johnston\r\nDate: 2023-02-27 00:27\r\nTo: jacktby@gmail.com\r\nCC: pgsql-hackers\r\nSubject: Re: What's the prefix?\r\nOn Sun, Feb 26, 2023 at 9:16 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\r\nuse these sqls:\r\ncreate table t(a text);\r\ninsert into t values('a');\r\nselect lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));\r\nlp | lp_len | t_data \r\n----+--------+--------\r\n 1 | 26 | \\x0561\r\nas you can see, the 61 is 'a', so what's the 05??? strange.\r\n\r\ntext is variable length so there is header information built into the datatype representation that indicates how long the content is.\r\n\r\nDavid J.\r\n\r\nNo, this is the varlena struct:\r\nstruct varlena\r\n{\r\nchar vl_len_[4]; /* Do not touch this field directly! */\r\nchar vl_dat[FLEXIBLE_ARRAY_MEMBER]; /* Data content is here */\r\n};\r\nwhen I insert 'a', this struct will be {\r\n vl_len : 00 00 00 05\r\n vl_dat: 'a'\r\n}\r\nthe t_data should be \\x0000000561, but it's \\x0561? strange\r\n----------------------------------------------------------------------------------\r\njacktby@gmail.com\r\n\n\nFrom: David G. JohnstonDate: 2023-02-27 00:27To: jacktby@gmail.comCC: pgsql-hackersSubject: Re: What's the prefix?On Sun, Feb 26, 2023 at 9:16 AM jacktby@gmail.com <jacktby@gmail.com> wrote:\nuse these sqls:\ncreate table t(a text);insert into t values('a');select lp,lp_len,t_data from heap_page_items(get_raw_page('t',0));lp | lp_len | t_data ----+--------+-------- 1 | 26 | \\x0561as you can see, the 61 is 'a', so what's the 05??? strange.text is variable length so there is header information built into the datatype representation that indicates how long the content is.David J.No, this is the varlena struct:struct varlena{ char vl_len_[4]; /* Do not touch this field directly! 
*/ char vl_dat[FLEXIBLE_ARRAY_MEMBER]; /* Data content is here */};when I insert 'a', this struct will be { vl_len : 00 00 00 05 vl_dat: 'a'}the t_data should be \\x0000000561, but it's \\x0561? strange----------------------------------------------------------------------------------jacktby@gmail.com",
"msg_date": "Mon, 27 Feb 2023 10:04:09 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Re: What's the prefix?"
},
{
"msg_contents": "\"jacktby@gmail.com\" <jacktby@gmail.com> writes:\n>> text is variable length so there is header information built into the datatype representation that indicates how long the content is.\n\nDavid's statement is accurate.\n\n> No, this is the varlena struct:\n> struct varlena\n> {\n> char vl_len_[4]; /* Do not touch this field directly! */\n> char vl_dat[FLEXIBLE_ARRAY_MEMBER]; /* Data content is here */\n\nThis struct only accurately describes \"untoasted\" varlenas.\nThe one you are looking at is a \"short header\" varlena;\nsee varattrib_1b and nearby comments in src/include/varatt.h,\nor in postgres.h if you're not looking at current HEAD branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Feb 2023 21:18:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the prefix?"
}
] |
[
{
"msg_contents": "I noticed warnings:\nUse of uninitialized value $ENV{\"with_icu\"} in string eq at /home/pryzbyj/src/postgres/src/bin/pg_dump/t/002_pg_dump.pl line 56.\n\nand looked through: git grep ^export '*/Makefile'\n\nand found that:\nsrc/bin/pg_dump/meson.build is missing with_icu since 396d348b0\n\nAlso, e6927270c added ZSTD to src/bin/pg_basebackup/meson.build, but\nit's not in ./Makefile ?? Maybe that was for consistency with other\nplaces, or pre-emptive in case the tap tests want to do tests involving\nthe ZSTD tool. But it'd be better if ./Makefile had it too.\n\nThe rest I think are not errors:\n\nsrc/test/meson.build is missing PG_TEST_EXTRA\nsrc/bin/pg_upgrade/meson.build and ../src/test/recovery/meson.build\nare missing REGRESS_SHLIB\n\nIs there any consideration of promoting these or other warnings to\nfatal?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 26 Feb 2023 16:52:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-26 16:52:39 -0600, Justin Pryzby wrote:\n> I noticed warnings:\n> Use of uninitialized value $ENV{\"with_icu\"} in string eq at /home/pryzbyj/src/postgres/src/bin/pg_dump/t/002_pg_dump.pl line 56.\n> \n> and looked through: git grep ^export '*/Makefile'\n> \n> and found that:\n> src/bin/pg_dump/meson.build is missing with_icu since 396d348b0\n\nLooks like it.\n\n\n> Also, e6927270c added ZSTD to src/bin/pg_basebackup/meson.build, but\n> it's not in ./Makefile ?? Maybe that was for consistency with other\n> places, or pre-emptive in case the tap tests want to do tests involving\n> the ZSTD tool. But it'd be better if ./Makefile had it too.\n\nI suspect I just over-eagerly added it when the pg_basebackup zstd support\nwent in, using the GZIP_PROGRAM/LZ4 cases as a template. And foolishly\nassuming a newly added compression method would be tested.\n\n\n> The rest I think are not errors:\n> \n> src/test/meson.build is missing PG_TEST_EXTRA\n\n> src/bin/pg_upgrade/meson.build and ../src/test/recovery/meson.build\n> are missing REGRESS_SHLIB\n\nYep, these are added in the top-level meson.build.\n\n\n> Is there any consideration of promoting these or other warnings to\n> fatal?\n\nYou mean the perl warnings?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 15:21:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n> > Is there any consideration of promoting these or other warnings to\n> > fatal?\n> \n> You mean the perl warnings?\n\nYes - it'd be nice if the warnings caused an obvious failure to allow\naddressing the issue. I noticed the icu warning while looking at a bug\nin 0da243fed, and updating to add ZSTD. \n\n\n",
"msg_date": "Sun, 26 Feb 2023 20:00:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n\n> On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n>> > Is there any consideration of promoting these or other warnings to\n>> > fatal?\n>> \n>> You mean the perl warnings?\n>\n> Yes - it'd be nice if the warnings caused an obvious failure to allow\n> addressing the issue. I noticed the icu warning while looking at a bug\n> in 0da243fed, and updating to add ZSTD. \n\nPerl warnings can be made fatal with `use warnings FATAL =>\n<categories>;`, but one should be careful about which categories to\nfatalise, per <https://metacpan.org/pod/warnings#Fatal-Warnings>.\n\nSome categories are inherently unsafe to fatalise, as documented in\n<https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:17:45 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>\n>> On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n>>> > Is there any consideration of promoting these or other warnings to\n>>> > fatal?\n>>> \n>>> You mean the perl warnings?\n>>\n>> Yes - it'd be nice if the warnings caused an obvious failure to allow\n>> addressing the issue. I noticed the icu warning while looking at a bug\n>> in 0da243fed, and updating to add ZSTD. \n>\n> Perl warnings can be made fatal with `use warnings FATAL =>\n> <categories>;`, but one should be careful about which categories to\n> fatalise, per <https://metacpan.org/pod/warnings#Fatal-Warnings>.\n>\n> Some categories are inherently unsafe to fatalise, as documented in\n> <https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n\nOne disadvantage of making the warnings fatal is that it immediately\naborts the test. Another option would be to to turn warnings into test\nfailures, à la https://metacpan.org/pod/Test::Warnings or\nhttps://metacpan.org/pod/Test::FailWarnings. Both those modules support\nall the Perl versions we do, and have no non-core dependencies, but if\nwe don't want to add any more dependencies we can incorporate the logic\ninto one of our own testing modules.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 27 Feb 2023 12:30:59 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "On 2023-02-27 Mo 06:17, Dagfinn Ilmari Mannsåker wrote:\n> Justin Pryzby<pryzby@telsasoft.com> writes:\n>\n>> On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n>>>> Is there any consideration of promoting these or other warnings to\n>>>> fatal?\n>>> You mean the perl warnings?\n>> Yes - it'd be nice if the warnings caused an obvious failure to allow\n>> addressing the issue. I noticed the icu warning while looking at a bug\n>> in 0da243fed, and updating to add ZSTD.\n> Perl warnings can be made fatal with `use warnings FATAL =>\n> <categories>;`, but one should be careful about which categories to\n> fatalise, per<https://metacpan.org/pod/warnings#Fatal-Warnings>.\n>\n> Some categories are inherently unsafe to fatalise, as documented in\n> <https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n>\n\nYeah.\n\n\nIt would be nice if there were some fuller explanation of the various \ncategories, but I don't know of one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-27 Mo 06:17, Dagfinn Ilmari\n Mannsåker wrote:\n\n\nJustin Pryzby <pryzby@telsasoft.com> writes:\n\n\n\nOn Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n\n\n\nIs there any consideration of promoting these or other warnings to\nfatal?\n\n\n\nYou mean the perl warnings?\n\n\n\nYes - it'd be nice if the warnings caused an obvious failure to allow\naddressing the issue. I noticed the icu warning while looking at a bug\nin 0da243fed, and updating to add ZSTD. 
\n\n\n\nPerl warnings can be made fatal with `use warnings FATAL =>\n<categories>;`, but one should be careful about which categories to\nfatalise, per <https://metacpan.org/pod/warnings#Fatal-Warnings>.\n\nSome categories are inherently unsafe to fatalise, as documented in\n<https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n\n\n\n\n\nYeah.\n\n\nIt would be nice if there were some fuller explanation of the\n various categories, but I don't know of one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 07:33:33 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "On 2023-02-27 Mo 07:33, Andrew Dunstan wrote:\n>\n>\n> On 2023-02-27 Mo 06:17, Dagfinn Ilmari Mannsåker wrote:\n>> Justin Pryzby<pryzby@telsasoft.com> writes:\n>>\n>>> On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n>>>>> Is there any consideration of promoting these or other warnings to\n>>>>> fatal?\n>>>> You mean the perl warnings?\n>>> Yes - it'd be nice if the warnings caused an obvious failure to allow\n>>> addressing the issue. I noticed the icu warning while looking at a bug\n>>> in 0da243fed, and updating to add ZSTD.\n>> Perl warnings can be made fatal with `use warnings FATAL =>\n>> <categories>;`, but one should be careful about which categories to\n>> fatalise, per<https://metacpan.org/pod/warnings#Fatal-Warnings>.\n>>\n>> Some categories are inherently unsafe to fatalise, as documented in\n>> <https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n>>\n>\n> Yeah.\n>\n>\n> It would be nice if there were some fuller explanation of the various \n> categories, but I don't know of one.\n>\n>\n>\n\nLooks like the explanations are in the perldiag manual page. \n<https://perldoc.perl.org/perldiag>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-02-27 Mo 07:33, Andrew Dunstan\n wrote:\n\n\n\n\n\nOn 2023-02-27 Mo 06:17, Dagfinn\n Ilmari Mannsåker wrote:\n\n\nJustin Pryzby <pryzby@telsasoft.com> writes:\n\n\n\nOn Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n\n\n\nIs there any consideration of promoting these or other warnings to\nfatal?\n\n\nYou mean the perl warnings?\n\n\nYes - it'd be nice if the warnings caused an obvious failure to allow\naddressing the issue. I noticed the icu warning while looking at a bug\nin 0da243fed, and updating to add ZSTD. 
\n\n\nPerl warnings can be made fatal with `use warnings FATAL =>\n<categories>;`, but one should be careful about which categories to\nfatalise, per <https://metacpan.org/pod/warnings#Fatal-Warnings>.\n\nSome categories are inherently unsafe to fatalise, as documented in\n<https://metacpan.org/pod/strictures#CATEGORY-SELECTIONS>.\n\n\n\n\n\nYeah.\n\n\nIt would be nice if there were some fuller explanation of the\n various categories, but I don't know of one.\n\n\n\n\n\n\nLooks like the explanations are in the perldiag manual page.\n <https://perldoc.perl.org/perldiag>\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 08:36:12 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n> On 2023-02-26 16:52:39 -0600, Justin Pryzby wrote:\n>> Also, e6927270c added ZSTD to src/bin/pg_basebackup/meson.build, but\n>> it's not in ./Makefile ?? Maybe that was for consistency with other\n>> places, or pre-emptive in case the tap tests want to do tests involving\n>> the ZSTD tool. But it'd be better if ./Makefile had it too.\n> \n> I suspect I just over-eagerly added it when the pg_basebackup zstd support\n> went in, using the GZIP_PROGRAM/LZ4 cases as a template. And foolishly\n> assuming a newly added compression method would be tested.\n\nleaving the discussion with the perl warnings aside for the moment,\nthese still need to be adjusted. Justin, would you like to write a\npatch with everything you have found?\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 09:36:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-09 09:36:52 +0900, Michael Paquier wrote:\n> On Sun, Feb 26, 2023 at 03:21:04PM -0800, Andres Freund wrote:\n> > On 2023-02-26 16:52:39 -0600, Justin Pryzby wrote:\n> >> Also, e6927270c added ZSTD to src/bin/pg_basebackup/meson.build, but\n> >> it's not in ./Makefile ?? Maybe that was for consistency with other\n> >> places, or pre-emptive in case the tap tests want to do tests involving\n> >> the ZSTD tool. But it'd be better if ./Makefile had it too.\n> > \n> > I suspect I just over-eagerly added it when the pg_basebackup zstd support\n> > went in, using the GZIP_PROGRAM/LZ4 cases as a template. And foolishly\n> > assuming a newly added compression method would be tested.\n> \n> leaving the discussion with the perl warnings aside for the moment,\n> these still need to be adjusted. Justin, would you like to write a\n> patch with everything you have found?\n\nI now pushed a fix for the two obvious cases pointed out by Justin.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:59:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 05:59:13PM -0800, Andres Freund wrote:\n> I now pushed a fix for the two obvious cases pointed out by Justin.\n\nThanks!\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 12:21:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: meson vs make: missing/inconsistent ENV"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile doing something I should not have done, I have been able to\ntrigger latch.c with the error of $subject. Adding in the elog\ngenerated some information about the PID owning the latch and\nMyProcPid has made me understand immediately why I was wrong. Would\nthere be any objections to add more information in this case?\n\nThe attached patch does so.\nThanks,\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 09:20:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "At Mon, 27 Feb 2023 09:20:39 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> While doing something I should not have done, I have been able to\n> trigger latch.c with the error of $subject. Adding in the elog\n> generated some information about the PID owning the latch and\n> MyProcPid has made me understand immediately why I was wrong. Would\n> there be any objections to add more information in this case?\n> \n> The attached patch does so.\n> Thanks,\n\nPlease tidy up the followging sentence properly and natural but in a moderately formal way, within the context of computer programs, and provide explanations for the individual changes you made.\n\n+1 for adding that information, I'm afraid that MyProcId is not\nnecessary since it is displayed in log lines in most cases. If you\nwant to display the both PIDs I suggest making them more distinctive.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Feb 2023 17:48:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "Uggg!\n\nAt Mon, 27 Feb 2023 17:48:10 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Mon, 27 Feb 2023 09:20:39 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > Hi all,\n> > \n> > While doing something I should not have done, I have been able to\n> > trigger latch.c with the error of $subject. Adding in the elog\n> > generated some information about the PID owning the latch and\n> > MyProcPid has made me understand immediately why I was wrong. Would\n> > there be any objections to add more information in this case?\n> > \n> > The attached patch does so.\n> > Thanks,\n> \n> Please tidy up the followging sentence properly and natural but in a moderately formal way, within the context of computer programs, and provide explanations for the individual changes you made.\n\nPlease ignore the following sentense. It is an extra sentence\nmistakenly copy-pasted in.\n\n> +1 for adding that information, I'm afraid that MyProcId is not\n> necessary since it is displayed in log lines in most cases. If you\n> want to display the both PIDs I suggest making them more distinctive.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Feb 2023 17:53:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "Uggg^2\n\nAt Mon, 27 Feb 2023 17:53:08 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Please tidy up the followging sentence properly and natural but in a moderately formal way, within the context of computer programs, and provide explanations for the individual changes you made.\n> \n> Please ignore the following sentense. It is an extra sentence\n\ns/following/above/;\n\n> mistakenly copy-pasted in.\n> \n> > +1 for adding that information, I'm afraid that MyProcId is not\n> > necessary since it is displayed in log lines in most cases. If you\n> > want to display the both PIDs I suggest making them more distinctive.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Feb 2023 17:54:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 05:48:10PM +0900, Kyotaro Horiguchi wrote:\n> +1 for adding that information, I'm afraid that MyProcId is not\n> necessary since it is displayed in log lines in most cases. If you\n> want to display the both PIDs I suggest making them more distinctive.\n\nWhat would you suggest? This message is basically impossible to\nreach so the wording of the patch was OK for me (see async.c) so you\nwould need to look at the internals anyway. Now if you'd like\nsomething like \"could not blah: owner PID=%d, MyProcPid=%d\", that's\nalso fine by me.\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 08:59:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "On 28.02.23 00:59, Michael Paquier wrote:\n> On Mon, Feb 27, 2023 at 05:48:10PM +0900, Kyotaro Horiguchi wrote:\n>> +1 for adding that information, I'm afraid that MyProcId is not\n>> necessary since it is displayed in log lines in most cases. If you\n>> want to display the both PIDs I suggest making them more distinctive.\n> \n> What would you suggest? This message is basically impossible to\n> reach so the wording of the patch was OK for me (see async.c) so you\n> would need to look at the internals anyway. Now if you'd like\n> something like \"could not blah: owner PID=%d, MyProcPid=%d\", that's\n> also fine by me.\n\nI would also have asked for some kind of prefix that introduces the numbers.\n\nI wonder what these numbers are useful for though? Is this a \ndevelopment aid? Can you do anything with these numbers?\n\n\n",
"msg_date": "Tue, 28 Feb 2023 08:18:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 08:18:16AM +0100, Peter Eisentraut wrote:\n> I would also have asked for some kind of prefix that introduces the numbers.\n\nOkay.\n\n> I wonder what these numbers are useful for though? Is this a development\n> aid?\n\nYes.\n\n> Can you do anything with these numbers?\n\nYes. They immediately pointed out that I missed to mark a latch as\nowned in a process, hence the owner_pid was showing up as 0 when\ntrying to use it. The second showed me the process that was involved,\nwhich was still useful once cross-checked with the contents of the\nlogs prefixed with %p.\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 17:53:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Provide PID data for \"cannot wait on a latch owned by another\n process\" in latch.c"
}
] |
[
{
"msg_contents": "# I:\n (default target) (1) -> (Link target) ->\n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertOpenStore, capi_open_store \n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertCloseStore, capi_find_key \n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertEnumCertificatesInStore, capi_find_cert \n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFindCertificateInStore, capi_find_cert\n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertDuplicateCertificateContext, capi_load_ssl_client_cert\n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFreeCertificateContext, capi_dsa_free\n libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertGetCertificateContextProperty, capi_cert_get_fname \n\n# A:\n loss crypt32.lib\n\n# Fix:\n Mkvcbuild.pm: fix: add:\n $libpq->AddLibrary('crypt32.lib');\n $postgres->AddLibrary('crypt32.lib')\n\n and simple fix: \"Unable to determine Visual Studio version\":\n replace(\n \"my $vsVersion = DetermineVisualStudioVersion();\",\n \"my $vsVersion = \"17.00\";\");",
"msg_date": "Mon, 27 Feb 2023 09:58:28 +0800",
"msg_from": "\"=?gb18030?B?Z2FtZWZ1bmM=?=\" <32686647@qq.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix msvc build libpq error LNK2019 when link openssl;"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 09:58:28AM +0800, gamefunc wrote:\n> # I:\n> (default target) (1) -> (Link target) ->\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertOpenStore, capi_open_store \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertCloseStore, capi_find_key \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertEnumCertificatesInStore, capi_find_cert \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFindCertificateInStore, capi_find_cert\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertDuplicateCertificateContext, capi_load_ssl_client_cert\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFreeCertificateContext, capi_dsa_free\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertGetCertificateContextProperty, capi_cert_get_fname \n\n@@ -94,7 +94,7 @@ sub mkvcbuild\n die 'Must run from root or msvc directory'\n unless (-d 'src/tools/msvc' && -d 'src')\n\n- my $vsVersion = DetermineVisualStudioVersion();\n+ my $vsVersion = \"17.00\";\n\nThis diff forces the creation of a VS2022Solution(), which would be\nincorrect when using an MSVC environment different than 17.0 as\nversion number, no?\n\nNote that buildfarm member drongo is providing coverage for 64b\nWindows builds with Visual 2019 and OpenSSL:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=drongo&br=HEAD\n\nAre you sure that you just didn't mix 64b builds with 32b libraries,\nor vice-versa?\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 14:27:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix msvc build libpq error LNK2019 when link openssl;"
},
{
"msg_contents": "sorry i should have made it clear – I am only reporting my build error issue;\nthe patch is only for illustrative purpose, not for merging to upstream;\nAnd my English is not good, so I can only write simple descriptions;\n>> This diff forces the creation of a VS2022Solution():\nyes, my env: vs2022, openssl3.0.7; \n\n>> Note that buildfarm member drongo is providing coverage for 64b:\nthank you, But I need to build libpq myself because I need to add functions to libpq for my example use; https://github.com/gamefunc/Aiolibpq_simple;\n\n>> Are you sure that you just didn't mix 64b builds with 32b libraries;\nno, all 64b;\n\nFrom: Michael Paquier\nDate: 2023年2月27日 13:27\nTo: gamefunc\nCC: pgsql-hackers\nSubject: Re: [PATCH] fix msvc build libpq error LNK2019 when link openssl;\n\nOn Mon, Feb 27, 2023 at 09:58:28AM +0800, gamefunc wrote:\n> # I:\n> (default target) (1) -> (Link target) ->\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertOpenStore, capi_open_store \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertCloseStore, capi_find_key \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertEnumCertificatesInStore, capi_find_cert \n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFindCertificateInStore, capi_find_cert\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertDuplicateCertificateContext, capi_load_ssl_client_cert\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertFreeCertificateContext, capi_dsa_free\n> libcrypto.lib(libcrypto-lib-e_capi.obj) : error LNK2019: __imp_CertGetCertificateContextProperty, capi_cert_get_fname \n\n@@ -94,7 +94,7 @@ sub mkvcbuild\n die 'Must run from root or msvc directory'\n unless (-d 'src/tools/msvc' && -d 'src')\n\n- my $vsVersion = DetermineVisualStudioVersion();\n+ my $vsVersion = \"17.00\";\n\nThis diff forces the creation of a VS2022Solution(), which would be\nincorrect when using an MSVC 
environment different than 17.0 as\nversion number, no?\n\nNote that buildfarm member drongo is providing coverage for 64b\nWindows builds with Visual 2019 and OpenSSL:\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=drongo&br=HEAD\n\nAre you sure that you just didn't mix 64b builds with 32b libraries,\nor vice-versa?\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 15:33:50 +0800",
"msg_from": "gamefunc <32686647@qq.com>",
"msg_from_op": false,
"msg_subject": "RE: [PATCH] fix msvc build libpq error LNK2019 when link openssl;"
}
] |
[
{
"msg_contents": "Hi all,\n\nBefore PG 14, walsender process has to handle invalid message in one\nXLOG (PG 14 provide a particular XLOG type: XLOG_XACT_INVALIDATIONS).\nThis may bring some problems which has been discussed in previous\nmail: https://www.postgresql.org/message-id/flat/CAM_vCufO3eeRZ_O04z9reiE%2BB644%2BRgczbAVo9C5%2BoHV9S7%2B-g%40mail.gmail.com#981e65567784e0aefa4474cc3fd840f6\n\nThis patch can solve the problem. It has three parts:\n1. pgoutput do not do useless invalid cache anymore;\n2. Add a relid->relfilenode hash map to invoid hash seq search;\n3. test case: It needs two or three minutes to finish.\n\nThe patch is based on the HEAD of branch REL_13_STABLE. It also works\nfor PG 10~12.\n\nThanks.\nBowenshi",
"msg_date": "Mon, 27 Feb 2023 16:12:11 +0800",
"msg_from": "Bowen Shi <zxwsbg12138@gmail.com>",
"msg_from_op": true,
"msg_subject": "Optimize walsender handling invalid messages of 'drop publication'"
},
{
"msg_contents": "Dears,\n\nThis issue has been pending for several months without any response.\nAnd this problem still exists in the latest minor versions of PG 12\nand PG 13.\n\nI believe that the fix in this patch is helpful.\n\nThe patch has been submitted\nhttps://commitfest.postgresql.org/43/4393/ . Anyone who is interested\nin this issue can help with the review.\n\nRegards!\nBowenShi\n\nOn Mon, 27 Feb 2023 at 16:12, Bowen Shi <zxwsbg12138@gmail.com> wrote:\n>\n> Hi all,\n>\n> Before PG 14, walsender process has to handle invalid message in one\n> XLOG (PG 14 provide a particular XLOG type: XLOG_XACT_INVALIDATIONS).\n> This may bring some problems which has been discussed in previous\n> mail: https://www.postgresql.org/message-id/flat/CAM_vCufO3eeRZ_O04z9reiE%2BB644%2BRgczbAVo9C5%2BoHV9S7%2B-g%40mail.gmail.com#981e65567784e0aefa4474cc3fd840f6\n>\n> This patch can solve the problem. It has three parts:\n> 1. pgoutput do not do useless invalid cache anymore;\n> 2. Add a relid->relfilenode hash map to invoid hash seq search;\n> 3. test case: It needs two or three minutes to finish.\n>\n> The patch is based on the HEAD of branch REL_13_STABLE. It also works\n> for PG 10~12.\n>\n> Thanks.\n> Bowenshi\n\n\n",
"msg_date": "Mon, 26 Jun 2023 15:01:22 +0800",
"msg_from": "Bowen Shi <zxwsbg12138@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize walsender handling invalid messages of 'drop\n publication'"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-26 15:01:22 +0800, Bowen Shi wrote:\n> This issue has been pending for several months without any response.\n> And this problem still exists in the latest minor versions of PG 12\n> and PG 13.\n>\n> I believe that the fix in this patch is helpful.\n>\n> The patch has been submitted\n> https://commitfest.postgresql.org/43/4393/ . Anyone who is interested\n> in this issue can help with the review.\n\nISTM that the path for people encountering this issue is to upgrade.\n\nIt's not unheard of that we'd backpatch a performance improvements to the\nbackbranches, but it's pretty rare. It's one thing to decide to backpatch an\noptimization if it had time to \"mature\" in the development branch, but from\nwhat I undestand you're proposing to apply this just to the back branches.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 26 Jun 2023 14:13:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Optimize walsender handling invalid messages of 'drop\n publication'"
}
] |
[
{
"msg_contents": "Hello.\n\nI found it frustrating that the line \"shared_buffers = 0.1GB\" in\npostgresql.conf postgresql.conf was causing an error and that the\nvalue required (additional) surrounding single quotes. The attached\npatch makes the parser accept the use of non-quoted real values\nfollowed by a unit for such variables. I'm not sure if that syntax\nfully covers the input syntax of strtod, but I beieve it is suffucient\nfor most use cases.\n\nIs the following a correct English sentense?\n\nDo you folks think this makes sense?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 27 Feb 2023 17:32:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Real config values for bytes needs quotes?"
},
{
"msg_contents": "On 27.02.23 09:32, Kyotaro Horiguchi wrote:\n> I found it frustrating that the line \"shared_buffers = 0.1GB\" in\n> postgresql.conf postgresql.conf was causing an error and that the\n> value required (additional) surrounding single quotes. The attached\n> patch makes the parser accept the use of non-quoted real values\n> followed by a unit for such variables. I'm not sure if that syntax\n> fully covers the input syntax of strtod, but I beieve it is suffucient\n> for most use cases.\n\nThis seems sensible to fix. If you're not sure about the details, write \nsome test cases. :)\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:54:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Real config values for bytes needs quotes?"
},
{
"msg_contents": "At Thu, 2 Mar 2023 11:54:10 +0100, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote in \n> On 27.02.23 09:32, Kyotaro Horiguchi wrote:\n> > I found it frustrating that the line \"shared_buffers = 0.1GB\" in\n> > postgresql.conf postgresql.conf was causing an error and that the\n> > value required (additional) surrounding single quotes. The attached\n> > patch makes the parser accept the use of non-quoted real values\n> > followed by a unit for such variables. I'm not sure if that syntax\n> > fully covers the input syntax of strtod, but I beieve it is suffucient\n> > for most use cases.\n> \n> This seems sensible to fix. If you're not sure about the details,\n> write some test cases. :)\n\nThanks. I initially intended to limit the change for REAL to accept\nunits following a value. However, actually I also modified it to\naccept '1e1'.\n\nman strtod says it accepts the following format.\n\n> The expected form of the (initial portion of the) string is optional\n> leading white space as recognized by isspace(3), an optional plus ('+')\n> or minus sign ('-') and then either (i) a decimal number, or (ii) a\n> hexadecimal number, or (iii) an infinity, or (iv) a NAN (not-a-number).\n>\n> A decimal number consists of a nonempty sequence of decimal digits pos‐\n> sibly containing a radix character (decimal point, locale-dependent,\n> usually '.'), optionally followed by a decimal exponent. A decimal\n> exponent consists of an 'E' or 'e', followed by an optional plus or\n> minus sign, followed by a nonempty sequence of decimal digits, and\n> indicates multiplication by a power of 10.\n\nIt is written in regexp as\n'\\s*[-+]?(\\.\\d+|\\d+(\\.\\d*)?)([Ee][-+]?\\d+)?'. The leading whitespace\nis unnecessary in this specific usage, and we also need to exclude\nINTERGER from this set of matches. Therefore, it should be modified to\n'[-+]?((\\.\\d+|\\d+\\.\\d*)([Ee][-+]?\\d+)?|\\d+([Ee][-+]?\\d+))'.\n\nIt is translated to the following BNF notation. 
UNIT_LETTER is also\nneeded.\n\n{SIGN}?((\".\"{DIGIT}+|{DIGIT}+\".\"{DIGIT}*){EXPONENT}?|{DIGIT}+{EXPONENT}){UNIT_LETTER}*\n\nThe attached patch applies aforementioned change to guc-file.l and\nincludes tests for values in certain patters.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 03 Mar 2023 14:38:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Real config values for bytes needs quotes?"
}
] |
[
{
"msg_contents": "Now that we have random_normal(), it seems like it would be useful to\nadd the error functions erf() and erfc(), which I think are\npotentially useful to the people who will find random_normal() useful,\nand possibly others.\n\nAn immediate use for erf() is that it allows us to do a\nKolmogorov-Smirnov test for random_normal(), similar to the one for\nrandom().\n\nBoth of these functions are defined in POSIX and C99, so in theory\nthey should be available on all platforms. If that turns out not to be\nthe case, then there's a commonly used implementation (e.g., see [1]),\nwhich we could include. I played around with that (replacing the\ndirect bit manipulation stuff with frexp()/ldexp(), see pg_erf.c\nattached), and it appeared to be accurate to +/-1 ULP across the full\nrange of inputs. Hopefully we won't need that though.\n\nI tested this on a couple of different platforms and found I needed to\nreduce extra_float_digits to -1 to get the regression tests to pass\nconsistently, due to rounding errors. It wouldn't surprise me if that\nneeds to be reduced further, though perhaps it's not necessary to have\nso many tests (I included one test value from each branch, while\ntesting the hand-rolled implementation).\n\nRegards,\nDean\n\n[1] https://github.com/lsds/musl/blob/master/src/math/erf.c",
"msg_date": "Mon, 27 Feb 2023 12:54:35 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 12:54:35PM +0000, Dean Rasheed wrote:\n> +\t/*\n> +\t * For erf, we don't need an errno check because it never overflows.\n> +\t */\n\n> +\t/*\n> +\t * For erfc, we don't need an errno check because it never overflows.\n> +\t */\n\nThe man pages for these seem to indicate that underflow can occur. Do we\nneed to check for that?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 12:11:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 1:54 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> Now that we have random_normal(), it seems like it would be useful to\n> add the error functions erf() and erfc(), which I think are\n> potentially useful to the people who will find random_normal() useful,\n> and possibly others.\n>\n> An immediate use for erf() is that it allows us to do a\n> Kolmogorov-Smirnov test for random_normal(), similar to the one for\n> random().\n>\n> Both of these functions are defined in POSIX and C99, so in theory\n> they should be available on all platforms. If that turns out not to be\n> the case, then there's a commonly used implementation (e.g., see [1]),\n> which we could include. I played around with that (replacing the\n> direct bit manipulation stuff with frexp()/ldexp(), see pg_erf.c\n> attached), and it appeared to be accurate to +/-1 ULP across the full\n> range of inputs. Hopefully we won't need that though.\n\nHi,\n\nNo comment on the maths, but I'm pretty sure we won't need a fallback\nimplementation. That stuff goes back to the math libraries of 80s\nUnixes, even though it didn't make it into C until '99. I just\nchecked the man pages for all our target systems and they all show it.\n(There might be some portability details around the tgmath.h versions\non some systems, eg to support different float sizes, I dunno, but\nyou're using the plain math.h versions.)\n\nI wonder if the SQL standard has anything to say about these, or the\nrelated normal CDF. I can't check currently but I doubt it, based on\nsearches and other systems' manuals.\n\nTwo related functions that also arrived in C99 are lgamma() and\ntgamma(). If you'll excuse the digression, that reminded me of\nsomething I was trying to figure out once, for a practical problem.\nMy statistics knowledge is extremely patchy, but I have been trying to\nup my benchmarking game, and that led to a bit of remedial reading on\nStudent's t tests and related stuff. 
A few shaven yaks later, I\nunderstood that you could probably (if you like pain) do that sort of\nstuff inside PostgreSQL using our existing aggregates, if you took the\napproach of ministat[1]. That tool has a table of critical values\ninside it, indexed by degrees-of-freedom (1-100) and confidence level\n(80, 90, 95, 98, 99, 99.5), and one could probably write SQL queries\nthat spit out an answer like \"p is less than 5%, ship it!\", if we\nstole that table. But what if I actually want to know p? Of course\nyou can do all that good stuff very easily with tools like R, SciPy\netc and maybe that's the best way to do it. But Oracle, and I think\nseveral other analytics-focused SQL systems, can do it in a very easy\nbuilt-in way. I think to get at that you probably need the t CDF, and\nin there[2] I see... Γ(). Huh.\n\n[1] https://man.freebsd.org/cgi/man.cgi?query=ministat\n[2] https://www.mathworks.com/help/stats/tcdf.html\n\n\n",
"msg_date": "Thu, 9 Mar 2023 12:24:05 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Wed, 8 Mar 2023 at 20:11, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Feb 27, 2023 at 12:54:35PM +0000, Dean Rasheed wrote:\n> > + /*\n> > + * For erf, we don't need an errno check because it never overflows.\n> > + */\n>\n> > + /*\n> > + * For erfc, we don't need an errno check because it never overflows.\n> > + */\n>\n> The man pages for these seem to indicate that underflow can occur. Do we\n> need to check for that?\n>\n\nNo, I don't think so. The docs indicate that if an underflow occurs,\nthe correct result (after rounding) should be returned, so I think we\nshould just return that result (as we do for tanh(), for example).\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 8 Mar 2023 23:29:12 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 11:29:12PM +0000, Dean Rasheed wrote:\n> On Wed, 8 Mar 2023 at 20:11, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> The man pages for these seem to indicate that underflow can occur. Do we\n>> need to check for that?\n> \n> No, I don't think so. The docs indicate that if an underflow occurs,\n> the correct result (after rounding) should be returned, so I think we\n> should just return that result (as we do for tanh(), for example).\n\nMakes sense.\n\nI'm also wondering about whether we need the isinf() checks. IIUC that\nshould never happen, but maybe you added that \"just in case.\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:13:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Wed, 8 Mar 2023 at 23:24, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> No comment on the maths, but I'm pretty sure we won't need a fallback\n> implementation. That stuff goes back to the math libraries of 80s\n> Unixes, even though it didn't make it into C until '99. I just\n> checked the man pages for all our target systems and they all show it.\n> (There might be some portability details around the tgmath.h versions\n> on some systems, eg to support different float sizes, I dunno, but\n> you're using the plain math.h versions.)\n>\n\nThanks for checking. Hopefully they will be available everywhere.\n\nI think what's more likely to happen is that the tests will reveal\nimplementation variations, as they did when the hyperbolic functions\nwere added, and it'll be necessary to adjust or remove some of the\ntest cases. When I originally wrote those tests, I picked a value from\neach branch that the FreeBSD implementation handled differently, but I\nthink that's overkill. If the purpose of the tests is just to confirm\nthat the right C library function has been exposed, they could\nprobably be pared all the way down to just testing erf(1) and erfc(1),\nbut it might be useful to first see what platform variations exist.\n\n> Two related functions that also arrived in C99 are lgamma() and\n> tgamma(). If you'll excuse the digression, that reminded me of\n> something I was trying to figure out once, for a practical problem.\n> My statistics knowledge is extremely patchy, but I have been trying to\n> up my benchmarking game, and that led to a bit of remedial reading on\n> Student's t tests and related stuff. A few shaven yaks later, I\n> understood that you could probably (if you like pain) do that sort of\n> stuff inside PostgreSQL using our existing aggregates, if you took the\n> approach of ministat[1]. 
That tool has a table of critical values\n> inside it, indexed by degrees-of-freedom (1-100) and confidence level\n> (80, 90, 95, 98, 99, 99.5), and one could probably write SQL queries\n> that spit out an answer like \"p is less than 5%, ship it!\", if we\n> stole that table. But what if I actually want to know p? Of course\n> you can do all that good stuff very easily with tools like R, SciPy\n> etc and maybe that's the best way to do it. But Oracle, and I think\n> several other analytics-focused SQL systems, can do it in a very easy\n> built-in way. I think to get at that you probably need the t CDF, and\n> in there[2] I see... Γ(). Huh.\n>\n\nHmm, possibly having the gamma function would be independently useful\nfor other things too. I don't want to get side-tracked though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 9 Mar 2023 00:16:24 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Thu, 9 Mar 2023 at 00:13, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Mar 08, 2023 at 11:29:12PM +0000, Dean Rasheed wrote:\n> > On Wed, 8 Mar 2023 at 20:11, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> The man pages for these seem to indicate that underflow can occur. Do we\n> >> need to check for that?\n> >\n> > No, I don't think so. The docs indicate that if an underflow occurs,\n> > the correct result (after rounding) should be returned, so I think we\n> > should just return that result (as we do for tanh(), for example).\n>\n> Makes sense.\n>\n> I'm also wondering about whether we need the isinf() checks. IIUC that\n> should never happen, but maybe you added that \"just in case.\"\n>\n\nI copied those from dtanh(), otherwise I probably wouldn't have\nbothered. erf() is always in the range [-1, 1], just like tanh(), so\nit should never overflow, but maybe it can happen in a broken\nimplementation.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 9 Mar 2023 00:27:47 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 12:27:47AM +0000, Dean Rasheed wrote:\n> On Thu, 9 Mar 2023 at 00:13, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> I'm also wondering about whether we need the isinf() checks. IIUC that\n>> should never happen, but maybe you added that \"just in case.\"\n> \n> I copied those from dtanh(), otherwise I probably wouldn't have\n> bothered. erf() is always in the range [-1, 1], just like tanh(), so\n> it should never overflow, but maybe it can happen in a broken\n> implementation.\n\nOkay. This looks good to me, then.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:30:02 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add error functions: erf() and erfc()"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 1:16 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> On Wed, 8 Mar 2023 at 23:24, Thomas Munro <thomas.munro@gmail.com> wrote:\n> > ... But Oracle, and I think\n> > several other analytics-focused SQL systems, can do it in a very easy\n> > built-in way. I think to get at that you probably need the t CDF, and\n> > in there[2] I see... Γ(). Huh.\n>\n> Hmm, possibly having the gamma function would be independently useful\n> for other things too. I don't want to get side-tracked though.\n\nI guess if we did want to add some nice easy-to-use hypothesis testing\ntools to PostgreSQL, then perhaps gamma wouldn't actually be needed\nfrom SQL, but it might be used inside C code for something higher\nlevel like tcdf()[1], or even very high level like\nt_test_independent_agg(s1, s2) etc. Anyway, just thought I'd mention\nthose in passing, as I see they arrived together; sorry for getting\noff topic.\n\n[1] https://stats.stackexchange.com/questions/394978/how-to-approximate-the-student-t-cdf-at-a-point-without-the-hypergeometric-funct\n\n\n",
"msg_date": "Thu, 9 Mar 2023 14:02:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add error functions: erf() and erfc()"
}
] |
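The numerical point behind shipping erfc() as its own function — rather than letting users write 1 - erf(x) — can be sketched outside SQL. This is an illustrative Python sketch only; it is not part of the patch under discussion (which is C code in PostgreSQL), and it simply demonstrates the same double-precision behavior the thread relies on:

```python
import math

# erf() is bounded in [-1, 1], so the dtanh()-style isinf() checks
# discussed in the thread are purely defensive.
assert -1.0 <= math.erf(-50.0) <= math.erf(50.0) <= 1.0

# For large x, computing the tail as 1 - erf(x) cancels catastrophically:
# erf(6.0) already rounds to exactly 1.0 in double precision, so the
# subtraction loses every significant digit.
naive_tail = 1.0 - math.erf(6.0)
accurate_tail = math.erfc(6.0)
print(naive_tail)     # 0.0 — all information lost to rounding
print(accurate_tail)  # ~2.15e-17 — the tail value erfc() preserves
```

The same identity erf(x) + erfc(x) = 1 holds in exact arithmetic; the point is which side of it survives rounding.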
[
{
"msg_contents": "Hi,\n\nIn order to compare pairs of XML documents for equivalence it is \nnecessary to convert them first to their canonical form, as described at \nW3C Canonical XML 1.1.[1] This spec basically defines a standard \nphysical representation of xml documents that have more then one \npossible representation, so that it is possible to compare them, e.g. \nforcing UTF-8 encoding, entity reference replacement, attributes \nnormalization, etc.\n\nAlthough it is not part of the XML/SQL standard, it would be nice to \nhave the option CANONICAL in xmlserialize. Additionally, we could also \nadd the attribute WITH [NO] COMMENTS to keep or remove xml comments from \nthe documents.\n\nSomething like this:\n\nWITH t(col) AS (\n VALUES\n ('<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n <!DOCTYPE doc SYSTEM \"doc.dtd\" [\n <!ENTITY val \"42\">\n <!ATTLIST xyz attr CDATA \"default\">\n ]>\n\n <!-- ordering of attributes -->\n <foo ns:c = \"3\" ns:b = \"2\" ns:a = \"1\"\n xmlns:ns=\"http://postgresql.org\">\n\n <!-- Normalization of whitespace in start and end tags -->\n <!-- Elimination of superfluous namespace declarations,\n as already declared in <foo> -->\n <bar xmlns:ns=\"http://postgresql.org\" >&val;</bar >\n\n <!-- Empty element conversion to start-end tag pair -->\n <empty/>\n\n <!-- Effect of transcoding from a sample encoding to UTF-8 -->\n <iso8859>©</iso8859>\n\n <!-- Addition of default attribute -->\n <!-- Whitespace inside tag preserved -->\n <xyz> 321 </xyz>\n </foo>\n <!-- comment outside doc -->'::xml)\n)\nSELECT xmlserialize(DOCUMENT col AS text CANONICAL) FROM t;\nxmlserialize\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n <foo xmlns:ns=\"http://postgresql.org\" ns:a=\"1\" ns:b=\"2\" \nns:c=\"3\"><bar>42</bar><empty></empty><iso8859>©</iso8859><xyz \nattr=\"default\"> 321 </xyz></foo>\n(1 row)\n\n-- using WITH COMMENTS\n\nWITH t(col) 
AS (\n VALUES\n (' <foo ns:c = \"3\" ns:b = \"2\" ns:a = \"1\"\n xmlns:ns=\"http://postgresql.org\">\n <!-- very important comment -->\n <xyz> 321 </xyz>\n </foo>'::xml)\n)\nSELECT xmlserialize(DOCUMENT col AS text CANONICAL WITH COMMENTS) FROM t;\nxmlserialize\n------------------------------------------------------------------------------------------------------------------------\n <foo xmlns:ns=\"http://postgresql.org\" ns:a=\"1\" ns:b=\"2\" ns:c=\"3\"><!-- \nvery important comment --><xyz> 321 </xyz></foo>\n(1 row)\n\n\nAnother option would be to simply create a new function, e.g. \nxmlcanonical(doc xml, keep_comments boolean), but I'm not sure if this \nwould be the right approach.\n\nAttached a very short draft. What do you think?\n\nBest, Jim\n\n1- https://www.w3.org/TR/xml-c14n11/",
"msg_date": "Mon, 27 Feb 2023 14:16:30 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "[PoC] Add CANONICAL option to xmlserialize"
},
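For readers who want to experiment with the normalizations listed above, Python's standard library exposes a comparable canonicalizer. Note the caveat: `xml.etree.ElementTree.canonicalize` implements C14N 2.0, not the C14N 1.1 the patch targets, so this is only an approximation of the proposed behaviour, not a reimplementation of it:

```python
from xml.etree.ElementTree import canonicalize

doc = '<foo b="2" a="1"><!-- note --><bar/></foo>'

# Attribute ordering is normalized and the empty element is expanded
# to a start/end tag pair, mirroring the xmlserialize examples above.
no_comments = canonicalize(doc, with_comments=False)
with_comments = canonicalize(doc, with_comments=True)
print(no_comments)    # <foo a="1" b="2"><bar></bar></foo>
print(with_comments)  # <foo a="1" b="2"><!-- note --><bar></bar></foo>
```

The `with_comments` flag corresponds directly to the proposed WITH [NO] COMMENTS clause: two documents that differ only in comments compare equal once comments are dropped.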
{
"msg_contents": "On 27.02.23 14:16, I wrote:\n> Hi,\n>\n> In order to compare pairs of XML documents for equivalence it is \n> necessary to convert them first to their canonical form, as described \n> at W3C Canonical XML 1.1.[1] This spec basically defines a standard \n> physical representation of xml documents that have more then one \n> possible representation, so that it is possible to compare them, e.g. \n> forcing UTF-8 encoding, entity reference replacement, attributes \n> normalization, etc.\n>\n> Although it is not part of the XML/SQL standard, it would be nice to \n> have the option CANONICAL in xmlserialize. Additionally, we could also \n> add the attribute WITH [NO] COMMENTS to keep or remove xml comments \n> from the documents.\n>\n> Something like this:\n>\n> WITH t(col) AS (\n> VALUES\n> ('<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\n> <!DOCTYPE doc SYSTEM \"doc.dtd\" [\n> <!ENTITY val \"42\">\n> <!ATTLIST xyz attr CDATA \"default\">\n> ]>\n>\n> <!-- ordering of attributes -->\n> <foo ns:c = \"3\" ns:b = \"2\" ns:a = \"1\"\n> xmlns:ns=\"http://postgresql.org\">\n>\n> <!-- Normalization of whitespace in start and end tags -->\n> <!-- Elimination of superfluous namespace declarations,\n> as already declared in <foo> -->\n> <bar xmlns:ns=\"http://postgresql.org\" >&val;</bar >\n>\n> <!-- Empty element conversion to start-end tag pair -->\n> <empty/>\n>\n> <!-- Effect of transcoding from a sample encoding to UTF-8 -->\n> <iso8859>©</iso8859>\n>\n> <!-- Addition of default attribute -->\n> <!-- Whitespace inside tag preserved -->\n> <xyz> 321 </xyz>\n> </foo>\n> <!-- comment outside doc -->'::xml)\n> )\n> SELECT xmlserialize(DOCUMENT col AS text CANONICAL) FROM t;\n> xmlserialize\n> -------------------------------------------------------------------------------------------------------------------------------------------------------- \n>\n> <foo xmlns:ns=\"http://postgresql.org\" ns:a=\"1\" ns:b=\"2\" \n> 
ns:c=\"3\"><bar>42</bar><empty></empty><iso8859>©</iso8859><xyz \n> attr=\"default\"> 321 </xyz></foo>\n> (1 row)\n>\n> -- using WITH COMMENTS\n>\n> WITH t(col) AS (\n> VALUES\n> (' <foo ns:c = \"3\" ns:b = \"2\" ns:a = \"1\"\n> xmlns:ns=\"http://postgresql.org\">\n> <!-- very important comment -->\n> <xyz> 321 </xyz>\n> </foo>'::xml)\n> )\n> SELECT xmlserialize(DOCUMENT col AS text CANONICAL WITH COMMENTS) FROM t;\n> xmlserialize\n> ------------------------------------------------------------------------------------------------------------------------ \n>\n> <foo xmlns:ns=\"http://postgresql.org\" ns:a=\"1\" ns:b=\"2\" ns:c=\"3\"><!-- \n> very important comment --><xyz> 321 </xyz></foo>\n> (1 row)\n>\n>\n> Another option would be to simply create a new function, e.g. \n> xmlcanonical(doc xml, keep_comments boolean), but I'm not sure if this \n> would be the right approach.\n>\n> Attached a very short draft. What do you think?\n>\n> Best, Jim\n>\n> 1- https://www.w3.org/TR/xml-c14n11/\n\nThe attached version includes documentation and tests to the patch.\n\nI hope things are clearer now :)\n\nBest, Jim",
"msg_date": "Sun, 5 Mar 2023 19:44:01 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 7:44 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> The attached version includes documentation and tests to the patch.\n\nThe CI run for that failed in an interesting way, only on Debian +\nMeson, 32 bit. The diffs appear to show that psql has a different\nopinion of the column width, while building its header (the \"------\"\nyou get at the top of psql's output), even though the actual column\ncontents was the same. regression.diff[2] shows that there is a \"£1\"\nin the output, which is how UTF-8 \"£1\" looks if you view it with\nLatin1 glasses on. Clearly this patch involves transcoding, Latin1\nand UTF-8 and I haven't studied it, but it's pretty weird for the 32\nbit build to give a different result... could be something to do with\nour environment, since .cirrus.yml sets LANG=C in the 32 bit test run\n-- maybe try that locally?\n\nThat run also generated a core dump, but I think that's just our open\nSIGQUIT problem[3] and not relevant here.\n\n[1] https://cirrus-ci.com/build/6319462375227392\n[2] https://api.cirrus-ci.com/v1/artifact/task/5800598633709568/testrun/build-32/testrun/regress/regress/regression.diffs\n[3] https://www.postgresql.org/message-id/flat/20230214202927.xgb2w6b7gnhq6tvv%40awork3.anarazel.de\n\n\n",
"msg_date": "Mon, 6 Mar 2023 10:00:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
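The "£1" symptom described above is classic UTF-8-read-as-Latin-1 mojibake, and the column-width discrepancy follows directly from the extra character it produces. A quick Python sketch of the mechanism (illustrative only; not tied to the patch or to psql internals):

```python
text = '\u00a31'  # '£1' as it should be printed

# Encoding to UTF-8 and decoding the bytes as Latin-1 reproduces the
# 'Â£1' seen in the regression diff: the two-byte UTF-8 sequence for
# '£' turns into two separate Latin-1 characters.
mojibake = text.encode('utf-8').decode('latin-1')
print(mojibake)  # Â£1

# The extra character is why psql computed a different column width
# (the '------' header line) even though the cell looked "the same".
assert len(mojibake) == len(text) + 1
```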
{
"msg_contents": "On 05.03.23 22:00, Thomas Munro wrote:\n> The CI run for that failed in an interesting way, only on Debian +\n> Meson, 32 bit. The diffs appear to show that psql has a different\n> opinion of the column width, while building its header (the \"------\"\n> you get at the top of psql's output), even though the actual column\n> contents was the same. regression.diff[2] shows that there is a \"£1\"\n> in the output, which is how UTF-8 \"£1\" looks if you view it with\n> Latin1 glasses on. Clearly this patch involves transcoding, Latin1\n> and UTF-8\nOne of the use cases of this patch is exactly the transcoding of a non \nutf-8 document to utf-8 - as described in the XML canonical spec.\n> and I haven't studied it, but it's pretty weird for the 32\n> bit build to give a different result...\nYeah, it's pretty weird indeed. I'll try to reproduce this environment \nin a container to see if I get the same diff. Although I'm not sure that \nby \"fixing\" the result set for this environment it won't break all the \nothers.\n> could be something to do with\n> our environment, since .cirrus.yml sets LANG=C in the 32 bit test run\n> -- maybe try that locally?\nAlso using LANGUAGE=C the result is the same for me - all tests pass \njust fine.\n> That run also generated a core dump, but I think that's just our open\n> SIGQUIT problem[3] and not relevant here.\n>\n> [1] https://cirrus-ci.com/build/6319462375227392\n> [2] https://api.cirrus-ci.com/v1/artifact/task/5800598633709568/testrun/build-32/testrun/regress/regress/regression.diffs\n> [3] https://www.postgresql.org/message-id/flat/20230214202927.xgb2w6b7gnhq6tvv%40awork3.anarazel.de\n\nThanks for the quick reply. Much appreciated!\n\n\n\n\n\n",
"msg_date": "Sun, 5 Mar 2023 23:20:19 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 11:20 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> On 05.03.23 22:00, Thomas Munro wrote:\n> > could be something to do with\n> > our environment, since .cirrus.yml sets LANG=C in the 32 bit test run\n> > -- maybe try that locally?\n\n> Also using LANGUAGE=C the result is the same for me - all tests pass\n> just fine.\n\nI couldn't reproduce that locally either, but I just tested on CI with\nyour patch applied saw the failure, and then removed\n\"PYTHONCOERCECLOCALE=0 LANG=C\" and it's all green:\n\nhttps://github.com/macdice/postgres/commit/91999f5d13ac2df6f7237a301ed6cf73f2bb5b6d\n\nWithout looking too closely, my first guess would have been that this\njust isn't going to work without UTF-8 database encoding, so you might\nneed to skip the test (see for example\nsrc/test/regress/expected/unicode_1.out). It's annoying that \"xml\"\nalready has 3 expected variants... hmm. BTW shouldn't it be failing\nin a more explicit way somewhere sooner if the database encoding is\nnot UTF-8, rather than getting confused?\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:32:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 06.03.23 00:32, Thomas Munro wrote:\n> I couldn't reproduce that locally either, but I just tested on CI with\n> your patch applied saw the failure, and then removed\n> \"PYTHONCOERCECLOCALE=0 LANG=C\" and it's all green:\n>\n> https://github.com/macdice/postgres/commit/91999f5d13ac2df6f7237a301ed6cf73f2bb5b6d\n>\n> Without looking too closely, my first guess would have been that this\n> just isn't going to work without UTF-8 database encoding, so you might\n> need to skip the test (see for example\n> src/test/regress/expected/unicode_1.out). It's annoying that \"xml\"\n> already has 3 expected variants... hmm. BTW shouldn't it be failing\n> in a more explicit way somewhere sooner if the database encoding is\n> not UTF-8, rather than getting confused?\n\nI guess this confusion is happening because xml_parse() was being called \nwith the database encoding from GetDatabaseEncoding().\n\nI added a condition before calling xml_parse() to check if the xml \ndocument has a different encoding than UTF-8\n\nparse_xml_decl(xml_text2xmlChar(data), NULL, NULL, &encodingStr, NULL);\nencoding = encodingStr ? xmlChar_to_encoding(encodingStr) : PG_UTF8;\n\ndoc = xml_parse(data, XMLOPTION_DOCUMENT, false, encoding, NULL);\n\nv2 attached.\n\nThanks!\n\nBest, Jim",
"msg_date": "Mon, 6 Mar 2023 11:50:54 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 06.03.23 11:50, I wrote:\n> I guess this confusion is happening because xml_parse() was being \n> called with the database encoding from GetDatabaseEncoding().\n>\n> I added a condition before calling xml_parse() to check if the xml \n> document has a different encoding than UTF-8\n>\n> parse_xml_decl(xml_text2xmlChar(data), NULL, NULL, &encodingStr, NULL);\n> encoding = encodingStr ? xmlChar_to_encoding(encodingStr) : PG_UTF8;\n>\n> doc = xml_parse(data, XMLOPTION_DOCUMENT, false, encoding, NULL);\n\nIt seems that this bug fix didn't change the output of the CI on Debian \n+ Meson, 32bit.\n\nI slightly changed the test case to a character that both encodings can \ndeal with.\n\nv3 attached.",
"msg_date": "Mon, 6 Mar 2023 14:19:43 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "v4 attached fixes an encoding issue at the xml_parse call. It now uses \nGetDatabaseEncoding().\n\nBest, Jim",
"msg_date": "Tue, 14 Mar 2023 08:49:19 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "v5 attached is a rebase over the latest changes in xmlserialize (INDENT \noutput).",
"msg_date": "Fri, 17 Mar 2023 10:46:36 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "After some more testing I realized that v5 was leaking the xmlDocPtr.\n\nNow fixed in v6.",
"msg_date": "Fri, 17 Mar 2023 13:30:49 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "The cfbot started complaining about this patch on \"macOS - Ventura - Meson\"\n\n'Persistent worker failed to start the task: tart isolation failed: \nfailed to create VM cloned from \n\"ghcr.io/cirruslabs/macos-ventura-base:latest\": tart command returned \nnon-zero exit code: \"\"'\n\nIs this a problem in my code or in the CI itself?\n\nThanks!\n\nJim\n\n\n\n\n\n\nThe cfbot started complaining about this\n patch on \"macOS - Ventura - Meson\"\n'Persistent\n worker failed to start the task:\n tart isolation failed: failed to create VM cloned from\n \"ghcr.io/cirruslabs/macos-ventura-base:latest\": tart command\n returned non-zero exit code: \"\"'\nIs\n this a problem in my code or in the CI itself?\nThanks!\nJim",
"msg_date": "Thu, 14 Sep 2023 13:54:24 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Thu, Sep 14, 2023 at 11:54 PM Jim Jones <jim.jones@uni-muenster.de> wrote:\n> The cfbot started complaining about this patch on \"macOS - Ventura - Meson\"\n>\n> 'Persistent worker failed to start the task: tart isolation failed: failed to create VM cloned from \"ghcr.io/cirruslabs/macos-ventura-base:latest\": tart command returned non-zero exit code: \"\"'\n>\n> Is this a problem in my code or in the CI itself?\n\nThere was a temporary glitch on one of the new Mac CI runner machines\nthat caused a few tests to fail like that, but it was fixed so that\nshould turn red again later today.\n\n\n",
"msg_date": "Fri, 15 Sep 2023 09:43:51 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Fri, 17 Mar 2023 at 18:01, Jim Jones <jim.jones@uni-muenster.de> wrote:\n>\n> After some more testing I realized that v5 was leaking the xmlDocPtr.\n>\n> Now fixed in v6.\n\nFew comments:\n1) Why the default option was chosen without comments shouldn't it be\nthe other way round?\n+opt_xml_serialize_format:\n+ INDENT\n { $$ = XMLSERIALIZE_INDENT; }\n+ | NO INDENT\n { $$ = XMLSERIALIZE_NO_FORMAT; }\n+ | CANONICAL\n { $$ = XMLSERIALIZE_CANONICAL; }\n+ | CANONICAL WITH NO COMMENTS\n { $$ = XMLSERIALIZE_CANONICAL; }\n+ | CANONICAL WITH COMMENTS\n { $$ = XMLSERIALIZE_CANONICAL_WITH_COMMENTS; }\n+ | /*EMPTY*/\n { $$ = XMLSERIALIZE_NO_FORMAT; }\n\n2) This should be added to typedefs.list:\n+typedef enum XmlSerializeFormat\n+{\n+ XMLSERIALIZE_INDENT, /*\npretty-printed xml serialization */\n+ XMLSERIALIZE_CANONICAL, /*\ncanonical form without xml comments */\n+ XMLSERIALIZE_CANONICAL_WITH_COMMENTS, /* canonical form with\nxml comments */\n+ XMLSERIALIZE_NO_FORMAT /*\nunformatted xml representation */\n+} XmlSerializeFormat;\n\n3) This change is not required:\n return result;\n+\n #else\n NO_XML_SUPPORT();\n return NULL;\n\n4) This comment body needs slight reformatting:\n+ /*\n+ * Parse the input according to the xmloption.\n+ * XML canonical expects a well-formed XML input, so here in case of\n+ * XMLSERIALIZE_CANONICAL or XMLSERIALIZE_CANONICAL_WITH_COMMENTS we\n+ * force xml_parse() to parse 'data' as XMLOPTION_DOCUMENT despite\n+ * of the XmlOptionType given in 'xmloption_arg'. This enables the\n+ * canonicalization of CONTENT fragments if they contain a singly-rooted\n+ * XML - xml_parse() will thrown an error otherwise.\n+ */\n\n5) Similarly here too:\n- if (newline == NULL || xmlerrcxt->err_occurred)\n+ * Emit declaration only if the input had one.\nNote: some versions of\n+ * xmlSaveToBuffer leak memory if a non-null\nencoding argument is\n+ * passed, so don't do that. 
We don't want any\nencoding conversion\n+ * anyway.\n+ */\n+ if (decl_len == 0)\n\n6) Similarly here too:\n+ /*\n+ * Deal with the case where we have\nnon-singly-rooted XML.\n+ * libxml's dump functions don't work\nwell for that without help.\n+ * We build a fake root node that\nserves as a container for the\n+ * content nodes, and then iterate over\nthe nodes.\n+ */\n\n7) Similarly here too:\n+ /*\n+ * We use this node to insert newlines\nin the dump. Note: in at\n+ * least some libxml versions,\nxmlNewDocText would not attach the\n+ * node to the document even if we\npassed it. Therefore, manage\n+ * freeing of this node manually, and\npass NULL here to make sure\n+ * there's not a dangling link.\n+ */\n\n8) Should this:\n+ * of the XmlOptionType given in 'xmloption_arg'. This enables the\n+ * canonicalization of CONTENT fragments if they contain a singly-rooted\n+ * XML - xml_parse() will thrown an error otherwise.\nBe:\n+ * of the XmlOptionType given in 'xmloption_arg'. This enables the\n+ * canonicalization of CONTENT fragments if they contain a singly-rooted\n+ * XML - xml_parse() will throw an error otherwise.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 4 Oct 2023 15:09:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "Hi Vignesh\n\nThanks for the thorough review!\n\nOn 04.10.23 11:39, vignesh C wrote:\n> Few comments:\n> 1) Why the default option was chosen without comments shouldn't it be\n> the other way round?\n> +opt_xml_serialize_format:\n> + INDENT\n> { $$ = XMLSERIALIZE_INDENT; }\n> + | NO INDENT\n> { $$ = XMLSERIALIZE_NO_FORMAT; }\n> + | CANONICAL\n> { $$ = XMLSERIALIZE_CANONICAL; }\n> + | CANONICAL WITH NO COMMENTS\n> { $$ = XMLSERIALIZE_CANONICAL; }\n> + | CANONICAL WITH COMMENTS\n> { $$ = XMLSERIALIZE_CANONICAL_WITH_COMMENTS; }\n> + | /*EMPTY*/\n> { $$ = XMLSERIALIZE_NO_FORMAT; }\nI'm not sure it is the way to go. The main idea is to check if two \ndocuments have the same content, and comments might be different even if \nthe contents of two documents are identical. What are your concerns \nregarding this default behaviour?\n> 2) This should be added to typedefs.list:\n> +typedef enum XmlSerializeFormat\n> +{\n> + XMLSERIALIZE_INDENT, /*\n> pretty-printed xml serialization */\n> + XMLSERIALIZE_CANONICAL, /*\n> canonical form without xml comments */\n> + XMLSERIALIZE_CANONICAL_WITH_COMMENTS, /* canonical form with\n> xml comments */\n> + XMLSERIALIZE_NO_FORMAT /*\n> unformatted xml representation */\n> +} XmlSerializeFormat;\nadded.\n> 3) This change is not required:\n> return result;\n> +\n> #else\n> NO_XML_SUPPORT();\n> return NULL;\nremoved.\n>\n> 4) This comment body needs slight reformatting:\n> + /*\n> + * Parse the input according to the xmloption.\n> + * XML canonical expects a well-formed XML input, so here in case of\n> + * XMLSERIALIZE_CANONICAL or XMLSERIALIZE_CANONICAL_WITH_COMMENTS we\n> + * force xml_parse() to parse 'data' as XMLOPTION_DOCUMENT despite\n> + * of the XmlOptionType given in 'xmloption_arg'. 
This enables the\n> + * canonicalization of CONTENT fragments if they contain a singly-rooted\n> + * XML - xml_parse() will thrown an error otherwise.\n> + */\nreformatted.\n> 5) Similarly here too:\n> - if (newline == NULL || xmlerrcxt->err_occurred)\n> + * Emit declaration only if the input had one.\n> Note: some versions of\n> + * xmlSaveToBuffer leak memory if a non-null\n> encoding argument is\n> + * passed, so don't do that. We don't want any\n> encoding conversion\n> + * anyway.\n> + */\n> + if (decl_len == 0)\nreformatted.\n> 6) Similarly here too:\n> + /*\n> + * Deal with the case where we have\n> non-singly-rooted XML.\n> + * libxml's dump functions don't work\n> well for that without help.\n> + * We build a fake root node that\n> serves as a container for the\n> + * content nodes, and then iterate over\n> the nodes.\n> + */\nreformatted.\n> 7) Similarly here too:\n> + /*\n> + * We use this node to insert newlines\n> in the dump. Note: in at\n> + * least some libxml versions,\n> xmlNewDocText would not attach the\n> + * node to the document even if we\n> passed it. Therefore, manage\n> + * freeing of this node manually, and\n> pass NULL here to make sure\n> + * there's not a dangling link.\n> + */\nreformatted.\n> 8) Should this:\n> + * of the XmlOptionType given in 'xmloption_arg'. This enables the\n> + * canonicalization of CONTENT fragments if they contain a singly-rooted\n> + * XML - xml_parse() will thrown an error otherwise.\n> Be:\n> + * of the XmlOptionType given in 'xmloption_arg'. This enables the\n> + * canonicalization of CONTENT fragments if they contain a singly-rooted\n> + * XML - xml_parse() will throw an error otherwise.\n\nI didn't understand the suggestion in 8) :)\n\nThanks again for the review. Much appreciated!\n\nv7 attached.\n\nBest, Jim",
"msg_date": "Wed, 4 Oct 2023 18:19:18 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 2023-10-04 12:19, Jim Jones wrote:\n> On 04.10.23 11:39, vignesh C wrote:\n>> 1) Why the default option was chosen without comments shouldn't it be\n>> the other way round?\n> I'm not sure it is the way to go. The main idea is to check if two \n> documents have the same content, and comments might be different even \n> if the contents of two documents are identical. What are your concerns \n> regarding this default behaviour?\n\nI hope I'm not butting in, but I too would be leery of any default\nbehavior that's going to say thing1 and thing2 are the same thing\nbut ignoring (name part of thing here). If that's the comparison\nI mean to make, and it's as easy as CANONICAL WITHOUT COMMENTS\nto say that's what I mean, I'd be happy to write that. It also means\nthat the next person reading my code will know \"oh, he means\n'same' in *that* way\", without having to think about it.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Wed, 04 Oct 2023 17:05:37 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "Hi Chap\n\nOn 04.10.23 23:05, Chapman Flack wrote:\n> I hope I'm not butting in, but I too would be leery of any default\n> behavior that's going to say thing1 and thing2 are the same thing\n> but ignoring (name part of thing here). If that's the comparison\n> I mean to make, and it's as easy as CANONICAL WITHOUT COMMENTS\n> to say that's what I mean, I'd be happy to write that. It also means\n> that the next person reading my code will know \"oh, he means\n> 'same' in *that* way\", without having to think about it.\n\nThat's a very compelling argument. Thanks for that!\n\nIt is indeed clearer to only remove items from the result set if \nexplicitly said so.\n\nv8 attached changes de default behaviour to WITH COMMENTS.\n\nBest,\n\nJim",
"msg_date": "Thu, 5 Oct 2023 09:38:20 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 05.10.23 09:38, Jim Jones wrote:\n>\n> v8 attached changes de default behaviour to WITH COMMENTS.\nv9 attached with rebase due to changes done to primnodes.h in 615f5f6\n\n-- \nJim",
"msg_date": "Fri, 9 Feb 2024 14:19:13 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 09.02.24 14:19, Jim Jones wrote:\n> v9 attached with rebase due to changes done to primnodes.h in 615f5f6\n>\nv10 attached with rebase due to changes in primnodes, parsenodes.h, and\ngram.y\n\n-- \nJim",
"msg_date": "Wed, 19 Jun 2024 10:59:25 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 19.06.24 10:59, Jim Jones wrote:\n> On 09.02.24 14:19, Jim Jones wrote:\n>> v9 attached with rebase due to changes done to primnodes.h in 615f5f6\n>>\n> v10 attached with rebase due to changes in primnodes, parsenodes.h, and\n> gram.y\n>\nv11 attached with rebase due to changes in xml.c\n\n-- \nJim",
"msg_date": "Tue, 9 Jul 2024 20:48:47 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "Hi\n\nso 24. 8. 2024 v 7:40 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n>\n> On 19.06.24 10:59, Jim Jones wrote:\n> > On 09.02.24 14:19, Jim Jones wrote:\n> >> v9 attached with rebase due to changes done to primnodes.h in 615f5f6\n> >>\n> > v10 attached with rebase due to changes in primnodes, parsenodes.h, and\n> > gram.y\n> >\n> v11 attached with rebase due to changes in xml.c\n>\n\nI try to check this patch\n\nThere is unwanted white space in the patch\n\n-<-><--><-->xmlFreeDoc(doc);\n+<->else if (format == XMLSERIALIZE_CANONICAL || format ==\nXMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\n+ <>{\n+<-><-->xmlChar *xmlbuf = NULL;\n+<-><-->int nbytes;\n+<-><-->int\n\n1. the xml is serialized to UTF8 string every time, but when target type is\nvarchar or text, then it should be every time encoded to database encoding.\nIs not possible to hold utf8 string in latin2 database varchar.\n\n2. The proposed feature can increase some confusion in implementation of NO\nIDENT. I am not an expert on this area, so I checked other databases. DB2\ndoes not have anything similar. But Oracle's \"NO IDENT\" clause is very\nsimilar to the proposed \"CANONICAL\". Unfortunately, there is different\nbehaviour of NO IDENT - Oracle's really removes formatting, Postgres does\nnothing.\n\nRegards\n\nPavel\n\n\n\n> --\n> Jim\n>\n\nHiso 24. 8. 2024 v 7:40 odesílatel Jim Jones <jim.jones@uni-muenster.de> napsal:\nOn 19.06.24 10:59, Jim Jones wrote:\n> On 09.02.24 14:19, Jim Jones wrote:\n>> v9 attached with rebase due to changes done to primnodes.h in 615f5f6\n>>\n> v10 attached with rebase due to changes in primnodes, parsenodes.h, and\n> gram.y\n>\nv11 attached with rebase due to changes in xml.cI try to check this patchThere is unwanted white space in the patch-<-><--><-->xmlFreeDoc(doc);+<->else if (format == XMLSERIALIZE_CANONICAL || format == XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)+ <>{ +<-><-->xmlChar *xmlbuf = NULL;+<-><-->int nbytes;+<-><-->int 1. 
the xml is serialized to UTF8 string every time, but when target type is varchar or text, then it should be every time encoded to database encoding. Is not possible to hold utf8 string in latin2 database varchar.2. The proposed feature can increase some confusion in implementation of NO IDENT. I am not an expert on this area, so I checked other databases. DB2 does not have anything similar. But Oracle's \"NO IDENT\" clause is very similar to the proposed \"CANONICAL\". Unfortunately, there is different behaviour of NO IDENT - Oracle's really removes formatting, Postgres does nothing. RegardsPavel\n\n-- \nJim",
"msg_date": "Sun, 25 Aug 2024 20:57:30 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
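Pavel's first point can be made concrete: canonical XML is defined over UTF-8, so writing the result into a text/varchar target requires a conversion into the server encoding, which can fail outright for encodings like latin2. A hedged Python sketch of the failure mode — the stdlib canonicalizer (C14N 2.0) stands in here for libxml2's C14N support, and nothing below is taken from the patch itself:

```python
from xml.etree.ElementTree import canonicalize

# '£' (U+00A3) survives canonicalization as a literal character ...
canon = canonicalize('<p>\u00a3</p>')
assert '\u00a3' in canon

# ... and encodes fine as UTF-8, but ISO 8859-2 (latin2) has no pound
# sign, so a server in that encoding cannot hold the canonical string
# unchanged — it must either convert (impossible here) or raise an error.
canon.encode('utf-8')
try:
    canon.encode('iso-8859-2')
except UnicodeEncodeError:
    print('not representable in latin2')
```

This is the conversion step the patch would need to perform (or reject) when the target type is in a non-UTF-8 database encoding.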
{
"msg_contents": "ne 25. 8. 2024 v 20:57 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> so 24. 8. 2024 v 7:40 odesílatel Jim Jones <jim.jones@uni-muenster.de>\n> napsal:\n>\n>>\n>> On 19.06.24 10:59, Jim Jones wrote:\n>> > On 09.02.24 14:19, Jim Jones wrote:\n>> >> v9 attached with rebase due to changes done to primnodes.h in 615f5f6\n>> >>\n>> > v10 attached with rebase due to changes in primnodes, parsenodes.h, and\n>> > gram.y\n>> >\n>> v11 attached with rebase due to changes in xml.c\n>>\n>\n> I try to check this patch\n>\n> There is unwanted white space in the patch\n>\n> -<-><--><-->xmlFreeDoc(doc);\n> +<->else if (format == XMLSERIALIZE_CANONICAL || format ==\n> XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\n> + <>{\n> +<-><-->xmlChar *xmlbuf = NULL;\n> +<-><-->int nbytes;\n> +<-><-->int\n>\n> 1. the xml is serialized to UTF8 string every time, but when target type\n> is varchar or text, then it should be every time encoded to database\n> encoding. Is not possible to hold utf8 string in latin2 database varchar.\n>\n> 2. The proposed feature can increase some confusion in implementation of\n> NO IDENT. I am not an expert on this area, so I checked other databases.\n> DB2 does not have anything similar. But Oracle's \"NO IDENT\" clause is very\n> similar to the proposed \"CANONICAL\". Unfortunately, there is different\n> behaviour of NO IDENT - Oracle's really removes formatting, Postgres does\n> nothing.\n>\n\nI read https://www.w3.org/TR/xml-c14n11/ and if I understand this document,\nthen CANONICAL <> \"NO INDENT\" ?\n\nRegards\n\nPavel\n\n\n\n> Pavel\n>\n>\n>\n>> --\n>> Jim\n>>\n>\n",
"msg_date": "Sun, 25 Aug 2024 21:49:24 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
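Pavel's reading above — that canonicalization per xml-c14n11 is not the same thing as stripping indentation — can be checked outside PostgreSQL. The sketch below uses Python's standard-library C14N 2.0 implementation (`xml.etree.ElementTree.canonicalize`, Python 3.8+) rather than the patch's libxml2 code path, so it is an illustrative aside only: whitespace that is part of character content is retained by canonicalization.

```python
import xml.etree.ElementTree as ET

# Canonicalization keeps whitespace that is part of character content,
# so an indented document stays indented after C14N.
indented = "<r>\n  <a>1</a>\n</r>"
out = ET.canonicalize(xml_data=indented)
assert out == indented  # CANONICAL is not a "remove indentation" operation
```

In other words, the answer suggested here is yes: CANONICAL <> "NO INDENT".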
{
"msg_contents": "Hi Pavel\n\nOn 25.08.24 20:57, Pavel Stehule wrote:\n>\n> There is unwanted white space in the patch\n>\n> -<-><--><-->xmlFreeDoc(doc);\n> +<->else if (format == XMLSERIALIZE_CANONICAL || format ==\n> XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\n> + <>{\n> +<-><-->xmlChar *xmlbuf = NULL;\n> +<-><-->int nbytes;\n> +<-><-->int \n>\nI missed that one. Just removed it, thanks!\n> 1. the xml is serialized to UTF8 string every time, but when target\n> type is varchar or text, then it should be every time encoded to\n> database encoding. Is not possible to hold utf8 string in latin2\n> database varchar.\nI'm calling xml_parse using GetDatabaseEncoding(), so I thought I would\nbe on the safe side\n\nif(format ==XMLSERIALIZE_CANONICAL ||format\n==XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\ndoc =xml_parse(data, XMLOPTION_DOCUMENT, false,\nGetDatabaseEncoding(), NULL, NULL, NULL);\n... or you mean something else?\n\n> 2. The proposed feature can increase some confusion in implementation\n> of NO IDENT. I am not an expert on this area, so I checked other\n> databases. DB2 does not have anything similar. But Oracle's \"NO IDENT\"\n> clause is very similar to the proposed \"CANONICAL\". 
Unfortunately,\n> there is different behaviour of NO IDENT - Oracle's really removes\n> formatting, Postgres does nothing.\n\nCoincidentally, the [NO] INDENT support for xmlserialize is an old patch\nof mine.\nIt basically \"does nothing\" and prints the xml as is, e.g.\n\nSELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\na=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text INDENT);\n xmlserialize \n--------------------------------------------\n <foo> +\n <bar> +\n <val z=\"1\" a=\"8\"><![CDATA[0&1]]></val>+\n </bar> +\n </foo> +\n \n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\na=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text NO INDENT);\n xmlserialize \n--------------------------------------------------------------\n <foo><bar><val z=\"1\" a=\"8\"><![CDATA[0&1]]></val></bar></foo>\n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\na=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text);\n xmlserialize \n--------------------------------------------------------------\n <foo><bar><val z=\"1\" a=\"8\"><![CDATA[0&1]]></val></bar></foo>\n(1 row)\n\n.. 
while CANONICAL converts the xml to its canonical form,[1,2] e.g.\nsorting attributes and replacing CDATA strings by its value:\n\nSELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\na=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text CANONICAL);\n xmlserialize \n------------------------------------------------------\n <foo><bar><val a=\"8\" z=\"1\">0&1</val></bar></foo>\n(1 row)\n\nxmlserialize CANONICAL does not exist in any other database and it's not\npart of the SQL/XML standard.\n\nRegarding the different behaviour of NO INDENT in Oracle and PostgreSQL:\nit is not entirely clear to me if SQL/XML states that NO INDENT must\nremove the indentation from xml strings.\nIt says:\n\n\"INDENT — the choice of whether to “pretty-print” the serialized XML by\nmeans of indentation, either\nTrue or False.\n....\ni) If <XML serialize indent> is specified and does not contain NO, then\nlet IND be True.\nii) Otherwise, let IND be False.\"\n\nWhen I wrote the patch I assumed it meant to leave the xml as is .. but\nI might be wrong.\nPerhaps it would be best if we open a new thread for this topic.\n\nThank you for reviewing this patch. Much appreciated!\n\nBest,\n\n-- \nJim\n\n1 - https://www.w3.org/TR/xml-c14n11/\n2 - https://gnome.pages.gitlab.gnome.org/libxml2/devhelp/libxml2-c14n.html\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 11:32:02 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
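The transformation Jim demonstrates above with XMLSERIALIZE ... CANONICAL can be reproduced with any C14N implementation. Here is a sketch on the same sample document using Python's stdlib `xml.etree.ElementTree.canonicalize` (C14N 2.0, Python 3.8+) — an outside illustration for comparison, not the patch itself (note that C14N escapes the ampersand in the recovered CDATA content):

```python
import xml.etree.ElementTree as ET

doc = '<foo><bar><val z="1" a="8"><![CDATA[0&1]]></val></bar></foo>'
out = ET.canonicalize(xml_data=doc)
# Attributes are sorted lexicographically and the CDATA section is
# replaced by its (escaped) character content.
print(out)  # <foo><bar><val a="8" z="1">0&amp;1</val></bar></foo>
```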
{
"msg_contents": "po 26. 8. 2024 v 11:32 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n> Hi Pavel\n>\n> On 25.08.24 20:57, Pavel Stehule wrote:\n> >\n> > There is unwanted white space in the patch\n> >\n> > -<-><--><-->xmlFreeDoc(doc);\n> > +<->else if (format == XMLSERIALIZE_CANONICAL || format ==\n> > XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\n> > + <>{\n> > +<-><-->xmlChar *xmlbuf = NULL;\n> > +<-><-->int nbytes;\n> > +<-><-->int\n> >\n> I missed that one. Just removed it, thanks!\n> > 1. the xml is serialized to UTF8 string every time, but when target\n> > type is varchar or text, then it should be every time encoded to\n> > database encoding. Is not possible to hold utf8 string in latin2\n> > database varchar.\n> I'm calling xml_parse using GetDatabaseEncoding(), so I thought I would\n> be on the safe side\n>\n> if(format ==XMLSERIALIZE_CANONICAL ||format\n> ==XMLSERIALIZE_CANONICAL_WITH_NO_COMMENTS)\n> doc =xml_parse(data, XMLOPTION_DOCUMENT, false,\n> GetDatabaseEncoding(), NULL, NULL, NULL);\n> ... or you mean something else?\n>\n\nMaybe I was confused by the initial message.\n\n\n>\n> > 2. The proposed feature can increase some confusion in implementation\n> > of NO IDENT. I am not an expert on this area, so I checked other\n> > databases. DB2 does not have anything similar. But Oracle's \"NO IDENT\"\n> > clause is very similar to the proposed \"CANONICAL\". 
Unfortunately,\n> > there is different behaviour of NO IDENT - Oracle's really removes\n> > formatting, Postgres does nothing.\n>\n> Coincidentally, the [NO] INDENT support for xmlserialize is an old patch\n> of mine.\n> It basically \"does nothing\" and prints the xml as is, e.g.\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\n> a=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text INDENT);\n> xmlserialize\n> --------------------------------------------\n> <foo> +\n> <bar> +\n> <val z=\"1\" a=\"8\"><![CDATA[0&1]]></val>+\n> </bar> +\n> </foo> +\n>\n> (1 row)\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\n> a=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text NO INDENT);\n> xmlserialize\n> --------------------------------------------------------------\n> <foo><bar><val z=\"1\" a=\"8\"><![CDATA[0&1]]></val></bar></foo>\n> (1 row)\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\n> a=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text);\n> xmlserialize\n> --------------------------------------------------------------\n> <foo><bar><val z=\"1\" a=\"8\"><![CDATA[0&1]]></val></bar></foo>\n> (1 row)\n>\n> .. 
while CANONICAL converts the xml to its canonical form,[1,2] e.g.\n> sorting attributes and replacing CDATA strings by its value:\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar><val z=\"1\"\n> a=\"8\"><![CDATA[0&1]]></val></bar></foo>' AS text CANONICAL);\n> xmlserialize\n> ------------------------------------------------------\n> <foo><bar><val a=\"8\" z=\"1\">0&1</val></bar></foo>\n> (1 row)\n>\n> xmlserialize CANONICAL does not exist in any other database and it's not\n> part of the SQL/XML standard.\n>\n> Regarding the different behaviour of NO INDENT in Oracle and PostgreSQL:\n> it is not entirely clear to me if SQL/XML states that NO INDENT must\n> remove the indentation from xml strings.\n> It says:\n>\n> \"INDENT — the choice of whether to “pretty-print” the serialized XML by\n> means of indentation, either\n> True or False.\n> ....\n> i) If <XML serialize indent> is specified and does not contain NO, then\n> let IND be True.\n> ii) Otherwise, let IND be False.\"\n>\n> When I wrote the patch I assumed it meant to leave the xml as is .. but\n> I might be wrong.\n> Perhaps it would be best if we open a new thread for this topic.\n>\n\nI think so there should be specified the target of CANONICAL - it is a\npartial replacement of NO INDENT or it produces format just for comparing?\nThe CANONICAL format is not probably extra standardized, because libxml2\nremoves indenting, but examples in https://www.w3.org/TR/xml-c14n11/\ndoesn't do it. So this format makes sense just for local operations.\n\nI like this functionality, and it is great so the functionality from\nlibxml2 can be used, but I think, so the fact that there are four not\ncompatible implementations of xmlserialize is messy. 
Can be nice, if we\nfind some intersection between SQL/XML, Oracle instead of new proprietary\nsyntax.\n\nIn Oracle syntax the CANONICAL is +/- NO INDENT SHOW DEFAULT ?\n\nMy objection against CANONICAL so SQL/XML and Oracle allows to parametrize\nXMLSERIALIZE more precious and before implementing new feature, we should\nto clean table and say, what we want to have in XMLSERIALIZE.\n\nAn alternative of enhancing of XMLSERIALIZE I can imagine just function\n\"to_canonical(xml, without_comments bool default false)\". In this case we\ndon't need to solve relations against SQL/XML or Oracle.\n\n\n> Thank you for reviewing this patch. Much appreciated!\n>\n> Best,\n>\n> --\n> Jim\n>\n> 1 - https://www.w3.org/TR/xml-c14n11/\n> 2 - https://gnome.pages.gitlab.gnome.org/libxml2/devhelp/libxml2-c14n.html\n>\n>\n",
"msg_date": "Mon, 26 Aug 2024 12:30:28 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "\n\nOn 26.08.24 12:30, Pavel Stehule wrote:\n> I think so there should be specified the target of CANONICAL - it is a\n> partial replacement of NO INDENT or it produces format just for\n> comparing? The CANONICAL format is not probably extra standardized,\n> because libxml2 removes indenting, but examples in\n> https://www.w3.org/TR/xml-c14n11/ doesn't do it. So this format makes\n> sense just for local operations.\nMy idea with CANONICAL was not to replace NO INDENT. The intent was to\nformat xml strings in an standardized way, so that they can be compared.\nFor instance, removing comments, sorting attributes, converting CDATA\nstrings, converting empty elements to start-end tag pairs, removing\nwhite spaces between elements, etc ...\n\nThe W3C recommendation for Canonical XML[1] dictates the following\nregarding the removal of whitespaces between elements :\n\n* Whitespace outside of the document element and within start and end\ntags is normalized\n* All whitespace in character content is retained (excluding characters\nremoved during line feed normalization)\n\n>\n> I like this functionality, and it is great so the functionality from\n> libxml2 can be used, but I think, so the fact that there are four not\n> compatible implementations of xmlserialize is messy. Can be nice, if \n> we find some intersection between SQL/XML, Oracle instead of new\n> proprietary syntax. \n>\n> In Oracle syntax the CANONICAL is +/- NO INDENT SHOW DEFAULT ?\n\nNo.\nXMLSERIALIZE ... NO INDENT is supposed, as the name suggests, to\nserialize an xml string without indenting it. One could argue that not\nindenting can be translated as removing indentation, but I couldn't find\nanything concrete about this in the SQL/XML spec. If it's indeed the\ncase, we should correct XMLSERIALIZE .. NO INDENT, but it is unrelated\nto this patch.\n\nCANONICAL serializes a physical representation of an xml document. In a\nnutshell, XMLSERIALIZE ... 
CANONICAL sort of \"rewrites\" the xml string\nwith the following rules (list from the W3C recommendation):\n\n* The document is encoded in UTF-8\n* Line breaks normalized to #xA on input, before parsing\n* Attribute values are normalized, as if by a validating processor\n* Character and parsed entity references are replaced\n* CDATA sections are replaced with their character content\n* The XML declaration and document type declaration are removed\n* Empty elements are converted to start-end tag pairs\n* Whitespace outside of the document element and within start and end\ntags is normalized\n* All whitespace in character content is retained (excluding characters\nremoved during line feed normalization)\n* Attribute value delimiters are set to quotation marks (double quotes)\n* Special characters in attribute values and character content are\nreplaced by character references\n* Superfluous namespace declarations are removed from each element\n* Default attributes are added to each element\n* Fixup of xml:base attributes [C14N-Issues] is performed\n* Lexicographic order is imposed on the namespace declarations and\nattributes of each element\n\nbtw: Oracle's SIZE =, HIDE DEFAULTS, and SHOW DEFAULTS are not part of\nthe SQL/XML standard either :)\n\n> My objection against CANONICAL so SQL/XML and Oracle allows to\n> parametrize XMLSERIALIZE more precious and before implementing new\n> feature, we should to clean table and say, what we want to have in\n> XMLSERIALIZE.\n>\n> An alternative of enhancing of XMLSERIALIZE I can imagine just\n> function \"to_canonical(xml, without_comments bool default false)\". In\n> this case we don't need to solve relations against SQL/XML or Oracle.\n\nTo create a separated serialization function would be IMHO way less\nelegant than to parametrize XMLSERIALIZE, but it would be something I\ncould live with in case we decide to go down this path.\n\nThanks!\n\n-- \nJim\n\n1 - https://www.w3.org/TR/xml-c14n11/\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 13:28:22 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
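Two of the C14N rules Jim lists above — empty elements become start-end tag pairs, and comments are kept or dropped depending on the "with comments" variant — can be seen with Python's stdlib C14N 2.0 implementation. This is an illustrative sketch only; the patch itself uses libxml2's c14n module:

```python
import xml.etree.ElementTree as ET

doc = "<r><a/><!-- note --></r>"

# Comments are dropped in the comment-less canonical form ...
no_comments = ET.canonicalize(xml_data=doc, with_comments=False)
# ... and retained in the "with comments" variant; either way the
# empty element <a/> is rewritten as a start-end tag pair.
with_comments = ET.canonicalize(xml_data=doc, with_comments=True)

print(no_comments)    # <r><a></a></r>
print(with_comments)  # <r><a></a><!-- note --></r>
```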
{
"msg_contents": "po 26. 8. 2024 v 13:28 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n>\n>\n> On 26.08.24 12:30, Pavel Stehule wrote:\n> > I think so there should be specified the target of CANONICAL - it is a\n> > partial replacement of NO INDENT or it produces format just for\n> > comparing? The CANONICAL format is not probably extra standardized,\n> > because libxml2 removes indenting, but examples in\n> > https://www.w3.org/TR/xml-c14n11/ doesn't do it. So this format makes\n> > sense just for local operations.\n> My idea with CANONICAL was not to replace NO INDENT. The intent was to\n> format xml strings in an standardized way, so that they can be compared.\n> For instance, removing comments, sorting attributes, converting CDATA\n> strings, converting empty elements to start-end tag pairs, removing\n> white spaces between elements, etc ...\n>\n> The W3C recommendation for Canonical XML[1] dictates the following\n> regarding the removal of whitespaces between elements :\n>\n> * Whitespace outside of the document element and within start and end\n> tags is normalized\n> * All whitespace in character content is retained (excluding characters\n> removed during line feed normalization)\n>\n> >\n> > I like this functionality, and it is great so the functionality from\n> > libxml2 can be used, but I think, so the fact that there are four not\n> > compatible implementations of xmlserialize is messy. Can be nice, if\n> > we find some intersection between SQL/XML, Oracle instead of new\n> > proprietary syntax.\n> >\n> > In Oracle syntax the CANONICAL is +/- NO INDENT SHOW DEFAULT ?\n>\n> No.\n> XMLSERIALIZE ... NO INDENT is supposed, as the name suggests, to\n> serialize an xml string without indenting it. One could argue that not\n> indenting can be translated as removing indentation, but I couldn't find\n> anything concrete about this in the SQL/XML spec. If it's indeed the\n> case, we should correct XMLSERIALIZE .. 
NO INDENT, but it is unrelated\n> to this patch.\n>\n> CANONICAL serializes a physical representation of an xml document. In a\n> nutshell, XMLSERIALIZE ... CANONICAL sort of \"rewrites\" the xml string\n> with the following rules (list from the W3C recommendation):\n>\n> * The document is encoded in UTF-8\n> * Line breaks normalized to #xA on input, before parsing\n> * Attribute values are normalized, as if by a validating processor\n> * Character and parsed entity references are replaced\n> * CDATA sections are replaced with their character content\n> * The XML declaration and document type declaration are removed\n> * Empty elements are converted to start-end tag pairs\n> * Whitespace outside of the document element and within start and end\n> tags is normalized\n> * All whitespace in character content is retained (excluding characters\n> removed during line feed normalization)\n> * Attribute value delimiters are set to quotation marks (double quotes)\n> * Special characters in attribute values and character content are\n> replaced by character references\n> * Superfluous namespace declarations are removed from each element\n> * Default attributes are added to each element\n> * Fixup of xml:base attributes [C14N-Issues] is performed\n> * Lexicographic order is imposed on the namespace declarations and\n> attributes of each element\n>\n> btw: Oracle's SIZE =, HIDE DEFAULTS, and SHOW DEFAULTS are not part of\n> the SQL/XML standard either :)\n>\n\nI know - looks so this function is not well designed generally\n\n\n>\n> > My objection against CANONICAL so SQL/XML and Oracle allows to\n> > parametrize XMLSERIALIZE more precious and before implementing new\n> > feature, we should to clean table and say, what we want to have in\n> > XMLSERIALIZE.\n> >\n> > An alternative of enhancing of XMLSERIALIZE I can imagine just\n> > function \"to_canonical(xml, without_comments bool default false)\". 
In\n> > this case we don't need to solve relations against SQL/XML or Oracle.\n>\n> To create a separated serialization function would be IMHO way less\n> elegant than to parametrize XMLSERIALIZE, but it would be something I\n> could live with in case we decide to go down this path.\n>\n\nI am not strongly against enhancing XMLSERIALIZE, but it can be nice to see\nsome wider concept first. Currently the state looks just random - and I\ndidn't see any serious discussion about implementation fo SQL/XML. We don't\nneed to be necessarily compatible with Oracle, but it can help if we have a\nfunctionality that can be used for conversions.\n\n\n\n> Thanks!\n>\n> --\n> Jim\n>\n> 1 - https://www.w3.org/TR/xml-c14n11/\n>\n>\n\npo 26. 8. 2024 v 13:28 odesílatel Jim Jones <jim.jones@uni-muenster.de> napsal:\n\nOn 26.08.24 12:30, Pavel Stehule wrote:\n> I think so there should be specified the target of CANONICAL - it is a\n> partial replacement of NO INDENT or it produces format just for\n> comparing? The CANONICAL format is not probably extra standardized,\n> because libxml2 removes indenting, but examples in\n> https://www.w3.org/TR/xml-c14n11/ doesn't do it. So this format makes\n> sense just for local operations.\nMy idea with CANONICAL was not to replace NO INDENT. 
The intent was to\nformat xml strings in an standardized way, so that they can be compared.\nFor instance, removing comments, sorting attributes, converting CDATA\nstrings, converting empty elements to start-end tag pairs, removing\nwhite spaces between elements, etc ...\n\nThe W3C recommendation for Canonical XML[1] dictates the following\nregarding the removal of whitespaces between elements :\n\n* Whitespace outside of the document element and within start and end\ntags is normalized\n* All whitespace in character content is retained (excluding characters\nremoved during line feed normalization)\n\n>\n> I like this functionality, and it is great so the functionality from\n> libxml2 can be used, but I think, so the fact that there are four not\n> compatible implementations of xmlserialize is messy. Can be nice, if \n> we find some intersection between SQL/XML, Oracle instead of new\n> proprietary syntax. \n>\n> In Oracle syntax the CANONICAL is +/- NO INDENT SHOW DEFAULT ?\n\nNo.\nXMLSERIALIZE ... NO INDENT is supposed, as the name suggests, to\nserialize an xml string without indenting it. One could argue that not\nindenting can be translated as removing indentation, but I couldn't find\nanything concrete about this in the SQL/XML spec. If it's indeed the\ncase, we should correct XMLSERIALIZE .. NO INDENT, but it is unrelated\nto this patch.\n\nCANONICAL serializes a physical representation of an xml document. In a\nnutshell, XMLSERIALIZE ... 
CANONICAL sort of \"rewrites\" the xml string\nwith the following rules (list from the W3C recommendation):\n\n* The document is encoded in UTF-8\n* Line breaks normalized to #xA on input, before parsing\n* Attribute values are normalized, as if by a validating processor\n* Character and parsed entity references are replaced\n* CDATA sections are replaced with their character content\n* The XML declaration and document type declaration are removed\n* Empty elements are converted to start-end tag pairs\n* Whitespace outside of the document element and within start and end\ntags is normalized\n* All whitespace in character content is retained (excluding characters\nremoved during line feed normalization)\n* Attribute value delimiters are set to quotation marks (double quotes)\n* Special characters in attribute values and character content are\nreplaced by character references\n* Superfluous namespace declarations are removed from each element\n* Default attributes are added to each element\n* Fixup of xml:base attributes [C14N-Issues] is performed\n* Lexicographic order is imposed on the namespace declarations and\nattributes of each element\n\nbtw: Oracle's SIZE =, HIDE DEFAULTS, and SHOW DEFAULTS are not part of\nthe SQL/XML standard either :)I know - looks so this function is not well designed generally \n\n> My objection against CANONICAL so SQL/XML and Oracle allows to\n> parametrize XMLSERIALIZE more precious and before implementing new\n> feature, we should to clean table and say, what we want to have in\n> XMLSERIALIZE.\n>\n> An alternative of enhancing of XMLSERIALIZE I can imagine just\n> function \"to_canonical(xml, without_comments bool default false)\". 
In\n> this case we don't need to solve relations against SQL/XML or Oracle.\n\nTo create a separated serialization function would be IMHO way less\nelegant than to parametrize XMLSERIALIZE, but it would be something I\ncould live with in case we decide to go down this path.I am not strongly against enhancing XMLSERIALIZE, but it can be nice to see some wider concept first. Currently the state looks just random - and I didn't see any serious discussion about implementation fo SQL/XML. We don't need to be necessarily compatible with Oracle, but it can help if we have a functionality that can be used for conversions. \n\nThanks!\n\n-- \nJim\n\n1 - https://www.w3.org/TR/xml-c14n11/",
"msg_date": "Mon, 26 Aug 2024 14:15:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "\n\nOn 26.08.24 14:15, Pavel Stehule wrote:\n> I am not strongly against enhancing XMLSERIALIZE, but it can be nice\n> to see some wider concept first. Currently the state looks just random\n> - and I didn't see any serious discussion about implementation fo\n> SQL/XML. We don't need to be necessarily compatible with Oracle, but\n> it can help if we have a functionality that can be used for conversions.\n\nFair point. A road map definitely wouldn't hurt. Not quite sure how to\nstart this motion though :D\nSo far I've just picked the missing SQL/XML features that were listed in\nthe PostgreSQL todo list and that I need for any of my projects. But I\nwould gladly change the priorities if there is any interest in the\ncommunity for specific features.\n\n-- \nJim\n\n\n\n",
"msg_date": "Mon, 26 Aug 2024 16:29:58 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "po 26. 8. 2024 v 16:30 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n>\n>\n> On 26.08.24 14:15, Pavel Stehule wrote:\n> > I am not strongly against enhancing XMLSERIALIZE, but it can be nice\n> > to see some wider concept first. Currently the state looks just random\n> > - and I didn't see any serious discussion about implementation fo\n> > SQL/XML. We don't need to be necessarily compatible with Oracle, but\n> > it can help if we have a functionality that can be used for conversions.\n>\n> Fair point. A road map definitely wouldn't hurt. Not quite sure how to\n> start this motion though :D\n> So far I've just picked the missing SQL/XML features that were listed in\n> the PostgreSQL todo list and that I need for any of my projects. But I\n> would gladly change the priorities if there is any interest in the\n> community for specific features.\n>\n\nyes, \"like\" road map and related questions - just for XMLSERIALIZE (in this\nthread). I see points\n\n1. what about behaviour of NO INDENT - the implementation is not too old,\nso it can be changed if we want (I think), and it is better to do early\nthan too late\n\n2. Are we able to implement SQL/XML syntax with libxml2?\n\n3. Are we able to implement Oracle syntax with libxml2? And there are\nbenefits other than higher possible compatibility?\n\n4. Can there be some possible collision (functionality, syntax) with\nCANONICAL?\n\n5. SQL/XML XMLSERIALIZE supports other target types than varchar. I can\nimagine XMLSERIALIZE with CANONICAL to bytea (then we don't need to force\ndatabase encoding). Does it make sense? Are the results comparable?\n\n\n\n\n> --\n> Jim\n>\n>\n",
"msg_date": "Mon, 26 Aug 2024 16:59:46 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "\n\nOn 26.08.24 16:59, Pavel Stehule wrote:\n>\n> 1. what about behaviour of NO INDENT - the implementation is not too\n> old, so it can be changed if we want (I think), and it is better to do\n> early than too late\n\nWhile checking the feasibility of removing indentation with NO INDENT I\nmay have found a bug in XMLSERIALIZE ... INDENT.\nxmlSaveToBuffer seems to ignore elements if there are whitespaces\nbetween them:\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo> +\n \n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\nINDENT);\n xmlserialize \n----------------------------\n <foo> <bar>42</bar> </foo>+\n \n(1 row)\n\nI'll take a look at it.\n\nRegarding removing indentation: yes, it would be possible with libxml2.\nThe question is if it would be right to do so.\n> 2. Are we able to implement SQL/XML syntax with libxml2?\n>\n> 3. Are we able to implement Oracle syntax with libxml2? And there are\n> benefits other than higher possible compatibility?\nI guess it would be beneficial if you're migrating from oracle to\npostgres - or the other way around. It certainly wouldn't hurt, but so\nfar I personally had little use for the oracle's extra xmlserialize\nfeatures.\n>\n> 4. Can there be some possible collision (functionality, syntax) with\n> CANONICAL?\nI couldn't find anything in the SQL/XML spec that might refer to\ncanonocal xml.\n>\n> 5. SQL/XML XMLSERIALIZE supports other target types than varchar. I\n> can imagine XMLSERIALIZE with CANONICAL to bytea (then we don't need\n> to force database encoding). Does it make sense? Are the results\n> comparable?\n|\nAs of pg16 bytea is not supported. Currently type| can be |character|,\n|character varying|, or |text - also their other flavours like 'name'.\n\n|\n\n-- \nJim\n\n\n\n",
"msg_date": "Tue, 27 Aug 2024 13:57:24 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "út 27. 8. 2024 v 13:57 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n>\n>\n> On 26.08.24 16:59, Pavel Stehule wrote:\n> >\n> > 1. what about behaviour of NO INDENT - the implementation is not too\n> > old, so it can be changed if we want (I think), and it is better to do\n> > early than too late\n>\n> While checking the feasibility of removing indentation with NO INDENT I\n> may have found a bug in XMLSERIALIZE ... INDENT.\n> xmlSaveToBuffer seems to ignore elements if there are whitespaces\n> between them:\n>\n> SELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n> xmlserialize\n> -----------------\n> <foo> +\n> <bar>42</bar>+\n> </foo> +\n>\n> (1 row)\n>\n> SELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\n> INDENT);\n> xmlserialize\n> ----------------------------\n> <foo> <bar>42</bar> </foo>+\n>\n> (1 row)\n>\n> I'll take a look at it.\n>\n\n+1\n\n\n> Regarding removing indentation: yes, it would be possible with libxml2.\n> The question is if it would be right to do so.\n> > 2. Are we able to implement SQL/XML syntax with libxml2?\n> >\n> > 3. Are we able to implement Oracle syntax with libxml2? And there are\n> > benefits other than higher possible compatibility?\n> I guess it would be beneficial if you're migrating from oracle to\n> postgres - or the other way around. It certainly wouldn't hurt, but so\n> far I personally had little use for the oracle's extra xmlserialize\n> features.\n> >\n> > 4. Can there be some possible collision (functionality, syntax) with\n> > CANONICAL?\n> I couldn't find anything in the SQL/XML spec that might refer to\n> canonocal xml.\n> >\n> > 5. SQL/XML XMLSERIALIZE supports other target types than varchar. I\n> > can imagine XMLSERIALIZE with CANONICAL to bytea (then we don't need\n> > to force database encoding). Does it make sense? Are the results\n> > comparable?\n> |\n> As of pg16 bytea is not supported. 
Currently type| can be |character|,\n> |character varying|, or |text - also their other flavours like 'name'.\n>\n\nI know, but theoretically, there can be some benefit for CANONICAL if pg\nsupports bytea there. Lot of databases still use non utf8 encoding.\n\nIt is a more theoretical question - if pg supports different types there in\nfuture (because SQL/XML or Oracle), then CANONICAL can be used without\nlimit, or CANONICAL can be used just for text? And you are sure, so you can\ncompare text X text, instead xml X xml?\n\n+SELECT xmlserialize(CONTENT doc AS text CANONICAL) = xmlserialize(CONTENT\ndoc AS text CANONICAL WITH COMMENTS) FROM xmltest_serialize;\n+ ?column?\n+----------\n+ t\n+ t\n+(2 rows)\n\nMaybe I am a little bit confused by these regress tests, because at the end\nit is not too useful - you compare two identical XML, and WITH COMMENTS and\nWITHOUT COMMENTS is tested elsewhere. I tried to search for a sense of this\ntest. Better to use really different documents (columns) instead.\n\nRegards\n\nPavel\n\n\n>\n> |\n>\n> --\n> Jim\n>\n>\n\nút 27. 8. 2024 v 13:57 odesílatel Jim Jones <jim.jones@uni-muenster.de> napsal:\n\nOn 26.08.24 16:59, Pavel Stehule wrote:\n>\n> 1. what about behaviour of NO INDENT - the implementation is not too\n> old, so it can be changed if we want (I think), and it is better to do\n> early than too late\n\nWhile checking the feasibility of removing indentation with NO INDENT I\nmay have found a bug in XMLSERIALIZE ... 
INDENT.\nxmlSaveToBuffer seems to ignore elements if there are whitespaces\nbetween them:\n\nSELECT xmlserialize(DOCUMENT '<foo><bar>42</bar></foo>' AS text INDENT);\n xmlserialize \n-----------------\n <foo> +\n <bar>42</bar>+\n </foo> +\n \n(1 row)\n\nSELECT xmlserialize(DOCUMENT '<foo> <bar>42</bar> </foo>'::xml AS text\nINDENT);\n xmlserialize \n----------------------------\n <foo> <bar>42</bar> </foo>+\n \n(1 row)\n\nI'll take a look at it.+1 \n\nRegarding removing indentation: yes, it would be possible with libxml2.\nThe question is if it would be right to do so.\n> 2. Are we able to implement SQL/XML syntax with libxml2?\n>\n> 3. Are we able to implement Oracle syntax with libxml2? And there are\n> benefits other than higher possible compatibility?\nI guess it would be beneficial if you're migrating from oracle to\npostgres - or the other way around. It certainly wouldn't hurt, but so\nfar I personally had little use for the oracle's extra xmlserialize\nfeatures.\n>\n> 4. Can there be some possible collision (functionality, syntax) with\n> CANONICAL?\nI couldn't find anything in the SQL/XML spec that might refer to\ncanonocal xml.\n>\n> 5. SQL/XML XMLSERIALIZE supports other target types than varchar. I\n> can imagine XMLSERIALIZE with CANONICAL to bytea (then we don't need\n> to force database encoding). Does it make sense? Are the results\n> comparable?\n|\nAs of pg16 bytea is not supported. Currently type| can be |character|,\n|character varying|, or |text - also their other flavours like 'name'.I know, but theoretically, there can be some benefit for CANONICAL if pg supports bytea there. Lot of databases still use non utf8 encoding.It is a more theoretical question - if pg supports different types there in future (because SQL/XML or Oracle), then CANONICAL can be used without limit, or CANONICAL can be used just for text? And you are sure, so you can compare text X text, instead xml X xml? 
+SELECT xmlserialize(CONTENT doc AS text CANONICAL) = xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM xmltest_serialize;+ ?column? +----------+ t+ t+(2 rows)Maybe I am a little bit confused by these regress tests, because at the end it is not too useful - you compare two identical XML, and WITH COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search for a sense of this test. Better to use really different documents (columns) instead.RegardsPavel \n\n|\n\n-- \nJim",
"msg_date": "Thu, 29 Aug 2024 20:50:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "\n\nOn 29.08.24 20:50, Pavel Stehule wrote:\n>\n> I know, but theoretically, there can be some benefit for CANONICAL if\n> pg supports bytea there. Lot of databases still use non utf8 encoding.\n>\n> It is a more theoretical question - if pg supports different types\n> there in future (because SQL/XML or Oracle), then CANONICAL can be\n> used without limit,\nI like the idea of extending the feature to support bytea. I can\ndefinitely take a look at it, but perhaps in another patch? This change\nwould most likely involve transformXmlSerialize in parse_expr.c, and I'm\nnot sure of the impact in other usages of XMLSERIALIZE.\n> or CANONICAL can be used just for text? And you are sure, so you can\n> compare text X text, instead xml X xml?\nYes, currently it only supports varchar or text - and their cousins. The\nidea is to format the xml and serialize it as text in a way that they\ncan compared based on their content, independently of how they were\nwritten, e.g '<foo a=\"1\" b=\"2\"/>' is equal to '<foo b=\"2\" a=\"1\"/>'.\n\n>\n> +SELECT xmlserialize(CONTENT doc AS text CANONICAL) =\n> xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM\n> xmltest_serialize;\n> + ?column?\n> +----------\n> + t\n> + t\n> +(2 rows)\n>\n> Maybe I am a little bit confused by these regress tests, because at\n> the end it is not too useful - you compare two identical XML, and WITH\n> COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search\n> for a sense of this test. Better to use really different documents\n> (columns) instead.\n\nYeah, I can see that it's confusing. In this example I actually just\nwanted to test that the default option of CANONICAL is CANONICAL WITH\nCOMMENTS, even if you don't mention it. In the docs I mentioned it like\nthis:\n\n\"The optional parameters WITH COMMENTS (which is the default) or WITH NO\nCOMMENTS, respectively, keep or remove XML comments from the given\ndocument.\"\n\nPerhaps I should rephrase it? 
Or maybe a comment in the regression tests\nwould suffice?\n\nThanks a lot for the input!\n\n-- \nJim\n\n\n\n",
"msg_date": "Thu, 29 Aug 2024 23:54:18 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "čt 29. 8. 2024 v 23:54 odesílatel Jim Jones <jim.jones@uni-muenster.de>\nnapsal:\n\n>\n>\n> On 29.08.24 20:50, Pavel Stehule wrote:\n> >\n> > I know, but theoretically, there can be some benefit for CANONICAL if\n> > pg supports bytea there. Lot of databases still use non utf8 encoding.\n> >\n> > It is a more theoretical question - if pg supports different types\n> > there in future (because SQL/XML or Oracle), then CANONICAL can be\n> > used without limit,\n> I like the idea of extending the feature to support bytea. I can\n> definitely take a look at it, but perhaps in another patch? This change\n> would most likely involve transformXmlSerialize in parse_expr.c, and I'm\n> not sure of the impact in other usages of XMLSERIALIZE.\n> > or CANONICAL can be used just for text? And you are sure, so you can\n> > compare text X text, instead xml X xml?\n> Yes, currently it only supports varchar or text - and their cousins. The\n> idea is to format the xml and serialize it as text in a way that they\n> can compared based on their content, independently of how they were\n> written, e.g '<foo a=\"1\" b=\"2\"/>' is equal to '<foo b=\"2\" a=\"1\"/>'.\n>\n> >\n> > +SELECT xmlserialize(CONTENT doc AS text CANONICAL) =\n> > xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM\n> > xmltest_serialize;\n> > + ?column?\n> > +----------\n> > + t\n> > + t\n> > +(2 rows)\n> >\n> > Maybe I am a little bit confused by these regress tests, because at\n> > the end it is not too useful - you compare two identical XML, and WITH\n> > COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search\n> > for a sense of this test. Better to use really different documents\n> > (columns) instead.\n>\n> Yeah, I can see that it's confusing. In this example I actually just\n> wanted to test that the default option of CANONICAL is CANONICAL WITH\n> COMMENTS, even if you don't mention it. 
In the docs I mentioned it like\n> this:\n>\n> \"The optional parameters WITH COMMENTS (which is the default) or WITH NO\n> COMMENTS, respectively, keep or remove XML comments from the given\n> document.\"\n>\n> Perhaps I should rephrase it? Or maybe a comment in the regression tests\n> would suffice?\n>\n\ncomment will be enough\n\n\n\n>\n> Thanks a lot for the input!\n>\n> --\n> Jim\n>\n>\n\nčt 29. 8. 2024 v 23:54 odesílatel Jim Jones <jim.jones@uni-muenster.de> napsal:\n\nOn 29.08.24 20:50, Pavel Stehule wrote:\n>\n> I know, but theoretically, there can be some benefit for CANONICAL if\n> pg supports bytea there. Lot of databases still use non utf8 encoding.\n>\n> It is a more theoretical question - if pg supports different types\n> there in future (because SQL/XML or Oracle), then CANONICAL can be\n> used without limit,\nI like the idea of extending the feature to support bytea. I can\ndefinitely take a look at it, but perhaps in another patch? This change\nwould most likely involve transformXmlSerialize in parse_expr.c, and I'm\nnot sure of the impact in other usages of XMLSERIALIZE.\n> or CANONICAL can be used just for text? And you are sure, so you can\n> compare text X text, instead xml X xml?\nYes, currently it only supports varchar or text - and their cousins. The\nidea is to format the xml and serialize it as text in a way that they\ncan compared based on their content, independently of how they were\nwritten, e.g '<foo a=\"1\" b=\"2\"/>' is equal to '<foo b=\"2\" a=\"1\"/>'.\n\n>\n> +SELECT xmlserialize(CONTENT doc AS text CANONICAL) =\n> xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM\n> xmltest_serialize;\n> + ?column?\n> +----------\n> + t\n> + t\n> +(2 rows)\n>\n> Maybe I am a little bit confused by these regress tests, because at\n> the end it is not too useful - you compare two identical XML, and WITH\n> COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search\n> for a sense of this test. 
Better to use really different documents\n> (columns) instead.\n\nYeah, I can see that it's confusing. In this example I actually just\nwanted to test that the default option of CANONICAL is CANONICAL WITH\nCOMMENTS, even if you don't mention it. In the docs I mentioned it like\nthis:\n\n\"The optional parameters WITH COMMENTS (which is the default) or WITH NO\nCOMMENTS, respectively, keep or remove XML comments from the given\ndocument.\"\n\nPerhaps I should rephrase it? Or maybe a comment in the regression tests\nwould suffice?comment will be enough \n\nThanks a lot for the input!\n\n-- \nJim",
"msg_date": "Fri, 30 Aug 2024 06:46:12 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 30.08.24 06:46, Pavel Stehule wrote:\n>\n>\n> čt 29. 8. 2024 v 23:54 odesílatel Jim Jones\n> <jim.jones@uni-muenster.de> napsal:\n>\n>\n> > +SELECT xmlserialize(CONTENT doc AS text CANONICAL) =\n> > xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM\n> > xmltest_serialize;\n> > + ?column?\n> > +----------\n> > + t\n> > + t\n> > +(2 rows)\n> >\n> > Maybe I am a little bit confused by these regress tests, because at\n> > the end it is not too useful - you compare two identical XML,\n> and WITH\n> > COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search\n> > for a sense of this test. Better to use really different documents\n> > (columns) instead.\n>\n> Yeah, I can see that it's confusing. In this example I actually just\n> wanted to test that the default option of CANONICAL is CANONICAL WITH\n> COMMENTS, even if you don't mention it. In the docs I mentioned it\n> like\n> this:\n>\n> \"The optional parameters WITH COMMENTS (which is the default) or\n> WITH NO\n> COMMENTS, respectively, keep or remove XML comments from the given\n> document.\"\n>\n> Perhaps I should rephrase it? Or maybe a comment in the regression\n> tests\n> would suffice?\n>\n>\n> comment will be enough\n>\n\nv12 attached adds a comment to this test.\n\nThanks\n\n-- \nJim",
"msg_date": "Fri, 30 Aug 2024 08:05:27 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "v13 attached removes two variables that were left unused after\nrefactoring parsenodes.h and primnodes.h, both booleans related to the\nINDENT feature of xmlserialize.\n\nOn 30.08.24 08:05, Jim Jones wrote:\n>\n> On 30.08.24 06:46, Pavel Stehule wrote:\n>>\n>> čt 29. 8. 2024 v 23:54 odesílatel Jim Jones\n>> <jim.jones@uni-muenster.de> napsal:\n>>\n>>\n>> > +SELECT xmlserialize(CONTENT doc AS text CANONICAL) =\n>> > xmlserialize(CONTENT doc AS text CANONICAL WITH COMMENTS) FROM\n>> > xmltest_serialize;\n>> > + ?column?\n>> > +----------\n>> > + t\n>> > + t\n>> > +(2 rows)\n>> >\n>> > Maybe I am a little bit confused by these regress tests, because at\n>> > the end it is not too useful - you compare two identical XML,\n>> and WITH\n>> > COMMENTS and WITHOUT COMMENTS is tested elsewhere. I tried to search\n>> > for a sense of this test. Better to use really different documents\n>> > (columns) instead.\n>>\n>> Yeah, I can see that it's confusing. In this example I actually just\n>> wanted to test that the default option of CANONICAL is CANONICAL WITH\n>> COMMENTS, even if you don't mention it. In the docs I mentioned it\n>> like\n>> this:\n>>\n>> \"The optional parameters WITH COMMENTS (which is the default) or\n>> WITH NO\n>> COMMENTS, respectively, keep or remove XML comments from the given\n>> document.\"\n>>\n>> Perhaps I should rephrase it? Or maybe a comment in the regression\n>> tests\n>> would suffice?\n>>\n>>\n>> comment will be enough\n>>\n> v12 attached adds a comment to this test.\n>\n> Thanks\n>\n\n-- \nJim",
"msg_date": "Tue, 3 Sep 2024 16:43:51 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, failed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nLGTM\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Sun, 08 Sep 2024 11:43:49 +0000",
"msg_from": "Oliver Ford <ojford@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Sun, 2024-09-08 at 11:43 +0000, Oliver Ford wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, failed\n> Implements feature: tested, failed\n> Spec compliant: tested, failed\n> Documentation: tested, failed\n> \n> LGTM\n> \n> The new status of this patch is: Ready for Committer\n\nHuh? Do you mean \"tested, passes\"?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sun, 08 Sep 2024 15:44:01 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On Sun, Sep 8, 2024 at 2:44 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Sun, 2024-09-08 at 11:43 +0000, Oliver Ford wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, failed\n> > Implements feature: tested, failed\n> > Spec compliant: tested, failed\n> > Documentation: tested, failed\n> >\n> > LGTM\n> >\n> > The new status of this patch is: Ready for Committer\n>\n> Huh? Do you mean \"tested, passes\"?\n>\n> Yours,\n> Laurenz Albe\n\nWhoops, yes all tests and docs pass!\n\n\n",
"msg_date": "Sun, 8 Sep 2024 14:56:47 +0100",
"msg_from": "Oliver Ford <ojford@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "Hi Oliver\n\nOn 08.09.24 15:56, Oliver Ford wrote:\n> Whoops, yes all tests and docs pass!\n\nThanks for the review!\n\nBest, Jim\n\n\n",
"msg_date": "Sun, 8 Sep 2024 17:02:27 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PoC] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> This patch introduces the CANONICAL option to xmlserialize, which\n> serializes xml documents in their canonical form - as described in\n> the W3C Canonical XML Version 1.1 specification. This option can\n> be used with the additional parameter WITH [NO] COMMENTS to keep\n> or remove xml comments from the canonical xml output.\n\nWhile I don't object to providing this functionality in some form,\nI think that doing it with this specific syntax is a seriously\nbad idea. I think there's significant risk that at some point\nthe SQL committee will either standardize this syntax with a\nsomewhat different meaning or standardize some other syntax for\nthe same functionality.\n\nHow about instead introducing a plain function along the lines of\n\"xml_canonicalize(xml, bool keep_comments) returns text\" ? The SQL\ncommittee will certainly never do that, but we won't regret having\ncreated a plain function whenever they get around to doing something\nin the same space.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Sep 2024 13:43:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
},
{
"msg_contents": "On 10.09.24 19:43, Tom Lane wrote:\n> How about instead introducing a plain function along the lines of\n> \"xml_canonicalize(xml, bool keep_comments) returns text\" ? The SQL\n> committee will certainly never do that, but we won't regret having\n> created a plain function whenever they get around to doing something\n> in the same space.\nA second function to serialize xml documents may sound a bit redundant,\nbut I totally understand the concern of possibly conflicting with\nSQL/XMl spec in the feature. I guess we can always come back here and\nextend xmlserialize when the SQL committee moves in this direction.\n\nv14 attached adds the function xmlcanonicalize, as suggested.\n\nThanks\n\n-- \nJim",
"msg_date": "Thu, 12 Sep 2024 12:56:32 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add CANONICAL option to xmlserialize"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI'm sure I'm not the only one who can never remember which way around\nthe value and delimiter arguments go for string_agg() and has to look it\nup in the manual every time. To make it more convenient, here's a patch\nthat adds proargnames to its pg_proc entries so that it can be seen with\na quick \\df in psql.\n\nI also added names to json(b)_object_agg() for good measure, even though\nthey're more obvious. The remaining built-in multi-argument aggregate\nfunctions are the stats-related ones, where it's all just Y/X (but why\nin that order?), so I didn't think it was necessary. If others feel more\nstrongly, I can add those too.\n\n- ilmari",
"msg_date": "Mon, 27 Feb 2023 13:22:53 +0000",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Adding argument names to aggregate functions"
},
{
"msg_contents": "On 2/27/23 14:22, Dagfinn Ilmari Mannsåker wrote:\n> Hi hackers,\n> \n> I'm sure I'm not the only one who can never remember which way around\n> the value and delimiter arguments go for string_agg() and has to look it\n> up in the manual every time. To make it more convenient, here's a patch\n> that adds proargnames to its pg_proc entries so that it can be seen with\n> a quick \\df in psql.\n> \n> I also added names to json(b)_object_agg() for good measure, even though\n> they're more obvious. The remaining built-in multi-argument aggregate\n> functions are the stats-related ones, where it's all just Y/X (but why\n> in that order?), so I didn't think it was necessary. If others feel more\n> strongly, I can add those too.\n\nNo comment on adding names for everything, but a big +1 for the ones \nincluded here.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 00:38:21 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Hi hackers,\n>\n> I'm sure I'm not the only one who can never remember which way around\n> the value and delimiter arguments go for string_agg() and has to look it\n> up in the manual every time. To make it more convenient, here's a patch\n> that adds proargnames to its pg_proc entries so that it can be seen with\n> a quick \\df in psql.\n\nAdded to the 2023-07 commitfest:\n\nhttps://commitfest.postgresql.org/43/4275/\n\n- ilmari\n\n\n",
"msg_date": "Wed, 12 Apr 2023 18:53:54 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On 12.04.23 19:53, Dagfinn Ilmari Mannsåker wrote:\n> Dagfinn Ilmari Mannsåker<ilmari@ilmari.org> writes:\n>\n>> Hi hackers,\n>>\n>> I'm sure I'm not the only one who can never remember which way around\n>> the value and delimiter arguments go for string_agg() and has to look it\n>> up in the manual every time. To make it more convenient, here's a patch\n>> that adds proargnames to its pg_proc entries so that it can be seen with\n>> a quick \\df in psql.\n> Added to the 2023-07 commitfest:\n>\n> https://commitfest.postgresql.org/43/4275/\n>\n> - ilmari\n\n+1 for adding the argument names.\n\nThe patch needs a rebase though.. it no longer applies :\n\n$ git apply \n~/Downloads/0001-Add-argument-names-to-multi-argument-aggregates.patch\nerror: patch failed: src/include/catalog/pg_proc.dat:8899\nerror: src/include/catalog/pg_proc.dat: patch does not apply\n\nJim\n\n\n\n\n\n\n\n\nOn 12.04.23\n 19:53, Dagfinn Ilmari Mannsåker wrote:\n\n\nDagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n\n\nHi hackers,\n\nI'm sure I'm not the only one who can never remember which way around\nthe value and delimiter arguments go for string_agg() and has to look it\nup in the manual every time. To make it more convenient, here's a patch\nthat adds proargnames to its pg_proc entries so that it can be seen with\na quick \\df in psql.\n\n\n\nAdded to the 2023-07 commitfest:\n\nhttps://commitfest.postgresql.org/43/4275/\n\n- ilmari\n\n\n+1 for adding the argument names.\n\n\nThe patch needs a rebase though.. it no\n longer applies :\n\n\n$ git apply\n ~/Downloads/0001-Add-argument-names-to-multi-argument-aggregates.patch\n error: patch failed: src/include/catalog/pg_proc.dat:8899\n error: src/include/catalog/pg_proc.dat: patch does not apply\nJim",
"msg_date": "Fri, 14 Apr 2023 11:12:24 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n\n> On 12.04.23 19:53, Dagfinn Ilmari Mannsåker wrote:\n>> Dagfinn Ilmari Mannsåker<ilmari@ilmari.org> writes:\n>>\n>>> Hi hackers,\n>>>\n>>> I'm sure I'm not the only one who can never remember which way around\n>>> the value and delimiter arguments go for string_agg() and has to look it\n>>> up in the manual every time. To make it more convenient, here's a patch\n>>> that adds proargnames to its pg_proc entries so that it can be seen with\n>>> a quick \\df in psql.\n>> Added to the 2023-07 commitfest:\n>>\n>> https://commitfest.postgresql.org/43/4275/\n>>\n>> - ilmari\n>\n> +1 for adding the argument names.\n>\n> The patch needs a rebase though.. it no longer applies :\n>\n> $ git apply\n> ~/Downloads/0001-Add-argument-names-to-multi-argument-aggregates.patch\n> error: patch failed: src/include/catalog/pg_proc.dat:8899\n> error: src/include/catalog/pg_proc.dat: patch does not apply\n\nThanks for the heads-up, here's a rebased patch. I've also formatted\nthe lines to match what reformat_dat_file.pl wants. It also wanted to\nreformat a bunch of other entries, but I left those alone.\n\n- ilmari",
"msg_date": "Fri, 14 Apr 2023 11:03:03 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On 14.04.23 12:03, Dagfinn Ilmari Mannsåker wrote:\n> Thanks for the heads-up, here's a rebased patch. I've also formatted\n> the lines to match what reformat_dat_file.pl wants. It also wanted to\n> reformat a bunch of other entries, but I left those alone.\n>\n> - ilmari\n\nThe patch applies cleanly now and \\df shows the argument names:\n\npostgres=# \\df string_agg\n List of functions\n Schema | Name | Result data type | Argument data \ntypes | Type\n------------+------------+------------------+------------------------------+------\n pg_catalog | string_agg | bytea | value bytea, delimiter \nbytea | agg\n pg_catalog | string_agg | text | value text, delimiter \ntext | agg\n(2 rows)\n\npostgres=# \\df json_object_agg\n List of functions\n Schema | Name | Result data type | Argument data \ntypes | Type\n------------+-----------------+------------------+------------------------+------\n pg_catalog | json_object_agg | json | key \"any\", value \n\"any\" | agg\n(1 row)\n\n\nI'm wondering if there are some sort of guidelines that dictate when to \nname an argument or not. It would be nice to have one for future reference.\n\nI will mark the CF entry as \"Read for Committer\" and let the committers \ndecide if it's best to first create a guideline for that or not.\n\nBest, Jim\n\n\n\n",
"msg_date": "Tue, 18 Apr 2023 10:58:11 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On 18.04.23 10:58, I wrote:\n> On 14.04.23 12:03, Dagfinn Ilmari Mannsåker wrote:\n>> Thanks for the heads-up, here's a rebased patch. I've also formatted\n>> the lines to match what reformat_dat_file.pl wants. It also wanted to\n>> reformat a bunch of other entries, but I left those alone.\n>>\n>> - ilmari\n>\n> The patch applies cleanly now and \\df shows the argument names:\n>\n> postgres=# \\df string_agg\n> List of functions\n> Schema | Name | Result data type | Argument data \n> types | Type\n> ------------+------------+------------------+------------------------------+------ \n>\n> pg_catalog | string_agg | bytea | value bytea, delimiter \n> bytea | agg\n> pg_catalog | string_agg | text | value text, delimiter \n> text | agg\n> (2 rows)\n>\n> postgres=# \\df json_object_agg\n> List of functions\n> Schema | Name | Result data type | Argument data \n> types | Type\n> ------------+-----------------+------------------+------------------------+------ \n>\n> pg_catalog | json_object_agg | json | key \"any\", value \n> \"any\" | agg\n> (1 row)\n>\n>\n> I'm wondering if there are some sort of guidelines that dictate when \n> to name an argument or not. It would be nice to have one for future \n> reference.\n>\n> I will mark the CF entry as \"Read for Committer\" and let the \n> committers decide if it's best to first create a guideline for that or \n> not.\n>\n> Best, Jim\n>\nI just saw that the patch is failing[1] on \"macOS - Ventura - Meson\". \nNot sure if it is related to this patch though ..\n\n[1] \nhttps://api.cirrus-ci.com/v1/artifact/task/5881376021413888/meson_log/build/meson-logs/meson-log.txt\n\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:16:46 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n\n> On 18.04.23 10:58, I wrote:\n>> On 14.04.23 12:03, Dagfinn Ilmari Mannsåker wrote:\n>>> Thanks for the heads-up, here's a rebased patch. I've also formatted\n>>> the lines to match what reformat_dat_file.pl wants. It also wanted to\n>>> reformat a bunch of other entries, but I left those alone.\n>>>\n>>> - ilmari\n>>\n>> The patch applies cleanly now and \\df shows the argument names:\n>>\n>> postgres=# \\df string_agg\n>> List of functions\n>> Schema | Name | Result data type | Argument data\n>> types | Type\n>> ------------+------------+------------------+------------------------------+------ \n>> pg_catalog | string_agg | bytea | value bytea, delimiter bytea | agg\n>> pg_catalog | string_agg | text | value text, delimiter text | agg\n>> (2 rows)\n>>\n>> postgres=# \\df json_object_agg\n>> List of functions\n>> Schema | Name | Result data type | Argument data\n>> types | Type\n>> ------------+-----------------+------------------+------------------------+------ \n>> pg_catalog | json_object_agg | json | key \"any\", value \"any\" | agg\n>> (1 row)\n>>\n>>\n>> I'm wondering if there are some sort of guidelines that dictate when\n>> to name an argument or not. It would be nice to have one for future \n>> reference.\n\nI seemed to recall a patch to add arugment names to a bunch of functions\nin the past, thinking that might have some guidance, but can't for the\nlife of me find it now.\n\n>> I will mark the CF entry as \"Read for Committer\" and let the\n>> committers decide if it's best to first create a guideline for that or \n>> not.\n>>\n>> Best, Jim\n>>\n> I just saw that the patch is failing[1] on \"macOS - Ventura -\n> Meson\". Not sure if it is related to this patch though ..\n>\n> [1]\n> https://api.cirrus-ci.com/v1/artifact/task/5881376021413888/meson_log/build/meson-logs/meson-log.txt\n\nLink to the actual job:\n\nhttps://cirrus-ci.com/task/5881376021413888\n\nThe failure was:\n\n[09:54:38.727] 216/262 postgresql:recovery / recovery/031_recovery_conflict ERROR 198.73s exit status 60\n\nLooking at its log:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5881376021413888/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict\n\nwe see:\n\ntimed out waiting for match: (?^:User was holding a relation lock for too long) at /Users/admin/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 311.\n\nThat looks indeed completely unrelated to this patch.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:27:54 +0100",
"msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On 18.04.23 12:27, Dagfinn Ilmari Mannsåker wrote:\n> Link to the actual job:\n> https://cirrus-ci.com/task/5881376021413888\n>\n> The failure was:\n>\n> [09:54:38.727] 216/262 postgresql:recovery / recovery/031_recovery_conflict ERROR 198.73s exit status 60\n>\n> Looking at its log:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5881376021413888/testrun/build/testrun/recovery/031_recovery_conflict/log/regress_log_031_recovery_conflict\n>\n> we see:\n>\n> timed out waiting for match: (?^:User was holding a relation lock for too long) at /Users/admin/pgsql/src/test/recovery/t/031_recovery_conflict.pl line 311.\n>\n> That looks indeed completely unrelated to this patch.\n\nYes, that's what I suspected. The patch passes all tests now :)\n\nI've marked the CF entry as \"Ready for Committer\".\n\nJim\n\n\n\n",
"msg_date": "Tue, 18 Apr 2023 12:39:33 +0200",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "This patch no longer applied but had a fairly trivial conflict so I've attached\na rebased v3 addressing the conflict in the hopes of getting this further.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 19 Jul 2023 09:56:29 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> This patch no longer applied but had a fairly trivial conflict so I've attached\n> a rebased v3 addressing the conflict in the hopes of getting this further.\n\nThanks for the heads-up! Turns out the conflict was due to the new\njson(b)_object_agg(_unique)(_strict) functions, which should also have\nproargnames added. Here's an updated patch that does that.\n\n- ilmari",
"msg_date": "Wed, 19 Jul 2023 18:32:16 +0100",
"msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "> On 19 Jul 2023, at 19:32, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n>> This patch no longer applied but had a fairly trivial conflict so I've attached\n>> a rebased v3 addressing the conflict in the hopes of getting this further.\n> \n> Thanks for the heads-up! Turns out the conflict was due to the new\n> json(b)_object_agg(_unique)(_strict) functions, which should also have\n> proargnames added. Here's an updated patch that does that.\n\nGreat, thanks! I had a quick look at this while rebasing (as well as your\nupdated patch) and it seems like a good idea to add this. Unless there are\nobjections I will look at getting this in.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 19 Jul 2023 21:38:12 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 09:38:12PM +0200, Daniel Gustafsson wrote:\n> Great, thanks! I had a quick look at this while rebasing (as well as your\n> updated patch) and it seems like a good idea to add this. Unless there are\n> objections I will look at getting this in.\n\nHey Daniel, are you still planning on committing this? I can pick it up if\nyou are busy.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 3 Aug 2023 16:36:01 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "> On 4 Aug 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> On Wed, Jul 19, 2023 at 09:38:12PM +0200, Daniel Gustafsson wrote:\n>> Great, thanks! I had a quick look at this while rebasing (as well as your\n>> updated patch) and it seems like a good idea to add this. Unless there are\n>> objections I will look at getting this in.\n> \n> Hey Daniel, are you still planning on committing this? I can pick it up if\n> you are busy.\n\nFinally unburied this from the post-vacation pile on the TODO list and pushed\nit after another once-over.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 24 Aug 2023 12:05:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
},
{
"msg_contents": "On Thu, 24 Aug 2023, at 11:05, Daniel Gustafsson wrote:\n>> On 4 Aug 2023, at 01:36, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> \n>> On Wed, Jul 19, 2023 at 09:38:12PM +0200, Daniel Gustafsson wrote:\n>>> Great, thanks! I had a quick look at this while rebasing (as well as your\n>>> updated patch) and it seems like a good idea to add this. Unless there are\n>>> objections I will look at getting this in.\n>> \n>> Hey Daniel, are you still planning on committing this? I can pick it up if\n>> you are busy.\n>\n> Finally unburied this from the post-vacation pile on the TODO list and pushed\n> it after another once-over.\n\nThanks!\n\n-- \n- ilmari\n\n\n",
"msg_date": "Thu, 24 Aug 2023 13:38:59 +0100",
"msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Adding argument names to aggregate functions"
}
] |
[
{
"msg_contents": "I can't see an obvious way to run the regression tests via meson with \nthe --no-locale setting. This is particularly important on Windows. The \nbuildfarm client first runs the regression tests with this setting and \nthen tests (via installcheck) against instances set up with its \nconfigured locales. We do it this way so we are not subject to the \nvagaries of whatever environment we are running in.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 09:34:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "meson / pg_regress --no-locale"
},
{
"msg_contents": "On 2023-02-27 Mo 09:34, Andrew Dunstan wrote:\n>\n> I can't see an obvious way to run the regression tests via meson with \n> the --no-locale setting. This is particularly important on Windows. \n> The buildfarm client first runs the regression tests with this setting \n> and then tests (via installcheck) against instances set up with its \n> configured locales. We do it this way so we are not subject to the \n> vagaries of whatever environment we are running in.\n>\n\nFound a way to do this using --test-args\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 14:08:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: meson / pg_regress --no-locale"
}
] |
[
{
"msg_contents": "Attached is a patch to add nondecimal integer literals and underscores \nin numeric literals to the SQL JSON path language. This matches the \nrecent additions to the core SQL syntax. It follows ECMAScript in \ncombination with the current SQL draft.\n\nInternally, all the numeric literal parsing of jsonpath goes through \nnumeric_in, which already supports all this, so this patch is just a bit \nof lexer work and some tests.",
"msg_date": "Mon, 27 Feb 2023 20:13:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "On 2/27/23 20:13, Peter Eisentraut wrote:\n> Attached is a patch to add nondecimal integer literals and underscores \n> in numeric literals to the SQL JSON path language. This matches the \n> recent additions to the core SQL syntax. It follows ECMAScript in \n> combination with the current SQL draft.\n> \n> Internally, all the numeric literal parsing of jsonpath goes through \n> numeric_in, which already supports all this, so this patch is just a bit \n> of lexer work and some tests.\n\nIs T840 really NO after this patch?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 01:09:27 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "On 28.02.23 01:09, Vik Fearing wrote:\n> On 2/27/23 20:13, Peter Eisentraut wrote:\n>> Attached is a patch to add nondecimal integer literals and underscores \n>> in numeric literals to the SQL JSON path language. This matches the \n>> recent additions to the core SQL syntax. It follows ECMAScript in \n>> combination with the current SQL draft.\n>>\n>> Internally, all the numeric literal parsing of jsonpath goes through \n>> numeric_in, which already supports all this, so this patch is just a \n>> bit of lexer work and some tests.\n> \n> Is T840 really NO after this patch?\n\nThat was meant to be a YES.\n\n\n\n\n",
"msg_date": "Tue, 28 Feb 2023 08:44:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "On Tue, 28 Feb 2023 at 07:44, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Attached is a patch to add nondecimal integer literals and underscores\n> in numeric literals to the SQL JSON path language. This matches the\n> recent additions to the core SQL syntax. It follows ECMAScript in\n> combination with the current SQL draft.\n>\n\nI think this new feature ought to be mentioned in the docs somewhere.\nPerhaps a sentence or two in the note below table 9.49 would suffice,\nsince it looks like that's where jsonpath numbers are mentioned for\nthe first time.\n\nIn jsonpath_scan.l, I think the hex/oct/bininteger cases could do with\na comment, such as\n\n/* Non-decimal integers in ECMAScript; must not have underscore after radix */\nhexinteger 0[xX]{hexdigit}(_?{hexdigit})*\noctinteger 0[oO]{octdigit}(_?{octdigit})*\nbininteger 0[bB]{bindigit}(_?{bindigit})*\n\nsince that's different from the main lexer's syntax.\n\nPerhaps it's worth mentioning that difference in the docs.\n\nOtherwise, this looks good to me.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 3 Mar 2023 20:16:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "On 03.03.23 21:16, Dean Rasheed wrote:\n> I think this new feature ought to be mentioned in the docs somewhere.\n> Perhaps a sentence or two in the note below table 9.49 would suffice,\n> since it looks like that's where jsonpath numbers are mentioned for\n> the first time.\n\nDone. I actually put it into the data types chapter, where some other \ndifferences between SQL and SQL/JSON syntax were already discussed.\n\n> In jsonpath_scan.l, I think the hex/oct/bininteger cases could do with\n> a comment, such as\n> \n> /* Non-decimal integers in ECMAScript; must not have underscore after radix */\n> hexinteger 0[xX]{hexdigit}(_?{hexdigit})*\n> octinteger 0[oO]{octdigit}(_?{octdigit})*\n> bininteger 0[bB]{bindigit}(_?{bindigit})*\n> \n> since that's different from the main lexer's syntax.\n\ndone\n\n> Perhaps it's worth mentioning that difference in the docs.\n\ndone\n\n> Otherwise, this looks good to me.\n\ncommitted\n\n\n\n",
"msg_date": "Sun, 5 Mar 2023 16:55:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "Hi!\n\nSorry to bother, but there is a question on JsonPath - how many bits in the\nJsonPath\nheader could be used for the version? The JsonPath header is 4 bytes, and\ncurrently\nthe Version part is defined as\n#define JSONPATH_VERSION (0x01)\n\nThanks!\n\nOn Sun, Mar 5, 2023 at 6:55 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.03.23 21:16, Dean Rasheed wrote:\n> > I think this new feature ought to be mentioned in the docs somewhere.\n> > Perhaps a sentence or two in the note below table 9.49 would suffice,\n> > since it looks like that's where jsonpath numbers are mentioned for\n> > the first time.\n>\n> Done. I actually put it into the data types chapter, where some other\n> differences between SQL and SQL/JSON syntax were already discussed.\n>\n> > In jsonpath_scan.l, I think the hex/oct/bininteger cases could do with\n> > a comment, such as\n> >\n> > /* Non-decimal integers in ECMAScript; must not have underscore after\n> radix */\n> > hexinteger 0[xX]{hexdigit}(_?{hexdigit})*\n> > octinteger 0[oO]{octdigit}(_?{octdigit})*\n> > bininteger 0[bB]{bindigit}(_?{bindigit})*\n> >\n> > since that's different from the main lexer's syntax.\n>\n> done\n>\n> > Perhaps it's worth mentioning that difference in the docs.\n>\n> done\n>\n> > Otherwise, this looks good to me.\n>\n> committed\n>\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Fri, 31 Mar 2023 17:57:13 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
},
{
"msg_contents": "On 31.03.23 16:57, Nikita Malakhov wrote:\n> Sorry to bother, but there is a question on JsonPath - how many bits in \n> the JsonPath\n> header could be used for the version? The JsonPath header is 4 bytes, \n> and currently\n> the Version part is defined as\n> #define JSONPATH_VERSION (0x01)\n\nI don't know the answer to that. I don't think this patch touched on \nthat question at all.\n\n\n\n",
"msg_date": "Mon, 3 Apr 2023 10:42:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL JSON path enhanced numeric literals"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have mentioned on a different thread of -docs that we have no\ndocumentation to achieve $subject, so attached is a patch to add\nsomething. This can be done with the following steps:\nmeson setup -Db_coverage=true .. blah\nninja\nmeson test\nninja coverage-html\n\nAs far as I can see, there is no option to generate anything else than\na HTML report? This portion is telling the contrary, still it does\nnot seem to work here and ninja does the job with coverage-html or\ncoverage as only available targets:\nhttps://mesonbuild.com/howtox.html#producing-a-coverage-report\n\nSide issue: the current code generates no reports for the files that\nare automatically generated in src/backend/nodes/, which are actually\npart of src/include/ for a meson build. I have not looked into that\nyet.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 17:49:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Add documentation for coverage reports with meson"
},
{
"msg_contents": "On 28.02.23 09:49, Michael Paquier wrote:\n> - when compiling with GCC, and it requires the <command>gcov</command>\n> - and <command>lcov</command> programs.\n> + when compiling with GCC, and it requires the <command>gcov</command>,\n> + <command>lcov</command> and <command>genhtml</command> programs.\n\ngenhtml is part of the lcov package. I think it would be confusing to \nmention it explicitly, since you won't be able to find it as something \nto install. Maybe leave the original list and change \"programs\" to \n\"packages\"?\n\n> - <para>\n> - A typical workflow looks like this:\n> + <sect2 id=\"regress-coverage-configure\">\n> + <title>Coverage with <filename>configure</filename></title>\n> + <para>\n> + A typical workflow looks like this:\n\nIn the installation chapter we use titles like \"Building and \nInstallation with Autoconf and Make\" and \"Building and Installation with \nMeson\". We should use analogous wordings here.\n\n> + <para>\n> + A typical workflow looks like this:\n> +<screen>\n> +meson setup -Db_coverage=true ... OTHER OPTIONS ...\n> +ninja\n> +meson test\n> +ninja coverage-html\n> +</screen>\n> + Then point your HTML browser\n> + to <filename>./meson-logs/coveragereport/index.html</filename>.\n> + </para>\n\nThis ignores which directory you have to be in. The meson calls have to \nbe at the top level, the ninja calls have to be in the build directory. \nWe should be more precise here, otherwise someone trying this will find \nthat it doesn't work.\n\nPersonally I use \"meson compile\" instead of \"ninja\"; I'm not sure what \nthe best recommendation is, but that least that way all the initial \ncommands are \"meson something\" instead of going back and forth.\n\n\n\n",
"msg_date": "Fri, 3 Mar 2023 10:10:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add documentation for coverage reports with meson"
},
{
"msg_contents": "On Fri, Mar 03, 2023 at 10:10:15AM +0100, Peter Eisentraut wrote:\n> genhtml is part of the lcov package. I think it would be confusing to\n> mention it explicitly, since you won't be able to find it as something to\n> install. Maybe leave the original list and change \"programs\" to \"packages\"?\n\nMakes sense.\n\n> In the installation chapter we use titles like \"Building and Installation\n> with Autoconf and Make\" and \"Building and Installation with Meson\". We\n> should use analogous wordings here.\n\nOK, changed to something like that.\n\n> This ignores which directory you have to be in. The meson calls have to be\n> at the top level, the ninja calls have to be in the build directory. We\n> should be more precise here, otherwise someone trying this will find that it\n> doesn't work.\n\nHmm. I can see that it is possible to pass the repository to move to\nwith -C, still it is simpler to move into the build repository.\n\n> Personally I use \"meson compile\" instead of \"ninja\"; I'm not sure what the\n> best recommendation is, but that least that way all the initial commands are\n> \"meson something\" instead of going back and forth.\n\nUsing meson compile is fine by me for the docs. Note that I cannot\nsee an option with meson to do coverage reports, and my environment\nuses 1.0.1. Only ninja handles that.\n\nUpdated version attached.\n--\nMichael",
"msg_date": "Fri, 3 Mar 2023 20:12:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add documentation for coverage reports with meson"
},
{
"msg_contents": "On 03.03.23 12:12, Michael Paquier wrote:\n> +<screen>\n> +meson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/\n> +cd builddir/\n> +meson compile\n> +meson test\n> +ninja coverage-html\n> +</screen>\n\nThe \"cd\" command needs to be moved after the meson commands, and the \nmeson commands need to have a -C builddir option. So it should be like\n\n<screen>\nmeson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/\nmeson compile -C builddir\nmeson test -C builddir\ncd builddir/\nninja coverage-html\n</screen>\n\nOtherwise, this looks good to me.\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:23:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add documentation for coverage reports with meson"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 05:23:48PM +0100, Peter Eisentraut wrote:\n> The \"cd\" command needs to be moved after the meson commands, and the meson\n> commands need to have a -C builddir option.\n\nStill that's not mandatory, is it? The compile and test commands of\nmeson work as well if you are located at the root of the build\ndirectory, AFAIK.\n\n> So it should be like\n> \n> <screen>\n> meson setup -Db_coverage=true ... OTHER OPTIONS ... builddir/\n> meson compile -C builddir\n> meson test -C builddir\n> cd builddir/\n> ninja coverage-html\n> </screen>\n> \n> Otherwise, this looks good to me.\n\nAnyway, this works as well and I don't have any arguments against\nthat. So I have used your flow, and applied the patch. I have\nactually switched my own scripts to rely more on -C, removing direct\ncalls to ninja ;p\n\nThanks for the feedback.\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 09:25:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Add documentation for coverage reports with meson"
}
] |
[
{
"msg_contents": "Hi hackers,\n When I was reading postgres code, I found there is a wierd type cast. I'm pondering if it is necessary.\n\n```\n /* Allocate a new typmod number. This will be wasted if we error out. */\n typmod = (int)\n pg_atomic_fetch_add_u32(&CurrentSession->shared_typmod_registry->next_typmod,\n 1);\n\n```\n typmod has u32 type, but we cast it to int first.\n\n And I also have some confusion about why `NextRecordTypmod` and `TupleDescData.tdtypmod` has type of int32, but `SharedTypmodTableEntry.typmod` has type of uint32.\n\nBest regard,\nQinghao Huang",
"msg_date": "Tue, 28 Feb 2023 09:56:00 +0000",
"msg_from": "qinghao huang <wfnuser@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Maybe we can remove the type cast in typecache.c"
},
{
"msg_contents": "qinghao huang <wfnuser@hotmail.com> writes:\n> When I was reading postgres code, I found there is a wierd type cast. I'm pondering if it is necessary.\n\n> ```\n> /* Allocate a new typmod number. This will be wasted if we error out. */\n> typmod = (int)\n> pg_atomic_fetch_add_u32(&CurrentSession->shared_typmod_registry->next_typmod,\n> 1);\n\n> ```\n> typmod has u32 type, but we cast it to int first.\n\ntypmods really ought to be int32, not uint32, so IMO none of this is\nexactly right. But it's also true that it makes no real difference.\nPostgres pretty much assumes that \"int\" is 32 bits.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Feb 2023 10:35:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Maybe we can remove the type cast in typecache.c"
}
] |
[
{
"msg_contents": "Intro==========\nThe main purpose of the feature is to achieve\nread-your-writes-consistency, while using async replica for reads and\nprimary for writes. In that case lsn of last modification is stored \ninside\napplication. We cannot store this lsn inside database, since reads are\ndistributed across all replicas and primary.\n\nhttps://www.postgresql.org/message-id/195e2d07ead315b1620f1a053313f490%40postgrespro.ru\n\nSuggestions\n==========\nLots of proposals were made how this feature may look like.\nI aggregate them into the following four types.\n\n1) Classic (wait_classic_v1.patch)\nhttps://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n==========\nadvantages: multiple events, standalone WAIT\ndisadvantages: new words in grammar\n\nWAIT FOR [ANY | ALL] event [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ WAIT FOR [ANY | ALL] event [, ...]]\nwhere event is one of:\nLSN value\nTIMEOUT number_of_milliseconds\ntimestamp\n\n2) After style: Kyotaro and Freund (wait_after_within_v1.patch)\nhttps://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n==========\nadvantages: no new words in grammar, standalone AFTER\ndisadvantages: a little harder to understand\n\nAFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\nSTART [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n3) Procedure style: Tom Lane and Kyotaro (wait_proc_v1.patch)\nhttps://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n==========\nadvantages: no new words in grammar,like it made in \npg_last_wal_replay_lsn, no snapshots need\ndisadvantages: a little harder to remember names\nSELECT pg_waitlsn(‘LSN’, timeout);\nSELECT pg_waitlsn_infinite(‘LSN’);\nSELECT pg_waitlsn_no_wait(‘LSN’);\n\n4) Brackets style: Kondratov\nhttps://www.postgresql.org/message-id/a8bff0350a27e0a87a6eaf0905d6737f%40postgrespro.ru \n==========\nadvantages: only one new word in grammar,like it made in VACUUM and \nREINDEX, ability to extend parameters without grammar fixes\ndisadvantages: \nWAIT (LSN '16/B374D848', TIMEOUT 100);\nBEGIN WAIT (LSN '16/B374D848' [, etc_options]);\n...\nCOMMIT;\n\nConsequence\n==========\nBelow I provide the implementation of patches for the first three types.\nI propose to discuss this feature again/\n\nRegards\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 28 Feb 2023 13:10:47 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, 28 Feb 2023 at 05:13, Kartyshov Ivan <i.kartyshov@postgrespro.ru> wrote:\n>\n> Below I provide the implementation of patches for the first three types.\n> I propose to discuss this feature again/\n\nOof, that doesn't really work with the cfbot. It tries to apply all\nthree patches and of course the second and third fail to apply.\n\nIn any case this seems like a lot of effort to me. I would suggest you\njust pick one avenue and provide that patch for discussion and just\nask whether people would prefer any of the alternative syntaxes.\n\n\nFwiw I prefer the functions approach. I do like me some nice syntax\nbut I don't see any particular advantage of the special syntax in this\ncase. They don't seem to provide any additional expressiveness.\n\nThat said, I'm not a fan of the specific function names. Remember that\nwe have polymorphic functions so you could probably just have an\noption argument:\n\npg_lsn_wait('LSN', [timeout]) returns boolean\n\n(just call it with a timeout of 0 to do a no-wait)\n\nI'll set the patch to \"Waiting on Author\" for now. If you feel you're\nstill looking for more opinions from others maybe set it back to Needs\nReview but honestly there are a lot of patches so you probably won't\nsee much this commitfest unless you have a patch that shows in\ncfbot.cputube.org as applying and which looks ready to commit.\n\n-- \ngreg\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:31:06 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 03:31:06PM -0500, Greg Stark wrote:\n> Fwiw I prefer the functions approach. I do like me some nice syntax\n> but I don't see any particular advantage of the special syntax in this\n> case. They don't seem to provide any additional expressiveness.\n\nSo do I, even though I saw a point that sticking to a function or a\nprocedure approach makes the wait stick with more MVCC rules, like the\nfact that the wait may be holding a snapshot for longer than\nnecessary.  The grammar can be more extensible without more keywords\nwith DefElems, still I'd like to think that we should not introduce\nmore restrictions in the parser if we have ways to work around it.\nUsing a procedure or function approach is more extensible in its own\nways, and it also depends on the data being waited for (there could be more\nthan one field as well for a single wait pattern?).\n\n> I'll set the patch to \"Waiting on Author\" for now. If you feel you're\n> still looking for more opinions from others maybe set it back to Needs\n> Review but honestly there are a lot of patches so you probably won't\n> see much this commitfest unless you have a patch that shows in\n> cfbot.cputube.org as applying and which looks ready to commit.\n\nWhile looking at all the patches proposed, I have noticed that all the\napproaches proposed force a wakeup of the waiters in the redo loop of\nthe startup process for each record, before reading the next record.\nIt strikes me that there is some interaction with custom resource\nmanagers here, where it is possible to poke at the waiters not for\neach record, but after reading some specific records.  Something\nout-of-core would not be as responsive as the per-record approach,\nstill responsive enough that the waiters wait on input for an\nacceptable amount of time, depending on the frequency of the records\ngenerated by a primary to wake them up.  Just something that popped\ninto my mind while looking a bit at the threads.\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 10:22:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 28.02.23 11:10, Kartyshov Ivan wrote:\n> 3) Procedure style: Tom Lane and Kyotaro (wait_proc_v1.patch)\n> https://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\n> https://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n> ==========\n> advantages: no new words in grammar,like it made in \n> pg_last_wal_replay_lsn, no snapshots need\n> disadvantages: a little harder to remember names\n> SELECT pg_waitlsn(‘LSN’, timeout);\n> SELECT pg_waitlsn_infinite(‘LSN’);\n> SELECT pg_waitlsn_no_wait(‘LSN’);\n\nOf the presented options, I prefer this one.  (Maybe with a \"_\" between \n\"wait\" and \"lsn\".)\n\nBut I wonder how a client is going to get the LSN.  How would all of \nthis be used by a client?  I can think of a scenario where you have an \napplication that issues a bunch of SQL commands and you have some kind \nof pooler in the middle that redirects those commands to different \nhosts, and what you really want is to have it transparently behave as if \nit's just a single host.  Do we want to inject a bunch of \"SELECT \npg_get_lsn()\", \"SELECT pg_wait_lsn()\" calls into that?\n\nI'm tempted to think this could be a protocol-layer facility.  Every \nquery automatically returns the current LSN, and every query can also \nsend along an LSN to wait for, and the client library would just keep \ntrack of the LSN for (what it thinks of as) the connection.  So you get \nsome automatic serialization without having to modify your client code.\n\nThat said, exposing this functionality using functions could be a valid \nstep in that direction, so that you can at least build out the actual \ninternals of the functionality and test it out.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:33:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Here I made a new patch of the feature discussed above.\n\nWAIT FOR procedure - waits for a certain lsn on pause\n==========\nSynopsis\n==========\n   SELECT pg_wait_lsn(‘LSN’, timeout) returns boolean\n\n   Where timeout = 0, it will wait indefinitely without timeout\n   And if timeout = 1, then it just checks if the lsn was replayed\n\nHow to use it\n==========\n\nGreg Stark wrote:\n> That said, I'm not a fan of the specific function names. Remember that\n> we have polymorphic functions so you could probably just have an\n> option argument:\n\nIf you have any example, I will be glad to see them. My searches have\nnot been fruitful.\n\nMichael Paquier wrote:\n> While looking at all the patches proposed, I have noticed that all the\n> approaches proposed force a wakeup of the waiters in the redo loop of\n> the startup process for each record, before reading the next record.\n> It strikes me that there is some interaction with custom resource\n> managers here, where it is possible to poke at the waiters not for\n> each record, but after reading some specific records. Something\n> out-of-core would not be as responsive as the per-record approach,\n> still responsive enough that the waiters wait on input for an\n> acceptable amount of time, depending on the frequency of the records\n> generated by a primary to wake them up. Just something that popped\n> into my mind while looking a bit at the threads.\n\nI'll work on this idea to have less impact on the redo system.\n\nOn 2023-03-02 13:33, Peter Eisentraut wrote:\n> But I wonder how a client is going to get the LSN. How would all of\n> this be used by a client?\nAs I wrote earlier, the main purpose of the feature is to achieve\nread-your-writes-consistency, while using async replica for reads and\nprimary for writes. In that case lsn of last modification is stored\ninside application.\n\n> I'm tempted to think this could be a protocol-layer facility. 
Every\n> query automatically returns the current LSN, and every query can also\n> send along an LSN to wait for, and the client library would just keep\n> track of the LSN for (what it thinks of as) the connection. So you\n> get some automatic serialization without having to modify your client\n> code.\nYes it sounds very tempted. But I think community will be against it.\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sat, 04 Mar 2023 18:36:49 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Update patch to fix conflict with master\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 06 Mar 2023 12:40:16 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Fix build.meson troubles\n\n-- \nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 07 Mar 2023 09:55:48 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "All rebased and tested\n\n--\nIvan Kartyshov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 30 Jun 2023 11:32:23 +0300",
"msg_from": "Картышов Иван <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Ivan!\n\nOn Fri, Jun 30, 2023 at 11:32 AM Картышов Иван <i.kartyshov@postgrespro.ru>\nwrote:\n\n> All rebased and tested\n>\n\nThank you for continuing to work on this patch.\n\nI see you're concentrating on the procedural version of this feature. But\nwhen you're calling a procedure within a normal SQL statement, the executor\ngets a snapshot and holds it until the procedure finishes. In the case the\nWAL record conflicts with this snapshot, the query will be canceled.\nAlternatively, when hot_standby_feedback = on, the query and WAL replayer\nwill be in a deadlock (WAL replayer will wait for the query to finish, and\nthe query will wait for WAL replayed). Do you see this issue? Or do you\nthink I'm missing something?\n\nXLogRecPtr\nGetMinWaitedLSN(void)\n{\n    return state->min_lsn.value;\n}\n\nYou definitely shouldn't access directly the fields\ninside pg_atomic_uint64. In this particular case, you should\nuse pg_atomic_read_u64().\n\nAlso, I think there is a race condition.\n\n    /* Check if we already reached the needed LSN */\n    if (cur_lsn >= target_lsn)\n        return true;\n\n    AddWaitedLSN(target_lsn);\n\nImagine, PerformWalRecovery() will replay a record after the check, but\nbefore AddWaitedLSN(). This code will start the waiting cycle even if the\nLSN is already achieved. Surely this cycle will end soon because it\nrechecks LSN value each 100 ms. But anyway, I think there should be\nanother check after AddWaitedLSN() for the sake of consistency.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 4 Oct 2023 13:22:13 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi!\n\nOn Wed, Oct 4, 2023 at 1:22 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> I see you're concentrating on the procedural version of this feature. But when you're calling a procedure within a normal SQL statement, the executor gets a snapshot and holds it until the procedure finishes. In the case the WAL record conflicts with this snapshot, the query will be canceled. Alternatively, when hot_standby_feedback = on, the query and WAL replayer will be in a deadlock (WAL replayer will wait for the query to finish, and the query will wait for WAL replayed). Do you see this issue? Or do you think I'm missing something?\n\nI'm sorry, I actually meant hot_standby_feedback = off\n(hot_standby_feedback = on actually avoids query conflicts). I\nmanaged to reproduce this problem.\n\nmaster: create table test as (select i from generate_series(1,10000) i);\nslave conn1: select pg_wal_replay_pause();\nmaster: delete from test;\nmaster: vacuum test;\nmaster: select pg_current_wal_lsn();\nslave conn2: select pg_wait_lsn('the value from previous query'::pg_lsn, 0);\nslave conn1: select pg_wal_replay_resume();\nslave conn2: ERROR: canceling statement due to conflict with recovery\nDETAIL: User query might have needed to see row versions that must be removed.\n\nNeedless to say, this is very undesirable behavior. This happens\nbecause pg_wait_lsn() has to run within a snapshot as any other\nfunction. This is why I think this functionality should be\nimplemented as a separate statement.\n\nAnother issue I found is that pg_wait_lsn() hangs on the primary. I\nthink an error should be reported instead.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 15 Oct 2023 03:57:18 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Alexander, thank you for your review and for pointing out these issues. According to\nthem I made some fixes and rebased the whole patch.\n\nBut I can't repeat your ERROR. Not with hot_standby_feedback = on nor \nhot_standby_feedback = off.\n\nmaster: create table test as (select i from generate_series(1,10000) i);\nslave conn1: select pg_wal_replay_pause();\nmaster: delete from test;\nmaster: vacuum test;\nmaster: select pg_current_wal_lsn();\nslave conn2: select pg_wait_lsn('the value from previous query'::pg_lsn, 0);\nslave conn1: select pg_wal_replay_resume();\nslave conn2: ERROR: canceling statement due to conflict with recovery\nDETAIL: User query might have needed to see row versions that must be removed.\n\nAlso I use little hack to work out of snapshot similar to SnapshotResetXmin.\n\nPatch rebased and ready for review.",
"msg_date": "Mon, 20 Nov 2023 14:10:43 +0300",
"msg_from": "Картышов Иван <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi,\n\nI used the latest code and found some conflicts while applying. Which PG\nversion did you rebase?\n\nRegards\nBowen Shi",
"msg_date": "Thu, 23 Nov 2023 11:52:22 +0800",
"msg_from": "Bowen Shi <zxwsbg12138@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Thu, Nov 23, 2023 at 5:52 AM Bowen Shi <zxwsbg12138@gmail.com> wrote:\n> I used the latest code and found some conflicts while applying. Which PG version did you rebase?\n\nI've successfully applied the patch on bc3c8db8ae. But I've used\n\"patch -p1 < wait_proc_v6.patch\", git am doesn't work.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 27 Nov 2023 02:05:13 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Nov 20, 2023 at 1:10 PM Картышов Иван\n<i.kartyshov@postgrespro.ru> wrote:\n> Alexander, thank you for your review and pointing this issues. According to\n> them I made some fixes and rebase all patch.\n>\n> But I can`t repeat your ERROR. Not with hot_standby_feedback = on nor\n> hot_standby_feedback = off.\n>\n> master: create table test as (select i from generate_series(1,10000) i);\n> slave conn1: select pg_wal_replay_pause();\n> master: delete from test;\n> master: vacuum test;\n> master: select pg_current_wal_lsn();\n> slave conn2: select pg_wait_lsn('the value from previous query'::pg_lsn, 0);\n> slave conn1: select pg_wal_replay_resume();\n> slave conn2: ERROR: canceling statement due to conflict with recovery\n> DETAIL: User query might have needed to see row versions that must be removed.\n>\n> Also I use little hack to work out of snapshot similar to SnapshotResetXmin.\n>\n> Patch rebased and ready for review.\n\nI've retried my case with v6 and it doesn't fail anymore. But I\nwonder how safe it is to reset xmin within the user-visible function?\nWe have no guarantee that the function is not called inside the\ncomplex query. Then how will the rest of the query work with xmin\nreset? Separate utility statement still looks like more safe option\nfor me.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 27 Nov 2023 02:08:26 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 2023-11-27 03:08, Alexander Korotkov wrote:\n> I've retried my case with v6 and it doesn't fail anymore.  But I\n> wonder how safe it is to reset xmin within the user-visible function?\n> We have no guarantee that the function is not called inside the\n> complex query.  Then how will the rest of the query work with xmin\n> reset?  Separate utility statement still looks like more safe option\n> for me.\n\nAs you mentioned, we can't guarantee that the function is not called\ninside the complex query, but we can return the xmin after waiting.\nBut you are right, and a separate utility statement still looks safer.\nSo I want to bring up the discussion on a separate utility statement \nagain.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com\n\n\n",
"msg_date": "Fri, 08 Dec 2023 12:20:28 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "We should raise discussion on a separate utility statement, or find a\ncase where the procedure version fails.\n\n1) Classic (wait_classic_v3.patch)\nhttps://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n==========\nadvantages: multiple wait events, separate WAIT FOR statement\ndisadvantages: new words in grammar\n\n\n\nWAIT FOR [ANY | ALL] event [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ WAIT FOR [ANY | ALL] event [, ...]]\nevent:\nLSN value\nTIMEOUT number_of_milliseconds\ntimestamp\n\n\n\n2) After style: Kyotaro and Freund (wait_after_within_v2.patch)\nhttps://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n==========\nadvantages: no new words in grammar\ndisadvantages: a little harder to understand\n\n\n\nAFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\nSTART [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n\n\n3) Procedure style: Tom Lane and Kyotaro (wait_proc_v7.patch)\nhttps://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n==========\nadvantages: no new words in grammar, as done for\npg_last_wal_replay_lsn\ndisadvantages: use snapshot xmin trick\nSELECT pg_waitlsn(‘LSN’, timeout);\nSELECT pg_waitlsn_infinite(‘LSN’);\nSELECT pg_waitlsn_no_wait(‘LSN’);\n\n\nRegards\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Fri, 08 Dec 2023 12:46:55 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 11:20 AM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> On 2023-11-27 03:08, Alexander Korotkov wrote:\n> > I've retried my case with v6 and it doesn't fail anymore. But I\n> > wonder how safe it is to reset xmin within the user-visible function?\n> > We have no guarantee that the function is not called inside the\n> > complex query. Then how will the rest of the query work with xmin\n> > reset? Separate utility statement still looks like more safe option\n> > for me.\n>\n> As you mentioned, we can`t guarantee that the function is not called\n> inside the complex query, but we can return the xmin after waiting.\n\nReturning xmin back isn't safe. Especially after potentially long\nwaiting. The snapshot could be no longer valid, because the\ncorresponding tuples could be VACUUM'ed.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 8 Dec 2023 12:27:16 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Dec 8, 2023 at 11:46 AM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> Should rise disscusion on separate utility statement or find\n> case where procedure version is failed.\n>\n> 1) Classic (wait_classic_v3.patch)\n> https://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n> ==========\n> advantages: multiple wait events, separate WAIT FOR statement\n> disadvantages: new words in grammar\n>\n>\n>\n> WAIT FOR [ANY | ALL] event [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ WAIT FOR [ANY | ALL] event [, ...]]\n> event:\n> LSN value\n> TIMEOUT number_of_milliseconds\n> timestamp\n\nNice, but as you stated requires new keywords.\n\n> 2) After style: Kyotaro and Freund (wait_after_within_v2.patch)\n> https://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n> ==========\n> advantages: no new words in grammar\n> disadvantages: a little harder to understand\n>\n>\n>\n> AFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n> START [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n+1 from me\n\n> 3) Procedure style: Tom Lane and Kyotaro (wait_proc_v7.patch)\n> https://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\n> https://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n> ==========\n> advantages: no new words in grammar,like it made in\n> pg_last_wal_replay_lsn\n> disadvantages: use snapshot xmin trick\n> SELECT pg_waitlsn(‘LSN’, timeout);\n> SELECT pg_waitlsn_infinite(‘LSN’);\n> SELECT pg_waitlsn_no_wait(‘LSN’);\n\nNice, because simplicity. But only safe if called within the simple\nquery containing nothing else. Validating this from the function\nkills the simplicity.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 8 Dec 2023 12:39:03 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, 8 Dec 2023 at 15:17, Kartyshov Ivan <i.kartyshov@postgrespro.ru> wrote:\n>\n> Should rise disscusion on separate utility statement or find\n> case where procedure version is failed.\n>\n> 1) Classic (wait_classic_v3.patch)\n> https://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n> ==========\n> advantages: multiple wait events, separate WAIT FOR statement\n> disadvantages: new words in grammar\n>\n>\n>\n> WAIT FOR [ANY | ALL] event [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ WAIT FOR [ANY | ALL] event [, ...]]\n> event:\n> LSN value\n> TIMEOUT number_of_milliseconds\n> timestamp\n>\n>\n>\n> 2) After style: Kyotaro and Freund (wait_after_within_v2.patch)\n> https://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n> ==========\n> advantages: no new words in grammar\n> disadvantages: a little harder to understand\n>\n>\n>\n> AFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n> START [ WORK | TRANSACTION ] [ transaction_mode [, ...] 
]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n>\n>\n>\n> 3) Procedure style: Tom Lane and Kyotaro (wait_proc_v7.patch)\n> https://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\n> https://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n> ==========\n> advantages: no new words in grammar,like it made in\n> pg_last_wal_replay_lsn\n> disadvantages: use snapshot xmin trick\n> SELECT pg_waitlsn(‘LSN’, timeout);\n> SELECT pg_waitlsn_infinite(‘LSN’);\n> SELECT pg_waitlsn_no_wait(‘LSN’);\n\nFew of the tests have aborted at [1] in CFBot with:\n0000058`9c7ff550 00007ff6`5bdff1f4\npostgres!pg_atomic_compare_exchange_u64_impl(\nstruct pg_atomic_uint64 * ptr = 0x00000000`00000008,\nunsigned int64 * expected = 0x00000058`9c7ff5a0,\nunsigned int64 newval = 0)+0x34\n[c:\\cirrus\\src\\include\\port\\atomics\\generic-msvc.h @ 83]\n00000058`9c7ff580 00007ff6`5bdff256 postgres!pg_atomic_read_u64_impl(\nstruct pg_atomic_uint64 * ptr = 0x00000000`00000008)+0x24\n[c:\\cirrus\\src\\include\\port\\atomics\\generic.h @ 323]\n00000058`9c7ff5c0 00007ff6`5bdfef67 postgres!pg_atomic_read_u64(\nstruct pg_atomic_uint64 * ptr = 0x00000000`00000008)+0x46\n[c:\\cirrus\\src\\include\\port\\atomics.h @ 430]\n00000058`9c7ff5f0 00007ff6`5bc98fc3\npostgres!GetMinWaitedLSN(void)+0x17\n[c:\\cirrus\\src\\backend\\commands\\wait.c @ 176]\n00000058`9c7ff620 00007ff6`5bc82fb9\npostgres!PerformWalRecovery(void)+0x4c3\n[c:\\cirrus\\src\\backend\\access\\transam\\xlogrecovery.c @ 1788]\n00000058`9c7ff6e0 00007ff6`5bffc651\npostgres!StartupXLOG(void)+0x989\n[c:\\cirrus\\src\\backend\\access\\transam\\xlog.c @ 5562]\n00000058`9c7ff870 00007ff6`5bfed38b\npostgres!StartupProcessMain(void)+0xd1\n[c:\\cirrus\\src\\backend\\postmaster\\startup.c @ 288]\n00000058`9c7ff8a0 00007ff6`5bff49fd postgres!AuxiliaryProcessMain(\nAuxProcType auxtype = StartupProcess (0n0))+0x1fb\n[c:\\cirrus\\src\\backend\\postmaster\\auxprocess.c @ 139]\n00000058`9c7ff8e0 
00007ff6`5beb7674 postgres!SubPostmasterMain(\n\nMore details are available at [2].\n\n[1] - https://cirrus-ci.com/task/5618308515364864\n[2] - https://api.cirrus-ci.com/v1/artifact/task/5618308515364864/crashlog/crashlog-postgres.exe_0008_2023-12-08_07-48-37-722.txt\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 9 Jan 2024 14:10:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Rebased and ready for review.\nI left only versions (due to irreparable problems)\n\n1) Classic (wait_classic_v4.patch)\nhttps://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n==========\nadvantages: multiple wait events, separate WAIT FOR statement\ndisadvantages: new words in grammar\n\n\n\nWAIT FOR [ANY | ALL] event [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ WAIT FOR [ANY | ALL] event [, ...]]\nevent:\nLSN value\nTIMEOUT number_of_milliseconds\ntimestamp\n\n\n\n2) After style: Kyotaro and Freund (wait_after_within_v3.patch)\nhttps://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n==========\nadvantages: no new words in grammar\ndisadvantages: a little harder to understand\n\n\n\nAFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\nSTART [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Thu, 11 Jan 2024 11:57:44 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Add some fixes and rebase.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Wed, 17 Jan 2024 11:16:35 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Needs Review\" [1], but it seems\nthere was a CFbot test failure last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4221/\n[2] https://cirrus-ci.com/task/5618308515364864\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 16:00:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Wed, Jan 17, 2024 at 1:46 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> Add some fixes and rebase.\n>\nWhile quickly looking into the patch, I understood the idea of what we\nare trying to achieve here and I feel that it is a useful feature.  But\nwhile looking at both the patches I could not quickly differentiate\nbetween these two approaches.  I believe internally at the core both\nare implementing similar wait logic but providing different syntaxes.\nSo if we want to keep both these approaches open for the sake of\ndiscussion then it is better to first create a patch that implements the\ncore approach i.e. the waiting logic and the other common part and\nthen add top-up patches with 2 different approaches that would be easy\nfor review.  I also see in v4 that there is no documentation for the\nsyntax part so it makes it even harder to understand.\n\nI think this thread is implementing a useful feature so my suggestion\nis to add some documentation in v4 and also make it more readable\nw.r.t. what are the clear differences between these two approaches;\nmaybe adding a commit message will also help.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Jan 2024 11:19:57 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Updated, rebased, fixed CI and added documentation.\nWe left two different solutions. Please help me choose the best one.\n\n1) Classic (wait_classic_v6.patch)\nhttps://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n==========\nadvantages: multiple wait events, separate WAIT FOR statement\ndisadvantages: new words in grammar\n\n\n\nWAIT FOR [ANY | ALL] event [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ WAIT FOR [ANY | ALL] event [, ...]]\nevent:\nLSN value\nTIMEOUT number_of_milliseconds\ntimestamp\n\n\n\n2) After style: Kyotaro and Freund (wait_after_within_v5.patch)\nhttps://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n==========\nadvantages: no new words in grammar\ndisadvantages: a little harder to understand, fewer events to wait\n\n\n\nAFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\nSTART [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n    [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n\n\n\nRegards\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Fri, 02 Feb 2024 00:29:28 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Fwd: Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Intro\n==========\nThe main purpose of the feature is to achieve\nread-your-writes-consistency, while using async replica for reads and\nprimary for writes. In that case lsn of last modification is stored\ninside application. We cannot store this lsn inside database, since\nreads are distributed across all replicas and primary.\n\n\nTwo implementations of one feature\n==========\nWe left two different solutions. Help me please to choose the best.\n\n\n1) Classic (wait_classic_v7.patch)\nhttps://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\nSynopsis\n==========\nadvantages: multiple wait events, separate WAIT FOR statement\ndisadvantages: new words in grammar\n\n\n\nWAIT FOR [ANY | ALL] event [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ WAIT FOR [ANY | ALL] event [, ...]]\nevent:\nLSN value\nTIMEOUT number_of_milliseconds\ntimestamp\n\n\n\n2) After style: Kyotaro and Freund (wait_after_within_v6.patch)\nhttps://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\nSynopsis\n==========\nadvantages: no new words in grammar\ndisadvantages: a little harder to understand, fewer events to wait\n\n\n\nAFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\nBEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\nSTART [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n\n\nExamples\n==========\n\nprimary standby\n------- --------\n postgresql.conf\n recovery_min_apply_delay = 10s\n\nCREATE TABLE tbl AS SELECT generate_series(1,10) AS a;\nINSERT INTO tbl VALUES (generate_series(11, 20));\nSELECT pg_current_wal_lsn();\n\n BEGIN WAIT FOR LSN '0/3002AE8';\n SELECT * FROM tbl; // read fresh insertions\n COMMIT;\n\nRebased and ready for review.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Thu, 07 Mar 2024 14:44:32 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 5:14 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> Intro\n> ==========\n> The main purpose of the feature is to achieve\n> read-your-writes-consistency, while using async replica for reads and\n> primary for writes. In that case lsn of last modification is stored\n> inside application. We cannot store this lsn inside database, since\n> reads are distributed across all replicas and primary.\n>\n>\n> Two implementations of one feature\n> ==========\n> We left two different solutions. Help me please to choose the best.\n>\n>\n> 1) Classic (wait_classic_v7.patch)\n> https://www.postgresql.org/message-id/3cc883048264c2e9af022033925ff8db%40postgrespro.ru\n> Synopsis\n> ==========\n> advantages: multiple wait events, separate WAIT FOR statement\n> disadvantages: new words in grammar\n>\n>\n>\n> WAIT FOR [ANY | ALL] event [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ WAIT FOR [ANY | ALL] event [, ...]]\n> event:\n> LSN value\n> TIMEOUT number_of_milliseconds\n> timestamp\n>\n>\n>\n> 2) After style: Kyotaro and Freund (wait_after_within_v6.patch)\n> https://www.postgresql.org/message-id/d3ff2e363af60b345f82396992595a03%40postgrespro.ru\n> Synopsis\n> ==========\n> advantages: no new words in grammar\n> disadvantages: a little harder to understand, fewer events to wait\n>\n>\n>\n> AFTER lsn_event [ WITHIN delay_milliseconds ] [, ...]\n> BEGIN [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n> START [ WORK | TRANSACTION ] [ transaction_mode [, ...] ]\n> [ AFTER lsn_event [ WITHIN delay_milliseconds ]]\n>\n\n+1 for the second one not only because it avoids new words in grammar\nbut also sounds to convey the meaning. 
I think you can explain in docs\nhow this feature can be used basically how will one get the correct\nLSN value to specify.\n\nAs suggested previously also pick one of the approaches (I would\nadvocate the second one) and keep an option for the second one by\nmentioning it in the commit message. I hope to see more\nreviews/discussions or usage like how will users get the LSN value to\nbe specified on the core logic of the feature at this stage. IF\npossible, state, how real-world applications could leverage this\nfeature.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Mar 2024 17:54:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi!\n\nI've decided to put my hands on this patch.\n\nOn Thu, Mar 7, 2024 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> +1 for the second one not only because it avoids new words in grammar\n> but also sounds to convey the meaning. I think you can explain in docs\n> how this feature can be used basically how will one get the correct\n> LSN value to specify.\n\nI picked the second option and left only the AFTER clause for the\nBEGIN statement. I think this should be enough for the beginning.\n\n> As suggested previously also pick one of the approaches (I would\n> advocate the second one) and keep an option for the second one by\n> mentioning it in the commit message. I hope to see more\n> reviews/discussions or usage like how will users get the LSN value to\n> be specified on the core logic of the feature at this stage. IF\n> possible, state, how real-world applications could leverage this\n> feature.\n\nI've added a paragraph to the docs about the usage. After you made\nsome changes on primary, you run pg_current_wal_insert_lsn(). Then\nconnect to replica and run 'BEGIN AFTER lsn' with the just obtained\nLSN. Now you're guaranteed to see the changes made to the primary.\n\nAlso, I've significantly reworked other aspects of the patch. The\nmost significant changes are:\n1) Waiters are now stored in the array sorted by LSN. This saves us\nfrom scanning of wholeper-backend array.\n2) Waiters are removed from the array immediately once their LSNs are\nreplayed. Otherwise, the WAL replayer will keep scanning the shared\nmemory array till waiters wake up.\n3) To clean up after errors, we now call WaitLSNCleanup() on backend\nshmem exit. I think this is preferable over the previous approach to\nremove from the queue before ProcessInterrupts().\n4) There is now condition to recheck if LSN is replayed after adding\nto the shared memory array. 
This should save from the race\nconditions.\n5) I've renamed too generic names for functions and files.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 11 Mar 2024 12:44:53 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Mar 11, 2024 at 12:44 PM Alexander Korotkov\n<aekorotkov@gmail.com> wrote:\n> I've decided to put my hands on this patch.\n>\n> On Thu, Mar 7, 2024 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > +1 for the second one not only because it avoids new words in grammar\n> > but also sounds to convey the meaning. I think you can explain in docs\n> > how this feature can be used basically how will one get the correct\n> > LSN value to specify.\n>\n> I picked the second option and left only the AFTER clause for the\n> BEGIN statement. I think this should be enough for the beginning.\n>\n> > As suggested previously also pick one of the approaches (I would\n> > advocate the second one) and keep an option for the second one by\n> > mentioning it in the commit message. I hope to see more\n> > reviews/discussions or usage like how will users get the LSN value to\n> > be specified on the core logic of the feature at this stage. IF\n> > possible, state, how real-world applications could leverage this\n> > feature.\n>\n> I've added a paragraph to the docs about the usage. After you made\n> some changes on primary, you run pg_current_wal_insert_lsn(). Then\n> connect to replica and run 'BEGIN AFTER lsn' with the just obtained\n> LSN. Now you're guaranteed to see the changes made to the primary.\n>\n> Also, I've significantly reworked other aspects of the patch. The\n> most significant changes are:\n> 1) Waiters are now stored in the array sorted by LSN. This saves us\n> from scanning of wholeper-backend array.\n> 2) Waiters are removed from the array immediately once their LSNs are\n> replayed. Otherwise, the WAL replayer will keep scanning the shared\n> memory array till waiters wake up.\n> 3) To clean up after errors, we now call WaitLSNCleanup() on backend\n> shmem exit. 
I think this is preferable over the previous approach to\n> remove from the queue before ProcessInterrupts().\n> 4) There is now condition to recheck if LSN is replayed after adding\n> to the shared memory array. This should save from the race\n> conditions.\n> 5) I've renamed too generic names for functions and files.\n\nI went through this patch another time, and made some minor\nadjustments. Now it looks good, I'm going to push it if no\nobjections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 15 Mar 2024 16:20:25 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 4:20 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Mon, Mar 11, 2024 at 12:44 PM Alexander Korotkov\n> <aekorotkov@gmail.com> wrote:\n> > I've decided to put my hands on this patch.\n> >\n> > On Thu, Mar 7, 2024 at 2:25 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > +1 for the second one not only because it avoids new words in grammar\n> > > but also sounds to convey the meaning. I think you can explain in docs\n> > > how this feature can be used basically how will one get the correct\n> > > LSN value to specify.\n> >\n> > I picked the second option and left only the AFTER clause for the\n> > BEGIN statement. I think this should be enough for the beginning.\n> >\n> > > As suggested previously also pick one of the approaches (I would\n> > > advocate the second one) and keep an option for the second one by\n> > > mentioning it in the commit message. I hope to see more\n> > > reviews/discussions or usage like how will users get the LSN value to\n> > > be specified on the core logic of the feature at this stage. IF\n> > > possible, state, how real-world applications could leverage this\n> > > feature.\n> >\n> > I've added a paragraph to the docs about the usage. After you made\n> > some changes on primary, you run pg_current_wal_insert_lsn(). Then\n> > connect to replica and run 'BEGIN AFTER lsn' with the just obtained\n> > LSN. Now you're guaranteed to see the changes made to the primary.\n> >\n> > Also, I've significantly reworked other aspects of the patch. The\n> > most significant changes are:\n> > 1) Waiters are now stored in the array sorted by LSN. This saves us\n> > from scanning of wholeper-backend array.\n> > 2) Waiters are removed from the array immediately once their LSNs are\n> > replayed. Otherwise, the WAL replayer will keep scanning the shared\n> > memory array till waiters wake up.\n> > 3) To clean up after errors, we now call WaitLSNCleanup() on backend\n> > shmem exit. 
I think this is preferable over the previous approach to\n> > remove from the queue before ProcessInterrupts().\n> > 4) There is now condition to recheck if LSN is replayed after adding\n> > to the shared memory array. This should save from the race\n> > conditions.\n> > 5) I've renamed too generic names for functions and files.\n>\n> I went through this patch another time, and made some minor\n> adjustments. Now it looks good, I'm going to push it if no\n> objections.\n>\n\nThe revised patch version with cosmetic fixes proposed by Alexander Lakhin.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 15 Mar 2024 21:47:55 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 2024-03-11 13:44, Alexander Korotkov wrote:\n> I picked the second option and left only the AFTER clause for the\n> BEGIN statement. I think this should be enough for the beginning.\n\nThank you for your rework on your patch, here I made some fixes:\n0) autocomplete\n1) less jumps\n2) more description and add cases in doc\n\nI think, it will be useful to have stand-alone statement.\nWhy you would like to see only AFTER clause for the BEGIN statement?\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Fri, 15 Mar 2024 22:59:44 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 2024-03-15 22:59, Kartyshov Ivan wrote:\n> On 2024-03-11 13:44, Alexander Korotkov wrote:\n>> I picked the second option and left only the AFTER clause for the\n>> BEGIN statement. I think this should be enough for the beginning.\n> \n> Thank you for your rework on your patch, here I made some fixes:\n> 0) autocomplete\n> 1) less jumps\n> 2) more description and add cases in doc\n> \n> I think, it will be useful to have stand-alone statement.\n> Why you would like to see only AFTER clause for the BEGIN statement?\n\nRebase and update patch.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Fri, 15 Mar 2024 23:32:23 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 10:32 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> On 2024-03-15 22:59, Kartyshov Ivan wrote:\n> > On 2024-03-11 13:44, Alexander Korotkov wrote:\n> >> I picked the second option and left only the AFTER clause for the\n> >> BEGIN statement. I think this should be enough for the beginning.\n> >\n> > Thank you for your rework on your patch, here I made some fixes:\n> > 0) autocomplete\n> > 1) less jumps\n> > 2) more description and add cases in doc\n\nThank you!\n\n> > I think, it will be useful to have stand-alone statement.\n> > Why you would like to see only AFTER clause for the BEGIN statement?\n\nYes, stand-alone statements might be also useful. But I think that\nthe best way for this feature to get into the core would be to commit\nthe minimal version first. The BEGIN clause has minimal invasiveness\nfor the syntax and I believe covers most typical use-cases. Once we\nfigure out it's OK and have positive feedback from users, we can do\nmore enchantments incrementally.\n\n> Rebase and update patch.\n\nCool, I was just about to ask you to do this.\n\n ------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 15 Mar 2024 22:34:24 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Mar 16, 2024 at 2:04 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> > Rebase and update patch.\n\nThanks for working on this. I took a quick look at v11 patch. Here are\nsome comments:\n\n1.\n+#include \"utils/timestamp.h\"\n+#include \"executor/spi.h\"\n+#include \"utils/fmgrprotos.h\"\n\nPlease place executor/spi.h in the alphabetical order. Also, look at\nall other header files and place them in the order.\n\n2. It seems like pgindent is not happy with\nsrc/backend/access/transam/xlogrecovery.c and\nsrc/backend/commands/waitlsn.c. Please run it to keep BF member koel\nhappy post commit.\n\n3. This patch changes, SQL explicit transaction statement syntax, is\nit (this deviation) okay from SQL standard perspective?\n\n4. I think some more checks are needed for better input validations.\n\n4.1 With invalid LSN succeeds, shouldn't it error out? Or at least,\nadd a fast path/quick exit to WaitForLSN()?\nBEGIN AFTER '0/0';\n\n4.2 With an unreasonably high future LSN, BEGIN command waits\nunboundedly, shouldn't we check if the specified LSN is more than\npg_last_wal_receive_lsn() error out?\nBEGIN AFTER '0/FFFFFFFF';\nSELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\nBEGIN AFTER :'future_receive_lsn';\n\n4.3 With an unreasonably high wait time, BEGIN command waits\nunboundedly, shouldn't we restrict the wait time to some max value,\nsay a day or so?\nSELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\nBEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n\n5.\n+#include <float.h>\n+#include <math.h>\n+#include \"postgres.h\"\n+#include \"pgstat.h\"\n\npostgres.h must be included at the first, and then the system header\nfiles, and then all postgres header files, just like below. 
See a very\nrecent commit https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=97d85be365443eb4bf84373a7468624762382059.\n\n+#include \"postgres.h\"\n\n+#include <float.h>\n+#include <math.h>\n\n+#include \"access/transam.h\"\n+#include \"access/xact.h\"\n+#include \"access/xlog.h\"\n\n6.\n+/* Set all latches in shared memory to signal that new LSN has been replayed */\n+void\n+WaitLSNSetLatches(XLogRecPtr curLSN)\n+{\n\nI see this patch is waking up all the waiters in the recovery path\nafter applying every WAL record, which IMO is a hot path. Is the\nimpact of this change on recovery measured, perhaps using\nhttps://github.com/macdice/redo-bench or similar tools?\n\n7. In continuation to comment #6, why not use Conditional Variables\ninstead of proc latches to sleep and wait for all the waiters in\nWaitLSNSetLatches?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 16 Mar 2024 12:32:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 15, 2024 at 7:50 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> I went through this patch another time, and made some minor\n> adjustments. Now it looks good, I'm going to push it if no\n> objections.\n>\n\nI have a question related to usability, if the regular reads (say a\nSelect statement or reads via function/procedure) need a similar\nguarantee to see the changes on standby then do they also always need\nto first do something like \"BEGIN AFTER '0/3F0FF791' WITHIN 1000;\"? Or\nin other words, shouldn't we think of something for implicit\ntransactions?\n\nIn general, it seems this patch has been stuck for a long time on the\ndecision to choose an appropriate UI (syntax), and we thought of\nmoving it further so that the other parts of the patch can be\nreviewed/discussed. So, I feel before pushing this we should see\ncomments from a few (at least two) other senior members who earlier\nshared their opinion on the syntax. I know we don't have much time\nleft but OTOH pushing such a change (where we didn't have a consensus\non syntax) without much discussion at this point of time could lead to\ndiscussions after commit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 16 Mar 2024 16:26:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Mar 16, 2024 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 15, 2024 at 7:50 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > I went through this patch another time, and made some minor\n> > adjustments. Now it looks good, I'm going to push it if no\n> > objections.\n> >\n>\n> I have a question related to usability, if the regular reads (say a\n> Select statement or reads via function/procedure) need a similar\n> guarantee to see the changes on standby then do they also always need\n> to first do something like \"BEGIN AFTER '0/3F0FF791' WITHIN 1000;\"? Or\n> in other words, shouldn't we think of something for implicit\n> transactions?\n\n+1 to have support for implicit txns. A strawman solution I can think\nof is to let primary send its current insert LSN to the standby every\ntime it sends a bunch of WAL, and the standby waits for that LSN to be\nreplayed on it at the start of every implicit txn automatically.\n\nThe new BEGIN syntax requires application code changes. This led me to\nthink how one can achieve read-after-write consistency today in a\nprimary - standby set up. All the logic of this patch, that is,\nwaiting for the standby to pass a given primary LSN needs to be done\nin the application code (or in proxy or in load balancer?). I believe\nthere might be someone doing this already, it's good to hear from\nthem.\n\n> In general, it seems this patch has been stuck for a long time on the\n> decision to choose an appropriate UI (syntax), and we thought of\n> moving it further so that the other parts of the patch can be\n> reviewed/discussed. So, I feel before pushing this we should see\n> comments from a few (at least two) other senior members who earlier\n> shared their opinion on the syntax. 
I know we don't have much time\n> left but OTOH pushing such a change (where we didn't have a consensus\n> on syntax) without much discussion at this point of time could lead to\n> discussions after commit.\n\n+1 to gain consensus first on the syntax changes. With this, we might\nbe violating the SQL standard for explicit txn commands (I stand for\ncorrection about the SQL standard though).\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 16 Mar 2024 20:35:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi Amit,\nHi Bharath,\n\nOn Sat, Mar 16, 2024 at 5:05 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sat, Mar 16, 2024 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > In general, it seems this patch has been stuck for a long time on the\n> > decision to choose an appropriate UI (syntax), and we thought of\n> > moving it further so that the other parts of the patch can be\n> > reviewed/discussed. So, I feel before pushing this we should see\n> > comments from a few (at least two) other senior members who earlier\n> > shared their opinion on the syntax. I know we don't have much time\n> > left but OTOH pushing such a change (where we didn't have a consensus\n> > on syntax) without much discussion at this point of time could lead to\n> > discussions after commit.\n>\n> +1 to gain consensus first on the syntax changes. With this, we might\n> be violating the SQL standard for explicit txn commands (I stand for\n> correction about the SQL standard though).\n\nThank you for your feedback. Generally, I agree it's correct to get\nconsensus on syntax first. And yes, this patch has been here since\n2016. We didn't get consensus for syntax for 8 years. Frankly\nspeaking, I don't see a reason why this shouldn't take another 8\nyears. At the same time the ability to wait on standby given LSN is\nreplayed seems like pretty basic and simple functionality. Thus, it's\nquite frustrating it already took that long and still unclear when/how\nthis could be finished.\n\nMy current attempt was to commit minimal implementation as less\ninvasive as possible. A new clause for BEGIN doesn't require\nadditional keywords and doesn't introduce additional statements. But\nyes, this is still a new qual. 
And, yes, Amit you're right that even\nif I had committed that, there was still a high risk of further\ndebates and revert.\n\nGiven my specsis about agreement over syntax, I'd like to check\nanother time if we could go without new syntax at all. There was an\nattempt to implement waiting for lsn as a function. But function\nholds a snapshot, which could prevent WAL records from being replayed.\nReleasing a snapshot could break the parent query. But now we have\nprocedures, which need a dedicated statement for the call and can even\ncontrol transactions. Could we implement a waitlsn in a procedure\nthat:\n\n1. First, check that it was called with non-atomic context (that is,\nit's not called within a transaction). Trigger error if called with\natomic context.\n2. Release a snapshot to be able to wait without risk of WAL replay\nstuck. Procedure is still called within the snapshot. It's a bit of\na hack to release a snapshot, but Vacuum statements already do so.\n\nAmit, Bharath, what do you think about this approach? Is this a way to go?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 17 Mar 2024 16:09:57 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sun, Mar 17, 2024 at 7:40 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Sat, Mar 16, 2024 at 5:05 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Sat, Mar 16, 2024 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > In general, it seems this patch has been stuck for a long time on the\n> > > decision to choose an appropriate UI (syntax), and we thought of\n> > > moving it further so that the other parts of the patch can be\n> > > reviewed/discussed. So, I feel before pushing this we should see\n> > > comments from a few (at least two) other senior members who earlier\n> > > shared their opinion on the syntax. I know we don't have much time\n> > > left but OTOH pushing such a change (where we didn't have a consensus\n> > > on syntax) without much discussion at this point of time could lead to\n> > > discussions after commit.\n> >\n> > +1 to gain consensus first on the syntax changes. With this, we might\n> > be violating the SQL standard for explicit txn commands (I stand for\n> > correction about the SQL standard though).\n>\n> Thank you for your feedback. Generally, I agree it's correct to get\n> consensus on syntax first. And yes, this patch has been here since\n> 2016. We didn't get consensus for syntax for 8 years. Frankly\n> speaking, I don't see a reason why this shouldn't take another 8\n> years. At the same time the ability to wait on standby given LSN is\n> replayed seems like pretty basic and simple functionality. Thus, it's\n> quite frustrating it already took that long and still unclear when/how\n> this could be finished.\n>\n> My current attempt was to commit minimal implementation as less\n> invasive as possible. A new clause for BEGIN doesn't require\n> additional keywords and doesn't introduce additional statements. But\n> yes, this is still a new qual. 
And, yes, Amit you're right that even\n> if I had committed that, there was still a high risk of further\n> debates and revert.\n>\n> Given my specsis about agreement over syntax, I'd like to check\n> another time if we could go without new syntax at all. There was an\n> attempt to implement waiting for lsn as a function. But function\n> holds a snapshot, which could prevent WAL records from being replayed.\n> Releasing a snapshot could break the parent query. But now we have\n> procedures, which need a dedicated statement for the call and can even\n> control transactions. Could we implement a waitlsn in a procedure\n> that:\n>\n> 1. First, check that it was called with non-atomic context (that is,\n> it's not called within a transaction). Trigger error if called with\n> atomic context.\n> 2. Release a snapshot to be able to wait without risk of WAL replay\n> stuck. Procedure is still called within the snapshot. It's a bit of\n> a hack to release a snapshot, but Vacuum statements already do so.\n>\n\nCan you please provide a bit more details with some example what is\nthe existing problem with functions and how using procedures will\nresolve it? How will this this address the implicit transaction case\nor do we have any other workaround for those cases?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Mar 2024 08:47:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 5:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Sun, Mar 17, 2024 at 7:40 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n> > On Sat, Mar 16, 2024 at 5:05 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > On Sat, Mar 16, 2024 at 4:26 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > > > In general, it seems this patch has been stuck for a long time on\nthe\n> > > > decision to choose an appropriate UI (syntax), and we thought of\n> > > > moving it further so that the other parts of the patch can be\n> > > > reviewed/discussed. So, I feel before pushing this we should see\n> > > > comments from a few (at least two) other senior members who earlier\n> > > > shared their opinion on the syntax. I know we don't have much time\n> > > > left but OTOH pushing such a change (where we didn't have a\nconsensus\n> > > > on syntax) without much discussion at this point of time could lead\nto\n> > > > discussions after commit.\n> > >\n> > > +1 to gain consensus first on the syntax changes. With this, we might\n> > > be violating the SQL standard for explicit txn commands (I stand for\n> > > correction about the SQL standard though).\n> >\n> > Thank you for your feedback. Generally, I agree it's correct to get\n> > consensus on syntax first. And yes, this patch has been here since\n> > 2016. We didn't get consensus for syntax for 8 years. Frankly\n> > speaking, I don't see a reason why this shouldn't take another 8\n> > years. At the same time the ability to wait on standby given LSN is\n> > replayed seems like pretty basic and simple functionality. Thus, it's\n> > quite frustrating it already took that long and still unclear when/how\n> > this could be finished.\n> >\n> > My current attempt was to commit minimal implementation as less\n> > invasive as possible. A new clause for BEGIN doesn't require\n> > additional keywords and doesn't introduce additional statements. 
But\n> > yes, this is still a new qual. And, yes, Amit you're right that even\n> > if I had committed that, there was still a high risk of further\n> > debates and revert.\n> >\n> > Given my specsis about agreement over syntax, I'd like to check\n> > another time if we could go without new syntax at all. There was an\n> > attempt to implement waiting for lsn as a function. But function\n> > holds a snapshot, which could prevent WAL records from being replayed.\n> > Releasing a snapshot could break the parent query. But now we have\n> > procedures, which need a dedicated statement for the call and can even\n> > control transactions. Could we implement a waitlsn in a procedure\n> > that:\n> >\n> > 1. First, check that it was called with non-atomic context (that is,\n> > it's not called within a transaction). Trigger error if called with\n> > atomic context.\n> > 2. Release a snapshot to be able to wait without risk of WAL replay\n> > stuck. Procedure is still called within the snapshot. It's a bit of\n> > a hack to release a snapshot, but Vacuum statements already do so.\n> >\n>\n> Can you please provide a bit more details with some example what is\n> the existing problem with functions and how using procedures will\n> resolve it? How will this this address the implicit transaction case\n> or do we have any other workaround for those cases?\n\nPlease check [1] and [2] for the explanation of the problem with functions.\n\nAlso, please find a draft patch implementing the procedure. The issue with\nthe snapshot is addressed with the following lines.\n\nWe first ensure we're in a non-atomic context, then pop an active snapshot\n(tricky, but ExecuteVacuum() does the same). 
Then we should have no active\nsnapshot and it's safe to wait for lsn replay.\n\n if (context->atomic)\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n errmsg(\"pg_wait_lsn() must be only called in non-atomic\ncontext\")));\n\n if (ActiveSnapshotSet())\n PopActiveSnapshot();\n Assert(!ActiveSnapshotSet());\n\nThe function call could be added either before the BEGIN statement or\nbefore the implicit transaction.\n\nCALL pg_wait_lsn('my_lsn', my_timeout); BEGIN;\nCALL pg_wait_lsn('my_lsn', my_timeout); SELECT ...;\n\nLinks\n1.\nhttps://www.postgresql.org/message-id/CAPpHfduBSN8j5j5Ynn5x%3DThD%3D8ypNd53D608VXGweBsPzxPvqA%40mail.gmail.com\n2.\nhttps://www.postgresql.org/message-id/CAPpHfdtiGgn0iS1KbW2HTam-1%2BoK%2BvhXZDAcnX9hKaA7Oe%3DF-A%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 18 Mar 2024 11:54:11 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 3:24 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Mon, Mar 18, 2024 at 5:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > 1. First, check that it was called with non-atomic context (that is,\n> > > it's not called within a transaction). Trigger error if called with\n> > > atomic context.\n> > > 2. Release a snapshot to be able to wait without risk of WAL replay\n> > > stuck. Procedure is still called within the snapshot. It's a bit of\n> > > a hack to release a snapshot, but Vacuum statements already do so.\n> > >\n> >\n> > Can you please provide a bit more details with some example what is\n> > the existing problem with functions and how using procedures will\n> > resolve it? How will this this address the implicit transaction case\n> > or do we have any other workaround for those cases?\n>\n> Please check [1] and [2] for the explanation of the problem with functions.\n>\n> Also, please find a draft patch implementing the procedure. The issue with the snapshot is addressed with the following lines.\n>\n> We first ensure we're in a non-atomic context, then pop an active snapshot (tricky, but ExecuteVacuum() does the same). Then we should have no active snapshot and it's safe to wait for lsn replay.\n>\n> if (context->atomic)\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> errmsg(\"pg_wait_lsn() must be only called in non-atomic context\")));\n>\n> if (ActiveSnapshotSet())\n> PopActiveSnapshot();\n> Assert(!ActiveSnapshotSet());\n>\n> The function call could be added either before the BEGIN statement or before the implicit transaction.\n>\n> CALL pg_wait_lsn('my_lsn', my_timeout); BEGIN;\n> CALL pg_wait_lsn('my_lsn', my_timeout); SELECT ...;\n>\n\nI haven't thought in detail about whether there are any other problems\nwith this idea but sounds like it should solve the problems you shared\nwith a function call approach. BTW, if the application has to anyway\nknow the LSN till where replica needs to wait, why can't they simply\nmonitor the pg_last_wal_replay_lsn() value?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Mar 2024 17:21:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Mar 19, 2024 at 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Mar 18, 2024 at 3:24 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > On Mon, Mar 18, 2024 at 5:17 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > 1. First, check that it was called with non-atomic context (that is,\n> > > > it's not called within a transaction). Trigger error if called with\n> > > > atomic context.\n> > > > 2. Release a snapshot to be able to wait without risk of WAL replay\n> > > > stuck. Procedure is still called within the snapshot. It's a bit of\n> > > > a hack to release a snapshot, but Vacuum statements already do so.\n> > > >\n> > >\n> > > Can you please provide a bit more details with some example what is\n> > > the existing problem with functions and how using procedures will\n> > > resolve it? How will this this address the implicit transaction case\n> > > or do we have any other workaround for those cases?\n> >\n> > Please check [1] and [2] for the explanation of the problem with functions.\n> >\n> > Also, please find a draft patch implementing the procedure. The issue with the snapshot is addressed with the following lines.\n> >\n> > We first ensure we're in a non-atomic context, then pop an active snapshot (tricky, but ExecuteVacuum() does the same). Then we should have no active snapshot and it's safe to wait for lsn replay.\n> >\n> > if (context->atomic)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > errmsg(\"pg_wait_lsn() must be only called in non-atomic context\")));\n> >\n> > if (ActiveSnapshotSet())\n> > PopActiveSnapshot();\n> > Assert(!ActiveSnapshotSet());\n> >\n> > The function call could be added either before the BEGIN statement or before the implicit transaction.\n> >\n> > CALL pg_wait_lsn('my_lsn', my_timeout); BEGIN;\n> > CALL pg_wait_lsn('my_lsn', my_timeout); SELECT ...;\n> >\n>\n> I haven't thought in detail about whether there are any other problems\n> with this idea but sounds like it should solve the problems you shared\n> with a function call approach. BTW, if the application has to anyway\n> know the LSN till where replica needs to wait, why can't they simply\n> monitor the pg_last_wal_replay_lsn() value?\n\nAmit, thank you for your feedback.\n\nYes, the application can monitor pg_last_wal_replay_lsn() value,\nthat's our state of the art solution. But that's rather inconvenient\nand takes extra latency and network traffic. And it can't be wrapped\ninto a server-side function in procedural language for the reasons we\ncan't implement it as a built-in function.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 19 Mar 2024 13:58:36 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Intro\n==========\nThe main purpose of the feature is to achieve\nread-your-writes-consistency, while using async replica for reads and\nprimary for writes. In that case lsn of last modification is stored\ninside application. We cannot store this lsn inside database, since\nreads are distributed across all replicas and primary.\n\n\nProcedure style implementation\n==========\nhttps://www.postgresql.org/message-id/27171.1586439221%40sss.pgh.pa.us\nhttps://www.postgresql.org/message-id/20210121.173009.235021120161403875.horikyota.ntt%40gmail.com\n\nCALL pg_wait_lsn(‘LSN’, timeout);\n\nExamples\n==========\n\nprimary standby\n------- --------\n postgresql.conf\n recovery_min_apply_delay = 10s\n\n\nCREATE TABLE tbl AS SELECT generate_series(1,10) AS a;\nINSERT INTO tbl VALUES (generate_series(11, 20));\nSELECT pg_current_wal_lsn();\n\n\n CALL pg_wait_lsn('0/3002AE8', 10000);\n BEGIN;\n SELECT * FROM tbl; // read fresh insertions\n COMMIT;\n\nFixed and ready to review.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Tue, 19 Mar 2024 20:38:55 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Bharath Rupireddy, thank you for you review.\nBut here is some points.\n\nOn 2024-03-16 10:02, Bharath Rupireddy wrote:\n> 4.1 With invalid LSN succeeds, shouldn't it error out? Or at least,\n> add a fast path/quick exit to WaitForLSN()?\n> BEGIN AFTER '0/0';\n\nIn postgresql '0/0' is Valid pg_lsn, but it is always reached.\n\n> 4.2 With an unreasonably high future LSN, BEGIN command waits\n> unboundedly, shouldn't we check if the specified LSN is more than\n> pg_last_wal_receive_lsn() error out?\n> BEGIN AFTER '0/FFFFFFFF';\n> SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n> BEGIN AFTER :'future_receive_lsn';\n\nThis case will give ERROR cause '0/FFFFFFFF' + 1 is invalid pg_lsn\n\n> 4.3 With an unreasonably high wait time, BEGIN command waits\n> unboundedly, shouldn't we restrict the wait time to some max value,\n> say a day or so?\n> SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n> BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n\nGood idea, I put it 1 day. But this limit we should to discuss.\n\n> 6.\n> +/* Set all latches in shared memory to signal that new LSN has been \n> replayed */\n> +void\n> +WaitLSNSetLatches(XLogRecPtr curLSN)\n> +{\n> \n> I see this patch is waking up all the waiters in the recovery path\n> after applying every WAL record, which IMO is a hot path. Is the\n> impact of this change on recovery measured, perhaps using\n> https://github.com/macdice/redo-bench or similar tools?\n> \n> 7. In continuation to comment #6, why not use Conditional Variables\n> instead of proc latches to sleep and wait for all the waiters in\n> WaitLSNSetLatches?\n\nWaiters are stored in the array sorted by LSN. This help us to wake\nonly PIDs with replayed LSN. This saves us from scanning of whole\narray. So it`s not so hot path.\n\nAdd some fixes\n\n1) make waiting timeont more simple (as pg_terminate_backend())\n2) removed the 1 minute wait because INTERRUPTS don’t arrive for a\nlong time, changed it to 0.5 seconds\n3) add more tests\n4) added and expanded sections in the documentation\n5) add default variant of timeout\npg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)\nexample: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)\n6) now big timeout will be restricted to 1 day (86400000ms)\nCALL pg_wait_lsn('0/34FB5A1',10000000000);\nWARNING: Timeout for pg_wait_lsn() restricted to 1 day\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Wed, 20 Mar 2024 01:34:51 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan <i.kartyshov@postgrespro.ru>\nwrote:\n> > 4.2 With an unreasonably high future LSN, BEGIN command waits\n> > unboundedly, shouldn't we check if the specified LSN is more than\n> > pg_last_wal_receive_lsn() error out?\n\nI think limiting wait lsn by current received lsn would destroy the whole\nvalue of this feature. The value is to wait till given LSN is replayed,\nwhether it's already received or not.\n\n> > BEGIN AFTER '0/FFFFFFFF';\n> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n> > BEGIN AFTER :'future_receive_lsn';\n>\n> This case will give ERROR cause '0/FFFFFFFF' + 1 is invalid pg_lsn\n\nFWIW,\n\n# SELECT '0/FFFFFFFF'::pg_lsn + 1;\n ?column?\n----------\n 1/0\n(1 row)\n\nBut I don't see a problem here. On the replica, it's out of our control to\ncheck which lsn is good and which is not. We can't check whether the lsn,\nwhich is in future for the replica, is already issued by primary.\n\nFor the case of wrong lsn, which could cause potentially infinite wait,\nthere is the timeout and the manual query cancel.\n\n> > 4.3 With an unreasonably high wait time, BEGIN command waits\n> > unboundedly, shouldn't we restrict the wait time to some max value,\n> > say a day or so?\n> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n>\n> Good idea, I put it 1 day. But this limit we should to discuss.\n\nDo you think that specifying timeout in milliseconds is suitable? I would\nprefer to switch to seconds (with ability to specify fraction of second).\nThis was expressed before by Alexander Lakhin.\n\n> > 6.\n> > +/* Set all latches in shared memory to signal that new LSN has been\n> > replayed */\n> > +void\n> > +WaitLSNSetLatches(XLogRecPtr curLSN)\n> > +{\n> >\n> > I see this patch is waking up all the waiters in the recovery path\n> > after applying every WAL record, which IMO is a hot path. Is the\n> > impact of this change on recovery measured, perhaps using\n> > https://github.com/macdice/redo-bench or similar tools?\n\nIvan, could you do this?\n\n> > 7. In continuation to comment #6, why not use Conditional Variables\n> > instead of proc latches to sleep and wait for all the waiters in\n> > WaitLSNSetLatches?\n>\n> Waiters are stored in the array sorted by LSN. This help us to wake\n> only PIDs with replayed LSN. This saves us from scanning of whole\n> array. So it`s not so hot path.\n\n+1\nThis saves us from ConditionVariableBroadcast() every time we replay the\nWAL record.\n\n> Add some fixes\n>\n> 1) make waiting timeont more simple (as pg_terminate_backend())\n> 2) removed the 1 minute wait because INTERRUPTS don’t arrive for a\n> long time, changed it to 0.5 seconds\n\nI don't see this change in the patch. Normally if a process gets a signal,\nthat causes WaitLatch() to exit immediately. It also exists immediately on\nquery cancel. IIRC, this 1 minute timeout is needed to handle some extreme\ncases when an interrupt is missing. Other places have it equal to 1\nminute. I don't see why we should have it different.\n\n> 3) add more tests\n> 4) added and expanded sections in the documentation\n\nI don't see this in the patch. I see only a short description\nin func.sgml, which is definitely not sufficient. We need at least\neverything we have in the docs before to be adjusted with the current\napproach of procedure.\n\n> 5) add default variant of timeout\n> pg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)\n> example: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)\n\nDoes zero here mean no timeout? I think this should be documented. Also,\nI would prefer to see the timeout by default. Probably one minute would be\ngood for default.\n\n> 6) now big timeout will be restricted to 1 day (86400000ms)\n> CALL pg_wait_lsn('0/34FB5A1',10000000000);\n> WARNING: Timeout for pg_wait_lsn() restricted to 1 day\n\nI don't think we need to mention individuals, who made proposals, in the\nsource code comments. Otherwise, our source code would be a crazy mess of\nnames. Also, if this is the restriction, it has to be an error. And it\nshould be a proper full ereport().\n\n------\nRegards,\nAlexander Korotkov\n\nOn Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan <i.kartyshov@postgrespro.ru> wrote:> > 4.2 With an unreasonably high future LSN, BEGIN command waits> > unboundedly, shouldn't we check if the specified LSN is more than> > pg_last_wal_receive_lsn() error out?I think limiting wait lsn by current received lsn would destroy the whole value of this feature. The value is to wait till given LSN is replayed, whether it's already received or not.> > BEGIN AFTER '0/FFFFFFFF';> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset> > BEGIN AFTER :'future_receive_lsn';>> This case will give ERROR cause '0/FFFFFFFF' + 1 is invalid pg_lsnFWIW,# SELECT '0/FFFFFFFF'::pg_lsn + 1; ?column?---------- 1/0(1 row)But I don't see a problem here. On the replica, it's out of our control to check which lsn is good and which is not. We can't check whether the lsn, which is in future for the replica, is already issued by primary.For the case of wrong lsn, which could cause potentially infinite wait, there is the timeout and the manual query cancel.> > 4.3 With an unreasonably high wait time, BEGIN command waits> > unboundedly, shouldn't we restrict the wait time to some max value,> > say a day or so?> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;>> Good idea, I put it 1 day. But this limit we should to discuss.Do you think that specifying timeout in milliseconds is suitable? I would prefer to switch to seconds (with ability to specify fraction of second). This was expressed before by Alexander Lakhin.> > 6.> > +/* Set all latches in shared memory to signal that new LSN has been> > replayed */> > +void> > +WaitLSNSetLatches(XLogRecPtr curLSN)> > +{> >> > I see this patch is waking up all the waiters in the recovery path> > after applying every WAL record, which IMO is a hot path. Is the> > impact of this change on recovery measured, perhaps using> > https://github.com/macdice/redo-bench or similar tools?Ivan, could you do this?> > 7. In continuation to comment #6, why not use Conditional Variables> > instead of proc latches to sleep and wait for all the waiters in> > WaitLSNSetLatches?>> Waiters are stored in the array sorted by LSN. This help us to wake> only PIDs with replayed LSN. This saves us from scanning of whole> array. So it`s not so hot path.+1This saves us from ConditionVariableBroadcast() every time we replay the WAL record.> Add some fixes>> 1) make waiting timeont more simple (as pg_terminate_backend())> 2) removed the 1 minute wait because INTERRUPTS don’t arrive for a> long time, changed it to 0.5 secondsI don't see this change in the patch. Normally if a process gets a signal, that causes WaitLatch() to exit immediately. It also exists immediately on query cancel. IIRC, this 1 minute timeout is needed to handle some extreme cases when an interrupt is missing. Other places have it equal to 1 minute. I don't see why we should have it different.> 3) add more tests> 4) added and expanded sections in the documentationI don't see this in the patch. I see only a short description in func.sgml, which is definitely not sufficient. We need at least everything we have in the docs before to be adjusted with the current approach of procedure.> 5) add default variant of timeout> pg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)> example: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)Does zero here mean no timeout? I think this should be documented. Also, I would prefer to see the timeout by default. Probably one minute would be good for default.> 6) now big timeout will be restricted to 1 day (86400000ms)> CALL pg_wait_lsn('0/34FB5A1',10000000000);> WARNING: Timeout for pg_wait_lsn() restricted to 1 dayI don't think we need to mention individuals, who made proposals, in the source code comments. Otherwise, our source code would be a crazy mess of names. Also, if this is the restriction, it has to be an error. And it should be a proper full ereport().------Regards,Alexander Korotkov",
"msg_date": "Wed, 20 Mar 2024 11:11:13 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 19.03.24 18:38, Kartyshov Ivan wrote:\n> CALL pg_wait_lsn('0/3002AE8', 10000);\n> BEGIN;\n> SELECT * FROM tbl; // read fresh insertions\n> COMMIT;\n\nI'm not endorsing this or any other approach, but I think the timeout \nparameter should be of type interval, not an integer with a unit that is \nhidden in the documentation.\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 23:50:09 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 17.03.24 15:09, Alexander Korotkov wrote:\n> My current attempt was to commit minimal implementation as less\n> invasive as possible. A new clause for BEGIN doesn't require\n> additional keywords and doesn't introduce additional statements. But\n> yes, this is still a new qual. And, yes, Amit you're right that even\n> if I had committed that, there was still a high risk of further\n> debates and revert.\n\nI had written in [0] about my questions related to using this with \nconnection poolers. I don't think this was addressed at all. I haven't \nseen any discussion about how to make this kind of facility usable in a \nfull system. You have to manually query and send LSNs; that seems \npretty cumbersome. Sure, this is part of something that could be \nuseful, but how would an actual user with actual application code get to \nuse this?\n\n[0]: \nhttps://www.postgresql.org/message-id/8b5b172f-0ae7-d644-8358-e2851dded43b%40enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 21 Mar 2024 23:58:06 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Thank you for your feedback.\n\nOn 2024-03-20 12:11, Alexander Korotkov wrote:\n> On Wed, Mar 20, 2024 at 12:34 AM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n>> > 4.2 With an unreasonably high future LSN, BEGIN command waits\n>> > unboundedly, shouldn't we check if the specified LSN is more than\n>> > pg_last_wal_receive_lsn() error out?\n> \n> I think limiting wait lsn by current received lsn would destroy the\n> whole value of this feature. The value is to wait till given LSN is\n> replayed, whether it's already received or not.\n\nOk sounds reasonable, I`ll rollback the changes.\n\n> But I don't see a problem here. On the replica, it's out of our\n> control to check which lsn is good and which is not. We can't check\n> whether the lsn, which is in future for the replica, is already issued\n> by primary.\n> \n> For the case of wrong lsn, which could cause potentially infinite\n> wait, there is the timeout and the manual query cancel.\n\nFully agree with this take.\n\n>> > 4.3 With an unreasonably high wait time, BEGIN command waits\n>> > unboundedly, shouldn't we restrict the wait time to some max\n> value,\n>> > say a day or so?\n>> > SELECT pg_last_wal_receive_lsn() + 1 AS future_receive_lsn \\gset\n>> > BEGIN AFTER :'future_receive_lsn' WITHIN 100000;\n>> \n>> Good idea, I put it 1 day. But this limit we should to discuss.\n> \n> Do you think that specifying timeout in milliseconds is suitable? I\n> would prefer to switch to seconds (with ability to specify fraction of\n> second). This was expressed before by Alexander Lakhin.\n\nIt sounds like an interesting idea. Please review the result.\n\n>> > https://github.com/macdice/redo-bench or similar tools?\n> \n> Ivan, could you do this?\n\nYes, test redo-bench/crash-recovery.sh\nThis patch on master\n91.327, 1.973\n105.907, 3.338\n98.412, 4.579\n95.818, 4.19\n\nREL_13-STABLE\n116.645, 3.005\n113.212, 2.568\n117.644, 3.183\n111.411, 2.782\n\nmaster\n124.712, 2.047\n117.012, 1.736\n116.328, 2.035\n115.662, 1.797\n\nStrange behavior, patched version is faster then REL_13-STABLE and \nmaster.\n\n> I don't see this change in the patch. Normally if a process gets a\n> signal, that causes WaitLatch() to exit immediately. It also exists\n> immediately on query cancel. IIRC, this 1 minute timeout is needed to\n> handle some extreme cases when an interrupt is missing. Other places\n> have it equal to 1 minute. I don't see why we should have it\n> different.\n\nOk, I`ll rollback my changes.\n\n>> 4) added and expanded sections in the documentation\n> \n> I don't see this in the patch. I see only a short description in\n> func.sgml, which is definitely not sufficient. We need at least\n> everything we have in the docs before to be adjusted with the current\n> approach of procedure.\n\nI didn't find another section where to add the description of \npg_wait_lsn().\nSo I extend description on the bottom of the table.\n\n>> 5) add default variant of timeout\n>> pg_wait_lsn(trg_lsn pg_lsn, delay int8 DEFAULT 0)\n>> example: pg_wait_lsn('0/31B1B60') equal pg_wait_lsn('0/31B1B60', 0)\n> \n> Does zero here mean no timeout? I think this should be documented.\n> Also, I would prefer to see the timeout by default. Probably one\n> minute would be good for default.\n\nLets discuss this point. Loop in function WaitForLSN is made that way,\nif we choose delay=0, only then we can wait infinitely to wait LSN\nwithout timeout. So default must be 0.\n\nPlease take one more look on the patch.\n\nPS sorry, the strange BUG throw my mails out of thread.\nhttps://www.postgresql.org/message-id/flat/f2ff071aa9141405bb8efee67558a058%40postgrespro.ru\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Fri, 22 Mar 2024 22:42:52 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 4:28 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> I had written in [0] about my questions related to using this with\n> connection poolers. I don't think this was addressed at all. I haven't\n> seen any discussion about how to make this kind of facility usable in a\n> full system. You have to manually query and send LSNs; that seems\n> pretty cumbersome. Sure, this is part of something that could be\n> useful, but how would an actual user with actual application code get to\n> use this?\n>\n> [0]:\n> https://www.postgresql.org/message-id/8b5b172f-0ae7-d644-8358-e2851dded43b%40enterprisedb.com\n\nI share the same concern as yours and had proposed something upthread\n[1]. The idea is something like how each query takes a snapshot at the\nbeginning of txn/query (depending on isolation level), the same way\nthe standby can wait for the primary's current LSN as of the moment\n(at the time of taking snapshot). And, primary keeps sending its\ncurrent LSN as part of regular WAL to standbys so that the standbys\ndoesn't have to make connections to the primary to know its current\nLSN every time. Perhps, this may not even fully guarantee (considered\nto be achieving) the read-after-write consistency on standbys unless\nthere's a way for the application to tell the wait LSN.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACUfS7LH1PaWmSZ5KwH4BpQxO9izeMw4qC3a1DAwi6nfbQ%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 24 Mar 2024 08:09:19 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Thank you for your interest to the patch.\nI understand you questions, but I fully support Alexander Korotkov idea\nto commit the minimal required functionality. And then keep working on\nother improvements.\n\nOn 2024-03-24 05:39, Bharath Rupireddy wrote:\n> On Fri, Mar 22, 2024 at 4:28 AM Peter Eisentraut <peter@eisentraut.org> \n> wrote:\n>> \n>> I had written in [0] about my questions related to using this with\n>> connection poolers. I don't think this was addressed at all. I \n>> haven't\n>> seen any discussion about how to make this kind of facility usable in \n>> a\n>> full system. You have to manually query and send LSNs; that seems\n>> pretty cumbersome. Sure, this is part of something that could be\n>> useful, but how would an actual user with actual application code get \n>> to\n>> use this?\n>> \n>> [0]:\n>> https://www.postgresql.org/message-id/8b5b172f-0ae7-d644-8358-e2851dded43b%40enterprisedb.com\n\n\n>>> But I wonder how a client is going to get the LSN. How would all of\n>>> this be used by a client? I can think of a scenarios where you have\n>>> an application that issues a bunch of SQL commands and you have some\n>>> kind of pooler in the middle that redirects those commands to\n>>> different hosts, and what you really want is to have it transparently\n>>> behave as if it's just a single host. Do we want to inject a bunch\n>>> of \"SELECT pg_get_lsn()\", \"SELECT pg_wait_lsn()\" calls into that?\n\nAs I understand your question, application make dml on the primary\nserver, get LSN of changes and send bunch SQL read-only commands to \npooler. Transparent behave we can get using #synchronous_commit, but\nit is very slow.\n\n>>> I'm tempted to think this could be a protocol-layer facility. Every\n>>> query automatically returns the current LSN, and every query can also\n>>> send along an LSN to wait for, and the client library would just keep\n>>> track of the LSN for (what it thinks of as) the connection. So you\n>>> get some automatic serialization without having to modify your client \n>>> code.\n\nThank you, it is a good question for future versions.\nYou say about a protocol-layer facility, what you meen. May be we can\nuse signals, like hot_standby_feedback.\n\n> I share the same concern as yours and had proposed something upthread\n> [1]. The idea is something like how each query takes a snapshot at the\n> beginning of txn/query (depending on isolation level), the same way\n> the standby can wait for the primary's current LSN as of the moment\n> (at the time of taking snapshot). And, primary keeps sending its\n> current LSN as part of regular WAL to standbys so that the standbys\n> doesn't have to make connections to the primary to know its current\n> LSN every time. Perhps, this may not even fully guarantee (considered\n> to be achieving) the read-after-write consistency on standbys unless\n> there's a way for the application to tell the wait LSN.\n> \n> Thoughts?\n> \n> [1] \n> https://www.postgresql.org/message-id/CALj2ACUfS7LH1PaWmSZ5KwH4BpQxO9izeMw4qC3a1DAwi6nfbQ%40mail.gmail.com\n\n\n> +1 to have support for implicit txns. A strawman solution I can think\n> of is to let primary send its current insert LSN to the standby every\n> time it sends a bunch of WAL, and the standby waits for that LSN to be\n> replayed on it at the start of every implicit txn automatically.\n\nAnd how standby will get lsn to wait for? All solutions I can think of\nare very invasive and poorly scalable.\n\nFor example, every dml can send back LSN if dml is success. And \napplication could use it to wait actual changes.\n\n> The new BEGIN syntax requires application code changes. This led me to\n> think how one can achieve read-after-write consistency today in a\n> primary - standby set up. All the logic of this patch, that is, waiting\n> for the standby to pass a given primary LSN needs to be done in the\n> application code (or in proxy or in load balancer?). I believe there\n> might be someone doing this already, it's good to hear from them.\n\nYou may use #synchronous_commit mode but it slow. So my implementation\ndon`t make primary to wait all standby to sent its feedbacks.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Tue, 26 Mar 2024 17:06:51 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 12:50 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 19.03.24 18:38, Kartyshov Ivan wrote:\n> > CALL pg_wait_lsn('0/3002AE8', 10000);\n> > BEGIN;\n> > SELECT * FROM tbl; // read fresh insertions\n> > COMMIT;\n>\n> I'm not endorsing this or any other approach, but I think the timeout\n> parameter should be of type interval, not an integer with a unit that is\n> hidden in the documentation.\n\nI'm not sure a timeout needs to deal with complexity of our interval\ndatatype. At the same time, the integer number of milliseconds looks\na bit weird. Could the float8 number of seconds be an option?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Mar 2024 01:49:17 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 12:58 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 17.03.24 15:09, Alexander Korotkov wrote:\n> > My current attempt was to commit minimal implementation as less\n> > invasive as possible. A new clause for BEGIN doesn't require\n> > additional keywords and doesn't introduce additional statements. But\n> > yes, this is still a new qual. And, yes, Amit you're right that even\n> > if I had committed that, there was still a high risk of further\n> > debates and revert.\n>\n> I had written in [0] about my questions related to using this with\n> connection poolers. I don't think this was addressed at all. I haven't\n> seen any discussion about how to make this kind of facility usable in a\n> full system. You have to manually query and send LSNs; that seems\n> pretty cumbersome. Sure, this is part of something that could be\n> useful, but how would an actual user with actual application code get to\n> use this?\n\nThe current usage pattern of this functionality is the following.\n\n1. Do the write transaction on primary\n2. Query pg_current_wal_insert_lsn() on primary\n3. Call pg_wait_lsn() with the value obtained on the previous step on replica\n4. Do the read transaction of replica\n\nThis usage pattern could be implemented either on the application\nlevel, or on the pooler level. For application level, it would\nrequire a somewhat advanced level of database-aware programming, but\nthis is still a valid usage. Regarding poolers, if some poolers\nmanage to automatically distinguish reading and writing queries,\ndealing with LSNs wouldn't be too complex for them.\n\nHaving this functionality on protocol level would be ideal, but let's\ndo this step-by-step. The built-in procedure isn't very invasive, but\nthat could give us some adoption and open the way forward.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Mar 2024 01:59:04 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sun, Mar 24, 2024 at 4:39 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> I share the same concern as yours and had proposed something upthread\n> [1]. The idea is something like how each query takes a snapshot at the\n> beginning of txn/query (depending on isolation level), the same way\n> the standby can wait for the primary's current LSN as of the moment\n> (at the time of taking snapshot). And, primary keeps sending its\n> current LSN as part of regular WAL to standbys so that the standbys\n> doesn't have to make connections to the primary to know its current\n> LSN every time. Perhps, this may not even fully guarantee (considered\n> to be achieving) the read-after-write consistency on standbys unless\n> there's a way for the application to tell the wait LSN.\n\nOh, no. Please, check [1]. The idea is to wait for a particular\ntransaction to become visible. The one who made a change on primary\nbrings the lsn value from there to replica. For instance, an\napplication made a change on primary and then willing to run some\nreport on replica. And the report should be guaranteed to contain the\nchange just made. So, the application query the LSN from primary\nafter making a write transaction, then calls pg_wait_lsn() on\nreplicate before running the report.\n\nThis is quite simple server functionality, which could be used at\napplication-level, ORM-level or pooler-level. And it unlocks the way\nforward for in-protocol implementation as proposed by Peter\nEisentraut.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdtny81end69PzEdRsROKnsybsj%3DOs8DUM-6HeKGKnCuQQ%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 27 Mar 2024 02:06:53 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Mar 26, 2024 at 4:06 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n> Thank you for your interest to the patch.\n> I understand you questions, but I fully support Alexander Korotkov idea\n> to commit the minimal required functionality. And then keep working on\n> other improvements.\n\nI did further improvements in the patch.\n\nNotably, I decided to rename the procedure to\npg_wait_for_wal_replay_lsn(). This makes the name look consistent\nwith other WAL-related functions. Also it clearly states that we're\nwaiting for lsn to be replayed (not received, written or flushed).\n\nAlso, I did implements in the docs, commit message and some minor code fixes.\n\nI'm continuing to look at this patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 28 Mar 2024 02:24:23 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 2:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Mar 26, 2024 at 4:06 PM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n> > Thank you for your interest to the patch.\n> > I understand you questions, but I fully support Alexander Korotkov idea\n> > to commit the minimal required functionality. And then keep working on\n> > other improvements.\n>\n> I did further improvements in the patch.\n>\n> Notably, I decided to rename the procedure to\n> pg_wait_for_wal_replay_lsn(). This makes the name look consistent\n> with other WAL-related functions. Also it clearly states that we're\n> waiting for lsn to be replayed (not received, written or flushed).\n>\n> Also, I did implements in the docs, commit message and some minor code fixes.\n>\n> I'm continuing to look at this patch.\n\nSorry, I forgot the attachment.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 28 Mar 2024 08:23:28 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "> v12\n\nHi all,\n\nI didn't review the patch but one thing jumped out: I don't think it's\nOK to hold a spinlock while (1) looping over an array of backends and\n(2) making system calls (SetLatch()).\n\n\n",
"msg_date": "Thu, 28 Mar 2024 20:36:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Thu, Mar 28, 2024 at 9:37 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> > v12\n>\n> Hi all,\n>\n> I didn't review the patch but one thing jumped out: I don't think it's\n> OK to hold a spinlock while (1) looping over an array of backends and\n> (2) making system calls (SetLatch()).\n\nGood catch, thank you.\n\nFixed along with other issues spotted by Alexander Lakhin.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 28 Mar 2024 14:39:25 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Thu, Mar 28, 2024, at 9:39 AM, Alexander Korotkov wrote:\n> Fixed along with other issues spotted by Alexander Lakhin.\n\n[I didn't read the whole thread. I'm sorry if I missed something ...]\n\nYou renamed the function in a previous version but let me suggest another one:\npg_wal_replay_wait. It uses the same pattern as the other recovery control\nfunctions [1]. I think \"for\" doesn't add much for the function name and \"lsn\" is\nused in functions that return an LSN (that's not the case here).\n\npostgres=# \\df pg_wal_replay*\nList of functions\n-[ RECORD 1 ]-------+---------------------\nSchema | pg_catalog\nName | pg_wal_replay_pause\nResult data type | void\nArgument data types | \nType | func\n-[ RECORD 2 ]-------+---------------------\nSchema | pg_catalog\nName | pg_wal_replay_resume\nResult data type | void\nArgument data types | \nType | func\n\nRegarding the arguments, I think the timeout should be bigint. There is at least\nanother function that implements a timeout that uses bigint. \n\npostgres=# \\df pg_terminate_backend\nList of functions\n-[ RECORD 1 ]-------+--------------------------------------\nSchema | pg_catalog\nName | pg_terminate_backend\nResult data type | boolean\nArgument data types | pid integer, timeout bigint DEFAULT 0\nType | func\n\nI also suggests that the timeout unit should be milliseconds, hence, using\nbigint is perfectly fine for the timeout argument.\n\n+ <para>\n+ Throws an ERROR if the target <acronym>lsn</acronym> was not replayed\n+ on standby within given timeout. Parameter <parameter>timeout</parameter>\n+ is the time in seconds to wait for the <parameter>target_lsn</parameter>\n+ replay. When <parameter>timeout</parameter> value equals to zero no\n+ timeout is applied.\n+ </para></entry>\n\n\n[1] https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 28 Mar 2024 20:37:54 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Euler!\n\nOn Fri, Mar 29, 2024 at 1:38 AM Euler Taveira <euler@eulerto.com> wrote:\n> On Thu, Mar 28, 2024, at 9:39 AM, Alexander Korotkov wrote:\n>\n> Fixed along with other issues spotted by Alexander Lakhin.\n>\n>\n> [I didn't read the whole thread. I'm sorry if I missed something ...]\n>\n> You renamed the function in a previous version but let me suggest another one:\n> pg_wal_replay_wait. It uses the same pattern as the other recovery control\n> functions [1]. I think \"for\" doesn't add much for the function name and \"lsn\" is\n> used in functions that return an LSN (that's not the case here).\n>\n> postgres=# \\df pg_wal_replay*\n> List of functions\n> -[ RECORD 1 ]-------+---------------------\n> Schema | pg_catalog\n> Name | pg_wal_replay_pause\n> Result data type | void\n> Argument data types |\n> Type | func\n> -[ RECORD 2 ]-------+---------------------\n> Schema | pg_catalog\n> Name | pg_wal_replay_resume\n> Result data type | void\n> Argument data types |\n> Type | func\n\nMakes sense to me. I tried to make a new procedure name consistent\nwith functions acquiring various WAL positions. But you're right,\nit's better to be consistent with other functions controlling wal\nreplay.\n\n> Regarding the arguments, I think the timeout should be bigint. There is at least\n> another function that implements a timeout that uses bigint.\n>\n> postgres=# \\df pg_terminate_backend\n> List of functions\n> -[ RECORD 1 ]-------+--------------------------------------\n> Schema | pg_catalog\n> Name | pg_terminate_backend\n> Result data type | boolean\n> Argument data types | pid integer, timeout bigint DEFAULT 0\n> Type | func\n>\n> I also suggests that the timeout unit should be milliseconds, hence, using\n> bigint is perfectly fine for the timeout argument.\n>\n> + <para>\n> + Throws an ERROR if the target <acronym>lsn</acronym> was not replayed\n> + on standby within given timeout. Parameter <parameter>timeout</parameter>\n> + is the time in seconds to wait for the <parameter>target_lsn</parameter>\n> + replay. When <parameter>timeout</parameter> value equals to zero no\n> + timeout is applied.\n> + </para></entry>\n\nThis generally makes sense, but I'm not sure about this. The\nmilliseconds timeout was used initially but received critics in [1].\n\nLinks.\n1. https://www.postgresql.org/message-id/b45ff979-9d12-4828-a22a-e4cb327e115c%40eisentraut.org\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Fri, 29 Mar 2024 14:44:56 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, hackers!\n\nOn Fri, 29 Mar 2024 at 16:45, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> Hi, Euler!\n>\n> On Fri, Mar 29, 2024 at 1:38 AM Euler Taveira <euler@eulerto.com> wrote:\n> > On Thu, Mar 28, 2024, at 9:39 AM, Alexander Korotkov wrote:\n> >\n> > Fixed along with other issues spotted by Alexander Lakhin.\n> >\n> >\n> > [I didn't read the whole thread. I'm sorry if I missed something ...]\n> >\n> > You renamed the function in a previous version but let me suggest\n> another one:\n> > pg_wal_replay_wait. It uses the same pattern as the other recovery\n> control\n> > functions [1]. I think \"for\" doesn't add much for the function name and\n> \"lsn\" is\n> > used in functions that return an LSN (that's not the case here).\n> >\n> > postgres=# \\df pg_wal_replay*\n> > List of functions\n> > -[ RECORD 1 ]-------+---------------------\n> > Schema | pg_catalog\n> > Name | pg_wal_replay_pause\n> > Result data type | void\n> > Argument data types |\n> > Type | func\n> > -[ RECORD 2 ]-------+---------------------\n> > Schema | pg_catalog\n> > Name | pg_wal_replay_resume\n> > Result data type | void\n> > Argument data types |\n> > Type | func\n>\n> Makes sense to me. I tried to make a new procedure name consistent\n> with functions acquiring various WAL positions. But you're right,\n> it's better to be consistent with other functions controlling wal\n> replay.\n>\n> > Regarding the arguments, I think the timeout should be bigint. There is\n> at least\n> > another function that implements a timeout that uses bigint.\n> >\n> > postgres=# \\df pg_terminate_backend\n> > List of functions\n> > -[ RECORD 1 ]-------+--------------------------------------\n> > Schema | pg_catalog\n> > Name | pg_terminate_backend\n> > Result data type | boolean\n> > Argument data types | pid integer, timeout bigint DEFAULT 0\n> > Type | func\n> >\n> > I also suggests that the timeout unit should be milliseconds, hence,\n> using\n> > bigint is perfectly fine for the timeout argument.\n> >\n> > + <para>\n> > + Throws an ERROR if the target <acronym>lsn</acronym> was not\n> replayed\n> > + on standby within given timeout. Parameter\n> <parameter>timeout</parameter>\n> > + is the time in seconds to wait for the\n> <parameter>target_lsn</parameter>\n> > + replay. When <parameter>timeout</parameter> value equals to\n> zero no\n> > + timeout is applied.\n> > + </para></entry>\n>\n> This generally makes sense, but I'm not sure about this. The\n> milliseconds timeout was used initially but received critics in [1].\n>\nI see in Postgres we already have different units for timeouts:\n\ne.g in guc's:\nwal_receiver_status_interval in seconds\ntcp_keepalives_idle in seconds\n\ncommit_delay in microseconds\n\ndeadlock_timeout in milliseconds\nmax_standby_archive_delay in milliseconds\nvacuum_cost_delay in milliseconds\nautovacuum_vacuum_cost_delay in milliseconds\netc..\n\nI haven't counted precisely, but I feel that milliseconds are the most\noften used in both guc's and functions. So I'd propose using milliseconds\nfor the patch as it was proposed originally.\n\nRegards,\nPavel Borisov\nSupabase.",
"msg_date": "Fri, 29 Mar 2024 17:21:23 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 29, 2024, at 9:44 AM, Alexander Korotkov wrote:\n> This generally makes sense, but I'm not sure about this. The\n> milliseconds timeout was used initially but received critics in [1].\n\nAlexander, I see why you changed the patch.\n\nPeter suggested to use an interval but you proposed another data type:\nfloat. The advantage of the interval data type is that you don't need to\ncarefully think about the unit, however, if you use the integer data\ntype you have to propose one. (If that's the case, milliseconds is a\ngood granularity for this feature.) I don't have a strong preference\nbetween integer and interval data types but I don't like the float for\nthis case. The 2 main reasons are (a) that we treat time units (hours,\nminutes, seconds, ...) as integers so it seems natural for a human being\nto use a unit time as integer and (b) depending on the number of digits\nafter the decimal separator you still don't have an integer in the\ninternal unit, hence, you have to round it to integer.\n\nWe already have functions that use integer (such as pg_terminate_backend)\nand interval (such as pg_sleep_for) and if i searched correctly it will\nbe the first timeout argument as float.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Fri, 29 Mar 2024 13:50:33 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Mar 29, 2024 at 6:50 PM Euler Taveira <euler@eulerto.com> wrote:\n> On Fri, Mar 29, 2024, at 9:44 AM, Alexander Korotkov wrote:\n>\n> This generally makes sense, but I'm not sure about this. The\n> milliseconds timeout was used initially but received critics in [1].\n>\n>\n> Alexander, I see why you changed the patch.\n>\n> Peter suggested to use an interval but you proposed another data type:\n> float. The advantage of the interval data type is that you don't need to\n> carefully think about the unit, however, if you use the integer data\n> type you have to propose one. (If that's the case, milliseconds is a\n> good granularity for this feature.) I don't have a strong preference\n> between integer and interval data types but I don't like the float for\n> this case. The 2 main reasons are (a) that we treat time units (hours,\n> minutes, seconds, ...) as integers so it seems natural for a human being\n> to use a unit time as integer and (b) depending on the number of digits\n> after the decimal separator you still don't have an integer in the\n> internal unit, hence, you have to round it to integer.\n>\n> We already have functions that use integer (such as pg_terminate_backend)\n> and interval (such as pg_sleep_for) and if i searched correctly it will\n> be the first timeout argument as float.\n\nThank you for the detailed explanation. Float seconds are used in\npg_sleep() just similar to the interval in pg_sleep_for(). However,\nthat's a delay, not exactly a timeout. Given the precedent of\nmilliseconds timeout in pg_terminate_backend(), your and Pavel's\npoints, I've switched back to integer milliseconds timeout.\n\nSome fixes spotted off-list by Alexander Lakhin.\n1) We don't need an explicit check for the postmaster being alive as\nsoon as we pass WL_EXIT_ON_PM_DEATH to WaitLatch().\n2) When testing for unreachable LSN, we need to select LSN well in\nadvance so that autovacuum couldn't affect that.\n\nI'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 30 Mar 2024 16:14:28 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Thank you Alexander for working on patch, may be we should change some \nnames:\n1) test 043_wait_lsn.pl -> to 043_waitlsn.pl like waitlsn.c and \nwaitlsn.h\n\n\nIn waitlsn.c and waitlsn.h variables:\n2) targret_lsn -> trgLSN like curLSN\n\n3) lsn -> trgLSN like curLSN\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com\n\n\n",
"msg_date": "Sat, 30 Mar 2024 19:14:14 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi!\n\nOn Sat, Mar 30, 2024 at 6:14 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> Thank you Alexander for working on patch, may be we should change some\n> names:\n> 1) test 043_wait_lsn.pl -> to 043_waitlsn.pl like waitlsn.c and\n> waitlsn.h\n\nI renamed that to 043_wal_replay_wait.pl to match the name of SQL procedure.\n\n> In waitlsn.c and waitlsn.h variables:\n> 2) targret_lsn -> trgLSN like curLSN\n\nI prefer this to match the SQL procedure parameter name.\n\n> 3) lsn -> trgLSN like curLSN\n\nDone.\n\nAlso I implemented termination of wal replay waits on standby\npromotion (with test).\n\nIn the test I change recovery_min_apply_delay to 1s in order to make\nthe test pass faster.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 31 Mar 2024 05:11:27 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sun, Mar 31, 2024 at 7:41 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> Hi!\n\nThanks for the patch. I have a few comments on the v16 patch.\n\n1. Is there any specific reason for pg_wal_replay_wait() being a\nprocedure rather than a function? I haven't read the full thread, but\nI vaguely noticed the discussion on the new wait mechanism holding up\na snapshot or some other resource. Is that the reason to use a stored\nprocedure over a function? If yes, can we specify it somewhere in the\ncommit message and just before the procedure definition in\nsystem_functions.sql?\n\n2. Is the pg_wal_replay_wait first procedure that postgres provides\nout of the box?\n\n3. Defining a procedure for the first time in system_functions.sql\nwhich is supposed to be for functions seems a bit unusual to me.\n\n4.\n+\n+ endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), timeout);\n+\n\n+ if (timeout > 0)\n+ {\n+ delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n+ latch_events |= WL_TIMEOUT;\n+ if (delay_ms <= 0)\n+ break;\n+ }\n\nWhy is endtime calculated even for timeout <= 0 only to just skip it\nlater? Can't we just do a fastpath exit if timeout = 0 and targetLSN <\n\n5.\n Parameter\n+ <parameter>timeout</parameter> is the time in milliseconds to wait\n+ for the <parameter>target_lsn</parameter>\n+ replay. When <parameter>timeout</parameter> value equals to zero no\n+ timeout is applied.\n+ replay. When <parameter>timeout</parameter> value equals to zero no\n+ timeout is applied.\n\nIt turns out to be \"When timeout value equals to zero no timeout is\napplied.\" I guess, we can just say something like the following which\nI picked up from pg_terminate_backend timeout parameter description.\n\n <para>\n If <parameter>timeout</parameter> is not specified or zero, this\n function returns if the WAL upto\n<literal>target_lsn</literal> is replayed.\n If the <parameter>timeout</parameter> is specified (in\n milliseconds) and greater than zero, the function waits until the\n server actually replays the WAL upto <literal>target_lsn</literal> or\n until the given time has passed. On timeout, an error is emitted.\n </para></entry>\n\n6.\n+ ereport(ERROR,\n+ (errcode(ERRCODE_QUERY_CANCELED),\n+ errmsg(\"canceling waiting for LSN due to timeout\")));\n\nWe can be a bit more informative here and say targetLSN and currentLSN\nsomething like - \"timed out while waiting for target LSN %X/%X to be\nreplayed; current LSN %X/%X\"?\n\n7.\n+ if (context->atomic)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"pg_wal_replay_wait() must be only called in\nnon-atomic context\")));\n+\n\nCan we say what a \"non-atomic context '' is in a user understandable\nway like explicit txn or whatever that might be? \"non-atomic context'\nmight not sound great to the end -user.\n\n8.\n+ the <literal>movie</literal> table and get the <acronym>lsn</acronym> after\n+ changes just made. This example uses\n<function>pg_current_wal_insert_lsn</function>\n+ to get the <acronym>lsn</acronym> given that\n<varname>synchronous_commit</varname>\n+ could be set to <literal>off</literal>.\n\nCan we just mention that run pg_current_wal_insert_lsn on the primary?\n\n9. To me the following query blocks even though I didn't mention timeout.\nCALL pg_wal_replay_wait('0/fffffff');\n\n10. Can't we do some input validation on the timeout parameter and\nemit an error for negative values just like pg_terminate_backend?\nCALL pg_wal_replay_wait('0/ffffff', -100);\n\n11.\n+\n+ if (timeout > 0)\n+ {\n+ delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n+ latch_events |= WL_TIMEOUT;\n+ if (delay_ms <= 0)\n+ break;\n+ }\n+\n\nCan we avoid calling GetCurrentTimestamp in a for loop which can be\ncostly at times especially when pg_wal_replay_wait is called with\nlarger timeouts on multiple backends? Can't we reuse\npg_terminate_backend's timeout logic in\npg_wait_until_termination, perhaps reducing waittime to 1msec or so?\n\n12. Why should we let every user run pg_wal_replay_wait procedure?\nCan't we revoke execute from the public in system_functions.sql so\nthat one can decide who to run this function? Per comment #11, one can\neasily cause a lot of activity by running this function on hundreds of\nsessions.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 31 Mar 2024 11:14:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi Bharath,\n\nThank you for your feedback.\n\nOn Sun, Mar 31, 2024 at 8:44 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Sun, Mar 31, 2024 at 7:41 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Thanks for the patch. I have a few comments on the v16 patch.\n>\n> 1. Is there any specific reason for pg_wal_replay_wait() being a\n> procedure rather than a function? I haven't read the full thread, but\n> I vaguely noticed the discussion on the new wait mechanism holding up\n> a snapshot or some other resource. Is that the reason to use a stored\n> procedure over a function? If yes, can we specify it somewhere in the\n> commit message and just before the procedure definition in\n> system_functions.sql?\n\nSurely, there is a reason. Function should be executed in a snapshot,\nwhich can prevent WAL records from being replayed. See [1] for a\nparticular test scenario. In a procedure we may enforce non-atomic\ncontext and release the snapshot.\n\nI've mentioned that in the commit message and in the procedure code.\nI don't think system_functions.sql is the place for this type of\ncomment. We only use system_functions.sql to push the default values.\n\n> 2. Is the pg_wal_replay_wait first procedure that postgres provides\n> out of the box?\n\nYes, it appears first. I see nothing wrong about that.\n\n> 3. Defining a procedure for the first time in system_functions.sql\n> which is supposed to be for functions seems a bit unusual to me.\n\n From the scope of DDL and system catalogue procedure is just another\nkind of function (prokind == 'p'). So, I don't feel wrong about that.\n\n> 4.\n> +\n> + endtime = TimestampTzPlusMilliseconds(GetCurrentTimestamp(), timeout);\n> +\n>\n> + if (timeout > 0)\n> + {\n> + delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n> + latch_events |= WL_TIMEOUT;\n> + if (delay_ms <= 0)\n> + break;\n> + }\n>\n> Why is endtime calculated even for timeout <= 0 only to just skip it\n> later? Can't we just do a fastpath exit if timeout = 0 and targetLSN <\n\nOK, fixed.\n\n> 5.\n> Parameter\n> + <parameter>timeout</parameter> is the time in milliseconds to wait\n> + for the <parameter>target_lsn</parameter>\n> + replay. When <parameter>timeout</parameter> value equals to zero no\n> + timeout is applied.\n> + replay. When <parameter>timeout</parameter> value equals to zero no\n> + timeout is applied.\n>\n> It turns out to be \"When timeout value equals to zero no timeout is\n> applied.\" I guess, we can just say something like the following which\n> I picked up from pg_terminate_backend timeout parameter description.\n>\n> <para>\n> If <parameter>timeout</parameter> is not specified or zero, this\n> function returns if the WAL upto\n> <literal>target_lsn</literal> is replayed.\n> If the <parameter>timeout</parameter> is specified (in\n> milliseconds) and greater than zero, the function waits until the\n> server actually replays the WAL upto <literal>target_lsn</literal> or\n> until the given time has passed. On timeout, an error is emitted.\n> </para></entry>\n\nApplied as you suggested with some edits from me.\n\n> 6.\n> + ereport(ERROR,\n> + (errcode(ERRCODE_QUERY_CANCELED),\n> + errmsg(\"canceling waiting for LSN due to timeout\")));\n>\n> We can be a bit more informative here and say targetLSN and currentLSN\n> something like - \"timed out while waiting for target LSN %X/%X to be\n> replayed; current LSN %X/%X\"?\n\nDone this way. Adjusted other ereport()'s as well.\n\n> 7.\n> + if (context->atomic)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"pg_wal_replay_wait() must be only called in\n> non-atomic context\")));\n> +\n>\n> Can we say what a \"non-atomic context '' is in a user understandable\n> way like explicit txn or whatever that might be? \"non-atomic context'\n> might not sound great to the end -user.\n\nAdded errdetail() to this ereport().\n\n> 8.\n> + the <literal>movie</literal> table and get the <acronym>lsn</acronym> after\n> + changes just made. This example uses\n> <function>pg_current_wal_insert_lsn</function>\n> + to get the <acronym>lsn</acronym> given that\n> <varname>synchronous_commit</varname>\n> + could be set to <literal>off</literal>.\n>\n> Can we just mention that run pg_current_wal_insert_lsn on the primary?\n\nThe mention is added.\n\n> 9. To me the following query blocks even though I didn't mention timeout.\n> CALL pg_wal_replay_wait('0/fffffff');\n\nIf your primary server is freshly initialized, you need to do quite\ndata modifications to reach this LSN.\n\n> 10. Can't we do some input validation on the timeout parameter and\n> emit an error for negative values just like pg_terminate_backend?\n> CALL pg_wal_replay_wait('0/ffffff', -100);\n\nReasonable, added.\n\n> 11.\n> +\n> + if (timeout > 0)\n> + {\n> + delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n> + latch_events |= WL_TIMEOUT;\n> + if (delay_ms <= 0)\n> + break;\n> + }\n> +\n>\n> Can we avoid calling GetCurrentTimestamp in a for loop which can be\n> costly at times especially when pg_wal_replay_wait is called with\n> larger timeouts on multiple backends? Can't we reuse\n> pg_terminate_backend's timeout logic in\n> pg_wait_until_termination, perhaps reducing waittime to 1msec or so?\n\nNormally there shouldn't be many loops. It only happens on spurious\nwakeups. For instance, some process was going to set our latch before\nand for another reason, but due to kernel scheduling it does only now.\nSo, normally there is only one wakeup. pg_wait_until_termination()\nmay sacrifice timeout accuracy due to possible spurious wakeups and\ntime spent outside of WaitLatch(). I don't feel reasonable to repeat\nthis login in WaitForLSN() especially given that we don't need\nfrequent wakeups here.\n\n> 12. Why should we let every user run pg_wal_replay_wait procedure?\n> Can't we revoke execute from the public in system_functions.sql so\n> that one can decide who to run this function? Per comment #11, one can\n> easily cause a lot of activity by running this function on hundreds of\n> sessions.\n\nGenerally, if a user can make many connections, then this user can\nmake them busy and can consume resources. Given my explanation above,\npg_wal_replay_wait() even wouldn't make the connection busy, it would\njust wait on the latch. I don't see why pg_wal_replay_wait() could do\nmore harm than pg_sleep(). So, I would leave pg_wal_replay_wait()\npublic.\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfdtiGgn0iS1KbW2HTam-1%2BoK%2BvhXZDAcnX9hKaA7Oe%3DF-A%40mail.gmail.com\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 1 Apr 2024 03:24:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
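As an aside for readers of this archive: the timeout handling debated in point 11 above — recomputing the remaining delay from a fixed end time on every loop iteration, so spurious latch wakeups cannot extend the total wait — can be sketched in Python. All names below are illustrative stand-ins (the real code is C in `WaitForLSN()`); `threading.Event` plays the role of the process latch.

```python
import time
import threading

def wait_for_lsn(get_replay_lsn, latch: threading.Event,
                 target_lsn: int, timeout_ms: int) -> None:
    """Sketch of a WaitForLSN-style loop: block until get_replay_lsn()
    reaches target_lsn, or raise TimeoutError after timeout_ms.

    Because the remaining delay is always derived from the fixed
    deadline `endtime`, a spurious wakeup just re-enters the loop
    with a shorter delay instead of restarting the full timeout."""
    if timeout_ms < 0:
        raise ValueError("timeout must not be negative")
    endtime = time.monotonic() + timeout_ms / 1000.0
    while get_replay_lsn() < target_lsn:
        delay = None                      # timeout_ms == 0: wait indefinitely
        if timeout_ms > 0:
            delay = endtime - time.monotonic()
            if delay <= 0:
                raise TimeoutError("timed out while waiting for target LSN")
        latch.wait(delay)                 # may return spuriously
        latch.clear()
```

A waker that advances the replay position and sets the latch releases the waiter; if the position never reaches the target, the deadline fires regardless of how many wakeups occurred in between.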
{
"msg_contents": "On Mon, Apr 1, 2024 at 5:54 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> > 9. To me the following query blocks even though I didn't mention timeout.\n> > CALL pg_wal_replay_wait('0/fffffff');\n>\n> If your primary server is freshly initialized, you need to do quite\n> data modifications to reach this LSN.\n\nRight, but why does pg_wal_replay_wait block without a timeout? It must\nreturn an error saying it can't reach the target LSN, no?\n\nDid you forget to attach the new patch?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 1 Apr 2024 07:55:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Apr 1, 2024 at 5:25 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Apr 1, 2024 at 5:54 AM Alexander Korotkov <aekorotkov@gmail.com>\n> wrote:\n> >\n> > > 9. To me the following query blocks even though I didn't mention\n> timeout.\n> > > CALL pg_wal_replay_wait('0/fffffff');\n> >\n> > If your primary server is freshly initialized, you need to do quite\n> > data modifications to reach this LSN.\n>\n> Right, but why pg_wal_replay_wait blocks without a timeout? It must\n> return an error saying it can't reach the target LSN, no?\n>\n\nHow can the replica know this? It doesn't look feasible to distinguish this\nsituation from the situation when the connection between primary and replica\nbecomes slow. I'd keep this simple. We have pg_sleep_for() which waits for\nthe specified time whatever it is. And we have pg_wal_replay_wait(), which waits\ntill the replay LSN grows to the target whatever it is. It's up to the user to\nspecify the correct value.\n\n\n> Did you forget to attach the new patch?\n>\n\nYes, here it is.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 1 Apr 2024 13:27:09 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "\nAlexander Korotkov <aekorotkov@gmail.com> writes:\n\nHello,\n\n> \n> Did you forget to attach the new patch?\n>\n> Yes, here it is. \n>\n> ------\n> Regards,\n> Alexander Korotkov \n>\n> [4. text/x-diff; v17-0001-Implement-pg_wal_replay_wait-stored-procedure.patch]...\n\n+ </indexterm>\n+ <function>pg_wal_replay_wait</function> (\n+ <parameter>target_lsn</parameter> <type>pg_lsn</type>,\n+ <parameter>timeout</parameter> <type>bigint</type> <literal>DEFAULT</literal> <literal>0</literal>)\n+ <returnvalue>void</returnvalue>\n+ </para>\n\nShould we return the milliseconds of waiting time? I think this\ninformation may be useful for customers if they want to know how long\nit waits, for monitoring purposes. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 11:25:57 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Andy!\n\nOn Tue, Apr 2, 2024 at 6:29 AM Andy Fan <zhihuifan1213@163.com> wrote:\n> > Did you forget to attach the new patch?\n> >\n> > Yes, here it is.\n> >\n> > [4. text/x-diff; v17-0001-Implement-pg_wal_replay_wait-stored-procedure.patch]...\n>\n> + </indexterm>\n> + <function>pg_wal_replay_wait</function> (\n> + <parameter>target_lsn</parameter> <type>pg_lsn</type>,\n> + <parameter>timeout</parameter> <type>bigint</type> <literal>DEFAULT</literal> <literal>0</literal>)\n> + <returnvalue>void</returnvalue>\n> + </para>\n>\n> Should we return the millseconds of waiting time? I think this\n> information may be useful for customer if they want to know how long\n> time it waits for for minitor purpose.\n\nPlease, check it more carefully. In v17 timeout is in integer milliseconds.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 2 Apr 2024 11:08:17 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "\nHi Alexander!\n\n>> + </indexterm>\n>> + <function>pg_wal_replay_wait</function> (\n>> + <parameter>target_lsn</parameter> <type>pg_lsn</type>,\n>> + <parameter>timeout</parameter> <type>bigint</type> <literal>DEFAULT</literal> <literal>0</literal>)\n>> + <returnvalue>void</returnvalue>\n>> + </para>\n>>\n>> Should we return the millseconds of waiting time? I think this\n>> information may be useful for customer if they want to know how long\n>> time it waits for for minitor purpose.\n>\n> Please, check it more carefully. In v17 timeout is in integer milliseconds.\n\nI guess one of us misunderstood the other:( and I didn't check the\ncode very carefully. \n\nActually I meant the \"return value\" rather than the function argument. IIUC\nthe current return value is void per the statement below.\n\n>> + <returnvalue>void</returnvalue>\n\nIf so, when users call pg_wal_replay_wait, they can get informed when\nthe WAL is replayed to the target_lsn, but they can't know how long time\nit waits unless they check it on the application side. I think such\ninformation will be useful for monitoring purposes sometimes. \n\nselect pg_wal_replay_wait(lsn, 1000); may just take 1ms in fact; in\nthis case, I want this function to return 1. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 16:14:49 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 2024-04-02 11:14, Andy Fan wrote:\n> If so, when users call pg_wal_replay_wait, they can get informed when\n> the wal is replaied to the target_lsn, but they can't know how long \n> time\n> it waits unless they check it in application side, I think such\n> information will be useful for monitor purpose sometimes.\n> \n> select pg_wal_replay_wait(lsn, 1000); may just take 1ms in fact, in\n> this case, I want this function return 1.\n\nHi Andy, to get timing we can use \\timing in psql.\nHere is an example.\npostgres=# \\timing\nTiming is on.\npostgres=# select 1;\n ?column?\n----------\n 1\n(1 row)\n\nTime: 0.536 ms\n\n\n> <returnvalue>void</returnvalue>\nAnd returning VOID is the best option, rather than returning TRUE|FALSE\nor timing. It keeps the logic of the procedure very simple: we get an\nerror if the LSN is not reached.\n\nFor 8 years, we have tried to add this feature, and now we suggest the best way\nfor this feature is to commit the minimal version first.\n\nLet's discuss further improvements in future versions.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com\n\n\n",
"msg_date": "Tue, 02 Apr 2024 13:11:27 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 3:41 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> 8 years, we tried to add this feature, and now we suggest the best way\n> for this feature is to commit the minimal version first.\n\nJust curious, do you or anyone else have an immediate use for this\nfunction? If yes, how are they achieving read-after-write-consistency\non streaming standbys in their application right now without a\nfunction like this?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 2 Apr 2024 15:45:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "\nBharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n\n> On Tue, Apr 2, 2024 at 3:41 PM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n>>\n>> 8 years, we tried to add this feature, and now we suggest the best way\n>> for this feature is to commit the minimal version first.\n>\n> Just curious, do you or anyone else have an immediate use for this\n> function? If yes, how are they achieving read-after-write-consistency\n> on streaming standbys in their application right now without a\n> function like this?\n\nThe link [1] may be helpful and I think the reasoning there is reasonable\nto me.\n\nActually we also discussed how to ensure \"read your writes\nconsistency\" internally, and the solution here looks good to me.\n\nGlad to know that this patch will be committed very soon. \n\n[1]\nhttps://www.postgresql.org/message-id/CAPpHfdtuiL1x4APTs7u1fCmxkVp2-ZruXcdCfprDMdnOzvdC%2BA%40mail.gmail.com \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Tue, 02 Apr 2024 19:43:46 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 1:11 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n> On 2024-04-02 11:14, Andy Fan wrote:\n> > If so, when users call pg_wal_replay_wait, they can get informed when\n> > the wal is replaied to the target_lsn, but they can't know how long\n> > time\n> > it waits unless they check it in application side, I think such\n> > information will be useful for monitor purpose sometimes.\n> >\n> > select pg_wal_replay_wait(lsn, 1000); may just take 1ms in fact, in\n> > this case, I want this function return 1.\n>\n> Hi Andy, to get timing we can use \\time in psql.\n> Here is an example.\n> postgres=# \\timing\n> Timing is on.\n> postgres=# select 1;\n> ?column?\n> ----------\n> 1\n> (1 row)\n>\n> Time: 0.536 ms\n>\n>\n> > <returnvalue>void</returnvalue>\n> And returning VOID is the best option, rather than returning TRUE|FALSE\n> or timing. It left the logic of the procedure very simple, we get an\n> error if LSN is not reached.\n>\n> 8 years, we tried to add this feature, and now we suggest the best way\n> for this feature is to commit the minimal version first.\n>\n> Let's discuss further improvements in future versions.\n\n+1,\nIt seems there is no precedent yet of a builtin PostgreSQL function\nreturning its duration. And I don't think we need to introduce such a\nprecedent, at least now. This would place the\nresponsibility for monitoring resource usage on the application.\n\nI'd also like to note that pg_wal_replay_wait() comes with a dedicated\nwait event. So, one could monitor the average duration of these waits\nusing sampling of wait events.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 2 Apr 2024 15:25:49 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Apr 2, 2024 at 2:47 PM Andy Fan <zhihuifan1213@163.com> wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>\n> > On Tue, Apr 2, 2024 at 3:41 PM Kartyshov Ivan\n> > <i.kartyshov@postgrespro.ru> wrote:\n> >>\n> >> 8 years, we tried to add this feature, and now we suggest the best way\n> >> for this feature is to commit the minimal version first.\n> >\n> > Just curious, do you or anyone else have an immediate use for this\n> > function? If yes, how are they achieving read-after-write-consistency\n> > on streaming standbys in their application right now without a\n> > function like this?\n>\n> The link [1] may be helpful and I think the reason there is reasonable\n> to me.\n>\n> Actually we also disucss how to make sure the \"read your writes\n> consistency\" internally, and the soluation here looks good to me.\n>\n> Glad to know that this patch will be committed very soon.\n>\n> [1]\n> https://www.postgresql.org/message-id/CAPpHfdtuiL1x4APTs7u1fCmxkVp2-ZruXcdCfprDMdnOzvdC%2BA%40mail.gmail.com\n\nThank you for your feedback.\n\nI also can confirm that a lot of users would be very happy to have\n\"read your writes consistency\" and ready to do something to achieve\nthis at an application level. However, they typically don't know what\nexactly they need.\n\nSo, blogging about pg_wal_replay_wait() and spreading words about it\nat conferences would be highly appreciated.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 2 Apr 2024 15:31:40 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 2024-04-02 13:15, Bharath Rupireddy wrote:\n> On Tue, Apr 2, 2024 at 3:41 PM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n>> \n>> 8 years, we tried to add this feature, and now we suggest the best way\n>> for this feature is to commit the minimal version first.\n> \n> Just curious, do you or anyone else have an immediate use for this\n> function? If yes, how are they achieving read-after-write-consistency\n> on streaming standbys in their application right now without a\n> function like this?\n\nRight now, the application runs pg_current_wal_lsn() after the update and then\npolls pg_current_wal_flush_lsn() in a loop until it is reached.\n\nOr uses slow synchronous_commit.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com\n\n\n",
"msg_date": "Tue, 02 Apr 2024 18:21:34 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
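For readers of this archive, the client-side workaround Ivan describes above — capture an LSN on the primary, then poll the standby until its replay position catches up — can be sketched in Python. The database calls are abstracted behind a hypothetical `get_standby_lsn` callable (in practice a query against the standby), and `parse_lsn` handles PostgreSQL's `hi/lo` hexadecimal LSN text format:

```python
import time

def parse_lsn(lsn: str) -> int:
    """Convert a PostgreSQL LSN string like '0/3000060' to an integer
    (high 32 bits / low 32 bits, both hexadecimal)."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def poll_until_replayed(get_standby_lsn, target_lsn: str,
                        timeout_s: float = 10.0,
                        interval_s: float = 0.01) -> bool:
    """Client-side polling workaround: repeatedly compare the standby's
    replay position against the target captured on the primary, until
    it catches up or the deadline passes."""
    target = parse_lsn(target_lsn)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if parse_lsn(get_standby_lsn()) >= target:
            return True
        time.sleep(interval_s)
    return False
```

This is exactly the busy-polling that a server-side `pg_wal_replay_wait()` avoids: the server can instead sleep on a latch and be woken by the startup process when the target is replayed.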
{
"msg_contents": "Hello, I noticed that commit 06c418e163e9 uses waitLSN->mutex (a\nspinlock) to protect the contents of waitLSN -- and it's used to walk an\narbitrarily long list of processes waiting ... and also, an arbitrary\nnumber of processes could be calling this code. I think using a\nspinlock for this is unwise, as it'll cause busy-waiting whenever\nthere's contention. Wouldn't it be better to use an LWLock for this?\nThen the processes would sleep until the lock is freed.\n\nWhile nosing about the code, other things struck me:\n\nI think there should be more comments about WaitLSNProcInfo and\nWaitLSNState in waitlsn.h.\n\nIn addLSNWaiter it'd be better to assign 'cur' before acquiring the\nlock.\n\nIs a plain array really the most efficient data structure for this,\nconsidering that you have to reorder each time you add an element?\nMaybe it is, but wouldn't it make sense to use memmove() when adding one\nelement rather than iterating all the remaining elements to the end of the\nqueue?\n\nI think the include list in waitlsn.c could be tightened a bit:\n\n@@ -18,28 +18,18 @@\n #include <math.h>\n \n #include \"pgstat.h\"\n-#include \"fmgr.h\"\n-#include \"access/transam.h\"\n-#include \"access/xact.h\"\n #include \"access/xlog.h\"\n-#include \"access/xlogdefs.h\"\n #include \"access/xlogrecovery.h\"\n-#include \"catalog/pg_type.h\"\n #include \"commands/waitlsn.h\"\n-#include \"executor/spi.h\"\n #include \"funcapi.h\"\n #include \"miscadmin.h\"\n-#include \"storage/ipc.h\"\n #include \"storage/latch.h\"\n-#include \"storage/pmsignal.h\"\n #include \"storage/proc.h\"\n #include \"storage/shmem.h\"\n-#include \"storage/sinvaladt.h\"\n-#include \"utils/builtins.h\"\n #include \"utils/pg_lsn.h\"\n #include \"utils/snapmgr.h\"\n-#include \"utils/timestamp.h\"\n #include \"utils/fmgrprotos.h\"\n+#include \"utils/wait_event_types.h\"\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" 
(Lisias)\n\n\n",
"msg_date": "Wed, 3 Apr 2024 08:58:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
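The sorted-array design Álvaro questions above — keep waiters ordered by target LSN so that each replay-position update wakes only a satisfied prefix — can be sketched in Python for readers following along. Names like `LSNWaiterQueue` are invented for illustration; the server-side structure is a fixed C array in shared memory, and `bisect.insort` here plays the role of a `memmove()`-style single-pass shift:

```python
import bisect

class LSNWaiterQueue:
    """Sketch of a waiter list kept sorted by target LSN, so that on each
    replay-position update only a prefix of satisfied waiters is woken."""
    def __init__(self):
        self._waiters = []  # sorted list of (target_lsn, proc_id)

    def add(self, target_lsn: int, proc_id: int) -> None:
        # insort shifts the tail in one pass, like memmove() would in C.
        bisect.insort(self._waiters, (target_lsn, proc_id))

    def remove(self, proc_id: int) -> None:
        # cleanup path: drop a waiter that aborted or exited
        self._waiters = [w for w in self._waiters if w[1] != proc_id]

    def wake_up_to(self, replayed_lsn: int) -> list:
        """Return proc_ids whose target LSN is already replayed;
        only the leading prefix of the sorted list is scanned."""
        i = bisect.bisect_right(self._waiters, (replayed_lsn, float("inf")))
        woken = [proc for _, proc in self._waiters[:i]]
        del self._waiters[:i]
        return woken
```

The trade-off discussed in the thread is visible here: insertion into a sorted array is O(n) in the worst case, which motivates the pairing-heap alternative raised a few messages later.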
{
"msg_contents": "Buildfarm animal mamba (NetBSD 10.0 on macppc) started failing on this with\nwhat seems like a bogus compiler warning:\n\nwaitlsn.c: In function 'WaitForLSN':\nwaitlsn.c:275:24: error: 'endtime' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 275 | delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n | ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~\ncc1: all warnings being treated as errors\n\nendtime is indeed initialized further up, but initializing endtime at\ndeclaration seems innocent enough and should make this compiler happy and the\nbuildfarm greener.\n\ndiff --git a/src/backend/commands/waitlsn.c b/src/backend/commands/waitlsn.c\nindex 6679378156..17ad0057ad 100644\n--- a/src/backend/commands/waitlsn.c\n+++ b/src/backend/commands/waitlsn.c\n@@ -226,7 +226,7 @@ void\n WaitForLSN(XLogRecPtr targetLSN, int64 timeout)\n {\n XLogRecPtr currentLSN;\n- TimestampTz endtime;\n+ TimestampTz endtime = 0;\n\n /* Shouldn't be called when shmem isn't initialized */\n Assert(waitLSN);\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 3 Apr 2024 09:58:29 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "\n> I also can confirm that a lot of users would be very happy to have\n> \"read your writes consistency\" and ready to do something to achieve\n> this at an application level. However, they typically don't know what\n> exactly they need.\n>\n> So, blogging about pg_wal_replay_wait() and spreading words about it\n> at conferences would be highly appreciated.\n\nSure, once it is committed, I promise I can do a knowledge-sharing session in\nour organization and write an article in Chinese. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n",
"msg_date": "Wed, 03 Apr 2024 16:36:33 +0800",
"msg_from": "Andy Fan <zhihuifan1213@163.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Alvaro!\n\nThank you for your feedback.\n\nOn Wed, Apr 3, 2024 at 9:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Hello, I noticed that commit 06c418e163e9 uses waitLSN->mutex (a\n> spinlock) to protect the contents of waitLSN -- and it's used to walk an\n> arbitrary long list of processes waiting ... and also, an arbitrary\n> number of processes could be calling this code. I think using a\n> spinlock for this is unwise, as it'll cause busy-waiting whenever\n> there's contention. Wouldn't it be better to use an LWLock for this?\n> Then the processes would sleep until the lock is freed.\n>\n> While nosing about the code, other things struck me:\n>\n> I think there should be more comments about WaitLSNProcInfo and\n> WaitLSNState in waitlsn.h.\n>\n> In addLSNWaiter it'd be better to assign 'cur' before acquiring the\n> lock.\n>\n> Is a plan array really the most efficient data structure for this,\n> considering that you have to reorder each time you add an element?\n> Maybe it is, but wouldn't it make sense to use memmove() when adding one\n> element rather iterating all the remaining elements to the end of the\n> queue?\n>\n> I think the include list in waitlsn.c could be tightened a bit:\n\nI've just pushed a commit, which shortens the include list and fixes the\norder of assigning 'cur' and taking the spinlock in addLSNWaiter().\n\nRegarding the shmem data structure for LSN waiters. I didn't pick\nLWLock or ConditionVariable, because I needed the ability to wake up\nonly those waiters whose LSN is already replayed. In my experience\nwaking up a process is way slower than scanning a short flat array.\n\nHowever, I agree that when the number of waiters is very high, a flat\narray may become a problem. It seems that the pairing heap is not\nhard to use for shmem structures. The only memory allocation call in\npairingheap.c is in pairingheap_allocate(). 
So, it's only needed to\nbe able to initialize the pairing heap in-place, and it will be fine\nfor shmem.\n\nI'll come back with switching to the pairing heap shortly.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 3 Apr 2024 11:42:11 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
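The pairing-heap alternative Alexander sketches above — O(log n) amortized insertion with the minimum target LSN always on top, so wakeup pops only satisfied waiters — can be illustrated in Python. Here the standard-library `heapq` binary heap stands in for PostgreSQL's `pairingheap.c` (a different heap variant with the same interface for this purpose); all class and method names are invented for illustration:

```python
import heapq

class LSNWaiterHeap:
    """Min-heap keyed by target LSN — a stand-in for the shmem pairing
    heap discussed above. Insertion needs no array shifting, and waking
    satisfied waiters repeatedly pops the heap minimum."""
    def __init__(self):
        self._heap = []  # entries: (target_lsn, proc_id)

    def add(self, target_lsn: int, proc_id: int) -> None:
        heapq.heappush(self._heap, (target_lsn, proc_id))

    def wake_up_to(self, replayed_lsn: int) -> list:
        """Pop and return proc_ids whose target LSN is already replayed;
        stops as soon as the heap minimum exceeds replayed_lsn."""
        woken = []
        while self._heap and self._heap[0][0] <= replayed_lsn:
            woken.append(heapq.heappop(self._heap)[1])
        return woken
```

Compared with the sorted flat array, the heap gives cheap insertion at the cost of not keeping a totally ordered list — which is fine here, since only the waiters at or below the current replay position ever need to be examined.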
{
"msg_contents": "Hello Alexander,\n\nOn 2024-Apr-03, Alexander Korotkov wrote:\n\n> Regarding the shmem data structure for LSN waiters. I didn't pick\n> LWLock or ConditionVariable, because I needed the ability to wake up\n> only those waiters whose LSN is already replayed. In my experience\n> waking up a process is way slower than scanning a short flat array.\n\nI agree, but I think that's unrelated to what I was saying, which is\njust the patch I attach here.\n\n> However, I agree that when the number of waiters is very high and flat\n> array may become a problem. It seems that the pairing heap is not\n> hard to use for shmem structures. The only memory allocation call in\n> paritingheap.c is in pairingheap_allocate(). So, it's only needed to\n> be able to initialize the pairing heap in-place, and it will be fine\n> for shmem.\n\nOk.\n\nWith the code as it stands today, everything in WaitLSNState apart from\nthe pairing heap is accessed without any locking. I think this is at\nleast partly OK because each backend only accesses its own entry; but it\ndeserves a comment. Or maybe something more, because WaitLSNSetLatches\ndoes modify the entry for other backends. (Admittedly, this could only\nhappen for backends that are already sleeping, and it only happens\nwith the lock acquired, so it's probably okay. But clearly it deserves\na comment.)\n\nDon't we need to WaitLSNCleanup() during error recovery or something?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)",
"msg_date": "Wed, 3 Apr 2024 18:55:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 7:55 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Apr-03, Alexander Korotkov wrote:\n>\n> > Regarding the shmem data structure for LSN waiters. I didn't pick\n> > LWLock or ConditionVariable, because I needed the ability to wake up\n> > only those waiters whose LSN is already replayed. In my experience\n> > waking up a process is way slower than scanning a short flat array.\n>\n> I agree, but I think that's unrelated to what I was saying, which is\n> just the patch I attach here.\n\nOh, sorry for the confusion. I'd re-read your message. Indeed you\nmeant this very clearly!\n\nI'm good with the patch. Attached revision contains a bit of a commit message.\n\n> > However, I agree that when the number of waiters is very high and flat\n> > array may become a problem. It seems that the pairing heap is not\n> > hard to use for shmem structures. The only memory allocation call in\n> > paritingheap.c is in pairingheap_allocate(). So, it's only needed to\n> > be able to initialize the pairing heap in-place, and it will be fine\n> > for shmem.\n>\n> Ok.\n>\n> With the code as it stands today, everything in WaitLSNState apart from\n> the pairing heap is accessed without any locking. I think this is at\n> least partly OK because each backend only accesses its own entry; but it\n> deserves a comment. Or maybe something more, because WaitLSNSetLatches\n> does modify the entry for other backends. (Admittedly, this could only\n> happens for backends that are already sleeping, and it only happens\n> with the lock acquired, so it's probably okay. But clearly it deserves\n> a comment.)\n\nPlease, check 0002 patch attached. I found it easier to move two\nassignments we previously moved out of lock, into the lock; then claim\nWaitLSNState.procInfos is also protected by WaitLSNLock.\n\n> Don't we need to WaitLSNCleanup() during error recovery or something?\n\nYes, there is WaitLSNCleanup(). 
It's currently only called from one\nplace, given that waiting for LSN can't be invoked from background\nworkers or inside the transaction.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Wed, 3 Apr 2024 21:17:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Wed, 3 Apr 2024 at 22:18, Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Wed, Apr 3, 2024 at 7:55 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> >\n> > On 2024-Apr-03, Alexander Korotkov wrote:\n> >\n> > > Regarding the shmem data structure for LSN waiters. I didn't pick\n> > > LWLock or ConditionVariable, because I needed the ability to wake up\n> > > only those waiters whose LSN is already replayed. In my experience\n> > > waking up a process is way slower than scanning a short flat array.\n> >\n> > I agree, but I think that's unrelated to what I was saying, which is\n> > just the patch I attach here.\n>\n> Oh, sorry for the confusion. I'd re-read your message. Indeed you\n> meant this very clearly!\n>\n> I'm good with the patch. Attached revision contains a bit of a commit\n> message.\n>\n> > > However, I agree that when the number of waiters is very high and flat\n> > > array may become a problem. It seems that the pairing heap is not\n> > > hard to use for shmem structures. The only memory allocation call in\n> > > paritingheap.c is in pairingheap_allocate(). So, it's only needed to\n> > > be able to initialize the pairing heap in-place, and it will be fine\n> > > for shmem.\n> >\n> > Ok.\n> >\n> > With the code as it stands today, everything in WaitLSNState apart from\n> > the pairing heap is accessed without any locking. I think this is at\n> > least partly OK because each backend only accesses its own entry; but it\n> > deserves a comment. Or maybe something more, because WaitLSNSetLatches\n> > does modify the entry for other backends. (Admittedly, this could only\n> > happens for backends that are already sleeping, and it only happens\n> > with the lock acquired, so it's probably okay. But clearly it deserves\n> > a comment.)\n>\n> Please, check 0002 patch attached. 
I found it easier to move two\n> assignments we previously moved out of lock, into the lock; then claim\n> WaitLSNState.procInfos is also protected by WaitLSNLock.\n>\nCould you re-attach 0002. Seems it failed to attach to the previous\nmessage.\n\nRegards,\nPavel",
"msg_date": "Wed, 3 Apr 2024 23:04:05 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Wed, Apr 3, 2024 at 10:04 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> On Wed, 3 Apr 2024 at 22:18, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>>\n>> On Wed, Apr 3, 2024 at 7:55 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> >\n>> > On 2024-Apr-03, Alexander Korotkov wrote:\n>> >\n>> > > Regarding the shmem data structure for LSN waiters. I didn't pick\n>> > > LWLock or ConditionVariable, because I needed the ability to wake up\n>> > > only those waiters whose LSN is already replayed. In my experience\n>> > > waking up a process is way slower than scanning a short flat array.\n>> >\n>> > I agree, but I think that's unrelated to what I was saying, which is\n>> > just the patch I attach here.\n>>\n>> Oh, sorry for the confusion. I'd re-read your message. Indeed you\n>> meant this very clearly!\n>>\n>> I'm good with the patch. Attached revision contains a bit of a commit message.\n>>\n>> > > However, I agree that when the number of waiters is very high and flat\n>> > > array may become a problem. It seems that the pairing heap is not\n>> > > hard to use for shmem structures. The only memory allocation call in\n>> > > paritingheap.c is in pairingheap_allocate(). So, it's only needed to\n>> > > be able to initialize the pairing heap in-place, and it will be fine\n>> > > for shmem.\n>> >\n>> > Ok.\n>> >\n>> > With the code as it stands today, everything in WaitLSNState apart from\n>> > the pairing heap is accessed without any locking. I think this is at\n>> > least partly OK because each backend only accesses its own entry; but it\n>> > deserves a comment. Or maybe something more, because WaitLSNSetLatches\n>> > does modify the entry for other backends. (Admittedly, this could only\n>> > happens for backends that are already sleeping, and it only happens\n>> > with the lock acquired, so it's probably okay. But clearly it deserves\n>> > a comment.)\n>>\n>> Please, check 0002 patch attached. 
I found it easier to move two\n>> assignments we previously moved out of lock, into the lock; then claim\n>> WaitLSNState.procInfos is also protected by WaitLSNLock.\n>\n> Could you re-attach 0002. Seems it failed to attach to the previous message.\n\nI actually forgot both!\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Thu, 4 Apr 2024 00:35:37 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hello,\n\nBTW I noticed that \nhttps://coverage.postgresql.org/src/backend/commands/waitlsn.c.gcov.html\nsays that lsn_cmp is not covered by the tests. This probably indicates\nthat the tests are a little too light, but I'm not sure how much extra\neffort we want to spend.\n\nI'm still concerned that WaitLSNCleanup is only called in ProcKill.\nDoes this mean that if a process throws an error while waiting, it'll\nnot get cleaned up until it exits? Maybe this is not a big deal, but it\nseems odd.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n",
"msg_date": "Fri, 5 Apr 2024 20:15:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Alvaro!\n\nThank you for your care on this matter.\n\nOn Fri, Apr 5, 2024 at 9:15 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> BTW I noticed that\n> https://coverage.postgresql.org/src/backend/commands/waitlsn.c.gcov.html\n> says that lsn_cmp is not covered by the tests. This probably indicates\n> that the tests are a little too light, but I'm not sure how much extra\n> effort we want to spend.\n\nI'm aware of this. Ivan promised to send a patch to improve the test.\nIf he doesn't, I'll care about it.\n\n> I'm still concerned that WaitLSNCleanup is only called in ProcKill.\n> Does this mean that if a process throws an error while waiting, it'll\n> not get cleaned up until it exits? Maybe this is not a big deal, but it\n> seems odd.\n\nI've added WaitLSNCleanup() to the AbortTransaction(). Just pushed\nthat together with the improvements upthread.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Sun, 7 Apr 2024 00:52:47 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "I did some experiments over synchronous replications and\ngot that cascade replication can`t be synchronous. And \npg_wal_replay_wait() allows us to read your writes\nconsistency on cascade replication.\nBeyond that, I added more tests on multi-standby replication\nand cascade replications.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Wed, 10 Apr 2024 18:12:00 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 07/04/2024 00:52, Alexander Korotkov wrote:\n> On Fri, Apr 5, 2024 at 9:15 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> I'm still concerned that WaitLSNCleanup is only called in ProcKill.\n>> Does this mean that if a process throws an error while waiting, it'll\n>> not get cleaned up until it exits? Maybe this is not a big deal, but it\n>> seems odd.\n> \n> I've added WaitLSNCleanup() to the AbortTransaction(). Just pushed\n> that together with the improvements upthread.\n\nRace condition:\n\n1. backend: pg_wal_replay_wait('0/1234') is called. It calls WaitForLSN\n2. backend: WaitForLSN calls addLSNWaiter('0/1234'). It adds the backend \nprocess to the LSN heap and returns\n3. replay: rm_redo record '0/1234'\n4. backend: WaitForLSN enters for-loop, calls GetXLogReplayRecPtr()\n5. backend: current replay LSN location is '0/1234', so we exit the loop\n6. replay: calls WaitLSNSetLatches()\n\nIn a nutshell, it's possible for the loop in WaitForLSN to exit without \ncleaning up the process from the heap. I was able to hit that by adding \na delay after the addLSNWaiter() call:\n\n> TRAP: failed Assert(\"!procInfo->inHeap\"), File: \"../src/backend/commands/waitlsn.c\", Line: 114, PID: 1936152\n> postgres: heikki postgres [local] CALL(ExceptionalCondition+0xab)[0x55da1f68787b]\n> postgres: heikki postgres [local] CALL(+0x331ec8)[0x55da1f204ec8]\n> postgres: heikki postgres [local] CALL(WaitForLSN+0x139)[0x55da1f2052cc]\n> postgres: heikki postgres [local] CALL(pg_wal_replay_wait+0x18b)[0x55da1f2056e5]\n> postgres: heikki postgres [local] CALL(ExecuteCallStmt+0x46e)[0x55da1f18031a]\n> postgres: heikki postgres [local] CALL(standard_ProcessUtility+0x8cf)[0x55da1f4b26c9]\n\nI think there's a similar race condition if the timeout is reached at \nthe same time that the startup process wakes up the process.\n\n> \t * At first, we check that pg_wal_replay_wait() is called in a non-atomic\n> \t * context. 
That is, a procedure call isn't wrapped into a transaction,\n> \t * another procedure call, or a function call.\n> \t *\n\nIt's pretty unfortunate to have all these restrictions. It would be nice \nto do:\n\nselect pg_wal_replay_wait('0/1234'); select * from foo;\n\nin a single multi-query call, to avoid the round-trip to the client. You \ncan avoid it with libpq or protocol level pipelining, too, but it's more \ncomplicated.\n\n> \t * Secondly, according to PlannedStmtRequiresSnapshot(), even in an atomic\n> \t * context, CallStmt is processed with a snapshot. Thankfully, we can pop\n> \t * this snapshot, because PortalRunUtility() can tolerate this.\n\nThis assumption that PortalRunUtility() can tolerate us popping the \nsnapshot sounds very fishy. I haven't looked at what's going on there, \nbut it doesn't sound like a great assumption.\n\nIf recovery ends while a process is waiting for an LSN to arrive, does \nit ever get woken up?\n\nThe docs could use some copy-editing, but just to point out one issue:\n\n> There are also procedures to control the progress of recovery.\n\nThat's copy-pasted from an earlier sentence at the table that lists \nfunctions like pg_promote(), pg_wal_replay_pause(), and \npg_is_wal_replay_paused(). The pg_wal_replay_wait() doesn't control the \nprogress of recovery like those functions do, it only causes the calling \nbackend to wait.\n\nOverall, this feature doesn't feel quite ready for v17, and IMHO should \nbe reverted. It's a nice feature, so I'd love to have it fixed and \nreviewed early in the v18 cycle.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 11 Apr 2024 01:46:04 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
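To make the race Heikki describes easier to follow, here is a toy model of the waiter heap (Python, purely illustrative — the structure and names such as `add_waiter` are invented and only loosely mirror `waitlsn.c`). It shows why a waiter must remove itself from the heap unconditionally after its wait loop, even when it exits the loop without ever being woken:

```python
import heapq

class WaitLSNState:
    """Toy model of the shared structure: a min-heap of waiters keyed by target LSN."""

    def __init__(self):
        self.heap = []        # (target_lsn, proc_id) pairs, min-LSN first
        self.in_heap = set()  # proc_ids currently registered

    def add_waiter(self, lsn, proc):
        """Backend registers itself before starting its wait loop."""
        heapq.heappush(self.heap, (lsn, proc))
        self.in_heap.add(proc)

    def delete_waiter(self, proc):
        """Unconditional removal after the wait loop: a no-op if the
        startup process already woke us and removed our entry."""
        if proc not in self.in_heap:
            return
        self.in_heap.discard(proc)
        self.heap = [(l, p) for (l, p) in self.heap if p != proc]
        heapq.heapify(self.heap)

    def set_latches(self, replayed_lsn):
        """Startup process: wake every waiter whose target LSN is replayed."""
        woken = []
        while self.heap and self.heap[0][0] <= replayed_lsn:
            _, proc = heapq.heappop(self.heap)
            self.in_heap.discard(proc)
            woken.append(proc)
        return woken

state = WaitLSNState()
state.add_waiter(0x1234, 1)
# The backend can observe that replay already passed its target LSN and
# exit its loop before the startup process walks the heap; it must still
# call delete_waiter(), otherwise a stale entry survives until ProcKill:
state.delete_waiter(1)
assert 1 not in state.in_heap
```

In the C code the equivalent fix is what Alexander suggests downthread: calling deleteLSNWaiter() unconditionally after the loop in WaitForLSN().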
{
"msg_contents": "Hi, Heikki!\n\nThank you for your interest in the subject.\n\nOn Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 07/04/2024 00:52, Alexander Korotkov wrote:\n> > On Fri, Apr 5, 2024 at 9:15 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> I'm still concerned that WaitLSNCleanup is only called in ProcKill.\n> >> Does this mean that if a process throws an error while waiting, it'll\n> >> not get cleaned up until it exits? Maybe this is not a big deal, but it\n> >> seems odd.\n> >\n> > I've added WaitLSNCleanup() to the AbortTransaction(). Just pushed\n> > that together with the improvements upthread.\n>\n> Race condition:\n>\n> 1. backend: pg_wal_replay_wait('0/1234') is called. It calls WaitForLSN\n> 2. backend: WaitForLSN calls addLSNWaiter('0/1234'). It adds the backend\n> process to the LSN heap and returns\n> 3. replay: rm_redo record '0/1234'\n> 4. backend: WaitForLSN enters for-loop, calls GetXLogReplayRecPtr()\n> 5. backend: current replay LSN location is '0/1234', so we exit the loop\n> 6. replay: calls WaitLSNSetLatches()\n>\n> In a nutshell, it's possible for the loop in WaitForLSN to exit without\n> cleaning up the process from the heap. 
I was able to hit that by adding\n> a delay after the addLSNWaiter() call:\n>\n> > TRAP: failed Assert(\"!procInfo->inHeap\"), File: \"../src/backend/commands/waitlsn.c\", Line: 114, PID: 1936152\n> > postgres: heikki postgres [local] CALL(ExceptionalCondition+0xab)[0x55da1f68787b]\n> > postgres: heikki postgres [local] CALL(+0x331ec8)[0x55da1f204ec8]\n> > postgres: heikki postgres [local] CALL(WaitForLSN+0x139)[0x55da1f2052cc]\n> > postgres: heikki postgres [local] CALL(pg_wal_replay_wait+0x18b)[0x55da1f2056e5]\n> > postgres: heikki postgres [local] CALL(ExecuteCallStmt+0x46e)[0x55da1f18031a]\n> > postgres: heikki postgres [local] CALL(standard_ProcessUtility+0x8cf)[0x55da1f4b26c9]\n>\n> I think there's a similar race condition if the timeout is reached at\n> the same time that the startup process wakes up the process.\n\nThank you for catching this. I think WaitForLSN() just needs to call\ndeleteLSNWaiter() unconditionally after exit from the loop.\n\n> > * At first, we check that pg_wal_replay_wait() is called in a non-atomic\n> > * context. That is, a procedure call isn't wrapped into a transaction,\n> > * another procedure call, or a function call.\n> > *\n>\n> It's pretty unfortunate to have all these restrictions. It would be nice\n> to do:\n>\n> select pg_wal_replay_wait('0/1234'); select * from foo;\n\nThis works for me, except that it needs \"call\" not \"select\".\n\n# call pg_wal_replay_wait('0/1234'); select * from foo;\nCALL\n i\n---\n(0 rows)\n\n> in a single multi-query call, to avoid the round-trip to the client. You\n> can avoid it with libpq or protocol level pipelining, too, but it's more\n> complicated.\n>\n> > * Secondly, according to PlannedStmtRequiresSnapshot(), even in an atomic\n> > * context, CallStmt is processed with a snapshot. Thankfully, we can pop\n> > * this snapshot, because PortalRunUtility() can tolerate this.\n>\n> This assumption that PortalRunUtility() can tolerate us popping the\n> snapshot sounds very fishy. 
I haven't looked at what's going on there,\n> but doesn't sound like a great assumption.\n\nThis is what PortalRunUtility() says about this.\n\n/*\n * Some utility commands (e.g., VACUUM) pop the ActiveSnapshot stack from\n * under us, so don't complain if it's now empty. Otherwise, our snapshot\n * should be the top one; pop it. Note that this could be a different\n * snapshot from the one we made above; see EnsurePortalSnapshotExists.\n */\n\nSo, if the vacuum pops a snapshot when it needs to run without a\nsnapshot, then it's probably OK for other utilities. But I agree this\ndecision needs some consensus.\n\n> If recovery ends while a process is waiting for an LSN to arrive, does\n> it ever get woken up?\n\nIf the recovery target is promote, then the user will get an error.\nIf the recovery target is shutdown, then connection will get\ninterrupted. If the recovery target is pause, then waiting will\ncontinue during the pause. Not sure about the latter case.\n\n> The docs could use some-copy-editing, but just to point out one issue:\n>\n> > There are also procedures to control the progress of recovery.\n>\n> That's copy-pasted from an earlier sentence at the table that lists\n> functions like pg_promote(), pg_wal_replay_pause(), and\n> pg_is_wal_replay_paused(). The pg_wal_replay_wait() doesn't control the\n> progress of recovery like those functions do, it only causes the calling\n> backend to wait.\n>\n> Overall, this feature doesn't feel quite ready for v17, and IMHO should\n> be reverted. It's a nice feature, so I'd love to have it fixed and\n> reviewed early in the v18 cycle.\n\nThank you for your review. I've reverted this. Will repost this for early v18.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 11 Apr 2024 18:09:43 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On 11/04/2024 18:09, Alexander Korotkov wrote:\n> On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 07/04/2024 00:52, Alexander Korotkov wrote:\n>>> * At first, we check that pg_wal_replay_wait() is called in a non-atomic\n>>> * context. That is, a procedure call isn't wrapped into a transaction,\n>>> * another procedure call, or a function call.\n>>> *\n>>\n>> It's pretty unfortunate to have all these restrictions. It would be nice\n>> to do:\n>>\n>> select pg_wal_replay_wait('0/1234'); select * from foo;\n> \n> This works for me, except that it needs \"call\" not \"select\".\n> \n> # call pg_wal_replay_wait('0/1234'); select * from foo;\n> CALL\n> i\n> ---\n> (0 rows)\n\nIf you do that from psql prompt, it works because psql parses and sends \nit as two separate round-trips. Try:\n\npsql postgres -p 5433 -c \"call pg_wal_replay_wait('0/4101BBB8'); select 1\"\nERROR: pg_wal_replay_wait() must be only called in non-atomic context\nDETAIL: Make sure pg_wal_replay_wait() isn't called within a \ntransaction, another procedure, or a function.\n\n>> This assumption that PortalRunUtility() can tolerate us popping the\n>> snapshot sounds very fishy. I haven't looked at what's going on there,\n>> but doesn't sound like a great assumption.\n> \n> This is what PortalRunUtility() says about this.\n> \n> /*\n> * Some utility commands (e.g., VACUUM) pop the ActiveSnapshot stack from\n> * under us, so don't complain if it's now empty. Otherwise, our snapshot\n> * should be the top one; pop it. Note that this could be a different\n> * snapshot from the one we made above; see EnsurePortalSnapshotExists.\n> */\n> \n> So, if the vacuum pops a snapshot when it needs to run without a\n> snapshot, then it's probably OK for other utilities. 
But I agree this\n> decision needs some consensus.\n\nOk, so it's at least somewhat documented that it's fine.\n\n>> Overall, this feature doesn't feel quite ready for v17, and IMHO should\n>> be reverted. It's a nice feature, so I'd love to have it fixed and\n>> reviewed early in the v18 cycle.\n> \n> Thank you for your review. I've reverted this. Will repost this for early v18.\n\nThanks Alexander for working on this.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 11 Apr 2024 18:47:16 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Alexander, Here, I made some improvements according to your \ndiscussion with Heikki.\n\nOn 2024-04-11 18:09, Alexander Korotkov wrote:\n> On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi> \n> wrote:\n>> In a nutshell, it's possible for the loop in WaitForLSN to exit \n>> without\n>> cleaning up the process from the heap. I was able to hit that by \n>> adding\n>> a delay after the addLSNWaiter() call:\n>> \n>> > TRAP: failed Assert(\"!procInfo->inHeap\"), File: \"../src/backend/commands/waitlsn.c\", Line: 114, PID: 1936152\n>> > postgres: heikki postgres [local] CALL(ExceptionalCondition+0xab)[0x55da1f68787b]\n>> > postgres: heikki postgres [local] CALL(+0x331ec8)[0x55da1f204ec8]\n>> > postgres: heikki postgres [local] CALL(WaitForLSN+0x139)[0x55da1f2052cc]\n>> > postgres: heikki postgres [local] CALL(pg_wal_replay_wait+0x18b)[0x55da1f2056e5]\n>> > postgres: heikki postgres [local] CALL(ExecuteCallStmt+0x46e)[0x55da1f18031a]\n>> > postgres: heikki postgres [local] CALL(standard_ProcessUtility+0x8cf)[0x55da1f4b26c9]\n>> \n>> I think there's a similar race condition if the timeout is reached at\n>> the same time that the startup process wakes up the process.\n> \n> Thank you for catching this. I think WaitForLSN() just needs to call\n> deleteLSNWaiter() unconditionally after exit from the loop.\n\nFix and add injection point test on this race condition.\n\n> On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi> \n> wrote:\n>> The docs could use some-copy-editing, but just to point out one issue:\n>> \n>> > There are also procedures to control the progress of recovery.\n>> \n>> That's copy-pasted from an earlier sentence at the table that lists\n>> functions like pg_promote(), pg_wal_replay_pause(), and\n>> pg_is_wal_replay_paused(). 
The pg_wal_replay_wait() doesn't control \n>> the\n>> progress of recovery like those functions do, it only causes the \n>> calling\n>> backend to wait.\n\nFix documentation and add extra tests on multi-standby replication\nand cascade replication.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Wed, 12 Jun 2024 11:36:05 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Ivan!\n\nOn Wed, Jun 12, 2024 at 11:36 AM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n>\n> Hi, Alexander, Here, I made some improvements according to your\n> discussion with Heikki.\n>\n> On 2024-04-11 18:09, Alexander Korotkov wrote:\n> > On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> > wrote:\n> >> In a nutshell, it's possible for the loop in WaitForLSN to exit\n> >> without\n> >> cleaning up the process from the heap. I was able to hit that by\n> >> adding\n> >> a delay after the addLSNWaiter() call:\n> >>\n> >> > TRAP: failed Assert(\"!procInfo->inHeap\"), File: \"../src/backend/commands/waitlsn.c\", Line: 114, PID: 1936152\n> >> > postgres: heikki postgres [local] CALL(ExceptionalCondition+0xab)[0x55da1f68787b]\n> >> > postgres: heikki postgres [local] CALL(+0x331ec8)[0x55da1f204ec8]\n> >> > postgres: heikki postgres [local] CALL(WaitForLSN+0x139)[0x55da1f2052cc]\n> >> > postgres: heikki postgres [local] CALL(pg_wal_replay_wait+0x18b)[0x55da1f2056e5]\n> >> > postgres: heikki postgres [local] CALL(ExecuteCallStmt+0x46e)[0x55da1f18031a]\n> >> > postgres: heikki postgres [local] CALL(standard_ProcessUtility+0x8cf)[0x55da1f4b26c9]\n> >>\n> >> I think there's a similar race condition if the timeout is reached at\n> >> the same time that the startup process wakes up the process.\n> >\n> > Thank you for catching this. I think WaitForLSN() just needs to call\n> > deleteLSNWaiter() unconditionally after exit from the loop.\n>\n> Fix and add injection point test on this race condition.\n>\n> > On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> > wrote:\n> >> The docs could use some-copy-editing, but just to point out one issue:\n> >>\n> >> > There are also procedures to control the progress of recovery.\n> >>\n> >> That's copy-pasted from an earlier sentence at the table that lists\n> >> functions like pg_promote(), pg_wal_replay_pause(), and\n> >> pg_is_wal_replay_paused(). 
The pg_wal_replay_wait() doesn't control\n> >> the\n> >> progress of recovery like those functions do, it only causes the\n> >> calling\n> >> backend to wait.\n>\n> Fix documentation and add extra tests on multi-standby replication\n> and cascade replication.\n\nThank you for the revised patch.\n\nI see a couple of items which are not addressed in this revision.\n * As Heikki pointed out, it's currently not possible in one round\ntrip to call pg_wal_replay_wait() and do other work. The attached\npatch addresses this. It relaxes the requirement. Now, it's not\nnecessary to be in an atomic context. It's only necessary to have no\nactive snapshot. This new requirement works for me so far. I\nwould appreciate feedback on this.\n * As Alvaro pointed out, the multiple-waiters case isn't covered by the test\nsuite. That leads to no coverage of some code paths. The attached\npatch addresses this by adding a test case with multiple waiters.\n\nThe rest looks good to me.\n\nLinks.\n1. https://www.postgresql.org/message-id/d1303584-b763-446c-9409-f4516118219f%40iki.fi\n2. https://www.postgresql.org/message-id/202404051815.eri4u5q6oj26%40alvherre.pgsql\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Fri, 14 Jun 2024 15:46:15 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Fri, Jun 14, 2024 at 3:46 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Wed, Jun 12, 2024 at 11:36 AM Kartyshov Ivan\n> <i.kartyshov@postgrespro.ru> wrote:\n> >\n> > Hi, Alexander, Here, I made some improvements according to your\n> > discussion with Heikki.\n> >\n> > On 2024-04-11 18:09, Alexander Korotkov wrote:\n> > > On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> > > wrote:\n> > >> In a nutshell, it's possible for the loop in WaitForLSN to exit\n> > >> without\n> > >> cleaning up the process from the heap. I was able to hit that by\n> > >> adding\n> > >> a delay after the addLSNWaiter() call:\n> > >>\n> > >> > TRAP: failed Assert(\"!procInfo->inHeap\"), File: \"../src/backend/commands/waitlsn.c\", Line: 114, PID: 1936152\n> > >> > postgres: heikki postgres [local] CALL(ExceptionalCondition+0xab)[0x55da1f68787b]\n> > >> > postgres: heikki postgres [local] CALL(+0x331ec8)[0x55da1f204ec8]\n> > >> > postgres: heikki postgres [local] CALL(WaitForLSN+0x139)[0x55da1f2052cc]\n> > >> > postgres: heikki postgres [local] CALL(pg_wal_replay_wait+0x18b)[0x55da1f2056e5]\n> > >> > postgres: heikki postgres [local] CALL(ExecuteCallStmt+0x46e)[0x55da1f18031a]\n> > >> > postgres: heikki postgres [local] CALL(standard_ProcessUtility+0x8cf)[0x55da1f4b26c9]\n> > >>\n> > >> I think there's a similar race condition if the timeout is reached at\n> > >> the same time that the startup process wakes up the process.\n> > >\n> > > Thank you for catching this. 
I think WaitForLSN() just needs to call\n> > > deleteLSNWaiter() unconditionally after exit from the loop.\n> >\n> > Fix and add injection point test on this race condition.\n> >\n> > > On Thu, Apr 11, 2024 at 1:46 AM Heikki Linnakangas <hlinnaka@iki.fi>\n> > > wrote:\n> > >> The docs could use some-copy-editing, but just to point out one issue:\n> > >>\n> > >> > There are also procedures to control the progress of recovery.\n> > >>\n> > >> That's copy-pasted from an earlier sentence at the table that lists\n> > >> functions like pg_promote(), pg_wal_replay_pause(), and\n> > >> pg_is_wal_replay_paused(). The pg_wal_replay_wait() doesn't control\n> > >> the\n> > >> progress of recovery like those functions do, it only causes the\n> > >> calling\n> > >> backend to wait.\n> >\n> > Fix documentation and add extra tests on multi-standby replication\n> > and cascade replication.\n>\n> Thank you for the revised patch.\n>\n> I see couple of items which are not addressed in this revision.\n> * As Heikki pointed, that it's currently not possible in one round\n> trip to call call pg_wal_replay_wait() and do other job. The attached\n> patch addresses this. It milds the requirement. Now, it's not\n> necessary to be in atomic context. It's only necessary to have no\n> active snapshot. This new requirement works for me so far. I\n> appreciate a feedback on this.\n> * As Alvaro pointed, multiple waiters case isn't covered by the test\n> suite. That leads to no coverage of some code paths. The attached\n> patch addresses this by adding a test case with multiple waiters.\n>\n> The rest looks good to me.\n\nOh, I forgot some notes about 044_wal_replay_wait_injection_test.pl.\n\n1. It's not clear why this test needs node_standby2 at all. It seems useless.\n2. The target LSN is set to pg_current_wal_insert_lsn() + 10000. This\nlocation seems to be unachievable in this test. So, it's not clear\nwhat race condition this test could potentially detect.\n3. 
I think it would make sense to check for the race condition\nreported by Heikki. That is to insert the injection point at the\nbeginning of WaitLSNSetLatches().\n\nLinks.\n1. https://www.postgresql.org/message-id/flat/CAPpHfdvGRssjqwX1%2Bidm5Tu-eWsTcx6DthB2LhGqA1tZ29jJaw%40mail.gmail.com#557900e860457a9e24256c93a2ad4920\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Wed, 19 Jun 2024 23:08:50 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, I looked through the patch and have some comments.\n\n\n====== L68:\n+ <title>Recovery Procedures</title>\n\nIt looks somewhat confusing and appears as if the section is intended\nto explain how to perform recovery. Since this is the first built-in\nprocedure, I'm not sure how should this section be written. However,\nthe section immediately above is named \"Recovery Control Functions\",\nso \"Reocvery Synchronization Functions\" would align better with the\nnaming of the upper section. (I don't believe we need to be so strcit\nabout the distinction between functions and procedures here.)\n\nIt looks strange that the procedure signature includes the return type.\n\n\n====== L93:\n+ If <parameter>timeout</parameter> is not specified or zero, this\n+ procedure returns once WAL is replayed upto\n+ <literal>target_lsn</literal>.\n+ If the <parameter>timeout</parameter> is specified (in\n+ milliseconds) and greater than zero, the procedure waits until the\n+ server actually replays the WAL upto <literal>target_lsn</literal> or\n+ until the given time has passed. On timeout, an error is emitted.\n\nThe first sentence should mention the main functionality. Following\nprecedents, it might be better to use something like \"Waits until\nrecovery surpasses the specified LSN. If no timeout is specified or it\nis set to zero, this procedure waits indefinitely for the LSN. If the\ntimeout is specified (in milliseconds) and is greater than zero, the\nprocedure waits until the LSN is reached or the specified time has\nelapsed. On timeout, or if the server is promoted before the LSN is\nreached, an error is emitted.\"\n\nThe detailed explanation that follows the above seems somewhat too\nverbose to me, as other functions don't have such detailed examples.\n\n====== L484\n/*\n+\t * Set latches for processes, whose waited LSNs are already replayed. This\n+\t * involves spinlocks. 
So, we shouldn't do this under a spinlock.\n+\t */\n\nHere, I'm not quite sure what specifically spinlock (or mutex?) is\nreferring to. However, more importantly, shouldn't we explain that it\nis okay not to use any locks at all, rather than mentioning that\nspinlocks should not be used here? I found a similar case around\nfreelist.c:238, which is written as follows.\n\n>\t\t * Not acquiring ProcArrayLock here which is slightly icky. It's\n>\t\t * actually fine because procLatch isn't ever freed, so we just can\n>\t\t * potentially set the wrong process' (or no process') latch.\n>\t\t */\n>\t\tSetLatch(&ProcGlobal->allProcs[bgwprocno].procLatch);\n\n===== L518\n+void\n+WaitForLSN(XLogRecPtr targetLSN, int64 timeout)\n\nThis function is only called within the same module. I'm not sure if\nwe need to expose it. If we do, the name should probably be more\nspecific. I'm not quite sure if the division of functionality between\nthis function and its only caller function is appropriate. As a\npossible refactoring, we could have WaitForLSN() just return the\nresult as [reached, timedout, promoted] and delegate prerequisite\nchecks and error reporting to the SQL function.\n\n\n===== L524\n+\t/* Shouldn't be called when shmem isn't initialized */\n+\tAssert(waitLSN);\n\nSeeing this assertion, I feel that the name \"waitLSN\" is a bit\nobscure. How about renaming it to \"waitLSNStates\"?\n\n===== L527\n+\t/* Should be only called by a backend */\n+\tAssert(MyBackendType == B_BACKEND && MyProcNumber <= MaxBackends);\n\nThis is somewhat excessive, causing a server crash when ending with an\nerror would suffice. 
By the way, if I execute \"CALL\npg_wal_replay_wait('0/0')\" on a logical walsender, the server crashes.\nThe condition doesn't seem appropriate.\n\n\n===== L561\n+\t\t\t\t\t errdetail(\"Recovery ended before replaying the target LSN %X/%X; last replay LSN %X/%X.\",\n\nI don't think we need \"the\" before \"target\" in the above message.\n\n\n===== L565\n+\t\tif (timeout > 0)\n+\t\t{\n+\t\t\tdelay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n+\t\t\tlatch_events |= WL_TIMEOUT;\n+\t\t\tif (delay_ms <= 0)\n+\t\t\t\tbreak;\n\n\"timeout\" is immutable in the function. Therefore, we can calculate\n\"latch_events\" before entering the loop. By the way, the name\n'latch_events' seems a bit off. Latch is a kind of event the function\ncan wait for. Therefore, something like wait_events might be more\nappropriate.\n\n===== L567\n+\t\t\tdelay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n\nWe can use TimestampDifferenceMilliseconds() here.\n\n\n==== L578\n+\t\tif (rc & WL_LATCH_SET)\n+\t\t\tResetLatch(MyLatch);\n\nI think we usually reset latches unconditionally after returning from\nWaitLatch(), even when waiting for timeouts.\n\n\n===== L756\n+{ oid => '16387', descr => 'wait for LSN with timeout',\n\nThe description seems a bit off. While timeout is mentioned, the more\nimportant characteristic that the LSN is the replay LSN is not\nincluded.\n\n===== L791\n+ * WaitLSNProcInfo – the shared memory structure representing information\n\nThis line contains a non-ASCII character, EN DASH (U+2013).\n\n\n===== L798\n+\t * A process number, same as the index of this item in waitLSN->procInfos.\n+\t * Stored for convenience.\n+\t */\n+\tint\t\t\tprocnum;\n\nIt is described as \"(just) for convenience\". However, it is referenced\nby Startup to fetch the PGPROC entry for the waiter, which is necessary\nfor Startup. That aside, why don't we hold (the pointer to) procLatch\ninstead of procnum? 
It makes things simpler and I believe it is our\nstandard practice.\n\n\n===== L809\n+\t/* A flag indicating that this item is added to waitLSN->waitersHeap */\n+\tbool\t\tinHeap;\n\nThe name \"inHeap\" seems too literal and might be hard to understand in\nmost occurrences. How about using \"waiting\" instead?\n\n===== L920\n+# I\n+# Make sure that pg_wal_replay_wait() works: add new content to\n\nHmm. I feel that Arabic numerals look nicer than Roman ones here.\n\n\n===== L940\n+# Check that new data is visible after calling pg_wal_replay_wait()\n\nOn the other hand, the comment for the check for this test states that\n\n> +# Make sure the current LSN on standby and is the same as primary's\n> LSN +ok($output eq 30, \"standby reached the same LSN as primary\");\n\nI think the first comment and the second should be consistent.\n\n\n\n\n> Oh, I forgot some notes about 044_wal_replay_wait_injection_test.pl.\n> \n> 1. It's not clear why this test needs node_standby2 at all. It seems useless.\n\nI agree with you. What we would need is a second *waiter client*\nconnecting to the same standby rather than a second standby. I feel\nlike we should have a test where the first waiter is removed while multiple\nwaiters are waiting, as well as a test where promotion occurs under\nthe same circumstances.\n\n> 2. The target LSN is set to pg_current_wal_insert_lsn() + 10000. This\n> location seems to be unachievable in this test. So, it's not clear\n> what race condition this test could potentially detect.\n> 3. I think it would make sense to check for the race condition\n> reported by Heikki. That is to insert the injection point at the\n> beginning of WaitLSNSetLatches().\n\nI think the race condition you mentioned refers to the inconsistency\nbetween the inHeap flag and the pairing heap caused by a race\ncondition between timeout and wakeup (or perhaps other combinations?\nI'm not sure which version of the patch the mentioned race condition\nrefers to). 
However, I imagine it is difficult to reliably reproduce\nthis condition. In that regard, in the latest patch, the coherence\nbetween the inHeap flag and the pairing heap is protected by LWLock,\nso I believe we no longer need that test.\n\nregards.\n\n\n> Links.\n> 1. https://www.postgresql.org/message-id/flat/CAPpHfdvGRssjqwX1%2Bidm5Tu-eWsTcx6DthB2LhGqA1tZ29jJaw%40mail.gmail.com#557900e860457a9e24256c93a2ad4920\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 20 Jun 2024 17:30:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Thank you for your interest in the patch.\n\nOn 2024-06-20 11:30, Kyotaro Horiguchi wrote:\n> Hi, I looked through the patch and have some comments.\n> \n> \n> ====== L68:\n> + <title>Recovery Procedures</title>\n> \n> It looks somewhat confusing and appears as if the section is intended\n> to explain how to perform recovery. Since this is the first built-in\n> procedure, I'm not sure how should this section be written. However,\n> the section immediately above is named \"Recovery Control Functions\",\n> so \"Reocvery Synchronization Functions\" would align better with the\n> naming of the upper section. (I don't believe we need to be so strcit\n> about the distinction between functions and procedures here.)\n> \n> It looks strange that the procedure signature includes the return type.\n\nGood point, change\nRecovery Procedures -> Recovery Synchronization Procedures\n\n> ====== L93:\n> + If <parameter>timeout</parameter> is not specified or zero, \n> this\n> + procedure returns once WAL is replayed upto\n> + <literal>target_lsn</literal>.\n> + If the <parameter>timeout</parameter> is specified (in\n> + milliseconds) and greater than zero, the procedure waits until \n> the\n> + server actually replays the WAL upto \n> <literal>target_lsn</literal> or\n> + until the given time has passed. On timeout, an error is \n> emitted.\n> \n> The first sentence should mention the main functionality. Following\n> precedents, it might be better to use something like \"Waits until\n> recovery surpasses the specified LSN. If no timeout is specified or it\n> is set to zero, this procedure waits indefinitely for the LSN. If the\n> timeout is specified (in milliseconds) and is greater than zero, the\n> procedure waits until the LSN is reached or the specified time has\n> elapsed. 
On timeout, or if the server is promoted before the LSN is\n> reached, an error is emitted.\"\n> \n> The detailed explanation that follows the above seems somewhat too\n> verbose to me, as other functions don't have such detailed examples.\n\nPlease offer your description. I think it would be better.\n\n> ====== L484\n> /*\n> +\t * Set latches for processes, whose waited LSNs are already replayed. \n> This\n> +\t * involves spinlocks. So, we shouldn't do this under a spinlock.\n> +\t */\n> \n> Here, I'm not quite sure what specifically spinlock (or mutex?) is\n> referring to. However, more importantly, shouldn't we explain that it\n> is okay not to use any locks at all, rather than mentioning that\n> spinlocks should not be used here? I found a similar case around\n> freelist.c:238, which is written as follows.\n> \n>> \t\t * Not acquiring ProcArrayLock here which is slightly icky. It's\n>> \t\t * actually fine because procLatch isn't ever freed, so we just can\n>> \t\t * potentially set the wrong process' (or no process') latch.\n>> \t\t */\n>> \t\tSetLatch(&ProcGlobal->allProcs[bgwprocno].procLatch);\n\n???\n\n> ===== L518\n> +void\n> +WaitForLSN(XLogRecPtr targetLSN, int64 timeout)\n> \n> This function is only called within the same module. I'm not sure if\n> we need to expose it. I we do, the name should probably be more\n> specific. I'm not quite sure if the division of functionality between\n> this function and its only caller function is appropriate. As a\n> possible refactoring, we could have WaitForLSN() just return the\n> result as [reached, timedout, promoted] and delegate prerequisition\n> checks and error reporting to the SQL function.\n\nwaitLSN -> waitLSNStates\nNo, waitLSNStates is not the best name, because waitLSNState is a state,\nand waitLSN is not the array of waitLSNStates. 
We can think about \nanother name; what do you think?\n\n> ===== L524\n> +\t/* Shouldn't be called when shmem isn't initialized */\n> +\tAssert(waitLSN);\n> \n> Seeing this assertion, I feel that the name \"waitLSN\" is a bit\n> obscure. How about renaming it to \"waitLSNStates\"?\n\n\n\n> ===== L527\n> +\t/* Should be only called by a backend */\n> +\tAssert(MyBackendType == B_BACKEND && MyProcNumber <= MaxBackends);\n> \n> This is somewhat excessive, causing a server crash when ending with an\n> error would suffice. By the way, if I execute \"CALL\n> pg_wal_replay_wait('0/0')\" on a logical walsender, the server crashes.\n> The condition doesn't seem appropriate.\n\nCan you give more information on your server crashes, so I can\nreproduce them?\n\n> ===== L565\n> +\t\tif (timeout > 0)\n> +\t\t{\n> +\t\t\tdelay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n> +\t\t\tlatch_events |= WL_TIMEOUT;\n> +\t\t\tif (delay_ms <= 0)\n> +\t\t\t\tbreak;\n> \n> \"timeout\" is immutable in the function. Therefore, we can calculate\n> \"latch_events\" before entering the loop. By the way, the name\n> 'latch_events' seems a bit off. Latch is a kind of event the function\n> can wait for. 
Therefore, something like wait_events might be more\n> appropriate.\n\n\"wait_event\" - it can't be, because in the latch declaration these events\nare responsible for wake-up, not for waiting:\nint WaitLatch(Latch *latch, int wakeEvents, long timeout, uint32 \nwait_event_info)\n\n> ==== L578\n> +\t\tif (rc & WL_LATCH_SET)\n> +\t\t\tResetLatch(MyLatch);\n> \n> I think we usually reset latches unconditionally after returning from\n> WaitLatch(), even when waiting for timeouts.\n\nNo, it depends on your logic, when you have several wake_events and you \nwant to choose which event ignited your latch.\nCheck applyparallelworker.c:813\n\n> ===== L798\n> +\t * A process number, same as the index of this item in \n> waitLSN->procInfos.\n> +\t * Stored for convenience.\n> +\t */\n> +\tint\t\t\tprocnum;\n> \n> It is described as \"(just) for convenience\". However, it is referenced\n> by Startup to fetch the PGPROC entry for the waiter, which is necessary\n> for Startup. That aside, why don't we hold (the pointer to) procLatch\n> instead of procnum? It makes things simpler and I believe it is our\n> standard practice.\n\nCan you explain in more depth what you mean and give an example?\n\n> ===== L809\n> +\t/* A flag indicating that this item is added to waitLSN->waitersHeap \n> */\n> +\tbool\t\tinHeap;\n> \n> The name \"inHeap\" seems too literal and might be hard to understand in\n> most occurrences. How about using \"waiting\" instead?\n\nNo, inHeap literally means \"in the heap\". 
Check the comment.\nPlease suggest a more suitable one.\n\n> ===== L940\n> +# Check that new data is visible after calling pg_wal_replay_wait()\n> \n> On the other hand, the comment for the check for this test states that\n> \n>> +# Make sure the current LSN on standby and is the same as primary's\n>> LSN +ok($output eq 30, \"standby reached the same LSN as primary\");\n> \n> I think the first comment and the second should be consistent.\n\nThanks, I'll rephrase this comment.\n\n>> Oh, I forgot some notes about 044_wal_replay_wait_injection_test.pl.\n>> \n>> 1. It's not clear why this test needs node_standby2 at all. It seems \n>> useless.\n> \n> I agree with you. What we would need is a second *waiter client*\n> connecting to the same standby rather than a second standby. I feel\n> like having a test where the first waiter is removed while multiple\n> waiters are waiting, as well as a test where promotion occurs under\n> the same circumstances.\n\nCan you give more information about these cases step by step, and what\n\"remove\" and \"promotion\" mean?\n\n>> 2. The target LSN is set to pg_current_wal_insert_lsn() + 10000. This\n>> location seems to be unachievable in this test. So, it's not clear\n>> what race condition this test could potentially detect.\n>> 3. I think it would make sense to check for the race condition\n>> reported by Heikki. That is to insert the injection point at the\n>> beginning of WaitLSNSetLatches().\n> \n> I think the race condition you mentioned refers to the inconsistency\n> between the inHeap flag and the pairing heap caused by a race\n> condition between timeout and wakeup (or perhaps other combinations?\n> I'm not sure which version of the patch the mentioned race condition\n> refers to). 
In that regard, in the latest patch, the coherence\n> between the inHeap flag and the pairing heap is protected by LWLock,\n> so I believe we no longer need that test.\n\nNo, Alexandre means that Heikki point on race condition just before\nLWLock. But injection point we can inject and stepin on backend, and\nWaitLSNSetLatches is used from Recovery process. But I have trouble\nto wakeup injection point on server.\n\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com",
"msg_date": "Tue, 09 Jul 2024 12:51:23 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": ">> I think the race condition you mentioned refers to the inconsistency\n>> between the inHeap flag and the pairing heap caused by a race\n>> condition between timeout and wakeup (or perhaps other combinations?\n>> I'm not sure which version of the patch the mentioned race condition\n>> refers to). However, I imagine it is difficult to reliably reproduce\n>> this condition. In that regard, in the latest patch, the coherence\n>> between the inHeap flag and the pairing heap is protected by LWLock,\n>> so I believe we no longer need that test.\n> \n> No, Alexander means that Heikki pointed at a race condition just before\n> the LWLock. We can inject an injection point and step in on a backend,\n> but WaitLSNSetLatches is called from the recovery process, and I have\n> trouble waking up the injection point on the server.\n\nOne more thing. I want to point out that when you move the injection point\nto the contrib dir and run the same test (044_wal_replay_wait_injection_test.pl)\nstep by step by hand, wakeup works well.\n\n-- \nIvan Kartyshov\nPostgres Professional: www.postgrespro.com\n\n\n",
"msg_date": "Wed, 10 Jul 2024 07:58:23 +0300",
"msg_from": "Kartyshov Ivan <i.kartyshov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Thanks to Kyotaro for the review. And thanks to Ivan for the patch\nrevision. I made another revision of the patch.\n\nOn Tue, Jul 9, 2024 at 12:51 PM Kartyshov Ivan\n<i.kartyshov@postgrespro.ru> wrote:\n> Thank you for your interest in the patch.\n>\n> On 2024-06-20 11:30, Kyotaro Horiguchi wrote:\n> > Hi, I looked through the patch and have some comments.\n> >\n> >\n> > ====== L68:\n> > + <title>Recovery Procedures</title>\n> >\n> > It looks somewhat confusing and appears as if the section is intended\n> > to explain how to perform recovery. Since this is the first built-in\n> > procedure, I'm not sure how should this section be written. However,\n> > the section immediately above is named \"Recovery Control Functions\",\n> > so \"Reocvery Synchronization Functions\" would align better with the\n> > naming of the upper section. (I don't believe we need to be so strcit\n> > about the distinction between functions and procedures here.)\n> >\n> > It looks strange that the procedure signature includes the return type.\n>\n> Good point, change\n> Recovery Procedures -> Recovery Synchronization Procedures\n\nThank you, looks good to me.\n\n> > ====== L93:\n> > + If <parameter>timeout</parameter> is not specified or zero,\n> > this\n> > + procedure returns once WAL is replayed upto\n> > + <literal>target_lsn</literal>.\n> > + If the <parameter>timeout</parameter> is specified (in\n> > + milliseconds) and greater than zero, the procedure waits until\n> > the\n> > + server actually replays the WAL upto\n> > <literal>target_lsn</literal> or\n> > + until the given time has passed. On timeout, an error is\n> > emitted.\n> >\n> > The first sentence should mention the main functionality. Following\n> > precedents, it might be better to use something like \"Waits until\n> > recovery surpasses the specified LSN. If no timeout is specified or it\n> > is set to zero, this procedure waits indefinitely for the LSN. 
If the\n> > timeout is specified (in milliseconds) and is greater than zero, the\n> > procedure waits until the LSN is reached or the specified time has\n> > elapsed. On timeout, or if the server is promoted before the LSN is\n> > reached, an error is emitted.\"\n> >\n> > The detailed explanation that follows the above seems somewhat too\n> > verbose to me, as other functions don't have such detailed examples.\n>\n> Please offer your description. I think it would be better.\n\nKyotaro actually provided a paragraph in his message. I've integrated\nit into the patch.\n\n> > ====== L484\n> > /*\n> > + * Set latches for processes, whose waited LSNs are already replayed.\n> > This\n> > + * involves spinlocks. So, we shouldn't do this under a spinlock.\n> > + */\n> >\n> > Here, I'm not quite sure what specifically spinlock (or mutex?) is\n> > referring to. However, more importantly, shouldn't we explain that it\n> > is okay not to use any locks at all, rather than mentioning that\n> > spinlocks should not be used here? I found a similar case around\n> > freelist.c:238, which is written as follows.\n> >\n> >> * Not acquiring ProcArrayLock here which is slightly icky. It's\n> >> * actually fine because procLatch isn't ever freed, so we just can\n> >> * potentially set the wrong process' (or no process') latch.\n> >> */\n> >> SetLatch(&ProcGlobal->allProcs[bgwprocno].procLatch);\n>\n> ???\n\nI've revised the comment.\n\n> > ===== L518\n> > +void\n> > +WaitForLSN(XLogRecPtr targetLSN, int64 timeout)\n> >\n> > This function is only called within the same module. I'm not sure if\n> > we need to expose it. I we do, the name should probably be more\n> > specific. I'm not quite sure if the division of functionality between\n> > this function and its only caller function is appropriate. 
As a\n> > possible refactoring, we could have WaitForLSN() just return the\n> > result as [reached, timedout, promoted] and delegate prerequisite\n> > checks and error reporting to the SQL function.\n\nI think WaitForLSNReplay() is a better name. And it's not clear what\nAPI we could need. So I would prefer to keep it static for now.\n\n> waitLSN -> waitLSNStates\n> No, waitLSNStates is not the best name, because waitLSNState is a state,\n> and waitLSN is not the array of waitLSNStates. We can think about\n> another name; what do you think?\n>\n> > ===== L524\n> > + /* Shouldn't be called when shmem isn't initialized */\n> > + Assert(waitLSN);\n> >\n> > Seeing this assertion, I feel that the name \"waitLSN\" is a bit\n> > obscure. How about renaming it to \"waitLSNStates\"?\n\nI agree that waitLSN is too generic. I've renamed it to waitLSNState.\n\n> > ===== L527\n> > + /* Should be only called by a backend */\n> > + Assert(MyBackendType == B_BACKEND && MyProcNumber <= MaxBackends);\n> >\n> > This is somewhat excessive, causing a server crash when ending with an\n> > error would suffice. By the way, if I execute \"CALL\n> > pg_wal_replay_wait('0/0')\" on a logical walsender, the server crashes.\n> > The condition doesn't seem appropriate.\n>\n> Can you give more information on your server crashes, so I can\n> reproduce them?\n\nI've rechecked this. We don't need to assert the process to be a\nbackend given that MaxBackends includes background workers too.\nReplaced this with just an assert that MyProcNumber is valid.\n\n> > ===== L565\n> > + if (timeout > 0)\n> > + {\n> > + delay_ms = (endtime - GetCurrentTimestamp()) / 1000;\n> > + latch_events |= WL_TIMEOUT;\n> > + if (delay_ms <= 0)\n> > + break;\n> >\n> > \"timeout\" is immutable in the function. Therefore, we can calculate\n> > \"latch_events\" before entering the loop. By the way, the name\n> > 'latch_events' seems a bit off. Latch is a kind of event the function\n> > can wait for. 
Therefore, something like wait_events might be more\n> > appropriate.\n>\n> \"wait_event\" - it can't be, because in the latch declaration these events\n> are responsible for wake-up, not for waiting:\n> int WaitLatch(Latch *latch, int wakeEvents, long timeout, uint32\n> wait_event_info)\n\nI agree with the change \"latch_events\" => \"wake_events\" by Ivan. I also\nmade a change to calculate wake_events in advance as proposed by\nKyotaro.\n\n> > ==== L578\n> > + if (rc & WL_LATCH_SET)\n> > + ResetLatch(MyLatch);\n> >\n> > I think we usually reset latches unconditionally after returning from\n> > WaitLatch(), even when waiting for timeouts.\n>\n> No, it depends on your logic, when you have several wake_events and you\n> want to choose which event ignited your latch.\n> Check applyparallelworker.c:813\n\n+1.\nIt's not necessary to reset the latch if it hasn't been set. There are\na lot of places where we do that conditionally.\n\n> > ===== L798\n> > + * A process number, same as the index of this item in\n> > waitLSN->procInfos.\n> > + * Stored for convenience.\n> > + */\n> > + int procnum;\n> >\n> > It is described as \"(just) for convenience\". However, it is referenced\n> > by Startup to fetch the PGPROC entry for the waiter, which is necessary\n> > for Startup. That aside, why don't we hold (the pointer to) procLatch\n> > instead of procnum? It makes things simpler and I believe it is our\n> > standard practice.\n>\n> Can you explain in more depth what you mean and give an example?\n\nI wrote the comment that we store procnum for convenience, meaning that\nalternatively we could calculate it as the offset in the\nwaitLSN->procInfos array. But I like your idea to keep the latch instead.\nThis should give more flexibility for future advancements.\n\n> > ===== L809\n> > + /* A flag indicating that this item is added to waitLSN->waitersHeap\n> > */\n> > + bool inHeap;\n> >\n> > The name \"inHeap\" seems too literal and might be hard to understand in\n> > most occurrences. 
How about using \"waiting\" instead?\n>\n> No, inHeap literally means \"in the heap\". Check the comment.\n> Please suggest a more suitable one.\n\n+1,\nWe use this flag for consistency during operations with the heap. I don't\nsee a reason to rename this.\n\n> > ===== L940\n> > +# Check that new data is visible after calling pg_wal_replay_wait()\n> >\n> > On the other hand, the comment for the check for this test states that\n> >\n> >> +# Make sure the current LSN on standby and is the same as primary's\n> >> LSN +ok($output eq 30, \"standby reached the same LSN as primary\");\n> >\n> > I think the first comment and the second should be consistent.\n>\n> Thanks, I'll rephrase this comment.\n\nI had to revise this comment further.\n\n> >> Oh, I forgot some notes about 044_wal_replay_wait_injection_test.pl.\n> >>\n> >> 1. It's not clear why this test needs node_standby2 at all. It seems\n> >> useless.\n> >\n> > I agree with you. What we would need is a second *waiter client*\n> > connecting to the same standby rather than a second standby. I feel\n> > like having a test where the first waiter is removed while multiple\n> > waiters are waiting, as well as a test where promotion occurs under\n> > the same circumstances.\n>\n> Can you give more information about these cases step by step, and what\n> \"remove\" and \"promotion\" mean?\n>\n> >> 2. The target LSN is set to pg_current_wal_insert_lsn() + 10000. This\n> >> location seems to be unachievable in this test. So, it's not clear\n> >> what race condition this test could potentially detect.\n> >> 3. I think it would make sense to check for the race condition\n> >> reported by Heikki. 
That is to insert the injection point at the\n> >> beginning of WaitLSNSetLatches().\n> >\n> > I think the race condition you mentioned refers to the inconsistency\n> > between the inHeap flag and the pairing heap caused by a race\n> > condition between timeout and wakeup (or perhaps other combinations?\n> > I'm not sure which version of the patch the mentioned race condition\n> > refers to). However, I imagine it is difficult to reliably reproduce\n> > this condition. In that regard, in the latest patch, the coherence\n> > between the inHeap flag and the pairing heap is protected by LWLock,\n> > so I believe we no longer need that test.\n>\n> No, Alexander means that Heikki pointed at a race condition just before\n> the LWLock. We can inject an injection point and step in on a backend,\n> but WaitLSNSetLatches is called from the recovery process, and I have\n> trouble waking up the injection point on the server.\n\nMy initial hope was to add an elegant enough test that would check for\nconcurrency issues between the startup process and an lsn waiter process.\nThat test should check the issue reported by Heikki and hopefully more.\nI've tried to revise 044_wal_replay_wait_injection_test.pl, but it\nbecame cumbersome and ultimately unclear what it was intended to test.\nThis is why I finally decided to remove it.\n\nRegarding tests for multiple waiters, 043_wal_replay_wait.pl has some.\n\n------\nRegards,\nAlexander Korotkov\nSupabase
"msg_date": "Mon, 15 Jul 2024 04:24:19 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 4:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Thanks to Kyotaro for the review. And thanks to Ivan for the patch\n> revision. I made another revision of the patch.\n\nI've noticed failures on cfbot. The attached revision addressed docs\nbuild failure. Also it adds some \"quits\" for background psql sessions\nfor tests. Probably this will address test hangs on windows.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Mon, 15 Jul 2024 14:02:03 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Mon, Jul 15, 2024 at 2:02 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Mon, Jul 15, 2024 at 4:24 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > Thanks to Kyotaro for the review. And thanks to Ivan for the patch\n> > revision. I made another revision of the patch.\n>\n> I've noticed failures on cfbot. The attached revision addressed docs\n> build failure. Also it adds some \"quits\" for background psql sessions\n> for tests. Probably this will address test hangs on windows.\n\nI made the following changes to the patch.\n\n1) I've changed the check for snapshot in pg_wal_replay_wait(). Now\nit checks that GetOldestSnapshot() returns NULL. It happens when both\nActiveSnapshot is NULL and the RegisteredSnapshots pairing heap is empty.\nThis is the same condition under which SnapshotResetXmin() sets our xmin to\ninvalid. Thus, we are not preventing WAL from being replayed. This should be\nsatisfied when pg_wal_replay_wait() isn't called within a transaction\nwith an isolation level higher than READ COMMITTED, another procedure,\nor a function. Documented it this way.\n\n2) Explicitly document in the PortalRunUtility() comment that\npg_wal_replay_wait() is another case when the active snapshot gets\nreleased.\n\n3) I've removed tests with cascading replication. It's rather unclear\nwhat problem these tests could potentially spot.\n\n4) Did some improvements to docs, comments and the commit message to make\nthem consistent with the patch contents.\n\nThe commit to pg17 was ill-considered. But I feel this patch is in much\nbetter shape now. Especially, now it's clear when the\npg_wal_replay_wait() procedure can be used. So, I dare commit this to\npg18 if nobody objects.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Wed, 31 Jul 2024 19:40:48 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "I noticed the commit and had a question and a comment.\nThere is a small problem in func.sgml in the sentence \"After that the\nchanges made of primary\". Should be \"on primary\".\n\nIn the for loop in WaitForLSNReplay, shouldn't the check for in-recovery be\nmoved up above the call to GetXLogReplayRecPtr?\nIf we get promoted while waiting for the timeout we could\ncall GetXLogReplayRecPtr while not in recovery.\n\nThanks,\nKevin.",
"msg_date": "Fri, 2 Aug 2024 18:44:58 -0600",
"msg_from": "Kevin Hale Boyes <kcboyes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Aug 3, 2024 at 3:45 AM Kevin Hale Boyes <kcboyes@gmail.com> wrote:\n> I noticed the commit and had a question and a comment.\n> There is a small problem in func.sgml in the sentence \"After that the changes made of primary\". Should be \"on primary\".\n\nThank you for spotting this. Will fix.\n\n> In the for loop in WaitForLSNReplay, shouldn't the check for in-recovery be moved up above the call to GetXLogReplayRecPtr?\n> If we get promoted while waiting for the timeout we could call GetXLogReplayRecPtr while not in recovery.\n\nThis is intentional. After standby gets promoted,\nGetXLogReplayRecPtr() returns the last WAL position being replayed\nwhile being standby. So, if standby reached target lsn before being\npromoted, we don't have to throw an error.\n\nBut this gave me an idea that before the loop we probably need to put\nRecoveryInProgress() check after GetXLogReplayRecPtr() too. I'll\nrecheck that.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 3 Aug 2024 06:07:32 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Aug 3, 2024 at 6:07 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sat, Aug 3, 2024 at 3:45 AM Kevin Hale Boyes <kcboyes@gmail.com> wrote:\n> > In the for loop in WaitForLSNReplay, shouldn't the check for in-recovery be moved up above the call to GetXLogReplayRecPtr?\n> > If we get promoted while waiting for the timeout we could call GetXLogReplayRecPtr while not in recovery.\n>\n> This is intentional. After standby gets promoted,\n> GetXLogReplayRecPtr() returns the last WAL position being replayed\n> while being standby. So, if standby reached target lsn before being\n> promoted, we don't have to throw an error.\n>\n> But this gave me an idea that before the loop we probably need to put\n> RecoveryInProgress() check after GetXLogReplayRecPtr() too. I'll\n> recheck that.\n\nThe attached patchset comprises assorted improvements for pg_wal_replay_wait().\n\nThe 0001 patch is intended to improve this situation. Actually, it's\nnot right to just put RecoveryInProgress() after\nGetXLogReplayRecPtr(), because more WAL could be replayed between\nthese calls. Instead we need to recheck GetXLogReplayRecPtr() after\ngetting a negative result from RecoveryInProgress(), because the WAL\nreplay position can't get updated afterwards.\nThe 0002 patch comprises a fix for the header comment of the WaitLSNSetLatches() function.\nThe 0003 patch comprises tests for pg_wal_replay_wait() errors.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Tue, 6 Aug 2024 05:17:10 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Aug 06, 2024 at 05:17:10AM +0300, Alexander Korotkov wrote:\n> The 0001 patch is intended to improve this situation. Actually, it's\n> not right to just put RecoveryInProgress() after\n> GetXLogReplayRecPtr(), because more wal could be replayed between\n> these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> getting negative result of RecoveryInProgress() because WAL replay\n> position couldn't get updated after.\n> 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> 0003 patch comprises tests for pg_wal_replay_wait() errors.\n\nBefore adding more tests, could it be possible to stabilize what's in\nthe tree? drongo has reported one failure with the recovery test\n043_wal_replay_wait.pl introduced recently by 3c5db1d6b016:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54\n--\nMichael",
"msg_date": "Tue, 6 Aug 2024 14:36:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 8:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Aug 06, 2024 at 05:17:10AM +0300, Alexander Korotkov wrote:\n> > The 0001 patch is intended to improve this situation. Actually, it's\n> > not right to just put RecoveryInProgress() after\n> > GetXLogReplayRecPtr(), because more wal could be replayed between\n> > these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> > getting negative result of RecoveryInProgress() because WAL replay\n> > position couldn't get updated after.\n> > 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> > 0003 patch comprises tests for pg_wal_replay_wait() errors.\n>\n> Before adding more tests, could it be possible to stabilize what's in\n> the tree? drongo has reported one failure with the recovery test\n> 043_wal_replay_wait.pl introduced recently by 3c5db1d6b016:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54\n\nThank you for pointing this out!\nSurely, I'll fix this first.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Tue, 6 Aug 2024 11:18:05 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 11:18 AM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n> On Tue, Aug 6, 2024 at 8:36 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n> > On Tue, Aug 06, 2024 at 05:17:10AM +0300, Alexander Korotkov wrote:\n> > > The 0001 patch is intended to improve this situation. Actually, it's\n> > > not right to just put RecoveryInProgress() after\n> > > GetXLogReplayRecPtr(), because more wal could be replayed between\n> > > these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> > > getting negative result of RecoveryInProgress() because WAL replay\n> > > position couldn't get updated after.\n> > > 0002 patch comprises fix for the header comment of\nWaitLSNSetLatches() function\n> > > 0003 patch comprises tests for pg_wal_replay_wait() errors.\n> >\n> > Before adding more tests, could it be possible to stabilize what's in\n> > the tree? drongo has reported one failure with the recovery test\n> > 043_wal_replay_wait.pl introduced recently by 3c5db1d6b016:\n> >\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54\n>\n> Thank you for pointing!\n> Surely, I'll fix this before.\n\nSomething breaks in these lines during second iteration of the loop.\n\"SELECT pg_current_wal_insert_lsn()\" has been queried from primary, but\nstandby didn't receive \"CALL pg_wal_replay_wait('...');\"\n\nfor (my $i = 0; $i < 5; $i++)\n{\n print($i);\n $node_primary->safe_psql('postgres',\n \"INSERT INTO wait_test VALUES (${i});\");\n my $lsn =\n $node_primary->safe_psql('postgres',\n \"SELECT pg_current_wal_insert_lsn()\");\n $psql_sessions[$i] = $node_standby1->background_psql('postgres');\n $psql_sessions[$i]->query_until(\n qr/start/, qq[\n \\\\echo start\n CALL pg_wal_replay_wait('${lsn}');\n SELECT log_count(${i});\n ]);\n}\n\nI wonder what could it be. Probably something hangs inside launching\nbackground psql... 
I'll investigate this more.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Tue, 6 Aug 2024 13:23:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 8:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, Aug 06, 2024 at 05:17:10AM +0300, Alexander Korotkov wrote:\n> > The 0001 patch is intended to improve this situation. Actually, it's\n> > not right to just put RecoveryInProgress() after\n> > GetXLogReplayRecPtr(), because more wal could be replayed between\n> > these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> > getting negative result of RecoveryInProgress() because WAL replay\n> > position couldn't get updated after.\n> > 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> > 0003 patch comprises tests for pg_wal_replay_wait() errors.\n>\n> Before adding more tests, could it be possible to stabilize what's in\n> the tree? drongo has reported one failure with the recovery test\n> 043_wal_replay_wait.pl introduced recently by 3c5db1d6b016:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54\n\nI'm currently running a 043_wal_replay_wait test in a loop of drongo.\nNo failures during more than 10 hours. As I pointed in [1] it seems\nthat test stuck somewhere on launching BackgroundPsql. Given that\ndrongo have some strange failures from time to time (for instance [2]\nor [3]), I doubt there is something specifically wrong in\n043_wal_replay_wait test that caused the subject failure.\n\nTherefore, while I'm going to continue looking at the reason of\nfailure on drongo in background, I'm going to go ahead with my\nimprovements for pg_wal_replay_wait().\n\nLinks.\n1. https://www.postgresql.org/message-id/CAPpHfduYkve0sw-qy4aCCmJv_MXfuuAQ7wyRQsX8NjaLVKDE1Q%40mail.gmail.com\n2. https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-02%2010%3A34%3A45\n3. https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-06-06%2012%3A36%3A11\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Sat, 10 Aug 2024 18:58:31 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Aug 6, 2024 at 5:17 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Sat, Aug 3, 2024 at 6:07 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Sat, Aug 3, 2024 at 3:45 AM Kevin Hale Boyes <kcboyes@gmail.com> wrote:\n> > > In the for loop in WaitForLSNReplay, shouldn't the check for in-recovery be moved up above the call to GetXLogReplayRecPtr?\n> > > If we get promoted while waiting for the timeout we could call GetXLogReplayRecPtr while not in recovery.\n> >\n> > This is intentional. After standby gets promoted,\n> > GetXLogReplayRecPtr() returns the last WAL position being replayed\n> > while being standby. So, if standby reached target lsn before being\n> > promoted, we don't have to throw an error.\n> >\n> > But this gave me an idea that before the loop we probably need to put\n> > RecoveryInProgress() check after GetXLogReplayRecPtr() too. I'll\n> > recheck that.\n>\n> The attached patchset comprises assorted improvements for pg_wal_replay_wait().\n>\n> The 0001 patch is intended to improve this situation. Actually, it's\n> not right to just put RecoveryInProgress() after\n> GetXLogReplayRecPtr(), because more wal could be replayed between\n> these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> getting negative result of RecoveryInProgress() because WAL replay\n> position couldn't get updated after.\n> 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> 0003 patch comprises tests for pg_wal_replay_wait() errors.\n\nHere is a revised version of the patchset. I've fixed some typos,\nidentation, etc. I'm going to push this once it passes cfbot.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Sat, 10 Aug 2024 19:33:57 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 7:33 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Aug 6, 2024 at 5:17 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Sat, Aug 3, 2024 at 6:07 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > > On Sat, Aug 3, 2024 at 3:45 AM Kevin Hale Boyes <kcboyes@gmail.com> wrote:\n> > > > In the for loop in WaitForLSNReplay, shouldn't the check for in-recovery be moved up above the call to GetXLogReplayRecPtr?\n> > > > If we get promoted while waiting for the timeout we could call GetXLogReplayRecPtr while not in recovery.\n> > >\n> > > This is intentional. After standby gets promoted,\n> > > GetXLogReplayRecPtr() returns the last WAL position being replayed\n> > > while being standby. So, if standby reached target lsn before being\n> > > promoted, we don't have to throw an error.\n> > >\n> > > But this gave me an idea that before the loop we probably need to put\n> > > RecoveryInProgress() check after GetXLogReplayRecPtr() too. I'll\n> > > recheck that.\n> >\n> > The attached patchset comprises assorted improvements for pg_wal_replay_wait().\n> >\n> > The 0001 patch is intended to improve this situation. Actually, it's\n> > not right to just put RecoveryInProgress() after\n> > GetXLogReplayRecPtr(), because more wal could be replayed between\n> > these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> > getting negative result of RecoveryInProgress() because WAL replay\n> > position couldn't get updated after.\n> > 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> > 0003 patch comprises tests for pg_wal_replay_wait() errors.\n>\n> Here is a revised version of the patchset. I've fixed some typos,\n> identation, etc. I'm going to push this once it passes cfbot.\n\nThe next revison of the patchset fixes uninitialized variable usage\nspotted by cfbot.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Sat, 10 Aug 2024 20:18:53 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Sat, Aug 10, 2024 at 6:58 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Aug 6, 2024 at 8:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Tue, Aug 06, 2024 at 05:17:10AM +0300, Alexander Korotkov wrote:\n> > > The 0001 patch is intended to improve this situation. Actually, it's\n> > > not right to just put RecoveryInProgress() after\n> > > GetXLogReplayRecPtr(), because more wal could be replayed between\n> > > these calls. Instead we need to recheck GetXLogReplayRecPtr() after\n> > > getting negative result of RecoveryInProgress() because WAL replay\n> > > position couldn't get updated after.\n> > > 0002 patch comprises fix for the header comment of WaitLSNSetLatches() function\n> > > 0003 patch comprises tests for pg_wal_replay_wait() errors.\n> >\n> > Before adding more tests, could it be possible to stabilize what's in\n> > the tree? drongo has reported one failure with the recovery test\n> > 043_wal_replay_wait.pl introduced recently by 3c5db1d6b016:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-08-05%2004%3A24%3A54\n>\n> I'm currently running a 043_wal_replay_wait test in a loop of drongo.\n> No failures during more than 10 hours. As I pointed in [1] it seems\n> that test stuck somewhere on launching BackgroundPsql. Given that\n> drongo have some strange failures from time to time (for instance [2]\n> or [3]), I doubt there is something specifically wrong in\n> 043_wal_replay_wait test that caused the subject failure.\n\nWith help of Andrew Dunstan, I've run 043_wal_replay_wait.pl in a loop\nfor two days, then the whole test suite also for two days. Haven't\nseen any failures. I don't see the point to run more experiments,\nbecause Andrew needs to bring drongo back online as a buildfarm\nmember. It might happen that something exceptional happened on drongo\n(like inability to launch a new process or something). 
For now, I\nthink the reasonable strategy would be to wait and see if something\nsimilar will repeat on buildfarm.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n",
"msg_date": "Tue, 13 Aug 2024 00:25:47 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi Alexander,\n\n10.08.2024 20:18, Alexander Korotkov wrote:\n> On Sat, Aug 10, 2024 at 7:33 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> On Tue, Aug 6, 2024 at 5:17 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>> ...\n>> Here is a revised version of the patchset. I've fixed some typos,\n>> identation, etc. I'm going to push this once it passes cfbot.\n> The next revison of the patchset fixes uninitialized variable usage\n> spotted by cfbot.\n\nWhen running check-world on a rather slow armv7 device, I came across the\n043_wal_replay_wait.pl test failure:\nt/043_wal_replay_wait.pl .............. 7/? # Tests were run but no plan was declared and done_testing() was not seen.\n# Looks like your test exited with 29 just after 8.\n\nregress_log_043_wal_replay_wait contains:\n...\n01234[21:58:56.370](1.594s) ok 7 - multiple LSN waiters reported consistent data\n### Promoting node \"standby\"\n# Running: pg_ctl -D .../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata -l \n.../src/test/recovery/tmp_check/log/043_wal_replay_wait_standby.log promote\nwaiting for server to promote.... 
done\nserver promoted\n[21:58:56.637](0.268s) ok 8 - got error after standby promote\nerror running SQL: 'psql:<stdin>:1: ERROR: recovery is not in progress\nHINT: Waiting for LSN can only be executed during recovery.'\nwhile running 'psql -XAtq -d port=10228 host=/tmp/Ftj8qpTQht dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CALL \npg_wal_replay_wait('0/300D0E8');' at .../src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.\n\n043_wal_replay_wait_standby.log contains:\n2024-09-12 21:58:56.518 UTC [15220:1] [unknown] LOG: connection received: host=[local]\n2024-09-12 21:58:56.520 UTC [15220:2] [unknown] LOG: connection authenticated: user=\"android\" method=trust \n(.../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata/pg_hba.conf:117)\n2024-09-12 21:58:56.520 UTC [15220:3] [unknown] LOG: connection authorized: user=android database=postgres \napplication_name=043_wal_replay_wait.pl\n2024-09-12 21:58:56.527 UTC [15220:4] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('2/570CB4E8');\n2024-09-12 21:58:56.535 UTC [15123:7] LOG: received promote request\n2024-09-12 21:58:56.535 UTC [15124:2] FATAL: terminating walreceiver process due to administrator command\n2024-09-12 21:58:56.537 UTC [15123:8] LOG: invalid record length at 0/300D0B0: expected at least 24, got 0\n2024-09-12 21:58:56.537 UTC [15123:9] LOG: redo done at 0/300D088 system usage: CPU: user: 0.01 s, system: 0.00 s, \nelapsed: 14.23 s\n2024-09-12 21:58:56.537 UTC [15123:10] LOG: last completed transaction was at log time 2024-09-12 21:58:55.322831+00\n2024-09-12 21:58:56.540 UTC [15123:11] LOG: selected new timeline ID: 2\n2024-09-12 21:58:56.589 UTC [15123:12] LOG: archive recovery complete\n2024-09-12 21:58:56.590 UTC [15220:5] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n2024-09-12 21:58:56.590 UTC [15220:6] 043_wal_replay_wait.pl DETAIL: Recovery ended before replaying target LSN \n2/570CB4E8; last replay LSN 
0/300D0B0.\n2024-09-12 21:58:56.591 UTC [15121:1] LOG: checkpoint starting: force\n2024-09-12 21:58:56.592 UTC [15220:7] 043_wal_replay_wait.pl LOG: disconnection: session time: 0:00:00.075 user=android \ndatabase=postgres host=[local]\n2024-09-12 21:58:56.595 UTC [15120:4] LOG: database system is ready to accept connections\n2024-09-12 21:58:56.665 UTC [15227:1] [unknown] LOG: connection received: host=[local]\n2024-09-12 21:58:56.668 UTC [15227:2] [unknown] LOG: connection authenticated: user=\"android\" method=trust \n(.../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata/pg_hba.conf:117)\n2024-09-12 21:58:56.668 UTC [15227:3] [unknown] LOG: connection authorized: user=android database=postgres \napplication_name=043_wal_replay_wait.pl\n2024-09-12 21:58:56.675 UTC [15227:4] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('0/300D0E8');\n2024-09-12 21:58:56.677 UTC [15227:5] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n2024-09-12 21:58:56.677 UTC [15227:6] 043_wal_replay_wait.pl HINT: Waiting for LSN can only be executed during recovery.\n2024-09-12 21:58:56.679 UTC [15227:7] 043_wal_replay_wait.pl LOG: disconnection: session time: 0:00:00.015 user=android \ndatabase=postgres host=[local]\n\nNote that last replay LSN is 300D0B0, but the latter pg_wal_replay_wait\ncall wants to wait for 300D0E8.\n\npg_waldump -p src/test/recovery/tmp_check/t_043_wal_replay_wait_primary_data/pgdata/pg_wal/ 000000010000000000000003\nshows:\nrmgr: Heap len (rec/tot): 59/ 59, tx: 748, lsn: 0/0300D048, prev 0/0300D020, desc: INSERT off: 35, \nflags: 0x00, blkref #0: rel 1663/5/16384 blk 0\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 748, lsn: 0/0300D088, prev 0/0300D048, desc: COMMIT \n2024-09-12 21:58:55.322831 UTC\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0300D0B0, prev 0/0300D088, desc: RUNNING_XACTS \nnextXid 749 latestCompletedXid 748 oldestRunningXid 749\n\nI could reproduce this failure on my workstation with bgworker 
modified\nas below:\n--- a/src/backend/postmaster/bgwriter.c\n+++ b/src/backend/postmaster/bgwriter.c\n@@ -69 +69 @@ int BgWriterDelay = 200;\n-#define LOG_SNAPSHOT_INTERVAL_MS 15000\n+#define LOG_SNAPSHOT_INTERVAL_MS 15\n@@ -307 +307 @@ BackgroundWriterMain(char *startup_data, size_t startup_data_len)\n- BgWriterDelay /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n+ 1 /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n\n\nWhen looking at the test, I noticed what is probably a typo in the test message:\nwait for already replayed LSN exists immediately ...\nshouldn't it be \"exits\" there (though maybe the whole phrase could be\nimproved)?\n\nI also suspect that \"achieve\" is not a suitable word in the context of LSNs\nand timeouts. Maybe you would find it appropriate to replace it with\n\"reach\"?\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 13 Sep 2024 15:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Fri, Sep 13, 2024 at 3:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>\n> 10.08.2024 20:18, Alexander Korotkov wrote:\n> > On Sat, Aug 10, 2024 at 7:33 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >> On Tue, Aug 6, 2024 at 5:17 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >> ...\n> >> Here is a revised version of the patchset. I've fixed some typos,\n> >> identation, etc. I'm going to push this once it passes cfbot.\n> > The next revison of the patchset fixes uninitialized variable usage\n> > spotted by cfbot.\n>\n> When running check-world on a rather slow armv7 device, I came across the\n> 043_wal_replay_wait.pl test failure:\n> t/043_wal_replay_wait.pl .............. 7/? # Tests were run but no plan was declared and done_testing() was not seen.\n> # Looks like your test exited with 29 just after 8.\n>\n> regress_log_043_wal_replay_wait contains:\n> ...\n> 01234[21:58:56.370](1.594s) ok 7 - multiple LSN waiters reported consistent data\n> ### Promoting node \"standby\"\n> # Running: pg_ctl -D .../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata -l\n> .../src/test/recovery/tmp_check/log/043_wal_replay_wait_standby.log promote\n> waiting for server to promote.... 
done\n> server promoted\n> [21:58:56.637](0.268s) ok 8 - got error after standby promote\n> error running SQL: 'psql:<stdin>:1: ERROR: recovery is not in progress\n> HINT: Waiting for LSN can only be executed during recovery.'\n> while running 'psql -XAtq -d port=10228 host=/tmp/Ftj8qpTQht dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CALL\n> pg_wal_replay_wait('0/300D0E8');' at .../src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.\n>\n> 043_wal_replay_wait_standby.log contains:\n> 2024-09-12 21:58:56.518 UTC [15220:1] [unknown] LOG: connection received: host=[local]\n> 2024-09-12 21:58:56.520 UTC [15220:2] [unknown] LOG: connection authenticated: user=\"android\" method=trust\n> (.../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata/pg_hba.conf:117)\n> 2024-09-12 21:58:56.520 UTC [15220:3] [unknown] LOG: connection authorized: user=android database=postgres\n> application_name=043_wal_replay_wait.pl\n> 2024-09-12 21:58:56.527 UTC [15220:4] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('2/570CB4E8');\n> 2024-09-12 21:58:56.535 UTC [15123:7] LOG: received promote request\n> 2024-09-12 21:58:56.535 UTC [15124:2] FATAL: terminating walreceiver process due to administrator command\n> 2024-09-12 21:58:56.537 UTC [15123:8] LOG: invalid record length at 0/300D0B0: expected at least 24, got 0\n> 2024-09-12 21:58:56.537 UTC [15123:9] LOG: redo done at 0/300D088 system usage: CPU: user: 0.01 s, system: 0.00 s,\n> elapsed: 14.23 s\n> 2024-09-12 21:58:56.537 UTC [15123:10] LOG: last completed transaction was at log time 2024-09-12 21:58:55.322831+00\n> 2024-09-12 21:58:56.540 UTC [15123:11] LOG: selected new timeline ID: 2\n> 2024-09-12 21:58:56.589 UTC [15123:12] LOG: archive recovery complete\n> 2024-09-12 21:58:56.590 UTC [15220:5] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n> 2024-09-12 21:58:56.590 UTC [15220:6] 043_wal_replay_wait.pl DETAIL: Recovery ended before replaying target 
LSN\n> 2/570CB4E8; last replay LSN 0/300D0B0.\n> 2024-09-12 21:58:56.591 UTC [15121:1] LOG: checkpoint starting: force\n> 2024-09-12 21:58:56.592 UTC [15220:7] 043_wal_replay_wait.pl LOG: disconnection: session time: 0:00:00.075 user=android\n> database=postgres host=[local]\n> 2024-09-12 21:58:56.595 UTC [15120:4] LOG: database system is ready to accept connections\n> 2024-09-12 21:58:56.665 UTC [15227:1] [unknown] LOG: connection received: host=[local]\n> 2024-09-12 21:58:56.668 UTC [15227:2] [unknown] LOG: connection authenticated: user=\"android\" method=trust\n> (.../src/test/recovery/tmp_check/t_043_wal_replay_wait_standby_data/pgdata/pg_hba.conf:117)\n> 2024-09-12 21:58:56.668 UTC [15227:3] [unknown] LOG: connection authorized: user=android database=postgres\n> application_name=043_wal_replay_wait.pl\n> 2024-09-12 21:58:56.675 UTC [15227:4] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('0/300D0E8');\n> 2024-09-12 21:58:56.677 UTC [15227:5] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n> 2024-09-12 21:58:56.677 UTC [15227:6] 043_wal_replay_wait.pl HINT: Waiting for LSN can only be executed during recovery.\n> 2024-09-12 21:58:56.679 UTC [15227:7] 043_wal_replay_wait.pl LOG: disconnection: session time: 0:00:00.015 user=android\n> database=postgres host=[local]\n>\n> Note that last replay LSN is 300D0B0, but the latter pg_wal_replay_wait\n> call wants to wait for 300D0E8.\n>\n> pg_waldump -p src/test/recovery/tmp_check/t_043_wal_replay_wait_primary_data/pgdata/pg_wal/ 000000010000000000000003\n> shows:\n> rmgr: Heap len (rec/tot): 59/ 59, tx: 748, lsn: 0/0300D048, prev 0/0300D020, desc: INSERT off: 35,\n> flags: 0x00, blkref #0: rel 1663/5/16384 blk 0\n> rmgr: Transaction len (rec/tot): 34/ 34, tx: 748, lsn: 0/0300D088, prev 0/0300D048, desc: COMMIT\n> 2024-09-12 21:58:55.322831 UTC\n> rmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0300D0B0, prev 0/0300D088, desc: RUNNING_XACTS\n> nextXid 749 latestCompletedXid 748 
oldestRunningXid 749\n>\n> I could reproduce this failure on my workstation with bgworker modified\n> as below:\n> --- a/src/backend/postmaster/bgwriter.c\n> +++ b/src/backend/postmaster/bgwriter.c\n> @@ -69 +69 @@ int BgWriterDelay = 200;\n> -#define LOG_SNAPSHOT_INTERVAL_MS 15000\n> +#define LOG_SNAPSHOT_INTERVAL_MS 15\n> @@ -307 +307 @@ BackgroundWriterMain(char *startup_data, size_t startup_data_len)\n> - BgWriterDelay /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n> + 1 /* ms */ , WAIT_EVENT_BGWRITER_MAIN);\n>\n>\n> When looking at the test, I noticed probably a typo in the test message:\n> wait for already replayed LSN exists immediately ...\n> shouldn't it be \"exits\" there (though maybe the whole phrase could be\n> improved)?\n>\n> I also suspect that \"achieve\" is not suitable word in the context of LSNs\n> and timeouts. Maybe you would find it appropriate to replace it with\n> \"reach\"?\n\nThank you for your report!\n\nPlease find two patches attached. The first one does minor cleanup\nincluding misuse of words you've pointed out. The second one adds the missing\nwait_for_catchup(). That should fix the test failure you've spotted.\nPlease check if it fixes the issue for you.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Mon, 16 Sep 2024 21:55:50 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "Hi Alexander,\n\n16.09.2024 21:55, Alexander Korotkov wrote:\n> Please find two patches attached. The first one does minor cleanup\n> including misuse of words you've pointed. The second one adds missing\n> wait_for_catchup(). That should fix the test failure you've spotted.\n> Please, check if it fixes an issue for you.\n\nThank you for looking at that!\n\nUnfortunately, the issue is still here — the test failed for me 6 out of\n10 runs, as below:\n[05:14:02.807](0.135s) ok 8 - got error after standby promote\nerror running SQL: 'psql:<stdin>:1: ERROR: recovery is not in progress\nHINT: Waiting for LSN can only be executed during recovery.'\nwhile running 'psql -XAtq -d port=12734 host=/tmp/04hQ75NuXf dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CALL \npg_wal_replay_wait('0/300F248');' at .../src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.\n\n043_wal_replay_wait_standby.log:\n2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl DETAIL: Recovery ended before replaying target LSN \n2/570CD648; last replay LSN 0/300F210.\n2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl STATEMENT: CALL pg_wal_replay_wait('2/570CD648');\n2024-09-17 05:14:02.714 UTC [1817155] LOG: checkpoint starting: force\n2024-09-17 05:14:02.714 UTC [1817154] LOG: database system is ready to accept connections\n2024-09-17 05:14:02.811 UTC [1817270] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('0/300F248');\n2024-09-17 05:14:02.811 UTC [1817270] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n\npg_waldump -p .../t_043_wal_replay_wait_primary_data/pgdata/pg_wal/ 000000010000000000000003\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 748, lsn: 0/0300F1E8, prev 0/0300F1A8, desc: COMMIT \n2024-09-17 05:14:01.654874 UTC\nrmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0300F210, prev 0/0300F1E8, desc: 
RUNNING_XACTS \nnextXid 749 latestCompletedXid 748 oldestRunningXid 749\n\nI wonder, can you reproduce this with that bgwriter's modification?\n\nI've also found two more \"achievements\" coined by 3c5db1d6b:\ndoc/src/sgml/func.sgml: It may also happen that target <acronym>lsn</acronym> is not achieved\nsrc/backend/access/transam/xlog.c- * recovery was ended before achieving the target LSN.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 17 Sep 2024 09:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "On Tue, Sep 17, 2024 at 9:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> 16.09.2024 21:55, Alexander Korotkov wrote:\n> > Please find two patches attached. The first one does minor cleanup\n> > including misuse of words you've pointed. The second one adds missing\n> > wait_for_catchup(). That should fix the test failure you've spotted.\n> > Please, check if it fixes an issue for you.\n>\n> Thank you for looking at that!\n>\n> Unfortunately, the issue is still here — the test failed for me 6 out of\n> 10 runs, as below:\n> [05:14:02.807](0.135s) ok 8 - got error after standby promote\n> error running SQL: 'psql:<stdin>:1: ERROR: recovery is not in progress\n> HINT: Waiting for LSN can only be executed during recovery.'\n> while running 'psql -XAtq -d port=12734 host=/tmp/04hQ75NuXf dbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CALL\n> pg_wal_replay_wait('0/300F248');' at .../src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm line 2140.\n>\n> 043_wal_replay_wait_standby.log:\n> 2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n> 2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl DETAIL: Recovery ended before replaying target LSN\n> 2/570CD648; last replay LSN 0/300F210.\n> 2024-09-17 05:14:02.714 UTC [1817258] 043_wal_replay_wait.pl STATEMENT: CALL pg_wal_replay_wait('2/570CD648');\n> 2024-09-17 05:14:02.714 UTC [1817155] LOG: checkpoint starting: force\n> 2024-09-17 05:14:02.714 UTC [1817154] LOG: database system is ready to accept connections\n> 2024-09-17 05:14:02.811 UTC [1817270] 043_wal_replay_wait.pl LOG: statement: CALL pg_wal_replay_wait('0/300F248');\n> 2024-09-17 05:14:02.811 UTC [1817270] 043_wal_replay_wait.pl ERROR: recovery is not in progress\n>\n> pg_waldump -p .../t_043_wal_replay_wait_primary_data/pgdata/pg_wal/ 000000010000000000000003\n> rmgr: Transaction len (rec/tot): 34/ 34, tx: 748, lsn: 0/0300F1E8, prev 0/0300F1A8, desc: COMMIT\n> 
2024-09-17 05:14:01.654874 UTC\n> rmgr: Standby len (rec/tot): 50/ 50, tx: 0, lsn: 0/0300F210, prev 0/0300F1E8, desc: RUNNING_XACTS\n> nextXid 749 latestCompletedXid 748 oldestRunningXid 749\n>\n> I wonder, can you reproduce this with that bgwriter's modification?\n\nYes, now I did reproduce it. I got that the problem could be that the insert\nLSN is not yet written on the primary, thus wait_for_catchup doesn't wait\nfor it. I've worked around that using pg_switch_wal(). The revised\npatchset is attached.\n\n> I've also found two more \"achievements\" coined by 3c5db1d6b:\n> doc/src/sgml/func.sgml: It may also happen that target <acronym>lsn</acronym> is not achieved\n> src/backend/access/transam/xlog.c- * recovery was ended before achieving the target LSN.\n\nFixed this as well in 0001.\n\n------\nRegards,\nAlexander Korotkov\nSupabase",
"msg_date": "Tue, 17 Sep 2024 10:47:02 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
},
{
"msg_contents": "17.09.2024 10:47, Alexander Korotkov wrote:\n> Yes, now I did reproduce. I got that the problem could be that insert\n> LSN is not yet written at primary, thus wait_for_catchup doesn't wait\n> for it. I've workarounded that using pg_switch_wal(). The revised\n> patchset is attached.\n\nThank you for the revised patch!\n\nThe improved test works reliably for me (100 out of 100 runs passed),\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Tue, 17 Sep 2024 12:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] make async slave to wait for lsn to be replayed"
}
],
[
{
"msg_contents": "Hi,\n\nMost of the multiplexed SIGUSR1 handlers are setting latch explicitly\nwhen the procsignal_sigusr1_handler() can do that for them at the end.\nThese multiplexed handlers are currently being used as SIGUSR1\nhandlers, not as independent handlers, so no problem if SetLatch() is\nremoved from them. A few others do it right by saying /* latch will be\nset by procsignal_sigusr1_handler */. Although, calling SetLatch() in\nquick succession does no harm (it just returns if the latch was\npreviously set), it seems unnecessary.\n\nI'm attaching a patch that avoids multiple SetLatch() calls.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Feb 2023 21:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid multiple SetLatch() calls in procsignal_sigusr1_handler()"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 9:01 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Most of the multiplexed SIGUSR1 handlers are setting latch explicitly\n> when the procsignal_sigusr1_handler() can do that for them at the end.\n> These multiplexed handlers are currently being used as SIGUSR1\n> handlers, not as independent handlers, so no problem if SetLatch() is\n> removed from them. A few others do it right by saying /* latch will be\n> set by procsignal_sigusr1_handler */. Although, calling SetLatch() in\n> quick succession does no harm (it just returns if the latch was\n> previously set), it seems unnecessary.\n>\n+1\n\n\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:14:36 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid multiple SetLatch() calls in procsignal_sigusr1_handler()"
},
{
"msg_contents": "Hi,\n\nOn 2/28/23 4:30 PM, Bharath Rupireddy wrote:\n> Hi,\n> \n> Most of the multiplexed SIGUSR1 handlers are setting latch explicitly\n> when the procsignal_sigusr1_handler() can do that for them at the end.\n\nRight.\n\n> These multiplexed handlers are currently being used as SIGUSR1\n> handlers, not as independent handlers, so no problem if SetLatch() is\n> removed from them. \n\nAgree, they are only used in procsignal_sigusr1_handler().\n\n> A few others do it right by saying /* latch will be\n> set by procsignal_sigusr1_handler */.\n\nYeap, so do HandleProcSignalBarrierInterrupt() and HandleLogMemoryContextInterrupt().\n\n> Although, calling SetLatch() in\n> quick succession does no harm (it just returns if the latch was\n> previously set), it seems unnecessary.\n> \n\nAgree.\n\n> I'm attaching a patch that avoids multiple SetLatch() calls.\n> \n> Thoughts?\n> \n\nI agree with the idea behind the patch. The thing\nthat worry me a bit is that the changed functions are defined\nas external and so may produce an impact outside of core pg and I'm\nnot sure that's worth it.\n\nOtherwise the patch LGTM.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 13:24:32 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid multiple SetLatch() calls in procsignal_sigusr1_handler()"
}
] |
[
{
"msg_contents": "Hello all,\n\nA customer is facing an out-of-memory error on a query which looks similar to this situation:\n\n https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n\nThis PostgreSQL version is 11.18. Some settings:\n\n* shared_buffers: 8GB\n* work_mem: 64MB\n* effective_cache_size: 24GB\n* random/seq_page_cost are by default\n* physical memory: 32GB\n\nThe query is really large and actually updates a kind of materialized view.\n\nThe customer records the plans of this query on a regular basis. The explain\nanalyze of this query before running out of memory was:\n\n https://explain.depesz.com/s/sGOH\n\nThe customer is aware he should rewrite this query to optimize it, but it's a\nlong process he cannot start immediately. To make it run in the meantime,\nhe actually moved the top CTE to a dedicated table. According to their\nexperience, it's not the first time they had to split a query this way to make\nit work.\n\nI've been able to run this query on a standby myself. I've executed \"call\nMemoryContextStats(TopMemoryContext)\" every 10s during a run, see the data parsed\n(best viewed with \"less -S\") and the graph associated with it in attachment. It\nshows:\n\n* HashBatchContext goes up to 1441MB after 240s then stays flat until the end\n  (400s as the last record)\n* ALL other contexts are stable before 240s, but ExecutorState\n* ExecutorState keeps rising up to 13GB with no interruption until the memory\n  exhaustion\n\nI did another run with an interactive gdb session (see the messy log session in\nattachment, for what it's worth). Looking at some backtraces during the memory\ninflation close to the end of the query, all of them had these frames in\ncommon:\n\n  [...]\n  #6  0x0000000000621ffc in ExecHashJoinImpl (parallel=false, pstate=0x31a3378)\n     at nodeHashjoin.c:398 [...]\n\n...which is not really helpful but at least, it seems to come from a hash join\nnode or some other hash related code. 
See the gdb session log for more details.\nAfter the out of mem, pmap of this process shows:\n\n  430:   postgres: postgres <dbname> [local] EXPLAIN\n  Address           Kbytes     RSS   Dirty Mode  Mapping\n  [...]\n  0000000002c5e000 13719620 8062376 8062376 rw---   [ anon ]\n  [...]\n\nIs it usual for a backend to request such a large amount of memory (13 GB) and\nactually use less than 60% of it (7.7GB of RSS)?\n\nSadly, the database is 1.5TB large and I cannot test on a newer major version.\nI did not try to check how large the required data set would be to reproduce\nthis, but it moves tens of millions of rows from multiple tables anyway...\n\nAny idea? How could I help to get a better idea of whether a leak is actually\noccurring and where exactly?\n\nRegards,",
"msg_date": "Tue, 28 Feb 2023 19:06:43 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 07:06:43PM +0100, Jehan-Guillaume de Rorthais wrote:\n> Hello all,\n> \n> A customer is facing out of memory query which looks similar to this situation:\n> \n> https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n> \n> This PostgreSQL version is 11.18. Some settings:\n\nhash joins could exceed work_mem until v13:\n\n|Allow hash aggregation to use disk storage for large aggregation result\n|sets (Jeff Davis)\n|\n|Previously, hash aggregation was avoided if it was expected to use more\n|than work_mem memory. Now, a hash aggregation plan can be chosen despite\n|that. The hash table will be spilled to disk if it exceeds work_mem\n|times hash_mem_multiplier.\n|\n|This behavior is normally preferable to the old behavior, in which once\n|hash aggregation had been chosen, the hash table would be kept in memory\n|no matter how large it got — which could be very large if the planner\n|had misestimated. If necessary, behavior similar to that can be obtained\n|by increasing hash_mem_multiplier.\n\n> https://explain.depesz.com/s/sGOH\n\nThis shows multiple plan nodes underestimating the row counts by factors\nof ~50,000, which could lead to the issue fixed in v13.\n\nI think you should try to improve the estimates, which might improve\nother queries in addition to this one, in addition to maybe avoiding the\nissue with joins.\n\n> The customer is aware he should rewrite this query to optimize it, but it's a\n> long time process he can not start immediately. To make it run in the meantime,\n> he actually removed the top CTE to a dedicated table.\n\nIs the table analyzed ?\n\n> Is it usual a backend is requesting such large memory size (13 GB) and\n> actually use less of 60% of it (7.7GB of RSS)?\n\nIt's possible it's \"using less\" simply because it's not available. Is\nthe process swapping ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 28 Feb 2023 12:25:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 2/28/23 19:25, Justin Pryzby wrote:\n> On Tue, Feb 28, 2023 at 07:06:43PM +0100, Jehan-Guillaume de Rorthais wrote:\n>> Hello all,\n>>\n>> A customer is facing out of memory query which looks similar to this situation:\n>>\n>> https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n>>\n>> This PostgreSQL version is 11.18. Some settings:\n> \n> hash joins could exceed work_mem until v13:\n> \n> |Allow hash aggregation to use disk storage for large aggregation result\n> |sets (Jeff Davis)\n> |\n\nThat's hash aggregate, not hash join.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Feb 2023 19:55:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 2/28/23 19:06, Jehan-Guillaume de Rorthais wrote:\n> Hello all,\n> \n> A customer is facing out of memory query which looks similar to this situation:\n> \n>   https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n> \n> This PostgreSQL version is 11.18. Some settings:\n> \n> * shared_buffers: 8GB\n> * work_mem: 64MB\n> * effective_cache_size: 24GB\n> * random/seq_page_cost are by default\n> * physical memory: 32GB\n> \n> The query is really large and actually update kind of a materialized view.\n> \n> The customer records the plans of this query on a regular basis. The explain\n> analyze of this query before running out of memory was:\n> \n>   https://explain.depesz.com/s/sGOH\n> \n> The customer is aware he should rewrite this query to optimize it, but it's a\n> long time process he can not start immediately. To make it run in the meantime,\n> he actually removed the top CTE to a dedicated table. According to their\n> experience, it's not the first time they had to split a query this way to make\n> it work.\n> \n> I've been able to run this query on a standby myself. I've \"call\n> MemoryContextStats(TopMemoryContext)\" every 10s on a run, see the data parsed\n> (best view with \"less -S\") and the graph associated with it in attachment. It\n> shows:\n> \n> * HashBatchContext goes up to 1441MB after 240s then stay flat until the end\n>   (400s as the last record)\n\nThat's interesting. We're using HashBatchContext for very few things, so\nhow could it consume so much memory? But e.g. 
the number of buckets\nshould be limited by work_mem, so how could it get to 1.4GB?\n\nCan you break at ExecHashIncreaseNumBatches/ExecHashIncreaseNumBuckets\nand print how many batches/buckets are there?\n\n> * ALL other context are stable before 240s, but ExecutorState\n> * ExecutorState keeps rising up to 13GB with no interruption until the memory\n>   exhaustion\n> \n> I did another run with interactive gdb session (see the messy log session in\n> attachment, for what it worth). Looking at some backtraces during the memory\n> inflation close to the end of the query, all of them were having these frames in\n> common:\n> \n>   [...]\n>   #6  0x0000000000621ffc in ExecHashJoinImpl (parallel=false, pstate=0x31a3378)\n>      at nodeHashjoin.c:398 [...]\n> \n> ...which is not really helpful but at least, it seems to come from a hash join\n> node or some other hash related code. See the gdb session log for more details.\n> After the out of mem, pmap of this process shows:\n> \n>   430:   postgres: postgres <dbname> [local] EXPLAIN\n>   Address           Kbytes     RSS   Dirty Mode  Mapping\n>   [...]\n>   0000000002c5e000 13719620 8062376 8062376 rw---   [ anon ]\n>   [...]\n> \n> Is it usual a backend is requesting such large memory size (13 GB) and\n> actually use less of 60% of it (7.7GB of RSS)?\n> \n\nNo idea. Interpreting this info is pretty tricky, in my experience. It\nmight mean the memory is no longer used but sbrk couldn't return it to\nthe OS yet, or something like that.\n\n> Sadly, the database is 1.5TB large and I can not test on a newer major version.\n> I did not try to check how large would be the required data set to reproduce\n> this, but it moves 10s of million of rows from multiple tables anyway...\n> \n> Any idea? 
How could I help to have a better idea if a leak is actually\n> occurring and where exactly?\n> \n\nInvestigating memory leaks is tough, especially for generic memory\ncontexts like ExecutorState :-( Even more so when you can't reproduce it\non a machine with custom builds.\n\nWhat I'd try is this:\n\n1) attach breakpoints to all returns in AllocSetAlloc(), printing the\npointer and size for ExecutorState context, so something like\n\n break aset.c:783 if strcmp(\"ExecutorState\",context->header.name) == 0\n commands\n print MemoryChunkGetPointer(chunk) size\n cont\n end\n\n2) do the same for AllocSetFree()\n\n3) Match the palloc/pfree calls (using the pointer address), to\ndetermine which ones are not freed and do some stats on the size.\nUsually there's only a couple distinct sizes that account for most of\nthe leaked memory.\n\n4) Break AllocSetAlloc on those leaked sizes, to determine where the\ncalls come from.\n\nThis usually gives enough info about the leak or at least allows\nfocusing the investigation to a particular area of code.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Feb 2023 20:51:02 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi Justin,\n\nOn Tue, 28 Feb 2023 12:25:08 -0600\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, Feb 28, 2023 at 07:06:43PM +0100, Jehan-Guillaume de Rorthais wrote:\n> > Hello all,\n> > \n> > A customer is facing out of memory query which looks similar to this\n> > situation:\n> > \n> > https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n> > \n> > This PostgreSQL version is 11.18. Some settings: \n> \n> hash joins could exceed work_mem until v13:\n\nYes, I am aware of this. But as far as I understand Tom Lane explanations from\nthe discussion mentioned up thread, it should not be ExecutorState.\nExecutorState (13GB) is at least ten times bigger than any other context,\nincluding HashBatchContext (1.4GB) or HashTableContext (16MB). So maybe some\naggregate is walking toward the wall because of bad estimation, but something\nelse is racing way faster to the wall. And presently it might be something\nrelated to some JOIN node.\n\nAbout your other points, you are right, there's numerous things we could do to\nimprove this query, and our customer is considering it as well. It's just a\nmatter of time now.\n\nBut in the meantime, we are facing a query with a memory behavior that seemed\nsuspect. Following the 4 years old thread I mentioned, my goal is to inspect\nand provide all possible information to make sure it's a \"normal\" behavior or\nsomething that might/should be fixed.\n\nThank you for your help!\n\n\n",
"msg_date": "Wed, 1 Mar 2023 10:46:12 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 3/1/23 10:46, Jehan-Guillaume de Rorthais wrote:\n> Hi Justin,\n> \n> On Tue, 28 Feb 2023 12:25:08 -0600\n> Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n>> On Tue, Feb 28, 2023 at 07:06:43PM +0100, Jehan-Guillaume de Rorthais wrote:\n>>> Hello all,\n>>>\n>>> A customer is facing out of memory query which looks similar to this\n>>> situation:\n>>>\n>>>   https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n>>>\n>>> This PostgreSQL version is 11.18. Some settings: \n>>\n>> hash joins could exceed work_mem until v13:\n> \n> Yes, I am aware of this. But as far as I understand Tom Lane explanations from\n> the discussion mentioned up thread, it should not be ExecutorState.\n> ExecutorState (13GB) is at least ten times bigger than any other context,\n> including HashBatchContext (1.4GB) or HashTableContext (16MB). So maybe some\n> aggregate is walking toward the wall because of bad estimation, but something\n> else is racing way faster to the wall. And presently it might be something\n> related to some JOIN node.\n> \n\nI still don't understand why this would be due to a hash aggregate. That\nshould not allocate memory in ExecutorState at all. And HashBatchContext\n(which is the one bloated) is used by hashjoin, so the issue is likely\nsomewhere in that area.\n\n> About your other points, you are right, there's numerous things we could do to\n> improve this query, and our customer is considering it as well. It's just a\n> matter of time now.\n> \n> But in the meantime, we are facing a query with a memory behavior that seemed\n> suspect. 
Following the 4 years old thread I mentioned, my goal is to inspect\n> and provide all possible information to make sure it's a \"normal\" behavior or\n> something that might/should be fixed.\n> \n\nIt'd be interesting to see if the gdb stuff I suggested yesterday yields\nsome interesting info.\n\nFurthermore, I realized the plan you posted yesterday may not be the\ncase used for the failing query. It'd be interesting to see what plan is\nused for the case that actually fails. Can you do at least explain on\nit? Or alternatively, if the query is already running and eating a lot\nof memory, attach gdb and print the plan in ExecutorStart\n\n set print elements 0\n p nodeToString(queryDesc->plannedstmt->planTree)\n\nThinking about this, I have one suspicion. Hashjoins try to fit into\nwork_mem by increasing the number of batches - when a batch gets too\nlarge, we double the number of batches (and split the batch into two, to\nreduce the size). But if there's a lot of tuples for a particular key\n(or at least the hash value), we quickly run into work_mem and keep\nadding more and more batches.\n\nThe problem with this theory is that the batches are allocated in\nHashTableContext, and that doesn't grow very much. And the 1.4GB\nHashBatchContext is used for buckets - but we should not allocate that\nmany, because we cap that to nbuckets_optimal (see 30d7ae3c76). And it\ndoes not explain the ExecutorState bloat either.\n\nNevertheless, it'd be interesting to see the hashtable parameters:\n\n p *hashtable\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:40:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi Tomas,\n\nOn Tue, 28 Feb 2023 20:51:02 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> On 2/28/23 19:06, Jehan-Guillaume de Rorthais wrote:\n> > * HashBatchContext goes up to 1441MB after 240s then stay flat until the end\n> >   (400s as the last record) \n> \n> That's interesting. We're using HashBatchContext for very few things, so\n> what could it consume so much memory? But e.g. the number of buckets\n> should be limited by work_mem, so how could it get to 1.4GB?\n> \n> Can you break at ExecHashIncreaseNumBatches/ExecHashIncreaseNumBuckets\n> and print how many batches/buckets are there?\n\nI did this test this morning.\n\nBatches and buckets increased really quickly to 1048576/1048576.\n\nExecHashIncreaseNumBatches was really chatty, having hundreds of thousands of\ncalls, always short-cut'ed to 1048576, I guess because of the conditional block\n«/* safety check to avoid overflow */» appearing early in this function.\n\nI disabled the breakpoint on ExecHashIncreaseNumBatches a few times to make the\nquery run faster. Enabling it at 19.1GB of memory consumption, it stayed\nsilent till the memory exhaustion (around 21 or 22GB, I don't remember exactly).\n\nThe breakpoint on ExecHashIncreaseNumBuckets triggered a few times at the beginning,\nand a last time close to the end of the query execution.\n\n> > Any idea? 
How could I help to have a better idea if a leak is actually\n> > occurring and where exactly?\n> \n> Investigating memory leaks is tough, especially for generic memory\n> contexts like ExecutorState :-( Even more so when you can't reproduce it\n> on a machine with custom builds.\n> \n> What I'd try is this:\n> \n> 1) attach breakpoints to all returns in AllocSetAlloc(), printing the\n> pointer and size for ExecutorState context, so something like\n> \n> break aset.c:783 if strcmp(\"ExecutorState\",context->header.name) == 0\n> commands\n> print MemoryChunkGetPointer(chunk) size\n> cont\n> end\n> \n> 2) do the same for AllocSetFree()\n> \n> 3) Match the palloc/pfree calls (using the pointer address), to\n> determine which ones are not freed and do some stats on the size.\n> Usually there's only a couple distinct sizes that account for most of\n> the leaked memory.\n\nSo here is what I end up with this afternoon, using file, lines and macro from\nREL_11_18:\n\n set logging on\n set pagination off\n \n break aset.c:781 if strcmp(\"ExecutorState\",context.name) == 0\n commands 1\n print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n print chunk->size\n cont\n end\n \n break aset.c:820 if strcmp(\"ExecutorState\",context.name) == 0\n commands 2\n print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n print chunk->size\n cont \n end\n \n break aset.c:979 if strcmp(\"ExecutorState\",context.name) == 0\n commands 3\n print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n print chunk->size\n cont \n end\n \n break AllocSetFree if strcmp(\"ExecutorState\",context.name) == 0\n commands 4 \n print pointer\n cont\n end\n\nSo far, gdb had more than 3h of CPU time and is eating 2.4GB of memory. 
The\nbackend had only 3'32\" of CPU time:\n\n    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND\n 2727284   2.4g  17840 R  99.0  7.7 181:25.07 gdb\n 9054688 220648 103056 t   1.3  0.7   3:32.05 postmaster\n\nInterestingly, the RES memory of the backend did not explode yet, but VIRT is\nalready high.\n\nI suppose the query will run for some more hours, hopefully, gdb will not\nexhaust the memory in the meantime...\n\nYou'll find some intermediate stats I already collected in attachment:\n\n* break 1, 2 and 3 are from AllocSetAlloc, break 4 is from AllocSetFree.\n* most of the non-free'd chunks are allocated since the very beginning, before\n  the 5000th allocation call, for almost 1M calls so far.\n* 3754 of them have a chunk->size of 0\n* it seems there's some buggy stats or data:\n  # this one actually really comes from the gdb log\n  0x38a77b8: break=3 num=191   sz=4711441762604810240 (weird sz)\n  # this one might be a bug in my script\n  0x2:       break=2 num=945346 sz=2                  (weird address)\n* ignoring the weird size requested during the 191st call, the total amount\n  of non free'd memory is currently 5488MB\n\nI couldn't print \"size\" as it is optimized away, that's why I tracked\nchunk->size... Is there anything wrong with my current run and gdb log? \n\nThe gdb log is 5MB compressed. I'll keep it off-list, but I can provide it if\nneeded.\n\nStay tuned...\n\nThank you!",
"msg_date": "Wed, 1 Mar 2023 18:48:40 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Wed, 1 Mar 2023 18:48:40 +0100\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n...\n> You'll find some intermediate stats I already collected in attachment:\n> \n> * break 1, 2 and 3 are from AllocSetAlloc, break 4 is from AllocSetFree.\n> * most of the non-free'd chunk are allocated since the very beginning, before\n>   the 5000's allocation call for almost 1M call so far.\n> * 3754 of them have a chunk->size of 0\n> * it seems there's some buggy stats or data:\n>   # this one actually really comes from the gdb log\n>   0x38a77b8: break=3 num=191   sz=4711441762604810240 (weird sz)\n>   # this one might be a bug in my script\n>   0x2:       break=2 num=945346 sz=2                  (weird address)\n> * ignoring the weird size requested during the 191st call, the total amount\n>   of non free'd memory is currently 5488MB\n\nI forgot one stat. I don't know if this is expected, normal or not, but 53\nchunks have been allocated on an existing address that was not free'd before.\n\nRegards,\n\n\n",
"msg_date": "Wed, 1 Mar 2023 19:09:44 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 3/1/23 18:48, Jehan-Guillaume de Rorthais wrote:\n> Hi Tomas,\n> \n> On Tue, 28 Feb 2023 20:51:02 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> On 2/28/23 19:06, Jehan-Guillaume de Rorthais wrote:\n>>> * HashBatchContext goes up to 1441MB after 240s then stay flat until the end\n>>> (400s as the last record) \n>>\n>> That's interesting. We're using HashBatchContext for very few things, so\n>> what could it consume so much memory? But e.g. the number of buckets\n>> should be limited by work_mem, so how could it get to 1.4GB?\n>>\n>> Can you break at ExecHashIncreaseNumBatches/ExecHashIncreaseNumBuckets\n>> and print how many batches/butches are there?\n> \n> I did this test this morning.\n> \n> Batches and buckets increased really quickly to 1048576/1048576.\n> \n\nOK. I think 1M buckets is mostly expected for work_mem=64MB. It means\nbuckets will use 8MB, which leaves ~56B per tuple (we're aiming for\nfillfactor 1.0).\n\nBut 1M batches? I guess that could be problematic. It doesn't seem like\nmuch, but we need 1M files on each side - 1M for the hash table, 1M for\nthe outer relation. That's 16MB of pointers, but the files are BufFile\nand we keep 8kB buffer for each of them. That's ~16GB right there :-(\n\nIn practice it probably won't be that bad, because not all files will be\nallocated/opened concurrently (especially if this is due to many tuples\nhaving the same value). Assuming that's what's happening here, ofc.\n\n> ExecHashIncreaseNumBatches was really chatty, having hundreds of thousands of\n> calls, always short-cut'ed to 1048576, I guess because of the conditional block\n> «/* safety check to avoid overflow */» appearing early in this function.\n> \n\nHmmm, that's a bit weird, no? I mean, the check is\n\n /* safety check to avoid overflow */\n if (oldnbatch > Min(INT_MAX / 2, MaxAllocSize / (sizeof(void *) * 2)))\n return;\n\nWhy would it stop at 1048576? 
It certainly is not higher than INT_MAX/2\nand with MaxAllocSize = ~1GB the second value should be ~33M. So what's\nhappening here?\n\n> I disabled the breakpoint on ExecHashIncreaseNumBatches a few time to make the\n> query run faster. Enabling it at 19.1GB of memory consumption, it stayed\n> silent till the memory exhaustion (around 21 or 22GB, I don't remember exactly).\n> \n> The breakpoint on ExecHashIncreaseNumBuckets triggered some times at beginning,\n> and a last time close to the end of the query execution.\n> \n>>> Any idea? How could I help to have a better idea if a leak is actually\n>>> occurring and where exactly?\n>>\n>> Investigating memory leaks is tough, especially for generic memory\n>> contexts like ExecutorState :-( Even more so when you can't reproduce it\n>> on a machine with custom builds.\n>>\n>> What I'd try is this:\n>>\n>> 1) attach breakpoints to all returns in AllocSetAlloc(), printing the\n>> pointer and size for ExecutorState context, so something like\n>>\n>> break aset.c:783 if strcmp(\"ExecutorState\",context->header.name) == 0\n>> commands\n>> print MemoryChunkGetPointer(chunk) size\n>> cont\n>> end\n>>\n>> 2) do the same for AllocSetFree()\n>>\n>> 3) Match the palloc/pfree calls (using the pointer address), to\n>> determine which ones are not freed and do some stats on the size.\n>> Usually there's only a couple distinct sizes that account for most of\n>> the leaked memory.\n> \n> So here is what I end up with this afternoon, using file, lines and macro from\n> REL_11_18:\n> \n> set logging on\n> set pagination off\n> \n> break aset.c:781 if strcmp(\"ExecutorState\",context.name) == 0\n> commands 1\n> print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n> print chunk->size\n> cont\n> end\n> \n> break aset.c:820 if strcmp(\"ExecutorState\",context.name) == 0\n> commands 2\n> print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n> print chunk->size\n> cont \n> end\n> \n> break aset.c:979 if 
strcmp(\"ExecutorState\",context.name) == 0\n> commands 3\n> print (((char *)(chunk)) + sizeof(struct AllocChunkData))\n> print chunk->size\n> cont \n> end\n> \n> break AllocSetFree if strcmp(\"ExecutorState\",context.name) == 0\n> commands 4 \n> print pointer\n> cont\n> end\n> \n> So far, gdb had more than 3h of CPU time and is eating 2.4GB of memory. The\n> backend had only 3'32\" of CPU time:\n> \n> VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 2727284 2.4g 17840 R 99.0 7.7 181:25.07 gdb\n> 9054688 220648 103056 t 1.3 0.7 3:32.05 postmaster\n> \n> Interestingly, the RES memory of the backend did not explode yet, but VIRT is\n> already high.\n> \n> I suppose the query will run for some more hours, hopefully, gdb will not\n> exhaust the memory in the meantime...\n> \n> You'll find some intermediate stats I already collected in attachment:\n> \n> * break 1, 2 and 3 are from AllocSetAlloc, break 4 is from AllocSetFree.\n> * most of the non-free'd chunk are allocated since the very beginning, before\n> the 5000's allocation call for almost 1M call so far.\n> * 3754 of them have a chunk->size of 0\n> * it seems there's some buggy stats or data:\n> # this one actually really comes from the gdb log\n> 0x38a77b8: break=3 num=191 sz=4711441762604810240 (weird sz)\n> # this one might be a bug in my script\n> 0x2: break=2 num=945346 sz=2 (weird address)\n> * ignoring the weird size requested during the 191st call, the total amount\n> of non free'd memory is currently 5488MB\n> \n> I couldn't print \"size\" as it is optimzed away, that's why I tracked\n> chunk->size... Is there anything wrong with my current run and gdb log? \n> \n\nThere's definitely something wrong. The size should not be 0, and\nneither it should be > 1GB. I suspect it's because some of the variables\nget optimized out, and gdb just uses some nonsense :-(\n\nI guess you'll need to debug the individual breakpoints, and see what's\navailable. It probably depends on the compiler version, etc. 
For example\nI don't see the \"chunk\" for breakpoint 3, but \"chunk_size\" works and I\ncan print the chunk pointer with a bit of arithmetic:\n\n  p (block->freeptr - chunk_size)\n\nI suppose similar gymnastics could work for the other breakpoints.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Mar 2023 20:29:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 3/1/23 19:09, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 1 Mar 2023 18:48:40 +0100\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> ...\n>> You'll find some intermediate stats I already collected in attachment:\n>>\n>> * break 1, 2 and 3 are from AllocSetAlloc, break 4 is from AllocSetFree.\n>> * most of the non-free'd chunk are allocated since the very beginning, before\n>> the 5000's allocation call for almost 1M call so far.\n>> * 3754 of them have a chunk->size of 0\n>> * it seems there's some buggy stats or data:\n>> # this one actually really comes from the gdb log\n>> 0x38a77b8: break=3 num=191 sz=4711441762604810240 (weird sz)\n>> # this one might be a bug in my script\n>> 0x2: break=2 num=945346 sz=2 (weird address)\n>> * ignoring the weird size requested during the 191st call, the total amount\n>> of non free'd memory is currently 5488MB\n> \n> I forgot one stat. I don't know if this is expected, normal or not, but 53\n> chunks has been allocated on an existing address that was not free'd before.\n> \n\nIt's likely chunk was freed by repalloc() and not by pfree() directly.\nOr maybe the whole context got destroyed/reset, in which case we don't\nfree individual chunks. But that's unlikely for the ExecutorState.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 1 Mar 2023 20:34:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Wed, 1 Mar 2023 20:34:08 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> On 3/1/23 19:09, Jehan-Guillaume de Rorthais wrote:\n> > On Wed, 1 Mar 2023 18:48:40 +0100\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > ... \n> >> You'll find some intermediate stats I already collected in attachment:\n> >>\n> >> * break 1, 2 and 3 are from AllocSetAlloc, break 4 is from AllocSetFree.\n> >> * most of the non-free'd chunk are allocated since the very beginning,\n> >> before the 5000's allocation call for almost 1M call so far.\n> >> * 3754 of them have a chunk->size of 0\n> >> * it seems there's some buggy stats or data:\n> >> # this one actually really comes from the gdb log\n> >> 0x38a77b8: break=3 num=191 sz=4711441762604810240 (weird sz)\n> >> # this one might be a bug in my script\n> >> 0x2: break=2 num=945346 sz=2 (weird\n> >> address)\n> >> * ignoring the weird size requested during the 191st call, the total amount\n> >> of non free'd memory is currently 5488MB \n> > \n> > I forgot one stat. I don't know if this is expected, normal or not, but 53\n> > chunks has been allocated on an existing address that was not free'd before.\n> > \n> \n> It's likely chunk was freed by repalloc() and not by pfree() directly.\n> Or maybe the whole context got destroyed/reset, in which case we don't\n> free individual chunks. But that's unlikely for the ExecutorState.\n\nWell, as all breakpoints were conditional on ExecutorState, I suppose this\nmight be repalloc then.\n\nRegards,\n\n\n",
"msg_date": "Wed, 1 Mar 2023 22:45:58 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nOn Wed, 1 Mar 2023 20:29:11 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> On 3/1/23 18:48, Jehan-Guillaume de Rorthais wrote:\n> > On Tue, 28 Feb 2023 20:51:02 +0100\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> wrote: \n> >> On 2/28/23 19:06, Jehan-Guillaume de Rorthais wrote: \n> >>> * HashBatchContext goes up to 1441MB after 240s then stay flat until the\n> >>> end (400s as the last record)\n> >>\n> >> That's interesting. We're using HashBatchContext for very few things, so\n> >> what could it consume so much memory? But e.g. the number of buckets\n> >> should be limited by work_mem, so how could it get to 1.4GB?\n> >>\n> >> Can you break at ExecHashIncreaseNumBatches/ExecHashIncreaseNumBuckets\n> >> and print how many batches/buckets are there? \n> > \n> > I did this test this morning.\n> > \n> > Batches and buckets increased really quickly to 1048576/1048576.\n> \n> OK. I think 1M buckets is mostly expected for work_mem=64MB. It means\n> buckets will use 8MB, which leaves ~56B per tuple (we're aiming for\n> fillfactor 1.0).\n> \n> But 1M batches? I guess that could be problematic. It doesn't seem like\n> much, but we need 1M files on each side - 1M for the hash table, 1M for\n> the outer relation. That's 16MB of pointers, but the files are BufFile\n> and we keep 8kB buffer for each of them. That's ~16GB right there :-(\n>\n> In practice it probably won't be that bad, because not all files will be\n> allocated/opened concurrently (especially if this is due to many tuples\n> having the same value). Assuming that's what's happening here, ofc.\n\nAnd I suppose they are closed/freed concurrently as well?\n\n> > ExecHashIncreaseNumBatches was really chatty, having hundreds of thousands\n> > of calls, always short-cut'ed to 1048576, I guess because of the\n> > conditional block «/* safety check to avoid overflow */» appearing early in\n> > this function. \n> \n> Hmmm, that's a bit weird, no? 
I mean, the check is\n> \n> /* safety check to avoid overflow */\n> if (oldnbatch > Min(INT_MAX / 2, MaxAllocSize / (sizeof(void *) * 2)))\n> return;\n> \n> Why would it stop at 1048576? It certainly is not higher than INT_MAX/2\n> and with MaxAllocSize = ~1GB the second value should be ~33M. So what's\n> happening here?\n\nIndeed, not the good suspect. But what about this other short-cut then?\n\n /* do nothing if we've decided to shut off growth */\n if (!hashtable->growEnabled)\n return;\n\n [...]\n\n /*\n * If we dumped out either all or none of the tuples in the table, disable\n * further expansion of nbatch. This situation implies that we have\n * enough tuples of identical hashvalues to overflow spaceAllowed.\n * Increasing nbatch will not fix it since there's no way to subdivide the\n * group any more finely. We have to just gut it out and hope the server\n * has enough RAM.\n */\n if (nfreed == 0 || nfreed == ninmemory)\n {\n hashtable->growEnabled = false;\n #ifdef HJDEBUG\n printf(\"Hashjoin %p: disabling further increase of nbatch\\n\",\n hashtable);\n #endif\n }\n\nIf I guess correctly, the function is not able to split the current batch, so\nit sits and hopes. This is a much better suspect and I can surely track this\nfrom gdb.\n\nBeing able to find what are the fields involved in the join could help as well\nto check or gather some stats about them, but I hadn't time to dig this yet...\n\n[...]\n> >> Investigating memory leaks is tough, especially for generic memory\n> >> contexts like ExecutorState :-( Even more so when you can't reproduce it\n> >> on a machine with custom builds.\n> >>\n> >> What I'd try is this:\n\n[...]\n> > I couldn't print \"size\" as it is optimzed away, that's why I tracked\n> > chunk->size... Is there anything wrong with my current run and gdb log? \n> \n> There's definitely something wrong. The size should not be 0, and\n> neither it should be > 1GB. 
I suspect it's because some of the variables\n> get optimized out, and gdb just uses some nonsense :-(\n> \n> I guess you'll need to debug the individual breakpoints, and see what's\n> available. It probably depends on the compiler version, etc. For example\n> I don't see the \"chunk\" for breakpoint 3, but \"chunk_size\" works and I\n> can print the chunk pointer with a bit of arithmetics:\n> \n> p (block->freeptr - chunk_size)\n> \n> I suppose similar gympastics could work for the other breakpoints.\n\nOK, I'll give it a try tomorrow.\n\nThank you!\n\nNB: the query has been killed by the replication.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 00:18:27 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 3/2/23 00:18, Jehan-Guillaume de Rorthais wrote:\n> Hi,\n> \n> On Wed, 1 Mar 2023 20:29:11 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> On 3/1/23 18:48, Jehan-Guillaume de Rorthais wrote:\n>>> On Tue, 28 Feb 2023 20:51:02 +0100\n>>> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote: \n>>>> On 2/28/23 19:06, Jehan-Guillaume de Rorthais wrote: \n>>>>> * HashBatchContext goes up to 1441MB after 240s then stay flat until the\n>>>>> end (400s as the last record)\n>>>>\n>>>> That's interesting. We're using HashBatchContext for very few things, so\n>>>> what could it consume so much memory? But e.g. the number of buckets\n>>>> should be limited by work_mem, so how could it get to 1.4GB?\n>>>>\n>>>> Can you break at ExecHashIncreaseNumBatches/ExecHashIncreaseNumBuckets\n>>>> and print how many batches/butches are there? \n>>>\n>>> I did this test this morning.\n>>>\n>>> Batches and buckets increased really quickly to 1048576/1048576.\n>>\n>> OK. I think 1M buckets is mostly expected for work_mem=64MB. It means\n>> buckets will use 8MB, which leaves ~56B per tuple (we're aiming for\n>> fillfactor 1.0).\n>>\n>> But 1M batches? I guess that could be problematic. It doesn't seem like\n>> much, but we need 1M files on each side - 1M for the hash table, 1M for\n>> the outer relation. That's 16MB of pointers, but the files are BufFile\n>> and we keep 8kB buffer for each of them. That's ~16GB right there :-(\n>>\n>> In practice it probably won't be that bad, because not all files will be\n>> allocated/opened concurrently (especially if this is due to many tuples\n>> having the same value). Assuming that's what's happening here, ofc.\n> \n> And I suppose they are close/freed concurrently as well?\n> \n\nYeah. 
There can be different subsets of the files used, depending on\nwhen the number of batches start to explode, etc.\n\n>>> ExecHashIncreaseNumBatches was really chatty, having hundreds of thousands\n>>> of calls, always short-cut'ed to 1048576, I guess because of the\n>>> conditional block «/* safety check to avoid overflow */» appearing early in\n>>> this function. \n>>\n>> Hmmm, that's a bit weird, no? I mean, the check is\n>>\n>> /* safety check to avoid overflow */\n>> if (oldnbatch > Min(INT_MAX / 2, MaxAllocSize / (sizeof(void *) * 2)))\n>> return;\n>>\n>> Why would it stop at 1048576? It certainly is not higher than INT_MAX/2\n>> and with MaxAllocSize = ~1GB the second value should be ~33M. So what's\n>> happening here?\n> \n> Indeed, not the good suspect. But what about this other short-cut then?\n> \n> /* do nothing if we've decided to shut off growth */\n> if (!hashtable->growEnabled)\n> return;\n> \n> [...]\n> \n> /*\n> * If we dumped out either all or none of the tuples in the table, disable\n> * further expansion of nbatch. This situation implies that we have\n> * enough tuples of identical hashvalues to overflow spaceAllowed.\n> * Increasing nbatch will not fix it since there's no way to subdivide the\n> * group any more finely. We have to just gut it out and hope the server\n> * has enough RAM.\n> */\n> if (nfreed == 0 || nfreed == ninmemory)\n> {\n> hashtable->growEnabled = false;\n> #ifdef HJDEBUG\n> printf(\"Hashjoin %p: disabling further increase of nbatch\\n\",\n> hashtable);\n> #endif\n> }\n> \n> If I guess correctly, the function is not able to split the current batch, so\n> it sits and hopes. 
This is a much better suspect and I can surely track this\n> from gdb.\n> \n\nYes, this would make much more sense - it'd be consistent with the\nhypothesis that this is due to number of batches exploding (it's a\nprotection exactly against that).\n\nYou specifically mentioned the other check earlier, but now I realize\nyou've been just speculating it might be that.\n\n> Being able to find what are the fields involved in the join could help as well\n> to check or gather some stats about them, but I hadn't time to dig this yet...\n> \n\nIt's going to be tricky, because all parts of the plan may be doing\nsomething, and there may be multiple hash joins. So you won't know if\nyou're executing the part of the plan that's causing issues :-(\n\n\nBut I have another idea - put a breakpoint on makeBufFile() which is the\nbit that allocates the temp files including the 8kB buffer, and print in\nwhat context we allocate that. I have a hunch we may be allocating it in\nthe ExecutorState. That'd explain all the symptoms.\n\n\nBTW with how many batches does the hash join start?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Mar 2023 01:30:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Thu, 2 Mar 2023 01:30:27 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> On 3/2/23 00:18, Jehan-Guillaume de Rorthais wrote:\n> >>> ExecHashIncreaseNumBatches was really chatty, having hundreds of thousands\n> >>> of calls, always short-cut'ed to 1048576, I guess because of the\n> >>> conditional block «/* safety check to avoid overflow */» appearing early\n> >>> in this function. \n> >[...] But what about this other short-cut then?\n> > \n> > /* do nothing if we've decided to shut off growth */\n> > if (!hashtable->growEnabled)\n> > return;\n> > \n> > [...]\n> > \n> > /*\n> > * If we dumped out either all or none of the tuples in the table,\n> > * disable\n> > * further expansion of nbatch. This situation implies that we have\n> > * enough tuples of identical hashvalues to overflow spaceAllowed.\n> > * Increasing nbatch will not fix it since there's no way to subdivide\n> > * the\n> > * group any more finely. We have to just gut it out and hope the server\n> > * has enough RAM.\n> > */\n> > if (nfreed == 0 || nfreed == ninmemory)\n> > {\n> > hashtable->growEnabled = false;\n> > #ifdef HJDEBUG\n> > printf(\"Hashjoin %p: disabling further increase of nbatch\\n\",\n> > hashtable);\n> > #endif\n> > }\n> > \n> > If I guess correctly, the function is not able to split the current batch,\n> > so it sits and hopes. 
This is a much better suspect and I can surely track\n> > this from gdb.\n> \n> Yes, this would make much more sense - it'd be consistent with the\n> hypothesis that this is due to number of batches exploding (it's a\n> protection exactly against that).\n> \n> You specifically mentioned the other check earlier, but now I realize\n> you've been just speculating it might be that.\n\nYes, sorry about that, I jumped on this speculation without actually digging it\nmuch...\n\n[...]\n> But I have another idea - put a breakpoint on makeBufFile() which is the\n> bit that allocates the temp files including the 8kB buffer, and print in\n> what context we allocate that. I have a hunch we may be allocating it in\n> the ExecutorState. That'd explain all the symptoms.\n\nThat what I was wondering as well yesterday night.\n\nSo, on your advice, I set a breakpoint on makeBufFile:\n\n (gdb) info br\n Num Type Disp Enb Address What\n 1 breakpoint keep y 0x00000000007229df in makeBufFile\n bt 10\n p CurrentMemoryContext.name\n\n\nThen, I disabled it and ran the query up to this mem usage:\n\n VIRT RES SHR S %CPU %MEM\n 20.1g 7.0g 88504 t 0.0 22.5\n\nThen, I enabled the breakpoint and look at around 600 bt and context name\nbefore getting bored. They **all** looked like that:\n\n Breakpoint 1, BufFileCreateTemp (...) at buffile.c:201\n 201 in buffile.c\n #0 BufFileCreateTemp (...) buffile.c:201\n #1 ExecHashJoinSaveTuple (tuple=0x1952c180, ...) nodeHashjoin.c:1238\n #2 ExecHashJoinImpl (parallel=false, pstate=0x31a6418) nodeHashjoin.c:398\n #3 ExecHashJoin (pstate=0x31a6418) nodeHashjoin.c:584\n #4 ExecProcNodeInstr (node=<optimized out>) execProcnode.c:462\n #5 ExecProcNode (node=0x31a6418)\n #6 ExecSort (pstate=0x31a6308)\n #7 ExecProcNodeInstr (node=<optimized out>)\n #8 ExecProcNode (node=0x31a6308)\n #9 fetch_input_tuple (aggstate=aggstate@entry=0x31a5ea0)\n \n $421643 = 0x99d7f7 \"ExecutorState\"\n\nThese 600-ish 8kB buffer were all allocated in \"ExecutorState\". 
I could\nprobably log much more of them if more checks/stats need to be collected, but\nit really slow down the query a lot, granting it only 1-5% of CPU time instead\nof the usual 100%.\n\nSo It's not exactly a leakage, as memory would be released at the end of the\nquery, but I suppose they should be allocated in a shorter living context,\nto avoid this memory bloat, am I right?\n\n> BTW with how many batches does the hash join start?\n\n* batches went from 32 to 1048576 before being growEnabled=false as suspected\n* original and current nbuckets were set to 1048576 immediately\n* allowed space is set to the work_mem, but current space usage is 1.3GB, as\n measured previously close before system refuse more memory allocation.\n\nHere are the full details about the hash associated with the previous backtrace:\n\n (gdb) up\n (gdb) up\n (gdb) p *((HashJoinState*)pstate)->hj_HashTable\n $421652 = {\n nbuckets = 1048576,\n log2_nbuckets = 20,\n nbuckets_original = 1048576,\n nbuckets_optimal = 1048576,\n log2_nbuckets_optimal = 20,\n buckets = {unshared = 0x68f12e8, shared = 0x68f12e8},\n keepNulls = true,\n skewEnabled = false,\n skewBucket = 0x0,\n skewBucketLen = 0,\n nSkewBuckets = 0,\n skewBucketNums = 0x0,\n nbatch = 1048576,\n curbatch = 0,\n nbatch_original = 32,\n nbatch_outstart = 1048576,\n growEnabled = false,\n totalTuples = 19541735,\n partialTuples = 19541735,\n skewTuples = 0,\n innerBatchFile = 0xdfcd168,\n outerBatchFile = 0xe7cd1a8,\n outer_hashfunctions = 0x68ed3a0,\n inner_hashfunctions = 0x68ed3f0,\n hashStrict = 0x68ed440,\n spaceUsed = 1302386440,\n spaceAllowed = 67108864,\n spacePeak = 1302386440,\n spaceUsedSkew = 0,\n spaceAllowedSkew = 1342177,\n hashCxt = 0x68ed290,\n batchCxt = 0x68ef2a0,\n chunks = 0x251f28e88,\n current_chunk = 0x0,\n area = 0x0,\n parallel_state = 0x0,\n batches = 0x0,\n current_chunk_shared = 1103827828993\n }\n\nFor what it worth, contexts are:\n\n (gdb) p ((HashJoinState*)pstate)->hj_HashTable->hashCxt.name\n 
$421657 = 0x99e3c0 \"HashTableContext\"\n\n (gdb) p ((HashJoinState*)pstate)->hj_HashTable->batchCxt.name\n $421658 = 0x99e3d1 \"HashBatchContext\"\n\nRegards,\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:08:38 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\nOn 3/2/23 13:08, Jehan-Guillaume de Rorthais wrote:\n> ...\n> [...]\n>> But I have another idea - put a breakpoint on makeBufFile() which is the\n>> bit that allocates the temp files including the 8kB buffer, and print in\n>> what context we allocate that. I have a hunch we may be allocating it in\n>> the ExecutorState. That'd explain all the symptoms.\n> \n> That what I was wondering as well yesterday night.\n> \n> So, on your advice, I set a breakpoint on makeBufFile:\n> \n> (gdb) info br\n> Num Type Disp Enb Address What\n> 1 breakpoint keep y 0x00000000007229df in makeBufFile\n> bt 10\n> p CurrentMemoryContext.name\n> \n> \n> Then, I disabled it and ran the query up to this mem usage:\n> \n> VIRT RES SHR S %CPU %MEM\n> 20.1g 7.0g 88504 t 0.0 22.5\n> \n> Then, I enabled the breakpoint and look at around 600 bt and context name\n> before getting bored. They **all** looked like that:\n> \n> Breakpoint 1, BufFileCreateTemp (...) at buffile.c:201\n> 201 in buffile.c\n> #0 BufFileCreateTemp (...) buffile.c:201\n> #1 ExecHashJoinSaveTuple (tuple=0x1952c180, ...) nodeHashjoin.c:1238\n> #2 ExecHashJoinImpl (parallel=false, pstate=0x31a6418) nodeHashjoin.c:398\n> #3 ExecHashJoin (pstate=0x31a6418) nodeHashjoin.c:584\n> #4 ExecProcNodeInstr (node=<optimized out>) execProcnode.c:462\n> #5 ExecProcNode (node=0x31a6418)\n> #6 ExecSort (pstate=0x31a6308)\n> #7 ExecProcNodeInstr (node=<optimized out>)\n> #8 ExecProcNode (node=0x31a6308)\n> #9 fetch_input_tuple (aggstate=aggstate@entry=0x31a5ea0)\n> \n> $421643 = 0x99d7f7 \"ExecutorState\"\n> \n> These 600-ish 8kB buffer were all allocated in \"ExecutorState\". 
I could\n> probably log much more of them if more checks/stats need to be collected, but\n> it really slow down the query a lot, granting it only 1-5% of CPU time instead\n> of the usual 100%.\n> \n\nBingo!\n\n> So It's not exactly a leakage, as memory would be released at the end of the\n> query, but I suppose they should be allocated in a shorter living context,\n> to avoid this memory bloat, am I right?\n> \n\nWell, yeah and no.\n\nIn principle we could/should have allocated the BufFiles in a different\ncontext (possibly hashCxt). But in practice it probably won't make any\ndifference, because the query will probably run all the hashjoins at the\nsame time. Imagine a sequence of joins - we build all the hashes, and\nthen tuples from the outer side bubble up through the plans. And then\nyou process the last tuple and release all the hashes.\n\nThis would not fix the issue. It'd be helpful for accounting purposes\n(we'd know it's the buffiles and perhaps for which hashjoin node). But\nwe'd still have to allocate the memory etc. (so still OOM).\n\nThere's only one thing I think could help - increase the work_mem enough\nnot to trigger the explosive growth in number of batches. Imagine\nthere's one very common value, accounting for ~65MB of tuples. With\nwork_mem=64MB this leads to exactly the explosive growth you're\nobserving here. With 128MB it'd probably run just fine.\n\nThe problem is we don't know how large the work_mem would need to be :-(\nSo you'll have to try and experiment a bit.\n\nI remembered there was a thread [1] about *exactly* this issue in 2019.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net\n\nI even posted a couple patches that try to address this by accounting\nfor the BufFile memory, and increasing work_mem a bit instead of just\nblindly increasing the number of batches (ignoring the fact that means\nmore memory will be used for the BufFile stuff).\n\nI don't recall why it went nowhere, TBH. 
But I recall there were\ndiscussions about maybe doing something like \"block nestloop\" at the\ntime, or something. Or maybe the thread just went cold.\n\n>> BTW with how many batches does the hash join start?\n> \n> * batches went from 32 to 1048576 before being growEnabled=false as suspected\n> * original and current nbuckets were set to 1048576 immediately\n> * allowed space is set to the work_mem, but current space usage is 1.3GB, as\n> measured previously close before system refuse more memory allocation.\n> \n\nYeah, I think this is pretty expected. We start with multiple batches,\nso we pick optimal buckets for the whole work_mem (so no growth here).\n\nBut then batches explode, in the futile hope to keep this in work_mem.\nOnce that growth gets disabled, we end up with 1.3GB hash table.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:44:52 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi!\n\nOn Thu, 2 Mar 2023 13:44:52 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> Well, yeah and no.\n> \n> In principle we could/should have allocated the BufFiles in a different\n> context (possibly hashCxt). But in practice it probably won't make any\n> difference, because the query will probably run all the hashjoins at the\n> same time. Imagine a sequence of joins - we build all the hashes, and\n> then tuples from the outer side bubble up through the plans. And then\n> you process the last tuple and release all the hashes.\n> \n> This would not fix the issue. It'd be helpful for accounting purposes\n> (we'd know it's the buffiles and perhaps for which hashjoin node). But\n> we'd still have to allocate the memory etc. (so still OOM).\n\nWell, accounting things in the correct context would already worth a patch I\nsuppose. At least, it help to investigate faster. Plus, you already wrote a\npatch about that[1]:\n\nhttps://www.postgresql.org/message-id/20190421114618.z3mpgmimc3rmubi4%40development\n\nNote that I did reference the \"Out of Memory errors are frustrating as heck!\"\nthread in my first email, pointing on a Tom Lane's email explaining that\nExecutorState was not supposed to be so large[2].\n\n[2] https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n\n> There's only one thing I think could help - increase the work_mem enough\n> not to trigger the explosive growth in number of batches. Imagine\n> there's one very common value, accounting for ~65MB of tuples. With\n> work_mem=64MB this leads to exactly the explosive growth you're\n> observing here. 
With 128MB it'd probably run just fine.\n> \n> The problem is we don't know how large the work_mem would need to be :-(\n> So you'll have to try and experiment a bit.\n> \n> I remembered there was a thread [1] about *exactly* this issue in 2019.\n> \n> [1]\n> https://www.postgresql.org/message-id/flat/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net\n>\n> I even posted a couple patches that try to address this by accounting\n> for the BufFile memory, and increasing work_mem a bit instead of just\n> blindly increasing the number of batches (ignoring the fact that means\n> more memory will be used for the BufFile stuff).\n> \n> I don't recall why it went nowhere, TBH. But I recall there were\n> discussions about maybe doing something like \"block nestloop\" at the\n> time, or something. Or maybe the thread just went cold.\n\nSo I read the full thread now. I'm still not sure why we try to avoid hash\ncollision so hard, and why a similar data subset barely larger than work_mem\nmakes the number of batches explode, but I think I have a better understanding of\nthe discussion and the proposed solutions.\n\nThere were some thoughts about how to make a better usage of the memory. As\nmemory is exploding way beyond work_mem, at least, avoid to waste it with too\nmany buffers of BufFile. So you expand either the work_mem or the number of\nbatch, depending on what move is smarter. This is explained and tested here:\n\nhttps://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\nhttps://www.postgresql.org/message-id/20190422030927.3huxq7gghms4kmf4%40development\n\nAnd then, another patch to overflow each batch to a dedicated temp file and\nstay inside work_mem (v4-per-slice-overflow-file.patch):\n\nhttps://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n\nThen, nothing more on the discussion about this last patch. So I guess it just\nwent cold.\n\nFor what it worth, these two patches seem really interesting to me. 
Do you need\nany help to revive it?\n\nRegards,\n\n\n",
"msg_date": "Thu, 2 Mar 2023 19:15:30 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Thu, 2 Mar 2023 19:15:30 +0100\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> For what it worth, these two patches seems really interesting to me. Do you\n> need any help to revive it?\n\nTo avoid confusion, the two patches I meant were:\n\n* 0001-move-BufFile-stuff-into-separate-context.patch \t\n* v4-per-slice-overflow-file.patch\n\nRegards,\n\n\n",
"msg_date": "Thu, 2 Mar 2023 19:37:27 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 3/2/23 19:15, Jehan-Guillaume de Rorthais wrote:\n> Hi!\n> \n> On Thu, 2 Mar 2023 13:44:52 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> Well, yeah and no.\n>>\n>> In principle we could/should have allocated the BufFiles in a different\n>> context (possibly hashCxt). But in practice it probably won't make any\n>> difference, because the query will probably run all the hashjoins at the\n>> same time. Imagine a sequence of joins - we build all the hashes, and\n>> then tuples from the outer side bubble up through the plans. And then\n>> you process the last tuple and release all the hashes.\n>>\n>> This would not fix the issue. It'd be helpful for accounting purposes\n>> (we'd know it's the buffiles and perhaps for which hashjoin node). But\n>> we'd still have to allocate the memory etc. (so still OOM).\n> \n> Well, accounting things in the correct context would already worth a patch I\n> suppose. At least, it help to investigate faster. Plus, you already wrote a\n> patch about that[1]:\n> \n> https://www.postgresql.org/message-id/20190421114618.z3mpgmimc3rmubi4%40development\n> \n> Note that I did reference the \"Out of Memory errors are frustrating as heck!\"\n> thread in my first email, pointing on a Tom Lane's email explaining that\n> ExecutorState was not supposed to be so large[2].\n> \n> [2] https://www.postgresql.org/message-id/flat/12064.1555298699%40sss.pgh.pa.us#eb519865575bbc549007878a5fb7219b\n> \n\nAh, right, I didn't realize it's the same thread. There are far too many\nthreads about this sort of things, and I probably submitted half-baked\npatches to most of them :-/\n\n>> There's only one thing I think could help - increase the work_mem enough\n>> not to trigger the explosive growth in number of batches. Imagine\n>> there's one very common value, accounting for ~65MB of tuples. With\n>> work_mem=64MB this leads to exactly the explosive growth you're\n>> observing here. 
With 128MB it'd probably run just fine.\n>>\n>> The problem is we don't know how large the work_mem would need to be :-(\n>> So you'll have to try and experiment a bit.\n>>\n>> I remembered there was a thread [1] about *exactly* this issue in 2019.\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net\n>>\n>> I even posted a couple patches that try to address this by accounting\n>> for the BufFile memory, and increasing work_mem a bit instead of just\n>> blindly increasing the number of batches (ignoring the fact that means\n>> more memory will be used for the BufFile stuff).\n>>\n>> I don't recall why it went nowhere, TBH. But I recall there were\n>> discussions about maybe doing something like \"block nestloop\" at the\n>> time, or something. Or maybe the thread just went cold.\n> \n> So I read the full thread now. I'm still not sure why we try to avoid hash\n> collision so hard, and why a similar data subset barely larger than work_mem\n> makes the number of batchs explode, but I think I have a better understanding of\n> the discussion and the proposed solutions.\n> \n\nI don't think this is about hash collisions (i.e. the same hash value\nbeing computed for different values). You can construct cases like this,\nof course, particularly if you only look at a subset of the bits (for 1M\nbatches we only look at the first 20 bits), but I'd say it's fairly\nunlikely to happen unless you do that intentionally.\n\n(I'm assuming regular data types with reasonable hash functions. If the\nquery joins on custom data types with some silly hash function, it may\nbe more likely to have conflicts.)\n\nIMHO a much more likely explanation is there actually is a very common\nvalue in the data. 
For example there might be a value repeated 1M times,\nand that'd be enough to break this.\n\nWe do build a special \"skew\" buckets for values from an MCV, but maybe\nthe stats are not updated yet, or maybe there are too many such values\nto fit into MCV?\n\nI now realize there's probably another way to get into this - oversized\nrows. Could there be a huge row (e.g. with a large text/bytea value)?\nImagine a row that's 65MB - that'd be game over with work_mem=64MB. Or\nthere might be smaller rows, but a couple hash collisions would suffice.\n\n> There was some thoughts about how to make a better usage of the memory. As\n> memory is exploding way beyond work_mem, at least, avoid to waste it with too\n> many buffers of BufFile. So you expand either the work_mem or the number of\n> batch, depending on what move is smarter. TJis is explained and tested here:\n> \n> https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n> https://www.postgresql.org/message-id/20190422030927.3huxq7gghms4kmf4%40development\n> \n> And then, another patch to overflow each batch to a dedicated temp file and\n> stay inside work_mem (v4-per-slice-overflow-file.patch):\n> \n> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n> \n> Then, nothing more on the discussion about this last patch. So I guess it just\n> went cold.\n> \n\nI think a contributing factor was that the OP did not respond for a\ncouple months, so the thread went cold.\n\n> For what it worth, these two patches seems really interesting to me. 
Do you need\n> any help to revive it?\n> \n\nI think another reason why that thread went nowhere were some that we've\nbeen exploring a different (and likely better) approach to fix this by\nfalling back to a nested loop for the \"problematic\" batches.\n\nAs proposed in this thread:\n\n https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n\nSo I guess the best thing would be to go through these threads, see what\nthe status is, restart the discussion and propose what to do. If you do\nthat, I'm happy to rebase the patches, and maybe see if I could improve\nthem in some way.\n\nI was hoping we'd solve this by the BNL, but if we didn't get that in 4\nyears, maybe we shouldn't stall and get at least an imperfect stop-gap\nsolution ...\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Mar 2023 19:53:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Thu, 2 Mar 2023 19:53:14 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> On 3/2/23 19:15, Jehan-Guillaume de Rorthais wrote:\n...\n\n> > There was some thoughts about how to make a better usage of the memory. As\n> > memory is exploding way beyond work_mem, at least, avoid to waste it with\n> > too many buffers of BufFile. So you expand either the work_mem or the\n> > number of batch, depending on what move is smarter. TJis is explained and\n> > tested here:\n> > \n> > https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n> > https://www.postgresql.org/message-id/20190422030927.3huxq7gghms4kmf4%40development\n> > \n> > And then, another patch to overflow each batch to a dedicated temp file and\n> > stay inside work_mem (v4-per-slice-overflow-file.patch):\n> > \n> > https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n> > \n> > Then, nothing more on the discussion about this last patch. So I guess it\n> > just went cold.\n> \n> I think a contributing factor was that the OP did not respond for a\n> couple months, so the thread went cold.\n> \n> > For what it worth, these two patches seems really interesting to me. Do you\n> > need any help to revive it?\n> \n> I think another reason why that thread went nowhere were some that we've\n> been exploring a different (and likely better) approach to fix this by\n> falling back to a nested loop for the \"problematic\" batches.\n> \n> As proposed in this thread:\n> \n> https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n\nUnless I'm wrong, you are linking to the same «frustrated as heck!» discussion,\nfor your patch v2-0001-account-for-size-of-BatchFile-structure-in-hashJo.patch\n(balancing between increasing batches *and* work_mem).\n\nNo sign of turning \"problematic\" batches to nested loop. 
Did I miss something?\n\nDo you have a link close to your hand about such algo/patch test by any chance?\n\n> I was hoping we'd solve this by the BNL, but if we didn't get that in 4\n> years, maybe we shouldn't stall and get at least an imperfect stop-gap\n> solution ...\n\nI'll keep searching tomorrow about existing BNL discussions (is it block level\nnested loops?).\n\nRegards,\n\n\n",
"msg_date": "Thu, 2 Mar 2023 23:57:21 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 3/2/23 23:57, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 2 Mar 2023 19:53:14 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> On 3/2/23 19:15, Jehan-Guillaume de Rorthais wrote:\n> ...\n> \n>>> There was some thoughts about how to make a better usage of the memory. As\n>>> memory is exploding way beyond work_mem, at least, avoid to waste it with\n>>> too many buffers of BufFile. So you expand either the work_mem or the\n>>> number of batch, depending on what move is smarter. TJis is explained and\n>>> tested here:\n>>>\n>>> https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n>>> https://www.postgresql.org/message-id/20190422030927.3huxq7gghms4kmf4%40development\n>>>\n>>> And then, another patch to overflow each batch to a dedicated temp file and\n>>> stay inside work_mem (v4-per-slice-overflow-file.patch):\n>>>\n>>> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n>>>\n>>> Then, nothing more on the discussion about this last patch. So I guess it\n>>> just went cold.\n>>\n>> I think a contributing factor was that the OP did not respond for a\n>> couple months, so the thread went cold.\n>>\n>>> For what it worth, these two patches seems really interesting to me. Do you\n>>> need any help to revive it?\n>>\n>> I think another reason why that thread went nowhere were some that we've\n>> been exploring a different (and likely better) approach to fix this by\n>> falling back to a nested loop for the \"problematic\" batches.\n>>\n>> As proposed in this thread:\n>>\n>> https://www.postgresql.org/message-id/20190421161434.4hedytsadpbnglgk%40development\n> \n> Unless I'm wrong, you are linking to the same «frustrated as heck!» discussion,\n> for your patch v2-0001-account-for-size-of-BatchFile-structure-in-hashJo.patch\n> (balancing between increasing batches *and* work_mem).\n> \n> No sign of turning \"problematic\" batches to nested loop. 
Did I miss something?\n> \n> Do you have a link close to your hand about such algo/patch test by any chance?\n> \n\nGah! My apologies, I meant to post a link to this thread:\n\nhttps://www.postgresql.org/message-id/CAAKRu_b6+jC93WP+pWxqK5KAZJC5Rmxm8uquKtEf-KQ++1Li6Q@mail.gmail.com\n\nwhich then points to this BNL patch\n\nhttps://www.postgresql.org/message-id/CAAKRu_YsWm7gc_b2nBGWFPE6wuhdOLfc1LBZ786DUzaCPUDXCA%40mail.gmail.com\n\nThat discussion apparently stalled in August 2020, so maybe that's where\nwe should pick up and see in what shape that patch is.\n\nregards\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Mar 2023 00:24:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\n> So I guess the best thing would be to go through these threads, see what\n> the status is, restart the discussion and propose what to do. If you do\n> that, I'm happy to rebase the patches, and maybe see if I could improve\n> them in some way.\n\nOK! It took me some time, but I did it. I'll try to sum up the situation as\nsimply as possible.\n\nI reviewed the following threads:\n\n* Out of Memory errors are frustrating as heck!\n 2019-04-14 -> 2019-04-28\n https://www.postgresql.org/message-id/flat/bc138e9f-c89e-9147-5395-61d51a757b3b%40gusw.net\n\n This discussion stalled, waiting for OP, but ideas there ignited all other\n discussions.\n\n* accounting for memory used for BufFile during hash joins\n 2019-05-04 -> 2019-09-10\n https://www.postgresql.org/message-id/flat/20190504003414.bulcbnge3rhwhcsh%40development\n\n This was suppose to push forward a patch discussed on previous thread, but\n it actually took over it and more ideas pops from there.\n\n* Replace hashtable growEnable flag\n 2019-05-15 -> 2019-05-16\n https://www.postgresql.org/message-id/flat/CAB0yrekv%3D6_T_eUe2kOEvWUMwufcvfd15SFmCABtYFOkxCFdfA%40mail.gmail.com\n\n This one quickly merged to the next one.\n\n* Avoiding hash join batch explosions with extreme skew and weird stats\n 2019-05-16 -> 2020-09-24\n https://www.postgresql.org/message-id/flat/CA%2BhUKGKWWmf%3DWELLG%3DaUGbcugRaSQbtm0tKYiBut-B2rVKX63g%40mail.gmail.com\n\n Another thread discussing another facet of the problem, but eventually end up\n discussing / reviewing the BNLJ implementation.\n \nFive possible fixes/ideas were discussed all over these threads:\n\n\n1. 
\"move BufFile stuff into separate context\"\n last found patch: 2019-04-21\n https://www.postgresql.org/message-id/20190421114618.z3mpgmimc3rmubi4%40development\n https://www.postgresql.org/message-id/attachment/100822/0001-move-BufFile-stuff-into-separate-context.patch\n\n This patch helps with observability/debug by allocating the bufFiles in the\n appropriate context instead of the \"ExecutorState\" one.\n\n I suppose this simple one has been forgotten in the fog of all other\n discussions. Also, this probably worth to be backpatched.\n\n2. \"account for size of BatchFile structure in hashJoin\"\n last found patch: 2019-04-22\n https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n https://www.postgresql.org/message-id/attachment/100951/v2-simple-rebalance.patch\n\n This patch seems like a good first step:\n\n * it definitely helps older versions where other patches discussed are way\n too invasive to be backpatched\n * it doesn't step on the way of other discussed patches\n\n While looking at the discussions around this patch, I was wondering if the\n planner considers the memory allocation of bufFiles. 
But of course, Melanie\n already noticed that long before I was aware of this problem and discussion:\n\n 2019-07-10: «I do think that accounting for Buffile overhead when estimating\n the size of the hashtable during ExecChooseHashTableSize() so it can be\n used during planning is a worthwhile patch by itself (though I know it\n is not even part of this patch).»\n https://www.postgresql.org/message-id/CAAKRu_Yiam-%3D06L%2BR8FR%2BVaceb-ozQzzMqRiY2pDYku1VdZ%3DEw%40mail.gmail.com\n \n Tomas Vondra agreed with this in his answer, but no new version of the patch\n where produced.\n\n Finally, Melanie was pushing the idea to commit this patch no matter other\n pending patches/ideas:\n\n 2019-09-05: «If Tomas or someone else has time to pick up and modify BufFile\n accounting patch, committing that still seems like the nest logical\n step.»\n https://www.postgresql.org/message-id/CAAKRu_b6%2BjC93WP%2BpWxqK5KAZJC5Rmxm8uquKtEf-KQ%2B%2B1Li6Q%40mail.gmail.com\n\n Unless I'm wrong, no one down voted this.\n\n3. \"per slice overflow file\"\n last found patch: 2019-05-08\n https://www.postgresql.org/message-id/20190508150844.rij36rtuk4lhvztw%40development\n https://www.postgresql.org/message-id/attachment/101080/v4-per-slice-overflow-file-20190508.patch\n\n This patch has been withdraw after an off-list discussion with Thomas Munro\n because of a missing parallel hashJoin implementation. 
Plus, before any\n effort started on the parallel implementation, the BNLJ idea appeared and\n seemed more appealing.\n\n See:\n https://www.postgresql.org/message-id/20190529145517.sj2poqmb3cr4cg6w%40development\n\n By the time, it still seems to have some interest despite the BNLJ patch:\n\n 2019-07-10: «If slicing is made to work for parallel-aware hashjoin and the\n code is in a committable state (and probably has the threshold I mentioned\n above), then I think that this patch should go in.»\n https://www.postgresql.org/message-id/CAAKRu_Yiam-%3D06L%2BR8FR%2BVaceb-ozQzzMqRiY2pDYku1VdZ%3DEw%40mail.gmail.com\n\n But this might have been disapproved later by Tomas:\n\n 2019-09-10: «I have to admit I kinda lost track [...] My feeling is that we\n should get the BNLJ committed first, and then maybe use some of those\n additional strategies as fallbacks (depending on which issues are still\n unsolved by the BNLJ).»\n https://www.postgresql.org/message-id/20190910134751.x64idfqj6qgt37om%40development\n\n4. \"Block Nested Loop Join\"\n last found patch: 2020-08-31\n https://www.postgresql.org/message-id/CAAKRu_aLMRHX6_y%3DK5i5wBMTMQvoPMO8DT3eyCziTHjsY11cVA%40mail.gmail.com\n https://www.postgresql.org/message-id/attachment/113608/v11-0001-Implement-Adaptive-Hashjoin.patch\n\n Most of the discussion was consideration about the BNLJ parallel and\n semi-join implementation. Melanie put a lot of work on this. This looks like\n the most advanced patch so far and add a fair amount of complexity.\n\n There were some open TODOs, but Melanie was waiting for some more review and\n feedback on v11 first.\n\n5. 
Only split the skewed batches\n Discussion: 2019-07-11\n https://www.postgresql.org/message-id/CA%2BTgmoYqpbzC1g%2By0bxDFkpM60Kr2fnn0hVvT-RfVWonRY2dMA%40mail.gmail.com\n https://www.postgresql.org/message-id/CAB0yremvswRAT86Afb9MZ_PaLHyY9BT313-adCHbhMJ%3Dx_GEcg%40mail.gmail.com\n\n Robert Haas pointed out that current implementation and discussion were not\n really responding to the skew in a very effective way. He's considering\n splitting batches unevenly. Hubert Zhang stepped in, detailed some more and\n volunteer to work on such a patch.\n\n No one reacted.\n\n It seems to me this is an interesting track to explore. This looks like a\n good complement of 2. (\"account for size of BatchFile structure in hashJoin\"\n patch). However, this idea probably couldn't be backpatched.\n\n Note that it could help with 3. as well by slicing only the remaining skewed\n values.\n\n Also, this could have some impact on the \"Block Nested Loop Join\" patch if\n the later is kept to deal with the remaining skewed batches.\n\n> I was hoping we'd solve this by the BNL, but if we didn't get that in 4\n> years, maybe we shouldn't stall and get at least an imperfect stop-gap\n> solution ...\n\nIndeed. So, to sum-up:\n\n* Patch 1 could be rebased/applied/backpatched\n* Patch 2 is worth considering to backpatch\n* Patch 3 seemed withdrawn in favor of BNLJ\n* Patch 4 is waiting for some more review and has some TODO\n* discussion 5 worth few minutes to discuss before jumping on previous topics\n\n1 & 2 are imperfect solution but doesn't weight much and could be backpatched.\n4 & 5 are long-term solutions for a futur major version needing some more\ndiscussions, test and reviews.\n3 is not 100% buried, but a last round in the arena might settle its destiny for\ngood.\n\nHopefully this sum-up is exhaustive and will help clarify this 3-years-old\ntopic.\n\nRegards,\n\n\n",
"msg_date": "Fri, 10 Mar 2023 19:51:14 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
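The "rebalancing" idea behind patch 2 above (stop blindly doubling nbatch once the per-batch BufFile buffers cost more than the hash table they save) can be illustrated with a toy cost model. Everything below is an illustrative sketch: the function names and the simplified cost formula are ours, not the actual patch logic.

```python
# Toy model of the trade-off: every batch keeps a BufFile whose buffer is
# one BLCKSZ (8 kB) block, on both the inner and the outer side, so the
# real footprint is roughly data-per-batch plus 2 * nbatch * BLCKSZ --
# overhead that plain work_mem accounting ignores.
BLCKSZ = 8192  # PostgreSQL's default block size

def total_memory(inner_size, nbatch):
    """Rough total memory: the in-memory share of the inner relation
    plus the per-batch BufFile buffers (inner + outer side)."""
    return inner_size / nbatch + 2 * nbatch * BLCKSZ

def best_nbatch(inner_size, max_log2=24):
    """Pick the power-of-two batch count minimizing the rough total.
    Past this point, doubling nbatch costs more in BufFile buffers
    than it saves in hash table size."""
    candidates = [1 << i for i in range(max_log2 + 1)]
    return min(candidates, key=lambda n: total_memory(inner_size, n))
```

For a 1 GiB inner relation this model bottoms out at 256 batches (4 MiB of data per batch plus 4 MiB of BufFile buffers), whereas doubling nbatch until each batch fits a small work_mem just explodes the BufFile overhead instead.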
{
"msg_contents": "Hi there,\n\nOn Fri, 10 Mar 2023 19:51:14 +0100\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> > So I guess the best thing would be to go through these threads, see what\n> > the status is, restart the discussion and propose what to do. If you do\n> > that, I'm happy to rebase the patches, and maybe see if I could improve\n> > them in some way. \n> \n> [...]\n> \n> > I was hoping we'd solve this by the BNL, but if we didn't get that in 4\n> > years, maybe we shouldn't stall and get at least an imperfect stop-gap\n> > solution ... \n> \n> Indeed. So, to sum-up:\n> \n> * Patch 1 could be rebased/applied/backpatched\n\nWould it help if I rebase Patch 1 (\"move BufFile stuff into separate context\")?\n\n> * Patch 2 is worth considering to backpatch\n\nSame question.\n\n> * Patch 3 seemed withdrawn in favor of BNLJ\n> * Patch 4 is waiting for some more review and has some TODO\n> * discussion 5 worth few minutes to discuss before jumping on previous topics\n\nThese other patches needs more discussions and hacking. They have a low\npriority compare to other discussions and running commitfest. However, how can\navoid losing them in limbo again?\n\nRegards,\n\n\n",
"msg_date": "Fri, 17 Mar 2023 09:18:34 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\nOn 3/17/23 09:18, Jehan-Guillaume de Rorthais wrote:\n> Hi there,\n> \n> On Fri, 10 Mar 2023 19:51:14 +0100\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n>>> So I guess the best thing would be to go through these threads, see what\n>>> the status is, restart the discussion and propose what to do. If you do\n>>> that, I'm happy to rebase the patches, and maybe see if I could improve\n>>> them in some way. \n>>\n>> [...]\n>>\n>>> I was hoping we'd solve this by the BNL, but if we didn't get that in 4\n>>> years, maybe we shouldn't stall and get at least an imperfect stop-gap\n>>> solution ... \n>>\n>> Indeed. So, to sum-up:\n>>\n>> * Patch 1 could be rebased/applied/backpatched\n> \n> Would it help if I rebase Patch 1 (\"move BufFile stuff into separate context\")?\n> \n\nYeah, I think this is something we'd want to do. It doesn't change the\nbehavior, but it makes it easier to track the memory consumption etc.\n\n>> * Patch 2 is worth considering to backpatch\n> \n\nI'm not quite sure what exactly are the numbered patches, as some of the\nthreads had a number of different patch ideas, and I'm not sure which\none was/is the most promising one.\n\nIIRC there were two directions:\n\na) \"balancing\" i.e. increasing work_mem to minimize the total memory\nconsumption (best effort, but allowing exceeding work_mem)\n\nb) limiting the number of BufFiles, and combining data from \"future\"\nbatches into a single file\n\nI think the spilling is \"nicer\" in that it actually enforces work_mem\nmore strictly than (a), but we should probably spend a bit more time on\nthe exact spilling strategy. I only really tried two trivial ideas, but\nmaybe we can be smarter about allocating / sizing the files? 
Maybe\ninstead of slices of constant size we could/should make them larger,\nsimilarly to what log-structured storage does.\n\nFor example we might double the number of batches a file represents, so\nthe first file would be for batch 1, the next one for batches 2+3, then\n4+5+6+7, then 8-15, then 16-31, ...\n\nWe should have some model calculating (i) the amount of disk space we would\nneed for the spilled files, and (ii) the amount of I/O we do when\nwriting the data between temp files.\n\n> Same question.\n> \n>> * Patch 3 seemed withdrawn in favor of BNLJ\n>> * Patch 4 is waiting for some more review and has some TODO\n>> * discussion 5 worth few minutes to discuss before jumping on previous topics\n> \n> These other patches needs more discussions and hacking. They have a low\n> priority compare to other discussions and running commitfest. However, how can\n> avoid losing them in limbo again?\n> \n\nI think focusing on the backpatchability is not a particularly good\napproach. It's probably better to first check if there's any hope of\nrestarting the work on BNLJ, which seems like a \"proper\" solution for\nthe future.\n\nIf getting that soon (in PG17) is unlikely, let's revive the rebalance\nand/or spilling patches. Imperfect but better than nothing.\n\nAnd then in the end we can talk about if/what can be backpatched.\n\n\nFWIW I don't think there's a lot of rush, considering this is clearly a\nmatter for PG17. So the summer CF at the earliest, people are going to\nbe busy until then.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Mar 2023 17:41:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
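The doubling slice layout Tomas sketches above (file 0 for batch 1, file 1 for batches 2+3, file 2 for 4..7, file 3 for 8..15, ...) has a simple closed form. This is an illustrative sketch only, with hypothetical function names, not code from any of the posted patches.

```python
# With this layout a hash join with nbatch batches needs only about
# log2(nbatch) + 1 temp files instead of nbatch: each slice file covers
# twice as many batches as the previous one.

def slice_for_batch(batchno: int) -> int:
    """Temp-file index that batch `batchno` (>= 1) lands in:
    batch 1 -> file 0, batches 2-3 -> file 1, 4-7 -> file 2, ..."""
    return batchno.bit_length() - 1

def batches_in_slice(sliceno: int) -> range:
    """Inverse mapping: the contiguous batch range a slice covers."""
    return range(1 << sliceno, 1 << (sliceno + 1))
```

The trade-off, as the message notes, is that batches sharing a file must be rewritten when they are eventually split apart, so a real design would also model the disk space and the write amplification this causes.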
{
"msg_contents": "On Fri, Mar 17, 2023 at 05:41:11PM +0100, Tomas Vondra wrote:\n> >> * Patch 2 is worth considering to backpatch\n> \n> I'm not quite sure what exactly are the numbered patches, as some of the\n> threads had a number of different patch ideas, and I'm not sure which\n> one was/is the most promising one.\n\npatch 2 is referring to the list of patches that was compiled\nhttps://www.postgresql.org/message-id/20230310195114.6d0c5406%40karst\n\n\n",
"msg_date": "Sun, 19 Mar 2023 14:31:51 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 3/19/23 20:31, Justin Pryzby wrote:\n> On Fri, Mar 17, 2023 at 05:41:11PM +0100, Tomas Vondra wrote:\n>>>> * Patch 2 is worth considering to backpatch\n>>\n>> I'm not quite sure what exactly are the numbered patches, as some of the\n>> threads had a number of different patch ideas, and I'm not sure which\n>> one was/is the most promising one.\n> \n> patch 2 is referring to the list of patches that was compiled\n> https://www.postgresql.org/message-id/20230310195114.6d0c5406%40karst\n\nAh, I see - it's just the \"rebalancing\" patch which minimizes the total\namount of memory used (i.e. grow work_mem a bit, so that we don't\nallocate too many files).\n\nYeah, I think that's the best we can do without reworking how we spill\ndata (slicing or whatever).\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:32:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Mon, 20 Mar 2023 09:32:17 +0100\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> >> * Patch 1 could be rebased/applied/backpatched \n> > \n> > Would it help if I rebase Patch 1 (\"move BufFile stuff into separate\n> > context\")? \n> \n> Yeah, I think this is something we'd want to do. It doesn't change the\n> behavior, but it makes it easier to track the memory consumption etc.\n\nWill do this week.\n\n> I think focusing on the backpatchability is not particularly good\n> approach. It's probably be better to fist check if there's any hope of\n> restarting the work on BNLJ, which seems like a \"proper\" solution for\n> the future.\n\nBackpatching worth some consideration. Balancing is almost there and could help\nin 16. As time is flying, it might well miss release 16, but maybe it could be\nbackpacthed in 16.1 or a later minor release? But what if it is not eligible for\nbackpatching? We would stall another year without having at least an imperfect\nstop-gap solution (sorry for picking your words ;)).\n\nBNJL and/or other considerations are for 17 or even after. In the meantime,\nMelanie, who authored BNLJ, +1 the balancing patch as it can coexists with other\ndiscussed solutions. No one down vote since then. Melanie, what is your\nopinion today on this patch? Did you change your mind as you worked for many\nmonths on BNLJ since then?\n\n> On 3/19/23 20:31, Justin Pryzby wrote:\n> > On Fri, Mar 17, 2023 at 05:41:11PM +0100, Tomas Vondra wrote: \n> >>>> * Patch 2 is worth considering to backpatch \n> >>\n> >> I'm not quite sure what exactly are the numbered patches, as some of the\n> >> threads had a number of different patch ideas, and I'm not sure which\n> >> one was/is the most promising one. 
\n> > \n> > patch 2 is referring to the list of patches that was compiled\n> > https://www.postgresql.org/message-id/20230310195114.6d0c5406%40karst \n> \n> Ah, I see - it's just the \"rebalancing\" patch which minimizes the total\n> amount of memory used (i.e. grow work_mem a bit, so that we don't\n> allocate too many files).\n> \n> Yeah, I think that's the best we can do without reworking how we spill\n> data (slicing or whatever).\n\nIndeed, that's why I was speaking about backpatching for this one:\n\n* it surely helps 16 (and maybe previous release) in such skewed situation\n* it's constrained\n* it's not too invasive, it doesn't shake to whole algorithm all over the place\n* Melanie was +1 for it, no one down vote. \n\nWhat need to be discussed/worked:\n\n* any regressions for existing queries running fine or without OOM?\n* add the bufFile memory consumption in the planner considerations?\n\n> I think the spilling is \"nicer\" in that it actually enforces work_mem\n> more strictly than (a), \n\nSure it enforces work_mem more strictly, but this is a discussion for 17 or\nlater in my humble opinion.\n\n> but we should probably spend a bit more time on\n> the exact spilling strategy. I only really tried two trivial ideas, but\n> maybe we can be smarter about allocating / sizing the files? Maybe\n> instead of slices of constant size we could/should make them larger,\n> similarly to what log-structured storage does.\n> \n> For example we might double the number of batches a file represents, so\n> the first file would be for batch 1, the next one for batches 2+3, then\n> 4+5+6+7, then 8-15, then 16-31, ...\n> \n> We should have some model calculating (i) amount of disk space we would\n> need for the spilled files, and (ii) the amount of I/O we do when\n> writing the data between temp files.\n\nAnd:\n\n* what about Robert's discussion on uneven batch distribution? Why is it\n ignored? Maybe there was some IRL or off-list discussions? 
Or did I missed\n some mails?\n* what about dealing with all normal batch first, then revamp in\n freshly emptied batches the skewed ones, spliting them if needed, then rince &\n repeat? At some point, we would probably still need something like slicing\n and/or BNLJ though...\n\n> let's revive the rebalance and/or spilling patches. Imperfect but better than\n> nothing.\n\n+1 for rebalance. I'm not even sure it could make it to 16 as we are running\nout time, but it worth to try as it's the closest one that could be\nreviewed and ready'd-for-commiter.\n\nI might lack of ambition, but spilling patches seems too much to make it for\n16. It seems to belongs with other larger patches/ideas (patches 4 a 5 in my sum\nup). But this is just my humble feeling.\n\n> And then in the end we can talk about if/what can be backpatched.\n> \n> FWIW I don't think there's a lot of rush, considering this is clearly a\n> matter for PG17. So the summer CF at the earliest, people are going to\n> be busy until then.\n\n100% agree, there's no rush for patches 3, 4 ... and 5.\n\nThanks!\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:12:34 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 1:51 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> > So I guess the best thing would be to go through these threads, see what\n> > the status is, restart the discussion and propose what to do. If you do\n> > that, I'm happy to rebase the patches, and maybe see if I could improve\n> > them in some way.\n>\n> OK! It took me some time, but I did it. I'll try to sum up the situation as\n> simply as possible.\n\nWow, so many memories!\n\nI'm excited that someone looked at this old work (though it is sad that\na customer faced this issue). And, Jehan, I really appreciate your great\nsummarization of all these threads. This will be a useful reference.\n\n> 1. \"move BufFile stuff into separate context\"\n> last found patch: 2019-04-21\n> https://www.postgresql.org/message-id/20190421114618.z3mpgmimc3rmubi4%40development\n> https://www.postgresql.org/message-id/attachment/100822/0001-move-BufFile-stuff-into-separate-context.patch\n>\n> This patch helps with observability/debug by allocating the bufFiles in the\n> appropriate context instead of the \"ExecutorState\" one.\n>\n> I suppose this simple one has been forgotten in the fog of all other\n> discussions. Also, this probably worth to be backpatched.\n\nI agree with Jehan-Guillaume and Tomas that this seems fine to commit\nalone.\n\nTomas:\n> Yeah, I think this is something we'd want to do. It doesn't change the\n> behavior, but it makes it easier to track the memory consumption etc.\n\n> 2. 
\"account for size of BatchFile structure in hashJoin\"\n> last found patch: 2019-04-22\n> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n> https://www.postgresql.org/message-id/attachment/100951/v2-simple-rebalance.patch\n>\n> This patch seems like a good first step:\n>\n> * it definitely helps older versions where other patches discussed are way\n> too invasive to be backpatched\n> * it doesn't step on the way of other discussed patches\n>\n> While looking at the discussions around this patch, I was wondering if the\n> planner considers the memory allocation of bufFiles. But of course, Melanie\n> already noticed that long before I was aware of this problem and discussion:\n>\n> 2019-07-10: «I do think that accounting for Buffile overhead when estimating\n> the size of the hashtable during ExecChooseHashTableSize() so it can be\n> used during planning is a worthwhile patch by itself (though I know it\n> is not even part of this patch).»\n> https://www.postgresql.org/message-id/CAAKRu_Yiam-%3D06L%2BR8FR%2BVaceb-ozQzzMqRiY2pDYku1VdZ%3DEw%40mail.gmail.com\n>\n> Tomas Vondra agreed with this in his answer, but no new version of the patch\n> where produced.\n>\n> Finally, Melanie was pushing the idea to commit this patch no matter other\n> pending patches/ideas:\n>\n> 2019-09-05: «If Tomas or someone else has time to pick up and modify BufFile\n> accounting patch, committing that still seems like the nest logical\n> step.»\n> https://www.postgresql.org/message-id/CAAKRu_b6%2BjC93WP%2BpWxqK5KAZJC5Rmxm8uquKtEf-KQ%2B%2B1Li6Q%40mail.gmail.com\n\nI think I would have to see a modern version of a patch which does this\nto assess if it makes sense. But, I probably still agree with 2019\nMelanie :)\nOverall, I think anything that makes it faster to identify customer\ncases of this bug is good (which, I would think granular memory contexts\nand accounting would do). 
Even if it doesn't fix it, we can determine\nmore easily how often customers are hitting this issue, which helps\njustify an admittedly large design change to hash join.\n\nOn Mon, Mar 20, 2023 at 10:12 AM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n> BNJL and/or other considerations are for 17 or even after. In the meantime,\n> Melanie, who authored BNLJ, +1 the balancing patch as it can coexists with other\n> discussed solutions. No one down vote since then. Melanie, what is your\n> opinion today on this patch? Did you change your mind as you worked for many\n> months on BNLJ since then?\n\nSo, in order to avoid deadlock, my design of adaptive hash join/block\nnested loop hash join required a new parallelism concept not yet in\nPostgres at the time -- the idea of a lone worker remaining around to do\nwork when others have left.\n\nSee: BarrierArriveAndDetachExceptLast()\nintroduced in 7888b09994\n\nThomas Munro had suggested we needed to battle test this concept in a\nmore straightforward feature first, so I implemented parallel full outer\nhash join and parallel right outer hash join with it.\n\nhttps://commitfest.postgresql.org/42/2903/\n\nThis has been stalled ready-for-committer for two years. It happened to\nchange timing such that it made an existing rarely hit parallel hash\njoin bug more likely to be hit. Thomas recently committed our fix for\nthis in 8d578b9b2e37a4d (last week). It is my great hope that parallel\nfull outer hash join goes in before the 16 feature freeze.\n\nIf it does, I think it could make sense to try and find committable\nsmaller pieces of the adaptive hash join work. As it is today, parallel\nhash join does not respect work_mem, and, in some sense, is a bit broken.\n\nI would be happy to work on this feature again, or, if you were\ninterested in picking it up, to provide review and any help I can if you\nwant to work on it.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 23 Mar 2023 08:07:04 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
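The "lone worker" parallelism concept Melanie describes above can be mimicked in a few lines. This single-process toy (all names are hypothetical) only illustrates the arrive-and-detach-except-last semantics; the real BarrierArriveAndDetachExceptLast() from commit 7888b09994 coordinates separate backends through shared memory.

```python
# Toy illustration: when parallel workers finish their share of a phase,
# each one detaches from the barrier except the last to arrive, which is
# told to stay attached and complete the remaining work alone -- avoiding
# the deadlocks that waiting for every peer could cause.
import threading

class ToyBarrier:
    def __init__(self, participants: int):
        self._lock = threading.Lock()
        self._attached = participants

    def arrive_and_detach_except_last(self) -> bool:
        """Return True only for the last attached participant, which must
        keep working; everyone else detaches and returns False."""
        with self._lock:
            if self._attached == 1:
                return True
            self._attached -= 1
            return False
```

With three participants, the first two calls detach and the third is elected to finish the serial work.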
{
"msg_contents": "\n\nOn 3/23/23 13:07, Melanie Plageman wrote:\n> On Fri, Mar 10, 2023 at 1:51 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n>>> So I guess the best thing would be to go through these threads, see what\n>>> the status is, restart the discussion and propose what to do. If you do\n>>> that, I'm happy to rebase the patches, and maybe see if I could improve\n>>> them in some way.\n>>\n>> OK! It took me some time, but I did it. I'll try to sum up the situation as\n>> simply as possible.\n> \n> Wow, so many memories!\n> \n> I'm excited that someone looked at this old work (though it is sad that\n> a customer faced this issue). And, Jehan, I really appreciate your great\n> summarization of all these threads. This will be a useful reference.\n> >> 1. \"move BufFile stuff into separate context\"\n>> last found patch: 2019-04-21\n>> https://www.postgresql.org/message-id/20190421114618.z3mpgmimc3rmubi4%40development\n>> https://www.postgresql.org/message-id/attachment/100822/0001-move-BufFile-stuff-into-separate-context.patch\n>>\n>> This patch helps with observability/debug by allocating the bufFiles in the\n>> appropriate context instead of the \"ExecutorState\" one.\n>>\n>> I suppose this simple one has been forgotten in the fog of all other\n>> discussions. Also, this probably worth to be backpatched.\n> \n> I agree with Jehan-Guillaume and Tomas that this seems fine to commit\n> alone.\n> \n\n+1 to that. I think the separate memory contexts would be\nnon-controversial for backpatching too.\n\n> Tomas:\n>> Yeah, I think this is something we'd want to do. It doesn't change the\n>> behavior, but it makes it easier to track the memory consumption etc.\n> \n>> 2. 
\"account for size of BatchFile structure in hashJoin\"\n>> last found patch: 2019-04-22\n>> https://www.postgresql.org/message-id/20190428141901.5dsbge2ka3rxmpk6%40development\n>> https://www.postgresql.org/message-id/attachment/100951/v2-simple-rebalance.patch\n>>\n>> This patch seems like a good first step:\n>>\n>> * it definitely helps older versions where other patches discussed are way\n>> too invasive to be backpatched\n>> * it doesn't step on the way of other discussed patches\n>>\n>> While looking at the discussions around this patch, I was wondering if the\n>> planner considers the memory allocation of bufFiles. But of course, Melanie\n>> already noticed that long before I was aware of this problem and discussion:\n>>\n>> 2019-07-10: «I do think that accounting for Buffile overhead when estimating\n>> the size of the hashtable during ExecChooseHashTableSize() so it can be\n>> used during planning is a worthwhile patch by itself (though I know it\n>> is not even part of this patch).»\n>> https://www.postgresql.org/message-id/CAAKRu_Yiam-%3D06L%2BR8FR%2BVaceb-ozQzzMqRiY2pDYku1VdZ%3DEw%40mail.gmail.com\n>>\n>> Tomas Vondra agreed with this in his answer, but no new version of the patch\n>> where produced.\n>>\n>> Finally, Melanie was pushing the idea to commit this patch no matter other\n>> pending patches/ideas:\n>>\n>> 2019-09-05: «If Tomas or someone else has time to pick up and modify BufFile\n>> accounting patch, committing that still seems like the nest logical\n>> step.»\n>> https://www.postgresql.org/message-id/CAAKRu_b6%2BjC93WP%2BpWxqK5KAZJC5Rmxm8uquKtEf-KQ%2B%2B1Li6Q%40mail.gmail.com\n> \n> I think I would have to see a modern version of a patch which does this\n> to assess if it makes sense. But, I probably still agree with 2019\n> Melanie :)\n> Overall, I think anything that makes it faster to identify customer\n> cases of this bug is good (which, I would think granular memory contexts\n> and accounting would do). 
Even if it doesn't fix it, we can determine\n> more easily how often customers are hitting this issue, which helps\n> justify an admittedly large design change to hash join.\n> \n\nAgreed. This issue is quite rare (we only get a report once a year or\ntwo), just enough to forget about it and have to rediscover it's there.\nSo having something that clearly identifies it would be good.\n\nA separate BufFile memory context helps, although people won't see it\nunless they attach a debugger, I think. Better than nothing, but I was\nwondering if we could maybe print some warnings when the number of batch\nfiles gets too high ...\n\n> On Mon, Mar 20, 2023 at 10:12 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n>> BNJL and/or other considerations are for 17 or even after. In the meantime,\n>> Melanie, who authored BNLJ, +1 the balancing patch as it can coexists with other\n>> discussed solutions. No one down vote since then. Melanie, what is your\n>> opinion today on this patch? Did you change your mind as you worked for many\n>> months on BNLJ since then?\n> \n> So, in order to avoid deadlock, my design of adaptive hash join/block\n> nested loop hash join required a new parallelism concept not yet in\n> Postgres at the time -- the idea of a lone worker remaining around to do\n> work when others have left.\n> \n> See: BarrierArriveAndDetachExceptLast()\n> introduced in 7888b09994\n> \n> Thomas Munro had suggested we needed to battle test this concept in a\n> more straightforward feature first, so I implemented parallel full outer\n> hash join and parallel right outer hash join with it.\n> \n> https://commitfest.postgresql.org/42/2903/\n> \n> This has been stalled ready-for-committer for two years. It happened to\n> change timing such that it made an existing rarely hit parallel hash\n> join bug more likely to be hit. Thomas recently committed our fix for\n> this in 8d578b9b2e37a4d (last week). 
It is my great hope that parallel\n> full outer hash join goes in before the 16 feature freeze.\n> \n\nGood to hear this is moving forward.\n\n> If it does, I think it could make sense to try and find committable\n> smaller pieces of the adaptive hash join work. As it is today, parallel\n> hash join does not respect work_mem, and, in some sense, is a bit broken.\n> \n> I would be happy to work on this feature again, or, if you were\n> interested in picking it up, to provide review and any help I can if for\n> you to work on it.\n> \n\nI'm no parallel hashjoin expert, but I'm willing to take a\nlook at a rebased patch. I'd however recommend breaking the patch into\nsmaller pieces - the last version I see in the thread is ~250kB, which\nis rather daunting, and difficult to review without interruption. (I\ndon't remember the details of the patch, so maybe that's not possible\nfor some reason.)\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Mar 2023 19:49:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 2:49 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > On Mon, Mar 20, 2023 at 10:12 AM Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote:\n> >> BNJL and/or other considerations are for 17 or even after. In the meantime,\n> >> Melanie, who authored BNLJ, +1 the balancing patch as it can coexists with other\n> >> discussed solutions. No one down vote since then. Melanie, what is your\n> >> opinion today on this patch? Did you change your mind as you worked for many\n> >> months on BNLJ since then?\n> >\n> > So, in order to avoid deadlock, my design of adaptive hash join/block\n> > nested loop hash join required a new parallelism concept not yet in\n> > Postgres at the time -- the idea of a lone worker remaining around to do\n> > work when others have left.\n> >\n> > See: BarrierArriveAndDetachExceptLast()\n> > introduced in 7888b09994\n> >\n> > Thomas Munro had suggested we needed to battle test this concept in a\n> > more straightforward feature first, so I implemented parallel full outer\n> > hash join and parallel right outer hash join with it.\n> >\n> > https://commitfest.postgresql.org/42/2903/\n> >\n> > This has been stalled ready-for-committer for two years. It happened to\n> > change timing such that it made an existing rarely hit parallel hash\n> > join bug more likely to be hit. Thomas recently committed our fix for\n> > this in 8d578b9b2e37a4d (last week). It is my great hope that parallel\n> > full outer hash join goes in before the 16 feature freeze.\n> >\n>\n> Good to hear this is moving forward.\n>\n> > If it does, I think it could make sense to try and find committable\n> > smaller pieces of the adaptive hash join work. 
As it is today, parallel\n> > hash join does not respect work_mem, and, in some sense, is a bit broken.\n> >\n> > I would be happy to work on this feature again, or, if you were\n> > interested in picking it up, to provide review and any help I can if for\n> > you to work on it.\n> >\n>\n> I'm no expert in parallel hashjoin expert, but I'm willing to take a\n> look a rebased patch. I'd however recommend breaking the patch into\n> smaller pieces - the last version I see in the thread is ~250kB, which\n> is rather daunting, and difficult to review without interruption. (I\n> don't remember the details of the patch, so maybe that's not possible\n> for some reason.)\n\nGreat! I will rebase and take a stab at splitting up the patch into\nsmaller commits, with a focus on finding pieces that may have standalone\nbenefits, in the 17 dev cycle.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 27 Mar 2023 10:01:03 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nOn Mon, 20 Mar 2023 15:12:34 +0100\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Mon, 20 Mar 2023 09:32:17 +0100\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n> > >> * Patch 1 could be rebased/applied/backpatched \n> > > \n> > > Would it help if I rebase Patch 1 (\"move BufFile stuff into separate\n> > > context\")? \n> > \n> > Yeah, I think this is something we'd want to do. It doesn't change the\n> > behavior, but it makes it easier to track the memory consumption etc. \n> \n> Will do this week.\n\nPlease, find in attachment a patch to allocate bufFiles in a dedicated context.\nI picked up your patch, backpatch'd it, went through it and did some minor\nchanges to it. I have some comment/questions thought.\n\n 1. I'm not sure why we must allocate the \"HashBatchFiles\" new context\n under ExecutorState and not under hashtable->hashCxt?\n\n The only references I could find was in hashjoin.h:30:\n\n /* [...]\n * [...] (Exception: data associated with the temp files lives in the\n * per-query context too, since we always call buffile.c in that context.)\n\n And in nodeHashjoin.c:1243:ExecHashJoinSaveTuple() (I reworded this\n original comment in the patch):\n\n /* [...]\n * Note: it is important always to call this in the regular executor\n * context, not in a shorter-lived context; else the temp file buffers\n * will get messed up.\n\n\n But these are not explanation of why BufFile related allocations must be under\n a per-query context. \n\n\n 2. Wrapping each call of ExecHashJoinSaveTuple() with a memory context switch\n seems fragile as it could be forgotten in futur code path/changes. So I\n added an Assert() in the function to make sure the current memory context is\n \"HashBatchFiles\" as expected.\n Another way to tie this up might be to pass the memory context as argument to\n the function.\n ... Or maybe I'm over precautionary.\n\n\n 3. 
You wrote:\n\n>> A separate BufFile memory context helps, although people won't see it\n>> unless they attach a debugger, I think. Better than nothing, but I was\n>> wondering if we could maybe print some warnings when the number of batch\n>> files gets too high ...\n\n So I added a WARNING when batches memory are exhausting the memory size\n allowed.\n\n + if (hashtable->fileCxt->mem_allocated > hashtable->spaceAllowed)\n + elog(WARNING, \"Growing number of hash batch is exhausting memory\");\n\n This is repeated on each call of ExecHashIncreaseNumBatches when BufFile\n overflows the memory budget. I realize now I should probably add the memory\n limit, the number of current batch and their memory consumption.\n The message is probably too cryptic for a user. It could probably be\n reworded, but some doc or additionnal hint around this message might help.\n\n 4. I left the debug messages for some more review rounds\n\nRegards,",
"msg_date": "Mon, 27 Mar 2023 23:13:23 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 3/27/23 23:13, Jehan-Guillaume de Rorthais wrote:\n> Hi,\n> \n> On Mon, 20 Mar 2023 15:12:34 +0100\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n>> On Mon, 20 Mar 2023 09:32:17 +0100\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>>>> * Patch 1 could be rebased/applied/backpatched \n>>>>\n>>>> Would it help if I rebase Patch 1 (\"move BufFile stuff into separate\n>>>> context\")? \n>>>\n>>> Yeah, I think this is something we'd want to do. It doesn't change the\n>>> behavior, but it makes it easier to track the memory consumption etc. \n>>\n>> Will do this week.\n> \n> Please, find in attachment a patch to allocate bufFiles in a dedicated context.\n> I picked up your patch, backpatch'd it, went through it and did some minor\n> changes to it. I have some comment/questions thought.\n> \n> 1. I'm not sure why we must allocate the \"HashBatchFiles\" new context\n> under ExecutorState and not under hashtable->hashCxt?\n> \n> The only references I could find was in hashjoin.h:30:\n> \n> /* [...]\n> * [...] (Exception: data associated with the temp files lives in the\n> * per-query context too, since we always call buffile.c in that context.)\n> \n> And in nodeHashjoin.c:1243:ExecHashJoinSaveTuple() (I reworded this\n> original comment in the patch):\n> \n> /* [...]\n> * Note: it is important always to call this in the regular executor\n> * context, not in a shorter-lived context; else the temp file buffers\n> * will get messed up.\n> \n> \n> But these are not explanation of why BufFile related allocations must be under\n> a per-query context. \n> \n\nDoesn't that simply describe the current (unpatched) behavior where\nBufFile is allocated in the per-query context? I mean, the current code\ncalls BufFileCreateTemp() without switching the context, so it's in the\nExecutorState. 
But with the patch it very clearly is not.\n\nAnd I'm pretty sure the patch should do\n\n hashtable->fileCxt = AllocSetContextCreate(hashtable->hashCxt,\n \"HashBatchFiles\",\n ALLOCSET_DEFAULT_SIZES);\n\nand it'd still work. Or why do you think we *must* allocate it under\nExecutorState?\n\nFWIW The comment in hashjoin.h needs updating to reflect the change.\n\n> \n> 2. Wrapping each call of ExecHashJoinSaveTuple() with a memory context switch\n> seems fragile as it could be forgotten in futur code path/changes. So I\n> added an Assert() in the function to make sure the current memory context is\n> \"HashBatchFiles\" as expected.\n> Another way to tie this up might be to pass the memory context as argument to\n> the function.\n> ... Or maybe I'm over precautionary.\n> \n\nI'm not sure I'd call that fragile, we have plenty other code that\nexpects the memory context to be set correctly. Not sure about the\nassert, but we don't have similar asserts anywhere else.\n\nBut I think it's just ugly and overly verbose - it'd be much nicer to\ne.g. pass the memory context as a parameter, and do the switch inside.\n\n> \n> 3. You wrote:\n> \n>>> A separate BufFile memory context helps, although people won't see it\n>>> unless they attach a debugger, I think. Better than nothing, but I was\n>>> wondering if we could maybe print some warnings when the number of batch\n>>> files gets too high ...\n> \n> So I added a WARNING when batches memory are exhausting the memory size\n> allowed.\n> \n> + if (hashtable->fileCxt->mem_allocated > hashtable->spaceAllowed)\n> + elog(WARNING, \"Growing number of hash batch is exhausting memory\");\n> \n> This is repeated on each call of ExecHashIncreaseNumBatches when BufFile\n> overflows the memory budget. I realize now I should probably add the memory\n> limit, the number of current batch and their memory consumption.\n> The message is probably too cryptic for a user. 
It could probably be\n> reworded, but some doc or additionnal hint around this message might help.\n> \n\nHmmm, not sure is WARNING is a good approach, but I don't have a better\nidea at the moment.\n\n> 4. I left the debug messages for some more review rounds\n> \n\nOK\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Mar 2023 00:43:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Tue, 28 Mar 2023 00:43:34 +0200\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> On 3/27/23 23:13, Jehan-Guillaume de Rorthais wrote:\n> > Please, find in attachment a patch to allocate bufFiles in a dedicated\n> > context. I picked up your patch, backpatch'd it, went through it and did\n> > some minor changes to it. I have some comment/questions thought.\n> > \n> > 1. I'm not sure why we must allocate the \"HashBatchFiles\" new context\n> > under ExecutorState and not under hashtable->hashCxt?\n> > \n> > The only references I could find was in hashjoin.h:30:\n> > \n> > /* [...]\n> > * [...] (Exception: data associated with the temp files lives in the\n> > * per-query context too, since we always call buffile.c in that\n> > context.)\n> > \n> > And in nodeHashjoin.c:1243:ExecHashJoinSaveTuple() (I reworded this\n> > original comment in the patch):\n> > \n> > /* [...]\n> > * Note: it is important always to call this in the regular executor\n> > * context, not in a shorter-lived context; else the temp file buffers\n> > * will get messed up.\n> > \n> > \n> > But these are not explanation of why BufFile related allocations must be\n> > under a per-query context. \n> > \n> \n> Doesn't that simply describe the current (unpatched) behavior where\n> BufFile is allocated in the per-query context? \n\nI wasn't sure. The first quote from hashjoin.h seems to describe a stronger\nrule about «**always** call buffile.c in per-query context». But maybe it ought\nto be «always call buffile.c from one of the sub-query context»? I assume the\naim is to enforce the tmp files removal on query end/error?\n\n> I mean, the current code calls BufFileCreateTemp() without switching the\n> context, so it's in the ExecutorState. 
But with the patch it very clearly is\n> not.\n> \n> And I'm pretty sure the patch should do\n> \n> hashtable->fileCxt = AllocSetContextCreate(hashtable->hashCxt,\n> \"HashBatchFiles\",\n> ALLOCSET_DEFAULT_SIZES);\n> \n> and it'd still work. Or why do you think we *must* allocate it under\n> ExecutorState?\n\nThat was actually my very first patch and it indeed worked. But I was confused\nabout the previous quoted code comments. That's why I kept your original code\nand decided to rise the discussion here.\n\nFixed in new patch in attachment.\n\n> FWIW The comment in hashjoin.h needs updating to reflect the change.\n\nDone in the last patch. Is my rewording accurate?\n\n> > 2. Wrapping each call of ExecHashJoinSaveTuple() with a memory context\n> > switch seems fragile as it could be forgotten in futur code path/changes.\n> > So I added an Assert() in the function to make sure the current memory\n> > context is \"HashBatchFiles\" as expected.\n> > Another way to tie this up might be to pass the memory context as\n> > argument to the function.\n> > ... Or maybe I'm over precautionary.\n> > \n> \n> I'm not sure I'd call that fragile, we have plenty other code that\n> expects the memory context to be set correctly. Not sure about the\n> assert, but we don't have similar asserts anywhere else.\n\nI mostly sticked it there to stimulate the discussion around this as I needed\nto scratch that itch.\n\n> But I think it's just ugly and overly verbose\n\n+1\n\nYour patch was just a demo/debug patch by the time. It needed some cleanup now\n:)\n\n> it'd be much nicer to e.g. pass the memory context as a parameter, and do\n> the switch inside.\n\nThat was a proposition in my previous mail, so I did it in the new patch. Let's\nsee what other reviewers think.\n\n> > 3. You wrote:\n> > \n> >>> A separate BufFile memory context helps, although people won't see it\n> >>> unless they attach a debugger, I think. 
Better than nothing, but I was\n> >>> wondering if we could maybe print some warnings when the number of batch\n> >>> files gets too high ... \n> > \n> > So I added a WARNING when batches memory are exhausting the memory size\n> > allowed.\n> > \n> > + if (hashtable->fileCxt->mem_allocated > hashtable->spaceAllowed)\n> > + elog(WARNING, \"Growing number of hash batch is exhausting\n> > memory\");\n> > \n> > This is repeated on each call of ExecHashIncreaseNumBatches when BufFile\n> > overflows the memory budget. I realize now I should probably add the\n> > memory limit, the number of current batch and their memory consumption.\n> > The message is probably too cryptic for a user. It could probably be\n> > reworded, but some doc or additionnal hint around this message might help.\n> > \n> \n> Hmmm, not sure is WARNING is a good approach, but I don't have a better\n> idea at the moment.\n\nI stepped it down to NOTICE and added some more infos.\n\nHere is the output of the last patch with a 1MB work_mem:\n\n =# explain analyze select * from small join large using (id);\n WARNING: increasing number of batches from 1 to 2\n WARNING: increasing number of batches from 2 to 4\n WARNING: increasing number of batches from 4 to 8\n WARNING: increasing number of batches from 8 to 16\n WARNING: increasing number of batches from 16 to 32\n WARNING: increasing number of batches from 32 to 64\n WARNING: increasing number of batches from 64 to 128\n WARNING: increasing number of batches from 128 to 256\n WARNING: increasing number of batches from 256 to 512\n NOTICE: Growing number of hash batch to 512 is exhausting allowed memory\n (2164736 > 2097152)\n WARNING: increasing number of batches from 512 to 1024\n NOTICE: Growing number of hash batch to 1024 is exhausting allowed memory\n (4329472 > 2097152)\n WARNING: increasing number of batches from 1024 to 2048\n NOTICE: Growing number of hash batch to 2048 is exhausting allowed memory\n (8626304 > 2097152)\n WARNING: increasing 
number of batches from 2048 to 4096\n NOTICE: Growing number of hash batch to 4096 is exhausting allowed memory\n (17252480 > 2097152)\n WARNING: increasing number of batches from 4096 to 8192\n NOTICE: Growing number of hash batch to 8192 is exhausting allowed memory\n (34504832 > 2097152)\n WARNING: increasing number of batches from 8192 to 16384\n NOTICE: Growing number of hash batch to 16384 is exhausting allowed memory\n (68747392 > 2097152)\n WARNING: increasing number of batches from 16384 to 32768\n NOTICE: Growing number of hash batch to 32768 is exhausting allowed memory\n (137494656 > 2097152)\n\n QUERY PLAN\n --------------------------------------------------------------------------\n Hash Join (cost=6542057.16..7834651.23 rows=7 width=74)\n (actual time=558502.127..724007.708 rows=7040 loops=1)\n Hash Cond: (small.id = large.id)\n -> Seq Scan on small (cost=0.00..940094.00 rows=94000000 width=41)\n (actual time=0.035..3.666 rows=10000 loops=1)\n -> Hash (cost=6542057.07..6542057.07 rows=7 width=41)\n (actual time=558184.152..558184.153 rows=700000000 loops=1) \n Buckets: 32768 (originally 1024)\n Batches: 32768 (originally 1)\n Memory Usage: 1921kB\n -> Seq Scan on large (cost=0.00..6542057.07 rows=7 width=41)\n (actual time=0.324..193750.567 rows=700000000 loops=1)\n Planning Time: 1.588 ms\n Execution Time: 724011.074 ms (8 rows)\n\nRegards,",
"msg_date": "Tue, 28 Mar 2023 15:17:45 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nSorry for the late answer, I was reviewing the first patch and it took me some\ntime to study and dig around.\n\nOn Thu, 23 Mar 2023 08:07:04 -0400\nMelanie Plageman <melanieplageman@gmail.com> wrote:\n\n> On Fri, Mar 10, 2023 at 1:51 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > > So I guess the best thing would be to go through these threads, see what\n> > > the status is, restart the discussion and propose what to do. If you do\n> > > that, I'm happy to rebase the patches, and maybe see if I could improve\n> > > them in some way. \n> >\n> > OK! It took me some time, but I did it. I'll try to sum up the situation as\n> > simply as possible. \n> \n> Wow, so many memories!\n> \n> I'm excited that someone looked at this old work (though it is sad that\n> a customer faced this issue). And, Jehan, I really appreciate your great\n> summarization of all these threads. This will be a useful reference.\n\nThank you!\n\n> > 1. \"move BufFile stuff into separate context\"\n> > [...]\n> > I suppose this simple one has been forgotten in the fog of all other\n> > discussions. Also, this probably worth to be backpatched. \n> \n> I agree with Jehan-Guillaume and Tomas that this seems fine to commit\n> alone.\n\nThis is a WIP.\n\n> > 2. \"account for size of BatchFile structure in hashJoin\"\n> > [...] \n> \n> I think I would have to see a modern version of a patch which does this\n> to assess if it makes sense. But, I probably still agree with 2019\n> Melanie :)\n\nI volunteer to work on this after the memory context patch, unless someone grab\nit in the meantime.\n\n> [...]\n> On Mon, Mar 20, 2023 at 10:12 AM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> > BNJL and/or other considerations are for 17 or even after. In the meantime,\n> > Melanie, who authored BNLJ, +1 the balancing patch as it can coexists with\n> > other discussed solutions. No one down vote since then. Melanie, what is\n> > your opinion today on this patch? 
Did you change your mind as you worked\n> > for many months on BNLJ since then? \n> \n> So, in order to avoid deadlock, my design of adaptive hash join/block\n> nested loop hash join required a new parallelism concept not yet in\n> Postgres at the time -- the idea of a lone worker remaining around to do\n> work when others have left.\n> \n> See: BarrierArriveAndDetachExceptLast()\n> introduced in 7888b09994\n> \n> Thomas Munro had suggested we needed to battle test this concept in a\n> more straightforward feature first, so I implemented parallel full outer\n> hash join and parallel right outer hash join with it.\n> \n> https://commitfest.postgresql.org/42/2903/\n> \n> This has been stalled ready-for-committer for two years. It happened to\n> change timing such that it made an existing rarely hit parallel hash\n> join bug more likely to be hit. Thomas recently committed our fix for\n> this in 8d578b9b2e37a4d (last week). It is my great hope that parallel\n> full outer hash join goes in before the 16 feature freeze.\n\nThis is really interesting to follow. I kinda feel/remember how this could\nbe useful for your BNLJ patch. It's good to see things are moving, step by\nstep.\n\nThanks for the pointers.\n\n> If it does, I think it could make sense to try and find committable\n> smaller pieces of the adaptive hash join work. As it is today, parallel\n> hash join does not respect work_mem, and, in some sense, is a bit broken.\n> \n> I would be happy to work on this feature again, or, if you were\n> interested in picking it up, to provide review and any help I can if for\n> you to work on it.\n\nI don't think I would be able to pick up such a large and complex patch. But I'm\ninterested to help, test and review, as far as I can!\n\nRegards,\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:56:17 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 3/28/23 15:17, Jehan-Guillaume de Rorthais wrote:\n> On Tue, 28 Mar 2023 00:43:34 +0200\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n>> On 3/27/23 23:13, Jehan-Guillaume de Rorthais wrote:\n>>> Please, find in attachment a patch to allocate bufFiles in a dedicated\n>>> context. I picked up your patch, backpatch'd it, went through it and did\n>>> some minor changes to it. I have some comment/questions thought.\n>>>\n>>> 1. I'm not sure why we must allocate the \"HashBatchFiles\" new context\n>>> under ExecutorState and not under hashtable->hashCxt?\n>>>\n>>> The only references I could find was in hashjoin.h:30:\n>>>\n>>> /* [...]\n>>> * [...] (Exception: data associated with the temp files lives in the\n>>> * per-query context too, since we always call buffile.c in that\n>>> context.)\n>>>\n>>> And in nodeHashjoin.c:1243:ExecHashJoinSaveTuple() (I reworded this\n>>> original comment in the patch):\n>>>\n>>> /* [...]\n>>> * Note: it is important always to call this in the regular executor\n>>> * context, not in a shorter-lived context; else the temp file buffers\n>>> * will get messed up.\n>>>\n>>>\n>>> But these are not explanation of why BufFile related allocations must be\n>>> under a per-query context. \n>>> \n>>\n>> Doesn't that simply describe the current (unpatched) behavior where\n>> BufFile is allocated in the per-query context? \n> \n> I wasn't sure. The first quote from hashjoin.h seems to describe a stronger\n> rule about «**always** call buffile.c in per-query context». But maybe it ought\n> to be «always call buffile.c from one of the sub-query context»? I assume the\n> aim is to enforce the tmp files removal on query end/error?\n> \n\nI don't think we need this info for tempfile cleanup - CleanupTempFiles\nrelies on the VfdCache, which does malloc/realloc (so it's not using\nmemory contexts at all).\n\nI'm not very familiar with this part of the code, so I might be missing\nsomething. 
But you can try that - just stick an elog(ERROR) somewhere\ninto the hashjoin code, and see if that breaks the cleanup.\n\nNot an explicit proof, but if there was some hard-wired requirement in\nwhich memory context to allocate BufFile stuff, I'd expect it to be\ndocumented in buffile.c. But that actually says this:\n\n * Note that BufFile structs are allocated with palloc(), and therefore\n * will go away automatically at query/transaction end. Since the\nunderlying\n * virtual Files are made with OpenTemporaryFile, all resources for\n * the file are certain to be cleaned up even if processing is aborted\n * by ereport(ERROR). The data structures required are made in the\n * palloc context that was current when the BufFile was created, and\n * any external resources such as temp files are owned by the ResourceOwner\n * that was current at that time.\n\nwhich I take as confirmation that it's legal to allocate BufFile in any\nmemory context, and that cleanup is handled by the cache in fd.c.\n\n\n>> I mean, the current code calls BufFileCreateTemp() without switching the\n>> context, so it's in the ExecutorState. But with the patch it very clearly is\n>> not.\n>>\n>> And I'm pretty sure the patch should do\n>>\n>> hashtable->fileCxt = AllocSetContextCreate(hashtable->hashCxt,\n>> \"HashBatchFiles\",\n>> ALLOCSET_DEFAULT_SIZES);\n>>\n>> and it'd still work. Or why do you think we *must* allocate it under\n>> ExecutorState?\n> \n> That was actually my very first patch and it indeed worked. But I was confused\n> about the previous quoted code comments. That's why I kept your original code\n> and decided to rise the discussion here.\n> \n\nIIRC I was just lazy when writing the experimental patch, there was not\nmuch thought about stuff like this.\n\n> Fixed in new patch in attachment.\n> \n>> FWIW The comment in hashjoin.h needs updating to reflect the change.\n> \n> Done in the last patch. Is my rewording accurate?\n> \n>>> 2. 
Wrapping each call of ExecHashJoinSaveTuple() with a memory context\n>>> switch seems fragile as it could be forgotten in futur code path/changes.\n>>> So I added an Assert() in the function to make sure the current memory\n>>> context is \"HashBatchFiles\" as expected.\n>>> Another way to tie this up might be to pass the memory context as\n>>> argument to the function.\n>>> ... Or maybe I'm over precautionary.\n>>> \n>>\n>> I'm not sure I'd call that fragile, we have plenty other code that\n>> expects the memory context to be set correctly. Not sure about the\n>> assert, but we don't have similar asserts anywhere else.\n> \n> I mostly sticked it there to stimulate the discussion around this as I needed\n> to scratch that itch.\n> \n>> But I think it's just ugly and overly verbose\n> \n> +1\n> \n> Your patch was just a demo/debug patch by the time. It needed some cleanup now\n> :)\n> \n>> it'd be much nicer to e.g. pass the memory context as a parameter, and do\n>> the switch inside.\n> \n> That was a proposition in my previous mail, so I did it in the new patch. Let's\n> see what other reviewers think.\n> \n\n+1\n\n>>> 3. You wrote:\n>>> \n>>>>> A separate BufFile memory context helps, although people won't see it\n>>>>> unless they attach a debugger, I think. Better than nothing, but I was\n>>>>> wondering if we could maybe print some warnings when the number of batch\n>>>>> files gets too high ... \n>>>\n>>> So I added a WARNING when batches memory are exhausting the memory size\n>>> allowed.\n>>>\n>>> + if (hashtable->fileCxt->mem_allocated > hashtable->spaceAllowed)\n>>> + elog(WARNING, \"Growing number of hash batch is exhausting\n>>> memory\");\n>>>\n>>> This is repeated on each call of ExecHashIncreaseNumBatches when BufFile\n>>> overflows the memory budget. I realize now I should probably add the\n>>> memory limit, the number of current batch and their memory consumption.\n>>> The message is probably too cryptic for a user. 
It could probably be\n>>> reworded, but some doc or additionnal hint around this message might help.\n>>> \n>>\n>> Hmmm, not sure is WARNING is a good approach, but I don't have a better\n>> idea at the moment.\n> \n> I stepped it down to NOTICE and added some more infos.\n> \n> Here is the output of the last patch with a 1MB work_mem:\n> \n> =# explain analyze select * from small join large using (id);\n> WARNING: increasing number of batches from 1 to 2\n> WARNING: increasing number of batches from 2 to 4\n> WARNING: increasing number of batches from 4 to 8\n> WARNING: increasing number of batches from 8 to 16\n> WARNING: increasing number of batches from 16 to 32\n> WARNING: increasing number of batches from 32 to 64\n> WARNING: increasing number of batches from 64 to 128\n> WARNING: increasing number of batches from 128 to 256\n> WARNING: increasing number of batches from 256 to 512\n> NOTICE: Growing number of hash batch to 512 is exhausting allowed memory\n> (2164736 > 2097152)\n> WARNING: increasing number of batches from 512 to 1024\n> NOTICE: Growing number of hash batch to 1024 is exhausting allowed memory\n> (4329472 > 2097152)\n> WARNING: increasing number of batches from 1024 to 2048\n> NOTICE: Growing number of hash batch to 2048 is exhausting allowed memory\n> (8626304 > 2097152)\n> WARNING: increasing number of batches from 2048 to 4096\n> NOTICE: Growing number of hash batch to 4096 is exhausting allowed memory\n> (17252480 > 2097152)\n> WARNING: increasing number of batches from 4096 to 8192\n> NOTICE: Growing number of hash batch to 8192 is exhausting allowed memory\n> (34504832 > 2097152)\n> WARNING: increasing number of batches from 8192 to 16384\n> NOTICE: Growing number of hash batch to 16384 is exhausting allowed memory\n> (68747392 > 2097152)\n> WARNING: increasing number of batches from 16384 to 32768\n> NOTICE: Growing number of hash batch to 32768 is exhausting allowed memory\n> (137494656 > 2097152)\n> \n> QUERY PLAN\n> 
--------------------------------------------------------------------------\n> Hash Join (cost=6542057.16..7834651.23 rows=7 width=74)\n> (actual time=558502.127..724007.708 rows=7040 loops=1)\n> Hash Cond: (small.id = large.id)\n> -> Seq Scan on small (cost=0.00..940094.00 rows=94000000 width=41)\n> (actual time=0.035..3.666 rows=10000 loops=1)\n> -> Hash (cost=6542057.07..6542057.07 rows=7 width=41)\n> (actual time=558184.152..558184.153 rows=700000000 loops=1) \n> Buckets: 32768 (originally 1024)\n> Batches: 32768 (originally 1)\n> Memory Usage: 1921kB\n> -> Seq Scan on large (cost=0.00..6542057.07 rows=7 width=41)\n> (actual time=0.324..193750.567 rows=700000000 loops=1)\n> Planning Time: 1.588 ms\n> Execution Time: 724011.074 ms (8 rows)\n> \n> Regards,\n\nOK, although NOTICE that may actually make it less useful - the default\nlevel is WARNING, and regular users are unable to change the level. So\nvery few people will actually see these messages.\n\n\nthanks\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Mar 2023 17:25:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Tue, 28 Mar 2023 17:25:49 +0200\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n...\n> * Note that BufFile structs are allocated with palloc(), and therefore\n> * will go away automatically at query/transaction end. Since the\n> underlying\n> * virtual Files are made with OpenTemporaryFile, all resources for\n> * the file are certain to be cleaned up even if processing is aborted\n> * by ereport(ERROR). The data structures required are made in the\n> * palloc context that was current when the BufFile was created, and\n> * any external resources such as temp files are owned by the ResourceOwner\n> * that was current at that time.\n> \n> which I take as confirmation that it's legal to allocate BufFile in any\n> memory context, and that cleanup is handled by the cache in fd.c.\n\nOK. I just over interpreted comments and been over prudent.\n\n> [...]\n> >> Hmmm, not sure is WARNING is a good approach, but I don't have a better\n> >> idea at the moment. \n> > \n> > I stepped it down to NOTICE and added some more infos.\n> > \n> > [...]\n> > NOTICE: Growing number of hash batch to 32768 is exhausting allowed\n> > memory (137494656 > 2097152)\n> [...]\n> \n> OK, although NOTICE that may actually make it less useful - the default\n> level is WARNING, and regular users are unable to change the level. So\n> very few people will actually see these messages.\n\nThe main purpose of NOTICE was to notice user/dev, as client_min_messages=notice\nby default. \n\nBut while playing with it, I wonder if the message is in the good place anyway.\nIt is probably pessimistic as it shows memory consumption when increasing the\nnumber of batch, but before actual buffile are (lazily) allocated. The message\nshould probably pop a bit sooner with better numbers.\n\nAnyway, maybe this should be added in the light of next patch, balancing\nbetween increasing batches and allowed memory. 
The WARNING/LOG/NOTICE message\ncould appears when we actually break memory rules because of some bad HJ\nsituation.\n\nAnother way to expose the bad memory consumption would be to add memory infos to\nthe HJ node in the explain output, or maybe collect some min/max/mean for\npg_stat_statement, but I suspect tracking memory spikes by query is another\nchallenge altogether...\n\nIn the meantime, find in attachment v3 of the patch with debug and NOTICE\nmessages removed. Given the same plan from my previous email, here is the\nmemory contexts close to the query end:\n\n ExecutorState: 32768 total in 3 blocks; 15512 free (6 chunks); 17256 used\n HashTableContext: 8192 total in 1 blocks; 7720 free (0 chunks); 472 used \n HashBatchFiles: 28605680 total in 3256 blocks; 970128 free (38180 chunks);\n 27635552 used\n HashBatchContext: 960544 total in 23 blocks; 7928 free (0 chunks); 952616 used\n\n\nRegards,",
"msg_date": "Fri, 31 Mar 2023 14:06:11 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Fri, 31 Mar 2023 14:06:11 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> > [...]\n> > >> Hmmm, not sure is WARNING is a good approach, but I don't have a better\n> > >> idea at the moment. \n> > > \n> > > I stepped it down to NOTICE and added some more infos.\n> > > \n> > > [...]\n> > > NOTICE: Growing number of hash batch to 32768 is exhausting allowed\n> > > memory (137494656 > 2097152)\n> > [...]\n> > \n> > OK, although NOTICE that may actually make it less useful - the default\n> > level is WARNING, and regular users are unable to change the level. So\n> > very few people will actually see these messages.\n[...]\n> Anyway, maybe this should be added in the light of next patch, balancing\n> between increasing batches and allowed memory. The WARNING/LOG/NOTICE message\n> could appears when we actually break memory rules because of some bad HJ\n> situation.\n\nSo I did some more minor editions to the memory context patch and start working\non the balancing memory patch. Please, find in attachment the v4 patch set:\n\n* 0001-v4-Describe-hybrid-hash-join-implementation.patch:\n Adds documentation written by Melanie few years ago\n* 0002-v4-Allocate-hash-batches-related-BufFile-in-a-dedicated.patch:\n The batches' BufFile dedicated memory context patch\n* 0003-v4-Add-some-debug-and-metrics.patch:\n A pure debug patch I use to track memory in my various tests\n* 0004-v4-Limit-BufFile-memory-explosion-with-bad-HashJoin.patch\n A new and somewhat different version of the balancing memory patch, inspired\n from Tomas work.\n\nAfter rebasing Tomas' memory balancing patch, I did some memory measures\nto answer some of my questions. Please, find in attachment the resulting charts\n\"HJ-HEAD.png\" and \"balancing-v3.png\" to compare memory consumption between HEAD\nand Tomas' patch. They shows an alternance of numbers before/after calling\nExecHashIncreaseNumBatches (see the debug patch). 
I didn't try to find the\nexact last total peak of memory consumption during the join phase and before\nall the BufFiles are destroyed. So the last number might be underestimated.\n\nLooking at Tomas' patch, I was quite surprised to find that data+bufFile\nactually didn't fill memory up to spaceAllowed before splitting the batches and\nraising the memory limit. This is because the patch assumes the building phase\nconsumes inner and outer BufFiles equally, where only the inner side is really\nallocated. That's why the peakMB value is wrong compared to actual bufFileMB\nmeasured.\n\nSo I worked on the v4 patch where BufFiles are accounted in spaceUsed. Moreover,\ninstead of raising the limit and splitting the batches in the same step, the\npatch first raises the memory limit if needed, then splits in a later call if we\nhave enough room. The \"balancing-v4.png\" chart shows the resulting memory\nactivity. We might need to discuss the proper balancing between memory\nconsumption and batches.\n\nNote that the patch now logs a message when breaking work_mem. E.g.:\n\n WARNING: Hash Join node must grow outside of work_mem\n DETAIL: Rising memory limit from 4194304 to 6291456\n HINT: You might need to ANALYZE your table or tune its statistics collection.\n\nRegards,",
"msg_date": "Sat, 8 Apr 2023 02:01:19 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Sat, 8 Apr 2023 02:01:19 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Fri, 31 Mar 2023 14:06:11 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> [...] \n> \n> After rebasing Tomas' memory balancing patch, I did some memory measures\n> to answer some of my questions. Please, find in attachment the resulting\n> charts \"HJ-HEAD.png\" and \"balancing-v3.png\" to compare memory consumption\n> between HEAD and Tomas' patch. They shows an alternance of numbers\n> before/after calling ExecHashIncreaseNumBatches (see the debug patch). I\n> didn't try to find the exact last total peak of memory consumption during the\n> join phase and before all the BufFiles are destroyed. So the last number\n> might be underestimated.\n\nI did some more analysis about the total memory consumption in filecxt of HEAD,\nv3 and v4 patches. My previous debug numbers only prints memory metrics during\nbatch increments or hash table destruction. That means:\n\n* for HEAD: we miss the batches consumed during the outer scan\n* for v3: adds twice nbatch in spaceUsed, which is a rough estimation\n* for v4: batches are tracked in spaceUsed, so they are reflected in spacePeak\n\nUsing a breakpoint in ExecHashJoinSaveTuple to print \"filecxt->mem_allocated\"\nfrom there, here are the maximum allocated memory for bufFile context for each\nbranch:\n\n batches max bufFiles total spaceAllowed rise\n HEAD 16384 199966960 ~194MB\n v3 4096 65419456 ~78MB\n v4(*3) 2048 34273280 48MB nbatch*sizeof(PGAlignedBlock)*3\n v4(*4) 1024 17170160 60.6MB nbatch*sizeof(PGAlignedBlock)*4\n v4(*5) 2048 34273280 42.5MB nbatch*sizeof(PGAlignedBlock)*5\n\nIt seems account for bufFile in spaceUsed allows a better memory balancing and\nmanagement. The precise factor to rise spaceAllowed is yet to be defined. *3 or\n*4 looks good, but this is based on a single artificial test case.\n\nAlso, note that HEAD is currently reporting ~4MB of memory usage. 
This is by\nfar wrong with the reality. So even if we don't commit the balancing memory\npatch in v16, maybe we could account for filecxt in spaceUsed as a bugfix?\n\nRegards,\n\n\n",
"msg_date": "Tue, 11 Apr 2023 19:14:24 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "\n\nOn 11.04.2023 8:14 PM, Jehan-Guillaume de Rorthais wrote:\n> On Sat, 8 Apr 2023 02:01:19 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>\n>> On Fri, 31 Mar 2023 14:06:11 +0200\n>> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>>\n>> [...]\n>>\n>> After rebasing Tomas' memory balancing patch, I did some memory measures\n>> to answer some of my questions. Please, find in attachment the resulting\n>> charts \"HJ-HEAD.png\" and \"balancing-v3.png\" to compare memory consumption\n>> between HEAD and Tomas' patch. They shows an alternance of numbers\n>> before/after calling ExecHashIncreaseNumBatches (see the debug patch). I\n>> didn't try to find the exact last total peak of memory consumption during the\n>> join phase and before all the BufFiles are destroyed. So the last number\n>> might be underestimated.\n> I did some more analysis about the total memory consumption in filecxt of HEAD,\n> v3 and v4 patches. My previous debug numbers only prints memory metrics during\n> batch increments or hash table destruction. That means:\n>\n> * for HEAD: we miss the batches consumed during the outer scan\n> * for v3: adds twice nbatch in spaceUsed, which is a rough estimation\n> * for v4: batches are tracked in spaceUsed, so they are reflected in spacePeak\n>\n> Using a breakpoint in ExecHashJoinSaveTuple to print \"filecxt->mem_allocated\"\n> from there, here are the maximum allocated memory for bufFile context for each\n> branch:\n>\n> batches max bufFiles total spaceAllowed rise\n> HEAD 16384 199966960 ~194MB\n> v3 4096 65419456 ~78MB\n> v4(*3) 2048 34273280 48MB nbatch*sizeof(PGAlignedBlock)*3\n> v4(*4) 1024 17170160 60.6MB nbatch*sizeof(PGAlignedBlock)*4\n> v4(*5) 2048 34273280 42.5MB nbatch*sizeof(PGAlignedBlock)*5\n>\n> It seems account for bufFile in spaceUsed allows a better memory balancing and\n> management. The precise factor to rise spaceAllowed is yet to be defined. 
*3 or\n> *4 looks good, but this is based on a single artificial test case.\n>\n> Also, note that HEAD is currently reporting ~4MB of memory usage. This is by\n> far wrong with the reality. So even if we don't commit the balancing memory\n> patch in v16, maybe we could account for filecxt in spaceUsed as a bugfix?\n>\n> Regards,\n\nThank you for the patch.\nI faced with the same problem (OOM caused by hash join).\nI tried to create simplest test reproducing the problem:\n\ncreate table t(pk int, val int);\ninsert into t values (generate_series(1,100000000),0);\nset work_mem='64kB';\nexplain (analyze,buffers) select count(*) from t t1 join t t2 on \n(t1.pk=t2.pk);\n\n\nThere are three workers and size of each exceeds 1.3Gb.\n\nPlan is the following:\n\n Finalize Aggregate (cost=355905977972.87..355905977972.88 rows=1 \nwidth=8) (actual time=2\n12961.033..226097.513 rows=1 loops=1)\n Buffers: shared hit=32644 read=852474 dirtied=437947 written=426374, \ntemp read=944407 w\nritten=1130380\n -> Gather (cost=355905977972.65..355905977972.86 rows=2 width=8) \n(actual time=212943.\n505..226097.497 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=32644 read=852474 dirtied=437947 \nwritten=426374, temp read=94\n4407 written=1130380\n -> Partial Aggregate (cost=355905976972.65..355905976972.66 \nrows=1 width=8) (ac\ntual time=212938.410..212940.035 rows=1 loops=3)\n Buffers: shared hit=32644 read=852474 dirtied=437947 \nwritten=426374, temp r\nead=944407 written=1130380\n -> Parallel Hash Join (cost=1542739.26..303822614472.65 \nrows=20833345000002 width=0) (actual time=163268.274..207829.524 \nrows=33333333 loops=3)\n Hash Cond: (t1.pk = t2.pk)\n Buffers: shared hit=32644 read=852474 \ndirtied=437947 written=426374, temp read=944407 written=1130380\n -> Parallel Seq Scan on t t1 \n(cost=0.00..859144.78 rows=41666678 width=4) (actual \ntime=0.045..30828.051 rows=33333333 loops=3)\n Buffers: shared hit=16389 read=426089 written=87\n -> Parallel 
Hash (cost=859144.78..859144.78 \nrows=41666678 width=4) (actual time=82202.445..82202.447 rows=33333333 \nloops=3)\n Buckets: 4096 (originally 4096) Batches: \n32768 (originally 8192) Memory Usage: 192kB\n Buffers: shared hit=16095 read=426383 \ndirtied=437947 written=426287, temp read=267898 written=737164\n -> Parallel Seq Scan on t t2 \n(cost=0.00..859144.78 rows=41666678 width=4) (actual \ntime=0.054..12647.534 rows=33333333 loops=3)\n Buffers: shared hit=16095 read=426383 \ndirtied=437947 writ\nten=426287\n Planning:\n Buffers: shared hit=69 read=38\n Planning Time: 2.819 ms\n Execution Time: 226113.292 ms\n(22 rows)\n\n\n\n-----------------------------\n\nSo we have increased number of batches to 32k.\nI applied your patches 0001-0004 but unfortunately them have not reduced \nmemory consumption - still size of each backend is more than 1.3Gb.\n\nI wonder what the preferred solution of the problem could be?\nWe have to limit the size of the hash table which we can hold in memory.\nAnd the number of batches is calculated as inner relation size divided by \nhash table size.\nSo for an arbitrarily large inner relation we can get an arbitrarily large number \nof batches which may consume an arbitrarily large amount of memory.\nWe should either prohibit further increase of the number of batches - it \nwill not solve the problem completely but at least in the test above prevent\nincrease of number of batches from 8k to 32k, or prohibit use of \nhash join in this case at all (assign very high cost to this path).\n\nAlso I wonder why we create such a large number of files for each batch?\nCan it be reduced?\n\n\n\n\n",
"msg_date": "Thu, 20 Apr 2023 19:42:40 +0300",
"msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 12:42 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> On 11.04.2023 8:14 PM, Jehan-Guillaume de Rorthais wrote:\n> > On Sat, 8 Apr 2023 02:01:19 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> >\n> >> On Fri, 31 Mar 2023 14:06:11 +0200\n> >> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> >>\n> >> [...]\n> >>\n> >> After rebasing Tomas' memory balancing patch, I did some memory measures\n> >> to answer some of my questions. Please, find in attachment the resulting\n> >> charts \"HJ-HEAD.png\" and \"balancing-v3.png\" to compare memory consumption\n> >> between HEAD and Tomas' patch. They shows an alternance of numbers\n> >> before/after calling ExecHashIncreaseNumBatches (see the debug patch). I\n> >> didn't try to find the exact last total peak of memory consumption during the\n> >> join phase and before all the BufFiles are destroyed. So the last number\n> >> might be underestimated.\n> > I did some more analysis about the total memory consumption in filecxt of HEAD,\n> > v3 and v4 patches. My previous debug numbers only prints memory metrics during\n> > batch increments or hash table destruction. 
That means:\n> >\n> > * for HEAD: we miss the batches consumed during the outer scan\n> > * for v3: adds twice nbatch in spaceUsed, which is a rough estimation\n> > * for v4: batches are tracked in spaceUsed, so they are reflected in spacePeak\n> >\n> > Using a breakpoint in ExecHashJoinSaveTuple to print \"filecxt->mem_allocated\"\n> > from there, here are the maximum allocated memory for bufFile context for each\n> > branch:\n> >\n> > batches max bufFiles total spaceAllowed rise\n> > HEAD 16384 199966960 ~194MB\n> > v3 4096 65419456 ~78MB\n> > v4(*3) 2048 34273280 48MB nbatch*sizeof(PGAlignedBlock)*3\n> > v4(*4) 1024 17170160 60.6MB nbatch*sizeof(PGAlignedBlock)*4\n> > v4(*5) 2048 34273280 42.5MB nbatch*sizeof(PGAlignedBlock)*5\n> >\n> > It seems account for bufFile in spaceUsed allows a better memory balancing and\n> > management. The precise factor to rise spaceAllowed is yet to be defined. *3 or\n> > *4 looks good, but this is based on a single artificial test case.\n> >\n> > Also, note that HEAD is currently reporting ~4MB of memory usage. This is by\n> > far wrong with the reality. 
So even if we don't commit the balancing memory\n> > patch in v16, maybe we could account for filecxt in spaceUsed as a bugfix?\n> >\n> > Regards,\n>\n> Thank you for the patch.\n> I faced with the same problem (OOM caused by hash join).\n> I tried to create simplest test reproducing the problem:\n>\n> create table t(pk int, val int);\n> insert into t values (generate_series(1,100000000),0);\n> set work_mem='64kB';\n> explain (analyze,buffers) select count(*) from t t1 join t t2 on\n> (t1.pk=t2.pk);\n>\n>\n> There are three workers and size of each exceeds 1.3Gb.\n>\n> Plan is the following:\n>\n> Finalize Aggregate (cost=355905977972.87..355905977972.88 rows=1\n> width=8) (actual time=2\n> 12961.033..226097.513 rows=1 loops=1)\n> Buffers: shared hit=32644 read=852474 dirtied=437947 written=426374,\n> temp read=944407 w\n> ritten=1130380\n> -> Gather (cost=355905977972.65..355905977972.86 rows=2 width=8)\n> (actual time=212943.\n> 505..226097.497 rows=3 loops=1)\n> Workers Planned: 2\n> Workers Launched: 2\n> Buffers: shared hit=32644 read=852474 dirtied=437947\n> written=426374, temp read=94\n> 4407 written=1130380\n> -> Partial Aggregate (cost=355905976972.65..355905976972.66\n> rows=1 width=8) (ac\n> tual time=212938.410..212940.035 rows=1 loops=3)\n> Buffers: shared hit=32644 read=852474 dirtied=437947\n> written=426374, temp r\n> ead=944407 written=1130380\n> -> Parallel Hash Join (cost=1542739.26..303822614472.65\n> rows=20833345000002 width=0) (actual time=163268.274..207829.524\n> rows=33333333 loops=3)\n> Hash Cond: (t1.pk = t2.pk)\n> Buffers: shared hit=32644 read=852474\n> dirtied=437947 written=426374, temp read=944407 written=1130380\n> -> Parallel Seq Scan on t t1\n> (cost=0.00..859144.78 rows=41666678 width=4) (actual\n> time=0.045..30828.051 rows=33333333 loops=3)\n> Buffers: shared hit=16389 read=426089 written=87\n> -> Parallel Hash (cost=859144.78..859144.78\n> rows=41666678 width=4) (actual time=82202.445..82202.447 rows=33333333\n> loops=3)\n> 
Buckets: 4096 (originally 4096) Batches:\n> 32768 (originally 8192) Memory Usage: 192kB\n> Buffers: shared hit=16095 read=426383\n> dirtied=437947 written=426287, temp read=267898 written=737164\n> -> Parallel Seq Scan on t t2\n> (cost=0.00..859144.78 rows=41666678 width=4) (actual\n> time=0.054..12647.534 rows=33333333 loops=3)\n> Buffers: shared hit=16095 read=426383\n> dirtied=437947 writ\n> ten=426287\n> Planning:\n> Buffers: shared hit=69 read=38\n> Planning Time: 2.819 ms\n> Execution Time: 226113.292 ms\n> (22 rows)\n>\n>\n>\n> -----------------------------\n>\n> So we have increased number of batches to 32k.\n> I applied your patches 0001-0004 but unfortunately them have not reduced\n> memory consumption - still size of each backend is more than 1.3Gb.\n\nIs this EXPLAIN ANALYZE run on an instance with Jehan-Guillaume's\npatchset applied or without?\n\nI'm asking because the fourth patch in the series updates spaceUsed with\nthe size of the BufFile->buffer, but I notice in your EXPLAIN ANALZYE,\nMemory Usage for the hashtable is reported as 192 kB, which, while\nlarger than the 64kB work_mem you set, isn't as large as I might expect.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 20 Apr 2023 18:51:19 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 21.04.2023 1:51 AM, Melanie Plageman wrote:\n> On Thu, Apr 20, 2023 at 12:42 PM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n>> On 11.04.2023 8:14 PM, Jehan-Guillaume de Rorthais wrote:\n>>> On Sat, 8 Apr 2023 02:01:19 +0200\n>>> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>>>\n>>>> On Fri, 31 Mar 2023 14:06:11 +0200\n>>>> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>>>>\n>>>> [...]\n>>>>\n>>>> After rebasing Tomas' memory balancing patch, I did some memory measures\n>>>> to answer some of my questions. Please, find in attachment the resulting\n>>>> charts \"HJ-HEAD.png\" and \"balancing-v3.png\" to compare memory consumption\n>>>> between HEAD and Tomas' patch. They shows an alternance of numbers\n>>>> before/after calling ExecHashIncreaseNumBatches (see the debug patch). I\n>>>> didn't try to find the exact last total peak of memory consumption during the\n>>>> join phase and before all the BufFiles are destroyed. So the last number\n>>>> might be underestimated.\n>>> I did some more analysis about the total memory consumption in filecxt of HEAD,\n>>> v3 and v4 patches. My previous debug numbers only prints memory metrics during\n>>> batch increments or hash table destruction. 
That means:\n>>>\n>>> * for HEAD: we miss the batches consumed during the outer scan\n>>> * for v3: adds twice nbatch in spaceUsed, which is a rough estimation\n>>> * for v4: batches are tracked in spaceUsed, so they are reflected in spacePeak\n>>>\n>>> Using a breakpoint in ExecHashJoinSaveTuple to print \"filecxt->mem_allocated\"\n>>> from there, here are the maximum allocated memory for bufFile context for each\n>>> branch:\n>>>\n>>> batches max bufFiles total spaceAllowed rise\n>>> HEAD 16384 199966960 ~194MB\n>>> v3 4096 65419456 ~78MB\n>>> v4(*3) 2048 34273280 48MB nbatch*sizeof(PGAlignedBlock)*3\n>>> v4(*4) 1024 17170160 60.6MB nbatch*sizeof(PGAlignedBlock)*4\n>>> v4(*5) 2048 34273280 42.5MB nbatch*sizeof(PGAlignedBlock)*5\n>>>\n>>> It seems account for bufFile in spaceUsed allows a better memory balancing and\n>>> management. The precise factor to rise spaceAllowed is yet to be defined. *3 or\n>>> *4 looks good, but this is based on a single artificial test case.\n>>>\n>>> Also, note that HEAD is currently reporting ~4MB of memory usage. This is by\n>>> far wrong with the reality. 
So even if we don't commit the balancing memory\n>>> patch in v16, maybe we could account for filecxt in spaceUsed as a bugfix?\n>>>\n>>> Regards,\n>> Thank you for the patch.\n>> I faced with the same problem (OOM caused by hash join).\n>> I tried to create simplest test reproducing the problem:\n>>\n>> create table t(pk int, val int);\n>> insert into t values (generate_series(1,100000000),0);\n>> set work_mem='64kB';\n>> explain (analyze,buffers) select count(*) from t t1 join t t2 on\n>> (t1.pk=t2.pk);\n>>\n>>\n>> There are three workers and size of each exceeds 1.3Gb.\n>>\n>> Plan is the following:\n>>\n>> Finalize Aggregate (cost=355905977972.87..355905977972.88 rows=1\n>> width=8) (actual time=2\n>> 12961.033..226097.513 rows=1 loops=1)\n>> Buffers: shared hit=32644 read=852474 dirtied=437947 written=426374,\n>> temp read=944407 w\n>> ritten=1130380\n>> -> Gather (cost=355905977972.65..355905977972.86 rows=2 width=8)\n>> (actual time=212943.\n>> 505..226097.497 rows=3 loops=1)\n>> Workers Planned: 2\n>> Workers Launched: 2\n>> Buffers: shared hit=32644 read=852474 dirtied=437947\n>> written=426374, temp read=94\n>> 4407 written=1130380\n>> -> Partial Aggregate (cost=355905976972.65..355905976972.66\n>> rows=1 width=8) (ac\n>> tual time=212938.410..212940.035 rows=1 loops=3)\n>> Buffers: shared hit=32644 read=852474 dirtied=437947\n>> written=426374, temp r\n>> ead=944407 written=1130380\n>> -> Parallel Hash Join (cost=1542739.26..303822614472.65\n>> rows=20833345000002 width=0) (actual time=163268.274..207829.524\n>> rows=33333333 loops=3)\n>> Hash Cond: (t1.pk = t2.pk)\n>> Buffers: shared hit=32644 read=852474\n>> dirtied=437947 written=426374, temp read=944407 written=1130380\n>> -> Parallel Seq Scan on t t1\n>> (cost=0.00..859144.78 rows=41666678 width=4) (actual\n>> time=0.045..30828.051 rows=33333333 loops=3)\n>> Buffers: shared hit=16389 read=426089 written=87\n>> -> Parallel Hash (cost=859144.78..859144.78\n>> rows=41666678 width=4) (actual 
time=82202.445..82202.447 rows=33333333\n>> loops=3)\n>> Buckets: 4096 (originally 4096) Batches:\n>> 32768 (originally 8192) Memory Usage: 192kB\n>> Buffers: shared hit=16095 read=426383\n>> dirtied=437947 written=426287, temp read=267898 written=737164\n>> -> Parallel Seq Scan on t t2\n>> (cost=0.00..859144.78 rows=41666678 width=4) (actual\n>> time=0.054..12647.534 rows=33333333 loops=3)\n>> Buffers: shared hit=16095 read=426383\n>> dirtied=437947 writ\n>> ten=426287\n>> Planning:\n>> Buffers: shared hit=69 read=38\n>> Planning Time: 2.819 ms\n>> Execution Time: 226113.292 ms\n>> (22 rows)\n>>\n>>\n>>\n>> -----------------------------\n>>\n>> So we have increased number of batches to 32k.\n>> I applied your patches 0001-0004 but unfortunately them have not reduced\n>> memory consumption - still size of each backend is more than 1.3Gb.\n> Is this EXPLAIN ANALYZE run on an instance with Jehan-Guillaume's\n> patchset applied or without?\n>\n> I'm asking because the fourth patch in the series updates spaceUsed with\n> the size of the BufFile->buffer, but I notice in your EXPLAIN ANALZYE,\n> Memory Usage for the hashtable is reported as 192 kB, which, while\n> larger than the 64kB work_mem you set, isn't as large as I might expect.\n>\n> - Melanie\nYes, this is explain analyze for the Postgres version with applied 4 \npatches:\n\n0001-v4-Describe-hybrid-hash-join-implementation.patch\n0002-v4-Allocate-hash-batches-related-BufFile-in-a-dedicated.patch\n0003-v4-Add-some-debug-and-metrics.patch\n0004-v4-Limit-BufFile-memory-explosion-with-bad-HashJoin.patch\n\nJust as workaround I tried the attached patch - it prevents backups \nmemory footprint growth\nby limiting number of created batches. I am not sure that it is right \nsolution, because in any case we allocate more memory than specified by \nwork_mem. The alternative is to prohibit hash join plan in this case. \nBut it is also not so good solution, because merge join is used to be \nmuch slower.",
"msg_date": "Fri, 21 Apr 2023 22:18:49 +0300",
"msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 8:01 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Fri, 31 Mar 2023 14:06:11 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>\n> > > [...]\n> > > >> Hmmm, not sure is WARNING is a good approach, but I don't have a better\n> > > >> idea at the moment.\n> > > >\n> > > > I stepped it down to NOTICE and added some more infos.\n> > > >\n> > > > [...]\n> > > > NOTICE: Growing number of hash batch to 32768 is exhausting allowed\n> > > > memory (137494656 > 2097152)\n> > > [...]\n> > >\n> > > OK, although NOTICE that may actually make it less useful - the default\n> > > level is WARNING, and regular users are unable to change the level. So\n> > > very few people will actually see these messages.\n> [...]\n> > Anyway, maybe this should be added in the light of next patch, balancing\n> > between increasing batches and allowed memory. The WARNING/LOG/NOTICE message\n> > could appears when we actually break memory rules because of some bad HJ\n> > situation.\n>\n> So I did some more minor editions to the memory context patch and start working\n> on the balancing memory patch. Please, find in attachment the v4 patch set:\n>\n> * 0001-v4-Describe-hybrid-hash-join-implementation.patch:\n> Adds documentation written by Melanie few years ago\n> * 0002-v4-Allocate-hash-batches-related-BufFile-in-a-dedicated.patch:\n> The batches' BufFile dedicated memory context patch\n\nThis is only a review of the code in patch 0002 (the patch to use a more\ngranular memory context for multi-batch hash join batch files). 
I have\nnot reviewed the changes to comments in detail either.\n\nI think the biggest change that is needed is to implement this memory\ncontext usage for parallel hash join.\n\nTo implement a file buffer memory context for multi-patch parallel hash\njoin we would need, at a minimum, to switch into the proposed fileCxt\nmemory context in sts_puttuple() before BufFileCreateFileSet().\n\nWe should also consider changing the SharedTuplestoreAccessor->context\nfrom HashTableContext to the proposed fileCxt.\n\nIn parallel hash join code, the SharedTuplestoreAccessor->context is\nonly used when allocating the SharedTuplestoreAccessor->write_chunk and\nread_chunk. Those are the buffers for writing out and reading from the\nSharedTuplestore and are part of the memory overhead of file buffers for\na multi-batch hash join. Note that we will allocate STS_CHUNK_PAGES *\nBLCKSZ size buffer for every batch -- this is on top of the BufFile\nbuffer per batch.\n\nsts_initialize() and sts_attach() set the\nSharedTuplestoreAccessor->context to the CurrentMemoryContext. We could\nchange into the fileCxt before calling those functions from the hash\njoin code. That would mean that the SharedTuplestoreAccessor data\nstructure would also be counted in the fileCxt (and probably\nhashtable->batches). This is probably what we want anyway.\n\nAs for this patch's current implementation and use of the fileCxt , I\nthink you are going to need to switch into the fileCxt before calling\nBufFileClose() with the batch files throughout the hash join code (e.g.\nin ExecHashJoinNewBatch()) if you want the mem_allocated to be accurate\n(since it frees the BufFile buffer). Once you figure out if that makes\nsense and implement it, I think we will have to revisit if it still\nmakes sense to pass the fileCxt as an argument to\nExecHashJoinSaveTuple().\n\n- Melanie\n\n\n",
"msg_date": "Fri, 21 Apr 2023 16:44:48 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nOn Fri, 21 Apr 2023 16:44:48 -0400\nMelanie Plageman <melanieplageman@gmail.com> wrote:\n\n> On Fri, Apr 7, 2023 at 8:01 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n> >\n> > On Fri, 31 Mar 2023 14:06:11 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > \n> > > > [...] \n> > > > >> Hmmm, not sure is WARNING is a good approach, but I don't have a\n> > > > >> better idea at the moment. \n> > > > >\n> > > > > I stepped it down to NOTICE and added some more infos.\n> > > > >\n> > > > > [...]\n> > > > > NOTICE: Growing number of hash batch to 32768 is exhausting allowed\n> > > > > memory (137494656 > 2097152) \n> > > > [...]\n> > > >\n> > > > OK, although NOTICE that may actually make it less useful - the default\n> > > > level is WARNING, and regular users are unable to change the level. So\n> > > > very few people will actually see these messages. \n> > [...] \n> > > Anyway, maybe this should be added in the light of next patch, balancing\n> > > between increasing batches and allowed memory. The WARNING/LOG/NOTICE\n> > > message could appears when we actually break memory rules because of some\n> > > bad HJ situation. \n> >\n> > So I did some more minor editions to the memory context patch and start\n> > working on the balancing memory patch. Please, find in attachment the v4\n> > patch set:\n> >\n> > * 0001-v4-Describe-hybrid-hash-join-implementation.patch:\n> > Adds documentation written by Melanie few years ago\n> > * 0002-v4-Allocate-hash-batches-related-BufFile-in-a-dedicated.patch:\n> > The batches' BufFile dedicated memory context patch \n> \n> This is only a review of the code in patch 0002 (the patch to use a more\n> granular memory context for multi-batch hash join batch files). I have\n> not reviewed the changes to comments in detail either.\n\nOk.\n\n> I think the biggest change that is needed is to implement this memory\n> context usage for parallel hash join.\n\nIndeed. 
\n\n> To implement a file buffer memory context for multi-patch parallel hash\n> join [...]\n\nThank you for your review and pointers!\n\nAfter (some days off and then) studying the parallel code, I end up with this:\n\n1. As explained by Melanie, the v5 patch sets accessor->context to fileCxt.\n\n2. BufFile buffers were wrongly allocated in ExecutorState context for\n accessor->read|write_file, from sts_puttuple and sts_parallel_scan_next when\n they first need to work with the accessor FileSet.\n\n The v5 patch now allocates them in accessor->context, directly in\n sts_puttuple and sts_parallel_scan_next. This avoids wrapping each single\n call of these functions inside MemoryContextSwitchTo calls. I suppose this\n is correct as the comment about accessor->context says \n \"/* Memory context for **buffers**. */\" in struct SharedTuplestoreAccessor.\n\n3. accessor->write_chunk is currently already allocated in accessor->context.\n\n In consequence, this buffer is now allocated in the fileCxt instead\n of hashCxt context. \n\n This is a bit far-fetched, but I suppose this is ok as it acts as a second\n level buffer, on top of the BufFile.\n\n4. accessor->read_buffer is currently allocated in accessor->context as well.\n\n This buffer holds tuple read from the fileset. This is still a buffer, but\n not related to any file anymore...\n\nBecause of 3 and 4, and the gross context granularity of SharedTuplestore\nrelated-code, I'm now wondering if \"fileCxt\" shouldn't be renamed \"bufCxt\".\n\nRegards,",
"msg_date": "Thu, 4 May 2023 19:30:06 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Thanks for continuing to work on this!\n\nOn Thu, May 04, 2023 at 07:30:06PM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Fri, 21 Apr 2023 16:44:48 -0400 Melanie Plageman <melanieplageman@gmail.com> wrote:\n...\n> > I think the biggest change that is needed is to implement this memory\n> > context usage for parallel hash join.\n> \n> Indeed. \n\n...\n\n> 4. accessor->read_buffer is currently allocated in accessor->context as well.\n> \n> This buffer holds tuple read from the fileset. This is still a buffer, but\n> not related to any file anymore...\n> \n> Because of 3 and 4, and the gross context granularity of SharedTuplestore\n> related-code, I'm now wondering if \"fileCxt\" shouldn't be renamed \"bufCxt\".\n\nI think bufCxt is a potentially confusing name. The context contains\npointers to the batch files and saying those are related to buffers is\nconfusing. It also sounds like it could include any kind of buffer\nrelated to the hashtable or hash join.\n\nPerhaps we could call this memory context the \"spillCxt\"--indicating it\nis for the memory required for spilling to permanent storage while\nexecuting hash joins.\n\nI discuss this more in my code review below.\n\n> From c5ed2ae2c2749af4f5058b012dc5e8a9e1529127 Mon Sep 17 00:00:00 2001\n> From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> Date: Mon, 27 Mar 2023 15:54:39 +0200\n> Subject: [PATCH 2/3] Allocate hash batches related BufFile in a dedicated\n> context\n\n> diff --git a/src/backend/executor/nodeHash.c b/src/backend/executor/nodeHash.c\n> index 5fd1c5553b..a4fbf29301 100644\n> --- a/src/backend/executor/nodeHash.c\n> +++ b/src/backend/executor/nodeHash.c\n> @@ -570,15 +574,21 @@ ExecHashTableCreate(HashState *state, List *hashOperators, List *hashCollations,\n> \n> \tif (nbatch > 1 && hashtable->parallel_state == NULL)\n> \t{\n> +\t\tMemoryContext oldctx;\n> +\n> \t\t/*\n> \t\t * allocate and initialize the file arrays in hashCxt (not needed for\n> \t\t * parallel case which 
uses shared tuplestores instead of raw files)\n> \t\t */\n> +\t\toldctx = MemoryContextSwitchTo(hashtable->fileCxt);\n> +\n> \t\thashtable->innerBatchFile = palloc0_array(BufFile *, nbatch);\n> \t\thashtable->outerBatchFile = palloc0_array(BufFile *, nbatch);\n> \t\t/* The files will not be opened until needed... */\n> \t\t/* ... but make sure we have temp tablespaces established for them */\n\nI haven't studied it closely, but I wonder if we shouldn't switch back\ninto the oldctx before calling PrepareTempTablespaces().\nPrepareTempTablespaces() is explicit about what memory context it is\nallocating in, however, I'm not sure it makes sense to call it in the\nfileCxt. If you have a reason, you should add a comment and do the same\nin ExecHashIncreaseNumBatches().\n\n> \t\tPrepareTempTablespaces();\n> +\n> +\t\tMemoryContextSwitchTo(oldctx);\n> \t}\n\n> @@ -934,13 +943,16 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)\n> \t\t hashtable, nbatch, hashtable->spaceUsed);\n> #endif\n> \n> -\toldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n> -\n> \tif (hashtable->innerBatchFile == NULL)\n> \t{\n> +\t\tMemoryContext oldcxt = MemoryContextSwitchTo(hashtable->fileCxt);\n> +\n> \t\t/* we had no file arrays before */\n> \t\thashtable->innerBatchFile = palloc0_array(BufFile *, nbatch);\n> \t\thashtable->outerBatchFile = palloc0_array(BufFile *, nbatch);\n> +\n\nAs mentioned above, you should likely make ExecHashTableCreate()\nconsistent with this.\n\n> +\t\tMemoryContextSwitchTo(oldcxt);\n> +\n> \t\t/* time to establish the temp tablespaces, too */\n> \t\tPrepareTempTablespaces();\n> \t}\n> @@ -951,8 +963,6 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)\n\nI don't see a reason to call repalloc0_array() in a different context\nthan the initial palloc0_array().\n\n> \t\thashtable->outerBatchFile = repalloc0_array(hashtable->outerBatchFile, BufFile *, oldnbatch, nbatch);\n> \t}\n> \n> -\tMemoryContextSwitchTo(oldcxt);\n> -\n> \thashtable->nbatch = nbatch;\n> 
\n> \t/*\n\n> diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c\n> index 920d1831c2..ac72fbfbb6 100644\n> --- a/src/backend/executor/nodeHashjoin.c\n> +++ b/src/backend/executor/nodeHashjoin.c\n> @@ -485,8 +485,10 @@ ExecHashJoinImpl(PlanState *pstate, bool parallel)\n> \t\t\t\t\t */\n> \t\t\t\t\tAssert(parallel_state == NULL);\n> \t\t\t\t\tAssert(batchno > hashtable->curbatch);\n> +\n> \t\t\t\t\tExecHashJoinSaveTuple(mintuple, hashvalue,\n> -\t\t\t\t\t\t\t\t\t\t &hashtable->outerBatchFile[batchno]);\n> +\t\t\t\t\t\t\t\t\t\t &hashtable->outerBatchFile[batchno],\n> +\t\t\t\t\t\t\t\t\t\t hashtable->fileCxt);\n> \n> \t\t\t\t\tif (shouldFree)\n> \t\t\t\t\t\theap_free_minimal_tuple(mintuple);\n> @@ -1308,21 +1310,27 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> * The data recorded in the file for each tuple is its hash value,\n\nIt doesn't sound accurate to me to say that it should be called *in* the\nHashBatchFiles context. We now switch into that context, so you can\nprobably remove this comment and add a comment above the switch into the\nfilecxt which explains that the temp file buffers should be allocated in\nthe filecxt (both because they need to be allocated in a sufficiently\nlong-lived context and to increase visibility of their memory overhead).\n\n> * then the tuple in MinimalTuple format.\n> *\n> - * Note: it is important always to call this in the regular executor\n> - * context, not in a shorter-lived context; else the temp file buffers\n> - * will get messed up.\n> + * Note: it is important always to call this in the HashBatchFiles context,\n> + * not in a shorter-lived context; else the temp file buffers will get messed\n> + * up.\n> */\n> void\n> ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> -\t\t\t\t\t BufFile **fileptr)\n> +\t\t\t\t\t BufFile **fileptr, MemoryContext filecxt)\n> {\n> \tBufFile *file = *fileptr;\n> \n> \tif (file == NULL)\n> \t{\n> +\t\tMemoryContext oldctx;\n> +\n> 
+\t\toldctx = MemoryContextSwitchTo(filecxt);\n> +\n> \t\t/* First write to this batch file, so open it. */\n> \t\tfile = BufFileCreateTemp(false);\n> \t\t*fileptr = file;\n> +\n> +\t\tMemoryContextSwitchTo(oldctx);\n> \t}\n\n> diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h\n> index 8ee59d2c71..74867c3e40 100644\n> --- a/src/include/executor/hashjoin.h\n> +++ b/src/include/executor/hashjoin.h\n> @@ -25,10 +25,14 @@\n> *\n> * Each active hashjoin has a HashJoinTable control block, which is\n> * palloc'd in the executor's per-query context. All other storage needed\n> - * for the hashjoin is kept in private memory contexts, two for each hashjoin.\n> + * for the hashjoin is kept in private memory contexts, three for each\n> + * hashjoin:\n\nMaybe \"hash table control block\". I know the phrase \"control block\" is\nused elsewhere in the comments, but it is a bit confusing. Also, I wish\nthere was a way to make it clear this is for the hashtable but relevant\nto all batches.\n\n> + * - HashTableContext (hashCxt): the control block associated to the hash table\n\nI might say \"per-batch storage for serial hash join\".\n\n> + * - HashBatchContext (batchCxt): storages for batches\n\nSo, if we are going to allocate the array of pointers to the spill files\nin the fileCxt, we should revise this comment. As I mentioned above, I\nagree with you that if the SharedTupleStore-related buffers are also\nallocated in this context, perhaps it shouldn't be called the fileCxt.\n\nOne idea I had is calling it the spillCxt. Almost all memory allocated in this\ncontext is related to needing to spill to permanent storage during execution.\n\nThe one potential confusing part of this is batch 0 for parallel hash\njoin. 
I would have to dig back into it again, but from a cursory look at\nExecParallelHashJoinSetUpBatches() it seems like batch 0 also gets a\nshared tuplestore with associated accessors and files even if it is a\nsingle batch parallel hashjoin.\n\nAre the parallel hash join read_buffer and write_chunk also used for a\nsingle batch parallel hash join?\n\nThough, perhaps spillCxt still makes sense? It doesn't specify\nmulti-batch...\n\n> --- a/src/include/executor/nodeHashjoin.h\n> +++ b/src/include/executor/nodeHashjoin.h\n> @@ -29,6 +29,6 @@ extern void ExecHashJoinInitializeWorker(HashJoinState *state,\n> \t\t\t\t\t\t\t\t\t\t ParallelWorkerContext *pwcxt);\n> \n\nI would add a comment explaining why ExecHashJoinSaveTuple() is passed\nwith the fileCxt as a parameter.\n\n> extern void ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> -\t\t\t\t\t\t\t\t BufFile **fileptr);\n> +\t\t\t\t\t\t\t\t BufFile **fileptr, MemoryContext filecxt);\n\n- Melanie\n\n\n",
"msg_date": "Mon, 8 May 2023 11:56:48 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Thank you for your review!\n\nOn Mon, 8 May 2023 11:56:48 -0400\nMelanie Plageman <melanieplageman@gmail.com> wrote:\n\n> ...\n> > 4. accessor->read_buffer is currently allocated in accessor->context as\n> > well.\n> > \n> > This buffer holds tuple read from the fileset. This is still a buffer,\n> > but not related to any file anymore...\n> > \n> > Because of 3 and 4, and the gross context granularity of SharedTuplestore\n> > related-code, I'm now wondering if \"fileCxt\" shouldn't be renamed \"bufCxt\".\n> \n> I think bufCxt is a potentially confusing name. The context contains\n> pointers to the batch files and saying those are related to buffers is\n> confusing. It also sounds like it could include any kind of buffer\n> related to the hashtable or hash join.\n> \n> Perhaps we could call this memory context the \"spillCxt\"--indicating it\n> is for the memory required for spilling to permanent storage while\n> executing hash joins.\n\n\"Spilling\" seems fair and a large enough net to grab everything around temp\nfiles and accessing them.\n\n> I discuss this more in my code review below.\n> \n> > From c5ed2ae2c2749af4f5058b012dc5e8a9e1529127 Mon Sep 17 00:00:00 2001\n> > From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> > Date: Mon, 27 Mar 2023 15:54:39 +0200\n> > Subject: [PATCH 2/3] Allocate hash batches related BufFile in a dedicated\n> > context \n> \n> > diff --git a/src/backend/executor/nodeHash.c\n> > b/src/backend/executor/nodeHash.c index 5fd1c5553b..a4fbf29301 100644\n> > --- a/src/backend/executor/nodeHash.c\n> > +++ b/src/backend/executor/nodeHash.c\n> > @@ -570,15 +574,21 @@ ExecHashTableCreate(HashState *state, List\n> > *hashOperators, List *hashCollations, \n> > \tif (nbatch > 1 && hashtable->parallel_state == NULL)\n> > \t{\n> > +\t\tMemoryContext oldctx;\n> > +\n> > \t\t/*\n> > \t\t * allocate and initialize the file arrays in hashCxt (not\n> > needed for\n> > \t\t * parallel case which uses shared tuplestores instead of\n> 
> raw files) */\n> > +\t\toldctx = MemoryContextSwitchTo(hashtable->fileCxt);\n> > +\n> > \t\thashtable->innerBatchFile = palloc0_array(BufFile *,\n> > nbatch); hashtable->outerBatchFile = palloc0_array(BufFile *, nbatch);\n> > \t\t/* The files will not be opened until needed... */\n> > \t\t/* ... but make sure we have temp tablespaces established\n> > for them */ \n> \n> I haven't studied it closely, but I wonder if we shouldn't switch back\n> into the oldctx before calling PrepareTempTablespaces().\n> PrepareTempTablespaces() is explicit about what memory context it is\n> allocating in, however, I'm not sure it makes sense to call it in the\n> fileCxt. If you have a reason, you should add a comment and do the same\n> in ExecHashIncreaseNumBatches().\n\nI had no reason. I catched it in ExecHashIncreaseNumBatches() while reviewing\nmyself, but not here.\n\nLine moved in v6.\n\n> > @@ -934,13 +943,16 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)\n> > \t\t hashtable, nbatch, hashtable->spaceUsed);\n> > #endif\n> > \n> > -\toldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n> > -\n> > \tif (hashtable->innerBatchFile == NULL)\n> > \t{\n> > +\t\tMemoryContext oldcxt =\n> > MemoryContextSwitchTo(hashtable->fileCxt); +\n> > \t\t/* we had no file arrays before */\n> > \t\thashtable->innerBatchFile = palloc0_array(BufFile *,\n> > nbatch); hashtable->outerBatchFile = palloc0_array(BufFile *, nbatch);\n> > + \n> > +\t\tMemoryContextSwitchTo(oldcxt);\n> > +\n> > \t\t/* time to establish the temp tablespaces, too */\n> > \t\tPrepareTempTablespaces();\n> > \t}\n> > @@ -951,8 +963,6 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable) \n> \n> I don't see a reason to call repalloc0_array() in a different context\n> than the initial palloc0_array().\n\nUnless I'm wrong, wrapping the whole if/else blocks in memory context\nhashtable->fileCxt seemed useless as repalloc() actually realloc in the context\nthe original allocation occurred. 
So I only wrapped the palloc() calls.\n\n> > diff --git a/src/backend/executor/nodeHashjoin.c\n> > ...\n> > @@ -1308,21 +1310,27 @@ ExecParallelHashJoinNewBatch(HashJoinState\n> > *hjstate)\n> > * The data recorded in the file for each tuple is its hash value, \n> \n> It doesn't sound accurate to me to say that it should be called *in* the\n> HashBatchFiles context. We now switch into that context, so you can\n> probably remove this comment and add a comment above the switch into the\n> filecxt which explains that the temp file buffers should be allocated in\n> the filecxt (both because they need to be allocated in a sufficiently\n> long-lived context and to increase visibility of their memory overhead).\n\nIndeed. Comment moved and reworded in v6.\n\n> > diff --git a/src/include/executor/hashjoin.h\n> > b/src/include/executor/hashjoin.h index 8ee59d2c71..74867c3e40 100644\n> > --- a/src/include/executor/hashjoin.h\n> > +++ b/src/include/executor/hashjoin.h\n> > @@ -25,10 +25,14 @@\n> > *\n> > * Each active hashjoin has a HashJoinTable control block, which is\n> > * palloc'd in the executor's per-query context. All other storage needed\n> > - * for the hashjoin is kept in private memory contexts, two for each\n> > hashjoin.\n> > + * for the hashjoin is kept in private memory contexts, three for each\n> > + * hashjoin: \n> \n> Maybe \"hash table control block\". I know the phrase \"control block\" is\n> used elsewhere in the comments, but it is a bit confusing. Also, I wish\n> there was a way to make it clear this is for the hashtable but relevant\n> to all batches.\n\nI tried to reword the comment with this additional info in mind in v6. Does it\nmatch what you had in mind?\n\n> ...\n> So, if we are going to allocate the array of pointers to the spill files\n> in the fileCxt, we should revise this comment. 
As I mentioned above, I\n> agree with you that if the SharedTupleStore-related buffers are also\n> allocated in this context, perhaps it shouldn't be called the fileCxt.\n> \n> One idea I had is calling it the spillCxt. Almost all memory allocated in this\n> context is related to needing to spill to permanent storage during execution.\n\nAgree\n\n> The one potential confusing part of this is batch 0 for parallel hash\n> join. I would have to dig back into it again, but from a cursory look at\n> ExecParallelHashJoinSetUpBatches() it seems like batch 0 also gets a\n> shared tuplestore with associated accessors and files even if it is a\n> single batch parallel hashjoin.\n> \n> Are the parallel hash join read_buffer and write_chunk also used for a\n> single batch parallel hash join?\n\nI don't think so.\n\nFor the inner side, there's various Assert() around the batchno==0 special\ncase. Plus, it always has his own block when inserting in a batch, to directly\nwrite in shared memory calling ExecParallelHashPushTuple().\n\nThe outer side of the join actually creates all batches using shared tuple\nstorage mechanism, including batch 0, **only** if the number of batch is\ngreater than 1. See in ExecParallelHashJoinOuterGetTuple:\n\n /*\n * In the Parallel Hash case we only run the outer plan directly for\n * single-batch hash joins. Otherwise we have to go to batch files, even\n * for batch 0.\n */\n if (curbatch == 0 && hashtable->nbatch == 1)\n {\n \tslot = ExecProcNode(outerNode);\n\nSo, for a single batch PHJ, it seems there's no temp files involved.\n\n> Though, perhaps spillCxt still makes sense? It doesn't specify\n> multi-batch...\n\nI'm not sure the see where would be the confusing part here? Is it that some\nSTS mechanism are allocated but never used? When the number of batch is 1, it\ndoesn't really matter much I suppose, as the context consumption stays\nreally low. Plus, there's some other useless STS/context around there (ie. 
inner\nbatch 0 and batch context in PHJ). I'm not sure it worth trying optimizing this\ncompare to the cost of the added code complexity.\n\nOr am I off-topic and missing something obvious?\n\n> > --- a/src/include/executor/nodeHashjoin.h\n> > +++ b/src/include/executor/nodeHashjoin.h\n> > @@ -29,6 +29,6 @@ extern void ExecHashJoinInitializeWorker(HashJoinState\n> > *state, ParallelWorkerContext *pwcxt);\n> > \n> \n> I would add a comment explaining why ExecHashJoinSaveTuple() is passed\n> with the fileCxt as a parameter.\n> \n> > extern void ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n\nIsn't the comment added in the function itself, in v6, enough? It seems\nuncommon to comment on function parameters in headers.\n\nLast, about your TODO in 0001 patch, do you mean that we should document\nthat after splitting a batch N, its rows can only redispatch in N0 or N1 ?\n\nRegards,",
"msg_date": "Wed, 10 May 2023 14:24:19 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Thanks for continuing to work on this.\n\nAre you planning to modify what is displayed for memory usage in\nEXPLAIN ANALYZE?\n\nAlso, since that won't help a user who OOMs, I wondered if the spillCxt\nis helpful on its own or if we need some kind of logging message for\nusers to discover that this is what led them to running out of memory.\n\nOn Wed, May 10, 2023 at 02:24:19PM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Mon, 8 May 2023 11:56:48 -0400 Melanie Plageman <melanieplageman@gmail.com> wrote:\n> \n> > > @@ -934,13 +943,16 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable)\n> > > \t\t hashtable, nbatch, hashtable->spaceUsed);\n> > > #endif\n> > > \n> > > -\toldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n> > > -\n> > > \tif (hashtable->innerBatchFile == NULL)\n> > > \t{\n> > > +\t\tMemoryContext oldcxt =\n> > > MemoryContextSwitchTo(hashtable->fileCxt); +\n> > > \t\t/* we had no file arrays before */\n> > > \t\thashtable->innerBatchFile = palloc0_array(BufFile *,\n> > > nbatch); hashtable->outerBatchFile = palloc0_array(BufFile *, nbatch);\n> > > + \n> > > +\t\tMemoryContextSwitchTo(oldcxt);\n> > > +\n> > > \t\t/* time to establish the temp tablespaces, too */\n> > > \t\tPrepareTempTablespaces();\n> > > \t}\n> > > @@ -951,8 +963,6 @@ ExecHashIncreaseNumBatches(HashJoinTable hashtable) \n> > \n> > I don't see a reason to call repalloc0_array() in a different context\n> > than the initial palloc0_array().\n> \n> Unless I'm wrong, wrapping the whole if/else blocks in memory context\n> hashtable->fileCxt seemed useless as repalloc() actually realloc in the context\n> the original allocation occurred. So I only wrapped the palloc() calls.\n\nMy objection is less about correctness and more about the diff. The\nexisting memory context switch is around the whole if/else block. If you\nwant to move it to only wrap the if statement, I would do that in a\nseparate commit with a message describing the rationale. 
It doesn't seem\nto save us much and it makes the diff a bit more confusing. I don't feel\nstrongly enough about this to protest much more, though.\n\n> > > diff --git a/src/include/executor/hashjoin.h\n> > > b/src/include/executor/hashjoin.h index 8ee59d2c71..74867c3e40 100644\n> > > --- a/src/include/executor/hashjoin.h\n> > > +++ b/src/include/executor/hashjoin.h\n> > > @@ -25,10 +25,14 @@\n> > > *\n> > > * Each active hashjoin has a HashJoinTable control block, which is\n> > > * palloc'd in the executor's per-query context. All other storage needed\n> > > - * for the hashjoin is kept in private memory contexts, two for each\n> > > hashjoin.\n> > > + * for the hashjoin is kept in private memory contexts, three for each\n> > > + * hashjoin: \n> > \n> > Maybe \"hash table control block\". I know the phrase \"control block\" is\n> > used elsewhere in the comments, but it is a bit confusing. Also, I wish\n> > there was a way to make it clear this is for the hashtable but relevant\n> > to all batches.\n> \n> I tried to reword the comment with this additional info in mind in v6. Does it\n> match what you had in mind?\n\nReview of that below.\n\n> > So, if we are going to allocate the array of pointers to the spill files\n> > in the fileCxt, we should revise this comment. As I mentioned above, I\n> > agree with you that if the SharedTupleStore-related buffers are also\n> > allocated in this context, perhaps it shouldn't be called the fileCxt.\n> > \n> > One idea I had is calling it the spillCxt. Almost all memory allocated in this\n> > context is related to needing to spill to permanent storage during execution.\n> \n> Agree\n> \n> > The one potential confusing part of this is batch 0 for parallel hash\n> > join. 
I would have to dig back into it again, but from a cursory look at\n> > ExecParallelHashJoinSetUpBatches() it seems like batch 0 also gets a\n> > shared tuplestore with associated accessors and files even if it is a\n> > single batch parallel hashjoin.\n> > \n> > Are the parallel hash join read_buffer and write_chunk also used for a\n> > single batch parallel hash join?\n> \n> I don't think so.\n> \n> For the inner side, there's various Assert() around the batchno==0 special\n> case. Plus, it always has his own block when inserting in a batch, to directly\n> write in shared memory calling ExecParallelHashPushTuple().\n> \n> The outer side of the join actually creates all batches using shared tuple\n> storage mechanism, including batch 0, **only** if the number of batch is\n> greater than 1. See in ExecParallelHashJoinOuterGetTuple:\n> \n> /*\n> * In the Parallel Hash case we only run the outer plan directly for\n> * single-batch hash joins. Otherwise we have to go to batch files, even\n> * for batch 0.\n> */\n> if (curbatch == 0 && hashtable->nbatch == 1)\n> {\n> \tslot = ExecProcNode(outerNode);\n> \n> So, for a single batch PHJ, it seems there's no temp files involved.\n\nspill context seems appropriate, then.\n\n> \n> > Though, perhaps spillCxt still makes sense? It doesn't specify\n> > multi-batch...\n> \n> I'm not sure the see where would be the confusing part here? Is it that some\n> STS mechanism are allocated but never used? When the number of batch is 1, it\n> doesn't really matter much I suppose, as the context consumption stays\n> really low. Plus, there's some other useless STS/context around there (ie. inner\n> batch 0 and batch context in PHJ). 
I'm not sure it worth trying optimizing this\n> compare to the cost of the added code complexity.\n> \n> Or am I off-topic and missing something obvious?\n\nMy concern was that, because the shared tuplestore is used for single\nbatch parallel hashjoins, if memory allocated for the shared tuplestore\nwas allocated in the spill context (like the accessors or some buffers)\nbut no temp files were used because it is a single batch hash join, that\nit would be confusing (more for the developer than the user) to see\nmemory allocated in a \"spill\" context (which implies spilling). But, I\nthink there is no point in belaboring this any further.\n\n> > > --- a/src/include/executor/nodeHashjoin.h\n> > > +++ b/src/include/executor/nodeHashjoin.h\n> > > @@ -29,6 +29,6 @@ extern void ExecHashJoinInitializeWorker(HashJoinState\n> > > *state, ParallelWorkerContext *pwcxt);\n> > > \n> > \n> > I would add a comment explaining why ExecHashJoinSaveTuple() is passed\n> > with the fileCxt as a parameter.\n> > \n> > > extern void ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> \n> Isn't the comment added in the function itself, in v6, enough? It seems\n> uncommon to comment on function parameters in headers.\n\nThe comment seems good to me.\n\n> Last, about your TODO in 0001 patch, do you mean that we should document\n> that after splitting a batch N, its rows can only redispatch in N0 or N1 ?\n\nI mean that if we split batch 5 during execution, tuples can only be\nspilled to batches 6+ (since we will have already processed batches\n0-4). I think this is how it works, though I haven't reviewed this part\nof the code in some time.\n\n> From 6c9056979eb4d638f0555a05453686e01b1d1d11 Mon Sep 17 00:00:00 2001\n> From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> Date: Mon, 27 Mar 2023 15:54:39 +0200\n> Subject: [PATCH 2/3] Allocate hash batches related BufFile in a dedicated\n> context\n\nThis patch mainly looks good now. 
I had some suggested rewording below.\n \n> diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h\n\nI've suggested a few edits to your block comment in hashjoin.h below:\n\n> /* ----------------------------------------------------------------\n> *\t\t\t\thash-join hash table structures\n> *\n\n[...]\n * Each active hashjoin has a HashJoinTable structure, which is\n * palloc'd in the executor's per-query context. Other storage needed for\n * each hashjoin is split amongst three child contexts:\n * - HashTableContext (hashCxt): the top hash table storage context\n * - HashSpillContext (spillCxt): storage for temp file buffers\n * - HashBatchContext (batchCxt): storage for a batch in serial hash join\n *\n [...]\n *\n * Data is allocated in the \"hashCxt\" when it should live throughout the\n * lifetime of the join. This mainly consists of hashtable metadata.\n *\n * Data associated with temporary files needed when the hash join must\n * spill to disk is allocated in the \"spillCxt\" context. This context\n * lives for the duration of the join, as spill files concerning\n * multiple batches coexist. These files are explicitly destroyed by\n * calling BufFileClose() when the hash join has finished executing the\n * batch. The aim of this context is to help account for\n * the memory dedicated to temp files and their buffers.\n *\n * Finally, storage that is only wanted for the current batch is\n * allocated in the \"batchCxt\". By resetting the batchCxt at the end of\n * each batch, we free all the per-batch storage reliably and without\n * tedium.\n *\n [...]\n\n\n- Melanie\n\n\n",
"msg_date": "Fri, 12 May 2023 17:36:06 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nThanks for the patches. A couple mostly minor comments, to complement\nMelanie's review:\n\n0001\n\nI'm not really sure about calling this \"hybrid hash-join\". What does it\neven mean? The new comments simply describe the existing batching, and\nhow we increment number of batches, etc.\n\nWhen someone says \"hybrid\" it usually means a combination of two very\ndifferent approaches. Say, a join switching between NL and HJ would be\nhybrid. But this is not it.\n\nI'm not against describing the behavior, but the comment either needs to\nexplain why it's hybrid or it should not be called that.\n\n\n0002\n\nI wouldn't call the new ExecHashJoinSaveTuple parameter spillcxt but\njust something generic (e.g. cxt). Yes, we're passing spillCxt, but from\nthe function's POV it's just a pointer.\n\nI also wouldn't move the ExecHashJoinSaveTuple comment inside - it just\nneeds to be reworded that we're expecting the context to be with the\nright lifespan. The function comment is the right place to document such\nexpectations, people are less likely to read the function body.\n\nThe modified comment in hashjoin.h has a some alignment issues.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 13 May 2023 23:47:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 5/12/23 23:36, Melanie Plageman wrote:\n> Thanks for continuing to work on this.\n> \n> Are you planning to modify what is displayed for memory usage in\n> EXPLAIN ANALYZE?\n> \n\nWe could do that, but we can do that separately - it's a separate and\nindependent improvement, I think.\n\nAlso, do you have a proposal how to change the explain output? In\nprinciple we already have the number of batches, so people can calculate\nthe \"peak\" amount of memory (assuming they realize what it means).\n\nI think the main problem with adding this info to EXPLAIN is that I'm\nnot sure it's very useful in practice. I've only really heard about this\nmemory explosion issue when the query dies with OOM or takes forever,\nbut EXPLAIN ANALYZE requires the query to complete.\n\nA separate memory context (which pg_log_backend_memory_contexts can\ndump to server log) is more valuable, I think.\n\n> Also, since that won't help a user who OOMs, I wondered if the spillCxt\n> is helpful on its own or if we need some kind of logging message for\n> users to discover that this is what led them to running out of memory.\n> \n\nI think the separate memory context is definitely an improvement,\nhelpful on its own: it makes it clear *what* allocated the memory. It\nrequires having the memory context stats, but we may already dump them\ninto the server log if malloc returns NULL. Granted, it depends on how\nthe system is configured and it won't help when the OOM killer hits :-(\n\nI wouldn't object to having some sort of log message, but when exactly\nwould we emit it? Presumably after exceeding some amount of memory, but\nwhat would it be? It can't be too low (not to trigger it too often) or\ntoo high (failing to report the issue). Also, do you think it should go\nto the user or just to the server log?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 14 May 2023 00:10:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Sun, 14 May 2023 00:10:00 +0200\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> On 5/12/23 23:36, Melanie Plageman wrote:\n> > Thanks for continuing to work on this.\n> > \n> > Are you planning to modify what is displayed for memory usage in\n> > EXPLAIN ANALYZE?\n\nYes, I have already started working on this. Tracking spilling memory in\nspaceUsed/spacePeak changes the current behavior of the serialized HJ because it\nwill increase the number of batches much faster, so this is a no go for v16.\n\nI'll try to accumulate the total allocated (used+not used) spill context memory\nin instrumentation. This is gross, but it avoids tracking the spilling memory\nin its own structure entry.\n\n> We could do that, but we can do that separately - it's a separate and\n> independent improvement, I think.\n\n+1\n\n> Also, do you have a proposal how to change the explain output? In\n> principle we already have the number of batches, so people can calculate\n> the \"peak\" amount of memory (assuming they realize what it means).\n\nWe could add the batch memory consumption with the number of batches. Eg.:\n\n   Buckets: 4096 (originally 4096)\n   Batches: 32768 (originally 8192) using 256MB\n   Memory Usage: 192kB\n\n> I think the main problem with adding this info to EXPLAIN is that I'm\n> not sure it's very useful in practice. I've only really heard about this\n> memory explosion issue when the query dies with OOM or takes forever,\n> but EXPLAIN ANALYZE requires the query to complete.\n\nIt could be useful to help admins tuning their queries realize that the current\nnumber of batches is consuming much more memory than the join itself.\n\nThis could help them fix the issue before OOM happens.\n\nRegards,\n\n\n",
"msg_date": "Mon, 15 May 2023 16:15:26 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nThanks for your review!\n\nOn Sat, 13 May 2023 23:47:53 +0200\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n\n> Thanks for the patches. A couple mostly minor comments, to complement\n> Melanie's review:\n> \n> 0001\n> \n> I'm not really sure about calling this \"hybrid hash-join\". What does it\n> even mean? The new comments simply describe the existing batching, and\n> how we increment number of batches, etc.\n\nUnless I'm wrong, I believed it comes from the \"Hybrid hash join algorithm\" as\ndescribed here (+ see pdf in this page ref):\n\n https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join\n\nI added the ref in the v7 documentation to avoid futur confusion, is it ok?\n\n> 0002\n> \n> I wouldn't call the new ExecHashJoinSaveTuple parameter spillcxt but\n> just something generic (e.g. cxt). Yes, we're passing spillCxt, but from\n> the function's POV it's just a pointer.\n\nchanged in v7.\n\n> I also wouldn't move the ExecHashJoinSaveTuple comment inside - it just\n> needs to be reworded that we're expecting the context to be with the\n> right lifespan. The function comment is the right place to document such\n> expectations, people are less likely to read the function body.\n\nmoved and reworded in v7.\n\n> The modified comment in hashjoin.h has a some alignment issues.\n\nI see no alignment issue. I suppose what bother you might be the spaces\nbefore spillCxt and batchCxt to show they are childs of hashCxt? Should I\nremove them?\n\nRegards,",
"msg_date": "Tue, 16 May 2023 00:15:02 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 5/16/23 00:15, Jehan-Guillaume de Rorthais wrote:\n> Hi,\n> \n> Thanks for your review!\n> \n> On Sat, 13 May 2023 23:47:53 +0200\n> Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> \n>> Thanks for the patches. A couple mostly minor comments, to complement\n>> Melanie's review:\n>>\n>> 0001\n>>\n>> I'm not really sure about calling this \"hybrid hash-join\". What does it\n>> even mean? The new comments simply describe the existing batching, and\n>> how we increment number of batches, etc.\n> \n> Unless I'm wrong, I believed it comes from the \"Hybrid hash join algorithm\" as\n> described here (+ see pdf in this page ref):\n> \n> https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join\n> \n> I added the ref in the v7 documentation to avoid futur confusion, is it ok?\n> \n\nAh, I see. I'd just leave out the \"hybrid\" entirely. We've lived without\nit until now, we know what the implementation does ...\n\n>> 0002\n>>\n>> I wouldn't call the new ExecHashJoinSaveTuple parameter spillcxt but\n>> just something generic (e.g. cxt). Yes, we're passing spillCxt, but from\n>> the function's POV it's just a pointer.\n> \n> changed in v7.\n> \n>> I also wouldn't move the ExecHashJoinSaveTuple comment inside - it just\n>> needs to be reworded that we're expecting the context to be with the\n>> right lifespan. The function comment is the right place to document such\n>> expectations, people are less likely to read the function body.\n> \n> moved and reworded in v7.\n> \n>> The modified comment in hashjoin.h has a some alignment issues.\n> \n> I see no alignment issue. I suppose what bother you might be the spaces\n> before spillCxt and batchCxt to show they are childs of hashCxt? Should I\n> remove them?\n> \n\nIt didn't occur to me this is intentional to show the contexts are\nchildren of hashCxt, so maybe it's not a good way to document that. 
I'd\nremove it, and if you think it's something worth mentioning, I'd add an\nexplicit comment.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 16 May 2023 12:01:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Hi,\n\nOn Tue, 16 May 2023 12:01:51 +0200 Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 5/16/23 00:15, Jehan-Guillaume de Rorthais wrote:\n> > On Sat, 13 May 2023 23:47:53 +0200\n> > Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> ...\n> >> I'm not really sure about calling this \"hybrid hash-join\". What does it\n> >> even mean? The new comments simply describe the existing batching, and\n> >> how we increment number of batches, etc. \n> > \n> > Unless I'm wrong, I believed it comes from the \"Hybrid hash join algorithm\"\n> > as described here (+ see pdf in this page ref):\n> > \n> > https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join\n> > \n> > I added the ref in the v7 documentation to avoid futur confusion, is it ok?\n> \n> Ah, I see. I'd just leave out the \"hybrid\" entirely. We've lived without\n> it until now, we know what the implementation does ...\n\nI changed the title, but kept the reference. There's still two other uses\nof \"hybrid hash join algorithm\" in function and code comments. Keeping the ref\nin this header doesn't cost much and help new comers.\n\n> >> 0002\n> >> ...\n> >> The modified comment in hashjoin.h has a some alignment issues. \n> > \n> > I see no alignment issue. I suppose what bother you might be the spaces\n> > before spillCxt and batchCxt to show they are childs of hashCxt? Should I\n> > remove them?\n> \n> It didn't occur to me this is intentional to show the contexts are\n> children of hashCxt, so maybe it's not a good way to document that. I'd\n> remove it, and if you think it's something worth mentioning, I'd add an\n> explicit comment.\n\nChanged.\n\nThanks,",
"msg_date": "Tue, 16 May 2023 16:00:51 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Tue, May 16, 2023 at 04:00:51PM +0200, Jehan-Guillaume de Rorthais wrote:\n\n> From e5ecd466172b7bae2f1be294c1a5e70ce2b43ed8 Mon Sep 17 00:00:00 2001\n> From: Melanie Plageman <melanieplageman@gmail.com>\n> Date: Thu, 30 Apr 2020 07:16:28 -0700\n> Subject: [PATCH v8 1/3] Describe hash join implementation\n> \n> This is just a draft to spark conversation on what a good comment might\n> be like in this file on how the hybrid hash join algorithm is\n> implemented in Postgres. I'm pretty sure this is the accepted term for\n> this algorithm https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join\n\nI recommend changing the commit message to something like this:\n\n\tDescribe hash join implementation\n\n\tAdd details to the block comment in nodeHashjoin.c describing the\n\thybrid hash join algorithm at a high level.\n\n\tAuthor: Melanie Plageman <melanieplageman@gmail.com>\n\tAuthor: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n\tReviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>\n\tDiscussion: https://postgr.es/m/20230516160051.4267a800%40karst\n\n> diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c\n> index 0a3f32f731..93a78f6f74 100644\n> --- a/src/backend/executor/nodeHashjoin.c\n> +++ b/src/backend/executor/nodeHashjoin.c\n> @@ -10,6 +10,47 @@\n> * IDENTIFICATION\n> *\t src/backend/executor/nodeHashjoin.c\n> *\n> + * HASH JOIN\n> + *\n> + * This is based on the \"hybrid hash join\" algorithm described shortly in the\n> + * following page, and in length in the pdf in page's notes:\n> + *\n> + * https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join\n> + *\n> + * If the inner side tuples of a hash join do not fit in memory, the hash join\n> + * can be executed in multiple batches.\n> + *\n> + * If the statistics on the inner side relation are accurate, planner chooses a\n> + * multi-batch strategy and estimates the number of batches.\n> + *\n> + * The query executor measures the real size of the 
hashtable and increases the\n> + * number of batches if the hashtable grows too large.\n> + *\n> + * The number of batches is always a power of two, so an increase in the number\n> + * of batches doubles it.\n> + *\n> + * Serial hash join measures batch size lazily -- waiting until it is loading a\n> + * batch to determine if it will fit in memory. While inserting tuples into the\n> + * hashtable, serial hash join will, if that tuple were to exceed work_mem,\n> + * dump out the hashtable and reassign them either to other batch files or the\n> + * current batch resident in the hashtable.\n> + *\n> + * Parallel hash join, on the other hand, completes all changes to the number\n> + * of batches during the build phase. If it increases the number of batches, it\n> + * dumps out all the tuples from all batches and reassigns them to entirely new\n> + * batch files. Then it checks every batch to ensure it will fit in the space\n> + * budget for the query.\n> + *\n> + * In both parallel and serial hash join, the executor currently makes a best\n> + * effort. If a particular batch will not fit in memory, it tries doubling the\n> + * number of batches. If after a batch increase, there is a batch which\n> + * retained all or none of its tuples, the executor disables growth in the\n> + * number of batches globally. 
After growth is disabled, all batches that would\n> + * have previously triggered an increase in the number of batches instead\n> + * exceed the space allowed.\n> + *\n> + * TODO: should we discuss that tuples can only spill forward?\n\nI would just cut this for now since we haven't started on an agreed-upon\nwording.\n\n> From 309ad354b7a9e4dfa01b2985bd883829f5e0eba0 Mon Sep 17 00:00:00 2001\n> From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> Date: Tue, 16 May 2023 15:42:14 +0200\n> Subject: [PATCH v8 2/3] Allocate hash batches related BufFile in a dedicated\n> context\n\nHere is a draft commit message for the second patch:\n\n Dedicated memory context for hash join spill metadata\n \n A hash join's hashtable may be split up into multiple batches if it\n would otherwise exceed work_mem. The number of batches is doubled each\n time a given batch is determined not to fit in memory. Each batch file\n is allocated with a block-sized buffer for buffering tuples (parallel\n hash join has additional sharedtuplestore accessor buffers).\n \n In some cases, often with skewed data, bad stats, or very large\n datasets, users can run out-of-memory while attempting to fit an\n oversized batch in memory solely from the memory overhead of all the\n batch files' buffers.\n \n Batch files were allocated in the ExecutorState memory context, making\n it very hard to identify when this batch explosion was the source of an\n OOM. 
By allocating the batch files in a dedicated memory context, it\n should be easier for users to identify the cause of an OOM and work to\n avoid it.\n\nI recommend editing and adding links to the various discussions on this\ntopic from your research.\n\nAs for the patch itself, I noticed that there are few things needing\npgindenting.\n\nI usually do the following to run pgindent (in case you haven't done\nthis recently).\n\nChange the pg_bsd_indent meson file \"install\" value to true (like this\ndiff):\n\ndiff --git a/src/tools/pg_bsd_indent/meson.build b/src/tools/pg_bsd_indent/meson.build\nindex 5545c097bf..85bedf13f6 100644\n--- a/src/tools/pg_bsd_indent/meson.build\n+++ b/src/tools/pg_bsd_indent/meson.build\n@@ -21,7 +21,7 @@ pg_bsd_indent = executable('pg_bsd_indent',\n dependencies: [frontend_code],\n include_directories: include_directories('.'),\n kwargs: default_bin_args + {\n- 'install': false,\n+ 'install': true,\n # possibly at some point do this:\n # 'install_dir': dir_pgxs / 'src/tools/pg_bsd_indent',\n },\n\nInstall pg_bsd_indent.\nRun pg_indent.\nI do the following to run pgindent:\n\nsrc/tools/pgindent/pgindent --indent \\\n$INSTALL_PATH/pg_bsd_indent --typedef \\\nsrc/tools/pgindent/typedefs.list -- $(git diff origin/master --name-only '*.c' '*.h')\n\nThere are some existing indentation issues in these files, but you can\nleave those or put them in a separate commit.\n\n> @@ -3093,8 +3107,11 @@ ExecParallelHashJoinSetUpBatches(HashJoinTable hashtable, int nbatch)\n> \tpstate->nbatch = nbatch;\n> \tbatches = dsa_get_address(hashtable->area, pstate->batches);\n> \n> -\t/* Use hash join memory context. */\n> -\toldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n\nAdd a period at the end of this comment.\n\n> +\t/*\n> +\t * Use hash join spill memory context to allocate accessors and their\n> +\t * buffers\n> +\t */\n> +\toldcxt = MemoryContextSwitchTo(hashtable->spillCxt);\n> \n> \t/* Allocate this backend's accessor array. 
*/\n> \thashtable->nbatch = nbatch;\n> @@ -3196,8 +3213,8 @@ ExecParallelHashEnsureBatchAccessors(HashJoinTable hashtable)\n> \t */\n> \tAssert(DsaPointerIsValid(pstate->batches));\n> \n> -\t/* Use hash join memory context. */\n> -\toldcxt = MemoryContextSwitchTo(hashtable->hashCxt);\n\nAdd a period at the end of this comment.\n\n> +\t/* Use hash join spill memory context to allocate accessors */\n> +\toldcxt = MemoryContextSwitchTo(hashtable->spillCxt);\n> \n> \t/* Allocate this backend's accessor array. */\n> \thashtable->nbatch = pstate->nbatch;\n\n> diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c\n> @@ -1313,21 +1314,30 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> * The data recorded in the file for each tuple is its hash value,\n> * then the tuple in MinimalTuple format.\n> *\n> - * Note: it is important always to call this in the regular executor\n> - * context, not in a shorter-lived context; else the temp file buffers\n> - * will get messed up.\n> + * If this is the first write to the batch file, this function first\n> + * create it. The associated BufFile buffer is allocated in the given\n> + * context. It is important to always give the HashSpillContext\n> + * context. First to avoid a shorter-lived context, else the temp file\n> + * buffers will get messed up. Second, for a better accounting of the\n> + * spilling memory consumption.\n> + *\n> */\n\nHere is my suggested wording fot this block comment:\n\nThe batch file is lazily created. If this is the first tuple written to\nthis batch, the batch file is created and its buffer is allocated in the\ngiven context. 
It is important to pass in a memory context which will\nlive for the entirety of the lifespan of the batch.\n\nSince we went to the trouble of naming the context something generic,\nperhaps move the comment about accounting for the memory consumption to\nthe call site.\n\n> void\n> ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> -\t\t\t\t\t BufFile **fileptr)\n> +\t\t\t\t\t BufFile **fileptr, MemoryContext cxt)\n> {\n\n> diff --git a/src/include/executor/hashjoin.h b/src/include/executor/hashjoin.h\n> index 8ee59d2c71..ac27222d18 100644\n> --- a/src/include/executor/hashjoin.h\n> +++ b/src/include/executor/hashjoin.h\n> @@ -23,12 +23,12 @@\n> /* ----------------------------------------------------------------\n> *\t\t\t\thash-join hash table structures\n> *\n> - * Each active hashjoin has a HashJoinTable control block, which is\n> - * palloc'd in the executor's per-query context. All other storage needed\n> - * for the hashjoin is kept in private memory contexts, two for each hashjoin.\n> - * This makes it easy and fast to release the storage when we don't need it\n> - * anymore. (Exception: data associated with the temp files lives in the\n> - * per-query context too, since we always call buffile.c in that context.)\n> + * Each active hashjoin has a HashJoinTable structure, which is\n\n\"Other storages\" should be \"Other storage\"\n\n> + * palloc'd in the executor's per-query context. 
Other storages needed for\n> + * each hashjoin is kept in child contexts, three for each hashjoin:\n> + * - HashTableContext (hashCxt): the parent hash table storage context\n> + * - HashSpillContext (spillCxt): storage for temp files buffers\n> + * - HashBatchContext (batchCxt): storage for a batch in serial hash join\n> *\n> * The hashtable contexts are made children of the per-query context, ensuring\n> * that they will be discarded at end of statement even if the join is\n> @@ -36,9 +36,19 @@\n> * be cleaned up by the virtual file manager in event of an error.)\n> *\n> * Storage that should live through the entire join is allocated from the\n> - * \"hashCxt\", while storage that is only wanted for the current batch is\n> - * allocated in the \"batchCxt\". By resetting the batchCxt at the end of\n> - * each batch, we free all the per-batch storage reliably and without tedium.\n\n\"mainly hash's meta datas\" -> \"mainly the hashtable's metadata\"\n\n> + * \"hashCxt\" (mainly hash's meta datas). Also, the \"hashCxt\" context is the\n> + * parent of \"spillCxt\" and \"batchCxt\". It makes it easy and fast to release\n> + * the storage when we don't need it anymore.\n> + *\n\nSuggested alternative wording for the below:\n\n* Data associated with temp files is allocated in the \"spillCxt\" context\n* which lives for the duration of the entire join as batch files'\n* creation and usage may span batch execution. These files are\n* explicitly destroyed by calling BufFileClose() when the code is done\n* with them. The aim of this context is to help accounting for the\n* memory allocated for temp files and their buffers.\n\n> + * Data associated with temp files lives in the \"spillCxt\" context which lives\n> + * during the entire join as temp files might need to survives batches. These\n> + * files are explicitly destroyed by calling BufFileClose() when the code is\n> + * done with them. 
The aim of this context is to help accounting the memory\n> + * allocations dedicated to temp files and their buffers.\n> + *\n\nSuggested alternative wording for the below:\n\n* Finally, data used only during a single batch's execution is allocated\n* in the \"batchCxt\". By resetting the batchCxt at the end of each batch,\n* we free all the per-batch storage reliably and without tedium.\n\n> + * Finaly, storage that is only wanted for the current batch is allocated in\n> + * the \"batchCxt\". By resetting the batchCxt at the end of each batch, we free\n> + * all the per-batch storage reliably and without tedium.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 16 May 2023 16:00:52 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Sun, May 14, 2023 at 12:10:00AM +0200, Tomas Vondra wrote:\n> On 5/12/23 23:36, Melanie Plageman wrote:\n> > Thanks for continuing to work on this.\n> > \n> > Are you planning to modify what is displayed for memory usage in\n> > EXPLAIN ANALYZE?\n> > \n> \n> We could do that, but we can do that separately - it's a separate and\n> independent improvement, I think.\n> \n> Also, do you have a proposal how to change the explain output? In\n> principle we already have the number of batches, so people can calculate\n> the \"peak\" amount of memory (assuming they realize what it means).\n\nI don't know that we can expect people looking at the EXPLAIN output to\nknow how much space the different file buffers are taking up. Not to\nmention that it is different for parallel hash join.\n\nI like Jean-Guillaume's idea in his email responding to this point:\n\n> We could add the batch memory consumption with the number of batches. Eg.:\n\n> Buckets: 4096 (originally 4096) \n> Batches: 32768 (originally 8192) using 256MB\n> Memory Usage: 192kB\n\nHowever, I think we can discuss this for 17.\n\n> I think the main problem with adding this info to EXPLAIN is that I'm\n> not sure it's very useful in practice. I've only really heard about this\n> memory explosion issue when the query dies with OOM or takes forever,\n> but EXPLAIN ANALYZE requires the query to complete.\n> \n> A separate memory context (which pg_log_backend_memory_contexts can\n> dump to server log) is more valuable, I think.\n\nYes, I'm satisfied with scoping this to only the patch with the\ndedicated memory context for now.\n\n> > Also, since that won't help a user who OOMs, I wondered if the spillCxt\n> > is helpful on its own or if we need some kind of logging message for\n> > users to discover that this is what led them to running out of memory.\n> > \n> \n> I think the separate memory context is definitely an improvement,\n> helpful on it's own it makes it clear *what* allocated the memory. 
It\n> requires having the memory context stats, but we may already dump them\n> into the server log if malloc returns NULL. Granted, it depends on how\n> the system is configured and it won't help when OOM killer hits :-(\n\nRight. I suppose if someone had an OOM and the OOM killer ran, they may\nbe motivated to disable vm overcommit and then perhaps the memory\ncontext name will show up somewhere in an error message or log?\n\n> I wouldn't object to having some sort of log message, but when exactly\n> would we emit it? Presumably after exceeding some amount of memory, but\n> what would it be? It can't be too low (not to trigger it too often) or\n> too high (failing to report the issue). Also, do you think it should go\n> to the user or just to the server log?\n\nI think where the log is delivered is dependent on under what conditions\nwe log -- if it is fairly preemptive, then doing so in the server log is\nenough.\n\nHowever, I think we can discuss this in the future. You are right that\nthe dedicated memory context by itself is an improvement.\nDetermining when to emit the log message seems like it will be too\ndifficult to accomplish in a day or so.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 16 May 2023 17:10:25 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Tue, 16 May 2023 16:00:52 -0400\nMelanie Plageman <melanieplageman@gmail.com> wrote:\n\n> On Tue, May 16, 2023 at 04:00:51PM +0200, Jehan-Guillaume de Rorthais wrote:\n> \n> > From e5ecd466172b7bae2f1be294c1a5e70ce2b43ed8 Mon Sep 17 00:00:00 2001\n> > From: Melanie Plageman <melanieplageman@gmail.com>\n> > Date: Thu, 30 Apr 2020 07:16:28 -0700\n> > Subject: [PATCH v8 1/3] Describe hash join implementation\n> > \n> > This is just a draft to spark conversation on what a good comment might\n> > be like in this file on how the hybrid hash join algorithm is\n> > implemented in Postgres. I'm pretty sure this is the accepted term for\n> > this algorithm https://en.wikipedia.org/wiki/Hash_join#Hybrid_hash_join \n> \n> I recommend changing the commit message to something like this:\n> \n> \tDescribe hash join implementation\n> \n> \tAdd details to the block comment in nodeHashjoin.c describing the\n> \thybrid hash join algorithm at a high level.\n> \n> \tAuthor: Melanie Plageman <melanieplageman@gmail.com>\n> \tAuthor: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> \tReviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>\n> \tDiscussion: https://postgr.es/m/20230516160051.4267a800%40karst\n\nDone, but assigning myself as a reviewer as I don't remember having authored\nanything in this but the reference to the Hybrid hash page, which is\nquestionable according to Tomas :)\n\n> > diff --git a/src/backend/executor/nodeHashjoin.c\n> > b/src/backend/executor/nodeHashjoin.c index 0a3f32f731..93a78f6f74 100644\n> > --- a/src/backend/executor/nodeHashjoin.c\n> > ...\n> > + * TODO: should we discuss that tuples can only spill forward? 
\n> \n> I would just cut this for now since we haven't started on an agreed-upon\n> wording.\n\nRemoved in v9.\n\n> > From 309ad354b7a9e4dfa01b2985bd883829f5e0eba0 Mon Sep 17 00:00:00 2001\n> > From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> > Date: Tue, 16 May 2023 15:42:14 +0200\n> > Subject: [PATCH v8 2/3] Allocate hash batches related BufFile in a dedicated\n> > context \n> \n> Here is a draft commit message for the second patch:\n> \n> ...\n\nThanks. Adopted with some minor rewording... hopefully it's OK.\n\n> I recommend editing and adding links to the various discussions on this\n> topic from your research.\n\nDone in v9.\n\n> As for the patch itself, I noticed that there are few things needing\n> pgindenting.\n>\n> I usually do the following to run pgindent (in case you haven't done\n> this recently).\n> \n> ...\n\nThank you for your recipe.\n\n> There are some existing indentation issues in these files, but you can\n> leave those or put them in a separate commit.\n\nReindented in v9.\n\nI put existing indentation issues in a separate commit to keep the actual\npatches clean from distractions.\n\n> ...\n> Add a period at the end of this comment.\n> \n> > +\t/*\n> > +\t * Use hash join spill memory context to allocate accessors and\n> > their\n> > +\t * buffers\n> > +\t */\n\nFixed in v9.\n\n> Add a period at the end of this comment.\n> \n> > +\t/* Use hash join spill memory context to allocate accessors */\n\nFixed in v9.\n\n> > diff --git a/src/backend/executor/nodeHashjoin.c\n> > b/src/backend/executor/nodeHashjoin.c @@ -1313,21 +1314,30 @@\n> > ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> > * The data recorded in the file for each tuple is its hash value,\n> > * then the tuple in MinimalTuple format.\n> > *\n> > - * Note: it is important always to call this in the regular executor\n> > - * context, not in a shorter-lived context; else the temp file buffers\n> > - * will get messed up.\n> > + * If this is the first write to the batch 
file, this function first\n> > + * create it. The associated BufFile buffer is allocated in the given\n> > + * context. It is important to always give the HashSpillContext\n> > + * context. First to avoid a shorter-lived context, else the temp file\n> > + * buffers will get messed up. Second, for a better accounting of the\n> > + * spilling memory consumption.\n> > + *\n> > */ \n> \n> Here is my suggested wording fot this block comment:\n> \n> The batch file is lazily created. If this is the first tuple written to\n> this batch, the batch file is created and its buffer is allocated in the\n> given context. It is important to pass in a memory context which will\n> live for the entirety of the lifespan of the batch.\n\nReworded. The context must actually survive the batch itself, not just live\nduring the lifespan of the batch.\n\n> > void\n> > ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> > -\t\t\t\t\t BufFile **fileptr)\n> > +\t\t\t\t\t BufFile **fileptr, MemoryContext\n> > cxt) { \n\nNote that I changed this to:\n\n ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n BufFile **fileptr, HashJoinTable hashtable) {\n\nAs this function must allocate BufFile buffer in spillCxt, I suppose\nwe should force it explicitly in the function code.\n\nPlus, having hashtable locally could be useful for later patch to eg. fine track\nallocated memory in spaceUsed.\n\n> > diff --git a/src/include/executor/hashjoin.h\n> > b/src/include/executor/hashjoin.h index 8ee59d2c71..ac27222d18 100644\n> > --- a/src/include/executor/hashjoin.h\n> > +++ b/src/include/executor/hashjoin.h\n> > @@ -23,12 +23,12 @@\n> > /* ----------------------------------------------------------------\n> > *\t\t\t\thash-join hash table structures\n> > *\n> > - * Each active hashjoin has a HashJoinTable control block, which is\n> > - * palloc'd in the executor's per-query context. 
All other storage needed\n> > - * for the hashjoin is kept in private memory contexts, two for each\n> > hashjoin.\n> > - * This makes it easy and fast to release the storage when we don't need it\n> > - * anymore. (Exception: data associated with the temp files lives in the\n> > - * per-query context too, since we always call buffile.c in that context.)\n> > + * Each active hashjoin has a HashJoinTable structure, which is \n> \n> \"Other storages\" should be \"Other storage\"\n> \n> > + * palloc'd in the executor's per-query context. Other storages needed for\n\nFixed in v9.\n\n> ... \n> \n> \"mainly hash's meta datas\" -> \"mainly the hashtable's metadata\"\n> \n> > + * \"hashCxt\" (mainly hash's meta datas). Also, the \"hashCxt\" context is the\n\nFixed in v9.\n\n> Suggested alternative wording for the below:\n> \n> * Data associated with temp files is allocated in the \"spillCxt\" context\n> * which lives for the duration of the entire join as batch files'\n> * creation and usage may span batch execution. These files are\n> * explicitly destroyed by calling BufFileClose() when the code is done\n> * with them. The aim of this context is to help accounting for the\n> * memory allocated for temp files and their buffers.\n\nAdopted in v9.\n\n> Suggested alternative wording for the below:\n> \n> * Finally, data used only during a single batch's execution is allocated\n> * in the \"batchCxt\". By resetting the batchCxt at the end of each batch,\n> * we free all the per-batch storage reliably and without tedium.\n\nAdopted in v9.\n\nThank you for your review!\n\nRegards,",
"msg_date": "Wed, 17 May 2023 19:10:08 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Wed, May 17, 2023 at 07:10:08PM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Tue, 16 May 2023 16:00:52 -0400\n> Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > From 309ad354b7a9e4dfa01b2985bd883829f5e0eba0 Mon Sep 17 00:00:00 2001\n> > > From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> > > Date: Tue, 16 May 2023 15:42:14 +0200\n> > > Subject: [PATCH v8 2/3] Allocate hash batches related BufFile in a dedicated\n> > There are some existing indentation issues in these files, but you can\n> > leave those or put them in a separate commit.\n> \n> Reindented in v9.\n> \n> I put existing indentation issues in a separate commit to keep the actual\n> patches clean from distractions.\n\nIt is a matter of opinion, but I tend to prefer the commit with the fix\nfor the existing indentation issues to be first in the patch set.\n\n> > > diff --git a/src/backend/executor/nodeHashjoin.c\n> > > b/src/backend/executor/nodeHashjoin.c @@ -1313,21 +1314,30 @@\n> > > ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> > > * The data recorded in the file for each tuple is its hash value,\n> > > * then the tuple in MinimalTuple format.\n> > > *\n> > > - * Note: it is important always to call this in the regular executor\n> > > - * context, not in a shorter-lived context; else the temp file buffers\n> > > - * will get messed up.\n> > > + * If this is the first write to the batch file, this function first\n> > > + * create it. The associated BufFile buffer is allocated in the given\n> > > + * context. It is important to always give the HashSpillContext\n> > > + * context. First to avoid a shorter-lived context, else the temp file\n> > > + * buffers will get messed up. Second, for a better accounting of the\n> > > + * spilling memory consumption.\n> > > + *\n> > > */ \n> > \n> > Here is my suggested wording fot this block comment:\n> > \n> > The batch file is lazily created. 
If this is the first tuple written to\n> > this batch, the batch file is created and its buffer is allocated in the\n> > given context. It is important to pass in a memory context which will\n> > live for the entirety of the lifespan of the batch.\n> \n> Reworded. The context must actually survive the batch itself, not just live\n> during the lifespan of the batch.\n\nI've added a small recommended change to this inline.\n\n> > > void\n> > > ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> > > -\t\t\t\t\t BufFile **fileptr)\n> > > +\t\t\t\t\t BufFile **fileptr, MemoryContext\n> > > cxt) { \n> \n> Note that I changed this to:\n> \n> ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> BufFile **fileptr, HashJoinTable hashtable) {\n> \n> As this function must allocate BufFile buffer in spillCxt, I suppose\n> we should force it explicitly in the function code.\n> \n> Plus, having hashtable locally could be useful for later patch to eg. fine track\n> allocated memory in spaceUsed.\n\nI think if you want to pass the hashtable instead of the memory context,\nI think you'll need to explain why you still pass the buffile pointer\n(because ExecHashJoinSaveTuple() is called for inner and outer batch\nfiles) instead of getting it from the hashtable's arrays of buffile\npointers.\n\n> From c7b70dec3f4c162ea590b53a407c39dfd7ade873 Mon Sep 17 00:00:00 2001\n> From: Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> Date: Tue, 16 May 2023 15:42:14 +0200\n> Subject: [PATCH v9 2/3] Dedicated memory context for hash join spill buffers\n\n> @@ -1310,22 +1311,38 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> *\n> * The data recorded in the file for each tuple is its hash value,\n> * then the tuple in MinimalTuple format.\n> - *\n> - * Note: it is important always to call this in the regular executor\n> - * context, not in a shorter-lived context; else the temp file buffers\n> - * will get messed up.\n> */\n> void\n> ExecHashJoinSaveTuple(MinimalTuple 
tuple, uint32 hashvalue,\n> -\t\t\t\t\t BufFile **fileptr)\n> +\t\t\t\t\t BufFile **fileptr, HashJoinTable hashtable)\n> {\n> \tBufFile *file = *fileptr;\n> \n> \tif (file == NULL)\n> \t{\n> -\t\t/* First write to this batch file, so open it. */\n> +\t\tMemoryContext oldctx;\n> +\n> +\t\t/*\n> +\t\t * The batch file is lazily created. If this is the first tuple\n> +\t\t * written to this batch, the batch file is created and its buffer is\n> +\t\t * allocated in the spillCxt context, NOT in the batchCxt.\n> +\t\t *\n> +\t\t * During the building phase, inner batch are created with their temp\n> +\t\t * file buffers. These buffers are released later, after the batch is\n> +\t\t * loaded back to memory during the outer side scan. That explains why\n> +\t\t * it is important to use a memory context which live longer than the\n> +\t\t * batch itself or some temp file buffers will get messed up.\n> +\t\t *\n> +\t\t * Also, we use spillCxt instead of hashCxt for a better accounting of\n> +\t\t * the spilling memory consumption.\n> +\t\t */\n\nSuggested small edit to the second paragraph:\n\n\tDuring the build phase, buffered files are created for inner batches.\n\tEach batch's buffered file is closed (and its buffer freed) after the\n\tbatch is loaded into memory during the outer side scan. Therefore, it is\n\tnecessary to allocate the batch file buffer in a memory context which\n\toutlives the batch itself.\n\nI'd also mention the reason for passing the buffile pointer above the\nfunction. I would basically say:\n\n\tThe data recorded in the file for each tuple is its hash value,\n\tthen the tuple in MinimalTuple format. fileptr may refer to either an\n\tinner or outer side batch file.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 17 May 2023 13:46:35 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Wed, 17 May 2023 13:46:35 -0400\nMelanie Plageman <melanieplageman@gmail.com> wrote:\n\n> On Wed, May 17, 2023 at 07:10:08PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > On Tue, 16 May 2023 16:00:52 -0400\n> > Melanie Plageman <melanieplageman@gmail.com> wrote: \n> > > ...\n> > > There are some existing indentation issues in these files, but you can\n> > > leave those or put them in a separate commit. \n> > \n> > Reindented in v9.\n> > \n> > I put existing indentation issues in a separate commit to keep the actual\n> > patches clean from distractions. \n> \n> It is a matter of opinion, but I tend to prefer the commit with the fix\n> for the existing indentation issues to be first in the patch set.\n\nOK. moved in v10 patch set.\n\n> ...\n> > > > void\n> > > > ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> > > > -\t\t\t\t\t BufFile **fileptr)\n> > > > +\t\t\t\t\t BufFile **fileptr,\n> > > > MemoryContext cxt) { \n> > \n> > Note that I changed this to:\n> > \n> > ExecHashJoinSaveTuple(MinimalTuple tuple, uint32 hashvalue,\n> > BufFile **fileptr, HashJoinTable hashtable) {\n> > \n> > As this function must allocate BufFile buffer in spillCxt, I suppose\n> > we should force it explicitly in the function code.\n> > \n> > Plus, having hashtable locally could be useful for later patch to eg. fine\n> > track allocated memory in spaceUsed. 
\n> \n> I think if you want to pass the hashtable instead of the memory context,\n> I think you'll need to explain why you still pass the buffile pointer\n> (because ExecHashJoinSaveTuple() is called for inner and outer batch\n> files) instead of getting it from the hashtable's arrays of buffile\n> pointers.\n\nComment added in v10\n\n> > @@ -1310,22 +1311,38 @@ ExecParallelHashJoinNewBatch(HashJoinState *hjstate)\n> ...\n> \n> Suggested small edit to the second paragraph:\n> \n> \tDuring the build phase, buffered files are created for inner batches.\n> \tEach batch's buffered file is closed (and its buffer freed) after the\n> \tbatch is loaded into memory during the outer side scan. Therefore, it\n> \tis necessary to allocate the batch file buffer in a memory context\n> \twhich outlives the batch itself.\n\nChanged.\n\n> I'd also mention the reason for passing the buffile pointer above the\n> function.\n\nAdded.\n\nRegards,",
"msg_date": "Thu, 18 May 2023 00:35:29 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Wed, May 17, 2023 at 6:35 PM Jehan-Guillaume de Rorthais\n<jgdr@dalibo.com> wrote:\n>\n> On Wed, 17 May 2023 13:46:35 -0400\n> Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> > On Wed, May 17, 2023 at 07:10:08PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > > On Tue, 16 May 2023 16:00:52 -0400\n> > > Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > > > ...\n> > > > There are some existing indentation issues in these files, but you can\n> > > > leave those or put them in a separate commit.\n> > >\n> > > Reindented in v9.\n> > >\n> > > I put existing indentation issues in a separate commit to keep the actual\n> > > patches clean from distractions.\n> >\n> > It is a matter of opinion, but I tend to prefer the commit with the fix\n> > for the existing indentation issues to be first in the patch set.\n>\n> OK. moved in v10 patch set.\n\nv10 LGTM.\n\n\n",
"msg_date": "Thu, 18 May 2023 18:27:24 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On 5/19/23 00:27, Melanie Plageman wrote:\n> On Wed, May 17, 2023 at 6:35 PM Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote:\n>>\n>> On Wed, 17 May 2023 13:46:35 -0400\n>> Melanie Plageman <melanieplageman@gmail.com> wrote:\n>>\n>>> On Wed, May 17, 2023 at 07:10:08PM +0200, Jehan-Guillaume de Rorthais wrote:\n>>>> On Tue, 16 May 2023 16:00:52 -0400\n>>>> Melanie Plageman <melanieplageman@gmail.com> wrote:\n>>>>> ...\n>>>>> There are some existing indentation issues in these files, but you can\n>>>>> leave those or put them in a separate commit.\n>>>>\n>>>> Reindented in v9.\n>>>>\n>>>> I put existing indentation issues in a separate commit to keep the actual\n>>>> patches clean from distractions.\n>>>\n>>> It is a matter of opinion, but I tend to prefer the commit with the fix\n>>> for the existing indentation issues to be first in the patch set.\n>>\n>> OK. moved in v10 patch set.\n> \n> v10 LGTM.\n\nThanks!\n\nI've pushed 0002 and 0003, after some general bikeshedding and minor\nrewording (a bit audacious, admittedly).\n\nI didn't push 0001, I don't think generally do separate pgindent patches\nlike this (I only run pgindent on large patches to ensure it doesn't\ncause massive breakage, not separately like this, but YMMV).\n\nAnyway, that's it for PG16. Let's see if we can do more in this area for\nPG17.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 May 2023 17:23:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I didn't push 0001, I don't think generally do separate pgindent patches\n> like this (I only run pgindent on large patches to ensure it doesn't\n> cause massive breakage, not separately like this, but YMMV).\n\nIt's especially pointless when the main pgindent run for v16 is going\nto happen today (as soon as I get done clearing out my other queue\nitems).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 May 2023 11:28:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak from ExecutorState context?"
},
{
"msg_contents": "On Fri, 19 May 2023 17:23:56 +0200\nTomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> On 5/19/23 00:27, Melanie Plageman wrote:\n> > v10 LGTM. \n> \n> Thanks!\n> \n> I've pushed 0002 and 0003, after some general bikeshedding and minor\n> rewording (a bit audacious, admittedly).\n\nThank you Melanie et Tomas for your help, review et commit!\n\n\n\n",
"msg_date": "Mon, 22 May 2023 10:37:05 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak from ExecutorState context?"
}
] |
[
{
"msg_contents": "So I'm not sure if I'll be CFM this month but I'm assuming I will be\nat this point....\n\nRegardless, Commitfest 2023-03 starts tomorrow!\n\nSo this is a good time to check your submitted patches and ensure\nthey're actually in building and don't need a rebase. Take a look at\nhttp://cfbot.cputube.org/ for example. I'll do an initial pass marking\nanything red here as Waiting on Author\n\nThe next pass would be to grab any patches not marked Ready for\nCommitter and if they look like they'll need more than a one round of\nfeedback and a couple weeks to polish they'll probably get bounced to\nthe next commitfest too. It sucks not getting feedback on your patches\nfor so long but there are really just sooo many patches and so few\neyeballs... It would be great if people could do initial reviews of\nthese patches before we bounce them because it really is discouraging\nfor developers to send patches and not get feedback. But realistically\nit's going to happen to a lot of patches.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 28 Feb 2023 13:45:27 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 01:45:27PM -0500, Greg Stark wrote:\n> So I'm not sure if I'll be CFM this month but I'm assuming I will be\n> at this point....\n\nOkay, that's OK for me! Thanks for helping out.\n\n> The next pass would be to grab any patches not marked Ready for\n> Committer and if they look like they'll need more than a one round of\n> feedback and a couple weeks to polish they'll probably get bounced to\n> the next commitfest too. It sucks not getting feedback on your patches\n> for so long but there are really just sooo many patches and so few\n> eyeballs... It would be great if people could do initial reviews of\n> these patches before we bounce them because it really is discouraging\n> for developers to send patches and not get feedback. But realistically\n> it's going to happen to a lot of patches.\n\nI don't have many patches registered this time for the sole reason of\nbeing able to spend more cycles on reviews and see what could make the\ncut. So we'll see how it goes, I guess..\n\nThe CF would begin in more or less 5 hours as of the moment of this\nmessage:\nhttps://www.timeanddate.com/time/zones/aoe\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 15:47:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 03:47:17PM +0900, Michael Paquier wrote:\n> The CF would begin in more or less 5 hours as of the moment of this\n> message:\n> https://www.timeanddate.com/time/zones/aoe\n\nNote: I have switched this CF as \"In Process\" a few hours ago.\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 10:26:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Sorry, I wasn't feeling very well since Friday. I'm still not 100% but\nI'm going to try to do some triage this afternoon.\n\nThere are a few patches that need a rebase. And a few patches failing\nMeson builds or autoconf stages -- I wonder if there's something\nunrelated broken there?\n\nBut what I think is really needed is for committers to pick up patches\nthat are ready to commit and grab them. There are currently two\npatches with macdice marked as committer and one with michael-kun\n(i.e. you:)\n\nBut what can we do to get more some patches picked up now instead of\nat the end of the commitfest? Would it help if I started asking on\nexisting threads if there's a committer willing to take it up?\n\nOn Wed, 1 Mar 2023 at 20:27, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 01, 2023 at 03:47:17PM +0900, Michael Paquier wrote:\n> > The CF would begin in more or less 5 hours as of the moment of this\n> > message:\n> > https://www.timeanddate.com/time/zones/aoe\n>\n> Note: I have switched this CF as \"In Process\" a few hours ago.\n> --\n> Michael\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 6 Mar 2023 13:46:54 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "So, sorry I've been a bit under the weather but can concentrate on the\ncommitfest now. I tried to recapitulate the history but the activity\nlog only goes back a certain distance on the web. If I can log in to\nthe database I guess I could construct the history from sql queries if\nit was important.\n\nBut where we stand now is:\n\nStatus summary:\n Needs review: 152.\n Waiting on Author: 42.\n Ready for Committer: 39.\n Committed: 61.\n Moved to next CF: 4.\n Withdrawn: 17.\n Returned with Feedback: 4.\nTotal: 319.\n\nOf the Needs Review patches there are 81 that have received no email\nmessages since the CF began. A lot of those have reviews attached but\npresumably those reviewers are mostly from earlier CFs and may have\nalready contributed all they can.\n\nI don't know, should we move some or all of these to the next CF\nalready? I'm reluctant to bounce them en masse because there are\ndefinitely some patches that will get reviewed and some that should\nreally be marked \"Ready for Committer\".\n\nThese patches that are \"Needs Review\" and have received no comments at\nall since before March 1st are these. 
If your patch is amongst this\nlist I would suggest any of:\n\n1) Move it yourself to the next CF (or withdraw it)\n2) Post to the list with any pending questions asking for specific\nfeedback -- it's much more likely to get feedback than just a generic\n\"here's a patch plz review\"...\n3) Mark it Ready for Committer and possibly post explaining the\nresolution to any earlier questions to make it easier for a committer\nto understand the state\n\nIf there's still no emails on these at some point I suppose it will\nmake sense to move them out of the CF.\n\n * ALTER TABLE SET ACCESS METHOD on partitioned tables\n * New hooks in the connection path\n * Add log messages when replication slots become active and inactive\n * Avoid use deprecated Windows Memory API\n * Remove dead macro exec_subplan_get_plan\n * Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n * pg_rewind WAL deletion pitfall\n * Simplify find_my_exec by using realpath(3)\n * Move backup-related code to xlogbackup.c/.h\n * Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\nshowing metadata ...)\n * Fix bogus error emitted by pg_recvlogical when interrupted\n * warn if GUC set to an invalid shared library\n * Check consistency of GUC defaults between .sample.conf and\npg_settings.boot_val\n * Code checks for App Devs, using new options for transaction behavior\n * Lockless queue of waiters based on atomic operations for LWLock\n * Fix assertion failure with next_phase_at in snapbuild.c\n * Add SPLIT PARTITION/MERGE PARTITIONS commands\n * Add sortsupport for range types and btree_gist\n * asynchronous execution support for Custom Scan\n * Periodic burst growth of the checkpoint_req counter on replica.\n * CREATE INDEX CONCURRENTLY on partitioned table\n * Fix ParamPathInfo for union-all AppendPath\n * Add OR REPLACE option for CREATE OPERATOR\n * ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast tables\n * Partial aggregates push down\n * Non-replayable WAL 
records through overflows and >MaxAllocSize lengths\n * Enable jitlink as an alternative jit linker of legacy Rtdyld and\nadd riscv jitting support\n * Test for function error in logrep worker\n * basebackup: support zstd long distance matching\n * pgbench - adding pl/pgsql versions of tests\n * Function to log backtrace of postgres processes\n * More scalable multixacts buffers and locking\n * Remove nonmeaningful prefixes in PgStat_* fields\n * COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns\n * postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n * Add semi-join pushdown to postgres_fdw\n * Skip replicating the tables specified in except table option\n * Split index and table statistics into different types of stats\n * Exclusion constraints on partitioned tables\n * Post-special Page Storage TDE support\n * Direct I/O (developer-only feature)\n * Improve doc for autovacuum on partitioned tables\n * Patch to implement missing join selectivity estimation for range types\n * Clarify the behavior of the system when approaching XID wraparound\n * Set arbitrary GUC options during initdb\n * An attempt to avoid\nlocally-committed-but-not-replicated-to-standby-transactions in\nsynchronous replication\n * Check lateral references within PHVs for memoize cache keys\n * Add n_tup_newpage_upd to pg_stat table views\n * monitoring usage count distribution\n * Reduce wakeup on idle for bgwriter & walwriter for >5s\n * Report the query string that caused a memory error under Valgrind\n * New [relation] options engine\n * Data is copied twice when specifying both child and parent table in\npublication\n * possibility to take name, signature and oid of currently executed\nfunction in GET DIAGNOSTICS statement\n * Named Operators\n * nbtree performance improvements through specialization on key shape\n * Fix assertion failure in SnapBuildInitialSnapshot()\n * Speed up releasing of locks\n * Compression dictionaries\n * Improve pg_bsd_indent's 
handling of multiline initialization expressions\n * Add EXPLAIN option GENERIC_PLAN for parameterized queries\n * User functions for building SCRAM secrets\n * Exit walsender before confirming remote flush in logical replication\n * Refactoring postgres_fdw/connection.c\n * Add pg_stat_session\n * Doc: Improve note about copying into postgres_fdw foreign tables in batch\n * Kerberos/GSSAPI Credential Delegation\n * archive modules loose ends\n * Fix dsa_free() to re-bin segment\n * Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n * clean up permission checks after 599b33b94\n * Some revises in adding sorting path\n * ResourceOwner refactoring\n * Fix the description of GUC \"max_locks_per_transaction\" and\n\"max_pred_locks_per_transaction\" in guc_table.c\n * some namespace.c refactoring\n * Add function to_oct\n * Switching XLog source from archive to streaming when primary available\n * Dynamic result sets from procedures\n * BRIN - SK_SEARCHARRAY and scan key preprocessing\n * MERGE ... WHEN NOT MATCHED BY SOURCE\n * Reuse Workers and Replication Slots during Logical Replication\n\n\n",
"msg_date": "Wed, 15 Mar 2023 14:29:26 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Wed, 15 Mar 2023 at 14:29, Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n>\n> These patches that are \"Needs Review\" and have received no comments at\n> all since before March 1st are these. If your patch is amongst this\n> list I would suggest any of:\n>\n> 1) Move it yourself to the next CF (or withdraw it)\n> 2) Post to the list with any pending questions asking for specific\n> feedback -- it's much more likely to get feedback than just a generic\n> \"here's a patch plz review\"...\n> 3) Mark it Ready for Committer and possibly post explaining the\n> resolution to any earlier questions to make it easier for a committer\n> to understand the state\n>\n> If there's still no emails on these at some point I suppose it will\n> make sense to move them out of the CF.\n\nI'm going to go ahead and do this today. Any of these patches that are\n\"Waiting on Author\" and haven't received any emails or status changes\nsince March 1 I'm going to move out of the commitfest(*). If you\nreally think your patch in this list is important to get committed\nthen please respond to the thread explaining any plans or feedback\nneeded.\n\nIt would be nice to actually do Returned With Feedback where\nappropriate but there are too many to go through them thoroughly. I'll\nonly be able to do a quick review of each thread checking for\nimportant bug fixes or obviously rejected patches.\n\n(*) I reserve the right to skip and leave some patches where\nappropriate. In particular I'll skip patches that are actually from\ncommitters on the theory that they could just commit them when they\nfeel like it anyways. 
Some patches may be intentionally waiting until\nthe end of the release cycle to avoid conflicts too.\n\n> * ALTER TABLE SET ACCESS METHOD on partitioned tables\n> * New hooks in the connection path\n> * Add log messages when replication slots become active and inactive\n> * Avoid use deprecated Windows Memory API\n> * Remove dead macro exec_subplan_get_plan\n> * Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n> * pg_rewind WAL deletion pitfall\n> * Simplify find_my_exec by using realpath(3)\n> * Move backup-related code to xlogbackup.c/.h\n> * Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\n> showing metadata ...)\n> * Fix bogus error emitted by pg_recvlogical when interrupted\n> * warn if GUC set to an invalid shared library\n> * Check consistency of GUC defaults between .sample.conf and\n> pg_settings.boot_val\n> * Code checks for App Devs, using new options for transaction behavior\n> * Lockless queue of waiters based on atomic operations for LWLock\n> * Fix assertion failure with next_phase_at in snapbuild.c\n> * Add SPLIT PARTITION/MERGE PARTITIONS commands\n> * Add sortsupport for range types and btree_gist\n> * asynchronous execution support for Custom Scan\n> * Periodic burst growth of the checkpoint_req counter on replica.\n> * CREATE INDEX CONCURRENTLY on partitioned table\n> * Fix ParamPathInfo for union-all AppendPath\n> * Add OR REPLACE option for CREATE OPERATOR\n> * ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast tables\n> * Partial aggregates push down\n> * Non-replayable WAL records through overflows and >MaxAllocSize lengths\n> * Enable jitlink as an alternative jit linker of legacy Rtdyld and\n> add riscv jitting support\n> * Test for function error in logrep worker\n> * basebackup: support zstd long distance matching\n> * pgbench - adding pl/pgsql versions of tests\n> * Function to log backtrace of postgres processes\n> * More scalable multixacts buffers and locking\n> * Remove nonmeaningful 
prefixes in PgStat_* fields\n> * COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns\n> * postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> * Add semi-join pushdown to postgres_fdw\n> * Skip replicating the tables specified in except table option\n> * Split index and table statistics into different types of stats\n> * Exclusion constraints on partitioned tables\n> * Post-special Page Storage TDE support\n> * Direct I/O (developer-only feature)\n> * Improve doc for autovacuum on partitioned tables\n> * Patch to implement missing join selectivity estimation for range types\n> * Clarify the behavior of the system when approaching XID wraparound\n> * Set arbitrary GUC options during initdb\n> * An attempt to avoid\n> locally-committed-but-not-replicated-to-standby-transactions in\n> synchronous replication\n> * Check lateral references within PHVs for memoize cache keys\n> * Add n_tup_newpage_upd to pg_stat table views\n> * monitoring usage count distribution\n> * Reduce wakeup on idle for bgwriter & walwriter for >5s\n> * Report the query string that caused a memory error under Valgrind\n> * New [relation] options engine\n> * Data is copied twice when specifying both child and parent table in\n> publication\n> * possibility to take name, signature and oid of currently executed\n> function in GET DIAGNOSTICS statement\n> * Named Operators\n> * nbtree performance improvements through specialization on key shape\n> * Fix assertion failure in SnapBuildInitialSnapshot()\n> * Speed up releasing of locks\n> * Compression dictionaries\n> * Improve pg_bsd_indent's handling of multiline initialization expressions\n> * Add EXPLAIN option GENERIC_PLAN for parameterized queries\n> * User functions for building SCRAM secrets\n> * Exit walsender before confirming remote flush in logical replication\n> * Refactoring postgres_fdw/connection.c\n> * Add pg_stat_session\n> * Doc: Improve note about copying into postgres_fdw foreign tables in batch\n> * 
Kerberos/GSSAPI Credential Delegation\n> * archive modules loose ends\n> * Fix dsa_free() to re-bin segment\n> * Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> * clean up permission checks after 599b33b94\n> * Some revises in adding sorting path\n> * ResourceOwner refactoring\n> * Fix the description of GUC \"max_locks_per_transaction\" and\n> \"max_pred_locks_per_transaction\" in guc_table.c\n> * some namespace.c refactoring\n> * Add function to_oct\n> * Switching XLog source from archive to streaming when primary available\n> * Dynamic result sets from procedures\n> * BRIN - SK_SEARCHARRAY and scan key preprocessing\n> * MERGE ... WHEN NOT MATCHED BY SOURCE\n> * Reuse Workers and Replication Slots during Logical Replication\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 17 Mar 2023 09:43:21 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Hi Greg,\n\n> > These patches that are \"Needs Review\" and have received no comments at\n> > all since before March 1st are these. If your patch is amongst this\n> > list I would suggest any of:\n> >\n> > 1) Move it yourself to the next CF (or withdraw it)\n> > 2) Post to the list with any pending questions asking for specific\n> > feedback -- it's much more likely to get feedback than just a generic\n> > \"here's a patch plz review\"...\n> > 3) Mark it Ready for Committer and possibly post explaining the\n> > resolution to any earlier questions to make it easier for a committer\n> > to understand the state\n\nSorry for the late reply. It was a busy week. I see several patches I\nauthored and/or reviewed in the list. I would like to comment on\nthose.\n\n* Avoid use deprecated Windows Memory API\n\nWe can reject or mark as RwF this one due to controversy and the fact\nthat the patch doesn't currently apply. I poked the author today.\n\n* Clarify the behavior of the system when approaching XID wraparound\n\nThis is a wanted [1][see the discussion] and a pretty straightforward\nchange. I think it should be targeting PG16.\n\n* Compression dictionaries\n\nThis one doesn't target PG16. Moved to the next CF.\n\n* Add pg_stat_session\n\nThis patch was in good shape last time I checked but other people had\ncertain questions. The author hasn't replied since Feb 16th. So it's\nunlikely to end up in PG6 and I suggest moving it to the next CF,\nunless anyone objects.\n\n* ResourceOwner refactoring\n\nIMO this one still has a chance to make it to PG16. Let's keep it in\nthe CF for now.\n\nAdditionally:\n\n* Add 64-bit XIDs into PostgreSQL 16\n\nIs not going to make it to PG16, moving to the next CF.\n\n* Pluggable toaster\n\nThe discussion is happening in the \"Compression dictionaries\" thread\nnow, since we decided to join our efforts in this area, see the latest\nmessages. 
I suggest marking this thread as RwF, unless anyone objects.\n\n[1]: https://www.postgresql.org/message-id/CAH2-Wz%3D3mmHST-t9aR5LNkivXC%2B18JD_XC0ht4y5LQBLzq%2Bpsg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 17 Mar 2023 17:29:04 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n>> These patches that are \"Needs Review\" and have received no comments at\n>> all since before March 1st are these.\n\nJust a couple of comments on ones that caught my eye:\n\n>> * Simplify find_my_exec by using realpath(3)\n\nThe problem with this one is that Peter would like it to do something\nother than what I think it should do. Not sure how to resolve that.\n\n>> * Fix assertion failure with next_phase_at in snapbuild.c\n\nThis one, and others that are bug fixes, probably deserve more slack.\n\n>> * Periodic burst growth of the checkpoint_req counter on replica.\n\nThere is recent discussion of this one no?\n\n>> * Fix ParamPathInfo for union-all AppendPath\n\nI pushed this yesterday.\n\n>> * Add OR REPLACE option for CREATE OPERATOR\n\nI think this one should be flat-out rejected.\n\n>> * Partial aggregates push down\n\nYou've listed a lot of small features here that still have time to\nget some love --- it's not like we're hard up against the end of the CF.\nIf they'd been in Waiting on Author state for awhile, I'd agree with\nbooting them, but not when they're in Needs Review.\n\n>> * Set arbitrary GUC options during initdb\n\nI do indeed intend to push this one on my own authority at some point,\nbut I'm happy to leave it there for now in case anyone wants to take\nanother look.\n\n>> * Check lateral references within PHVs for memoize cache keys\n\nI think this one is a bug fix too.\n\n>> * Data is copied twice when specifying both child and parent table in\n>> publication\n\nIsn't there active discussion of this one?\n\n>> * Improve pg_bsd_indent's handling of multiline initialization expressions\n\nThis is going to get pushed, it's just waiting until the commitfest\nsettles. I guess you can move it to the next one if you want, but\nthat won't accomplish much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Mar 2023 10:38:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 02:29:26PM -0400, Gregory Stark (as CFM) wrote:\n> 1) Move it yourself to the next CF (or withdraw it)\n> 2) Post to the list with any pending questions asking for specific\n> feedback -- it's much more likely to get feedback than just a generic\n> \"here's a patch plz review\"...\n> 3) Mark it Ready for Committer and possibly post explaining the\n> resolution to any earlier questions to make it easier for a committer\n> to understand the state\n> \n> If there's still no emails on these at some point I suppose it will\n> make sense to move them out of the CF.\n\n> * Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\n> showing metadata ...)\n\nMy patch. I don't care if it's in v1[3456], but I wish it would somehow\nprogress - idk what else is needed. I wrote this after Thomas Munro +1\nmy thinking that it's absurd for pg_ls_tmpdir() to not show [shared]\nfilesets in tempdirs...\n\n> * CREATE INDEX CONCURRENTLY on partitioned table\n\nMy patch. I think there's agreement that this patch is ready, except\nthat it's waiting on the bugfix for the progress reporting patch. IDK\nif there's interest in this, but it'd be a good candidate for v16.\n\n> * basebackup: support zstd long distance matching\n\nMy patch. No discussion, but I'm hopeful and don't see why this\nshouldn't be in v16.\n\n> * warn if GUC set to an invalid shared library\n\nMy patch. I'm waiting for feedback on 0001, which has gotten no\nresponse. I moved it.\n\n> * ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast tables\n\nMy patch. There's been no recent discussion, so I guess I'll postpone\nit for v17.\n\n> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n\nIDK what's needed to progress this; If left here, since it will cause\n*this* patch to fail if someone else forgets to add a new GUC to the\nsample config. Which is odd.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 17 Mar 2023 09:56:09 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, 17 Mar 2023 at 10:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> You've listed a lot of small features here that still have time to\n> get some love --- it's not like we're hard up against the end of the CF.\n> If they'd been in Waiting on Author state for awhile, I'd agree with\n> booting them, but not when they're in Needs Review.\n\nOh, that's exactly my intent -- when I listed them two days ago it was\na list of Waiting on Author patches without updates since March 1. But\nI didn't recheck them this morning yet.\n\nIf they've gotten comments in the last two days or had their status\nupdated then great. It's also possible there are threads that aren't\nattached to the commitfest or are attached to a related patch that I\nmay not be aware of.\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 17 Mar 2023 11:08:10 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On 2023-Mar-17, Greg Stark wrote:\n\n> I'm going to go ahead and do this today. Any of these patches that are\n> \"Waiting on Author\" and haven't received any emails or status changes\n> since March 1 I'm going to move out of the commitfest(*).\n\nSo I've come around to thinking that booting patches out of commitfest\nis not really such a great idea. It turns out that the number of active\npatch submitters seems to have reached a peak during the Postgres 12\ntimeframe, and has been steadily decreasing since then; and I think\nthis is partly due to frustration caused by our patch process.\n\nIt turns out that we expect that contributors will keep the patches the\nsubmit up to date, rebasing over and over for months on end, with no\nactual review occurring, and if this rebasing activity stops for a few\nweeks, we boot these patches out. This is demotivating: people went\ngreat lengths to introduce themselves to our admittedly antiquated\nprocess (no pull requests, remember), we gave them no feedback, and then\nwe reject their patches with no further effort? I think this is not\ngood.\n\nAt this point, I'm going to suggest that reviewers should be open to the\nidea of applying a submitted patch to some older Git commit in order to\nreview it. If we have given feedback, then it's OK to put a patch as\n\"waiting on author\" and eventually boot it; but if we have not given\nfeedback, and there is no reason to think that the merge conflicts some\nhow make the patch fundamentally obsolete, then we should *not* set it\nWaiting on Author. After all, it is quite easy to \"git checkout\" a\nslightly older tree to get the patch to apply cleanly and review it\nthere.\n\nAuthors should, of course, be encouraged to keep patches conflict-free,\nbut this should not be a hard requirement.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 18 Mar 2023 21:26:42 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 1:26 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> At this point, I'm going to suggest that reviewers should be open to the\n> idea of applying a submitted patch to some older Git commit in order to\n> review it. If we have given feedback, then it's OK to put a patch as\n> \"waiting on author\" and eventually boot it; but if we have not given\n> feedback, and there is no reason to think that the merge conflicts some\n> how make the patch fundamentally obsolete, then we should *not* set it\n> Waiting on Author. After all, it is quite easy to \"git checkout\" a\n> slightly older tree to get the patch to apply cleanly and review it\n> there.\n\nIt seems plausible that improved tooling that makes it quick and easy\nto test a given patch locally could improve things for everybody.\n\nIt's possible to do a git checkout to a slightly older tree today, of\ncourse. But in practice it's harder than it really should be. It would\nbe very nice if there was an easy way to fetch from a git remote, and\nthen check out a branch with a given patch applied on top of the \"last\nknown good git tip\" commit. The tricky part would be systematically\ntracking which precise master branch commit is the last known \"good\ncommit\" for a given CF entry. That seems doable to me.\n\nI suspect that removing friction when it comes to testing a patch\nlocally (often just \"kicking the tires\" of a patch) could have an\noutsized impact. If something is made extremely easy, and requires\nlittle or no context to get going with, then people tend to do much\nmore of it. Even when they theoretically don't have a good reason to\ndo so. And even when they theoretically already had a good reason to\ndo so, before the improved tooling/workflow was in place.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 18 Mar 2023 14:43:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 02:43:38PM -0700, Peter Geoghegan wrote:\n> On Sat, Mar 18, 2023 at 1:26 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > At this point, I'm going to suggest that reviewers should be open to the\n> > idea of applying a submitted patch to some older Git commit in order to\n> > review it. If we have given feedback, then it's OK to put a patch as\n> > \"waiting on author\" and eventually boot it; but if we have not given\n> > feedback, and there is no reason to think that the merge conflicts some\n> > how make the patch fundamentally obsolete, then we should *not* set it\n> > Waiting on Author. After all, it is quite easy to \"git checkout\" a\n> > slightly older tree to get the patch to apply cleanly and review it\n> > there.\n> \n> It seems plausible that improved tooling that makes it quick and easy\n> to test a given patch locally could improve things for everybody.\n> \n> It's possible to do a git checkout to a slightly older tree today, of\n> course. But in practice it's harder than it really should be. It would\n> be very nice if there was an easy way to fetch from a git remote, and\n> then check out a branch with a given patch applied on top of the \"last\n> known good git tip\" commit. The tricky part would be systematically\n> tracking which precise master branch commit is the last known \"good\n> commit\" for a given CF entry. That seems doable to me.\n\nIt's not only doable, but already possible.\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLW2PnHxabF3JZGoPfcKFYRCtx%2Bhu5a5yw%3DKWy57yW5cg%40mail.gmail.com\n\nThe only issue with this is that cfbot has squished all the commits into\none, and lost the original commit messages (if any). I submitted\npatches to address that but still waiting for feedback.\n\nhttps://www.postgresql.org/message-id/20220623193125.GB22452@telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 18 Mar 2023 18:19:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 4:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The only issue with this is that cfbot has squished all the commits into\n> one, and lost the original commit messages (if any). I submitted\n> patches to address that but still waiting for feedback.\n>\n> https://www.postgresql.org/message-id/20220623193125.GB22452@telsasoft.com\n\nRight. I would like to see that change. But you still need to have CF\ntester/the CF app remember the last master branch commit that worked\nbefore bitrot. And you have to provide an easy way to get that\ninformation.\n\nI generally don't care if that means that I have to initdb - I do that\nall the time. It's a small price to pay for a workflow that I know is\npractically guaranteed to get me a usable postgres executable on the\nfirst try, without requiring any special effort. I don't want to even\nthink about bitrot until I'm at least 10 minutes into looking at\nsomething.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 18 Mar 2023 16:28:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 04:28:02PM -0700, Peter Geoghegan wrote:\n> On Sat, Mar 18, 2023 at 4:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > The only issue with this is that cfbot has squished all the commits into\n> > one, and lost the original commit messages (if any). I submitted\n> > patches to address that but still waiting for feedback.\n> >\n> > https://www.postgresql.org/message-id/20220623193125.GB22452@telsasoft.com\n> \n> Right. I would like to see that change. But you still need to have CF\n> tester/the CF app remember the last master branch commit that worked\n> before bitrot. And you have to provide an easy way to get that\n> information.\n\nNo - the last in cfbot's repo is from the last time it successfully\napplied the patch. You can summarily check checkout cfbot's branch to\nbuild (or just to git log -p it, if you dislike github's web interface).\n\nIf you're curious and still wanted to know what commit it was applied\non, it's currently the 2nd commit in \"git log\" (due to squishing\nall patches into one).\n\n\n",
"msg_date": "Sat, 18 Mar 2023 18:44:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On 17.03.23 15:38, Tom Lane wrote:\n>>> Simplify find_my_exec by using realpath(3)\n> The problem with this one is that Peter would like it to do something\n> other than what I think it should do. Not sure how to resolve that.\n\nI have no objection to changing the internal coding of the current behavior.\n\n\n",
"msg_date": "Sun, 19 Mar 2023 17:05:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 17.03.23 15:38, Tom Lane wrote:\n>>> Simplify find_my_exec by using realpath(3)\n>> The problem with this one is that Peter would like it to do something\n>> other than what I think it should do. Not sure how to resolve that.\n\n> I have no objection to changing the internal coding of the current behavior.\n\nOh ... where the thread trailed off [1] was you not answering whether\nyou'd accept a compromise behavior. If it's okay to stick with the\nbehavior we have, then I'll just do the original patch (modulo Munro's\nobservations about _fullpath's error reporting).\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2319396.1664978360%40sss.pgh.pa.us\n\n\n",
"msg_date": "Sun, 19 Mar 2023 16:56:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 12:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sat, Mar 18, 2023 at 04:28:02PM -0700, Peter Geoghegan wrote:\n> > On Sat, Mar 18, 2023 at 4:19 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > The only issue with this is that cfbot has squished all the commits into\n> > > one, and lost the original commit messages (if any). I submitted\n> > > patches to address that but still waiting for feedback.\n> > >\n> > > https://www.postgresql.org/message-id/20220623193125.GB22452@telsasoft.com\n> >\n> > Right. I would like to see that change. But you still need to have CF\n> > tester/the CF app remember the last master branch commit that worked\n> > before bitrot. And you have to provide an easy way to get that\n> > information.\n>\n> No - the last in cfbot's repo is from the last time it successfully\n> applied the patch. You can summarily check checkout cfbot's branch to\n> build (or just to git log -p it, if you dislike github's web interface).\n>\n> If you're curious and still wanted to know what commit it was applied\n> on, it's currently the 2nd commit in \"git log\" (due to squishing\n> all patches into one).\n\nI realised that part of Alvaro's complaint was probably caused by\ncfbot's refusal to show any useful information just because it\ncouldn't apply a patch the last time it tried. A small improvement\ntoday: now it shows a ♲ symbol (with hover text \"Rebase needed\") if it\ndoesn't currently apply, but you can still see the most recent CI test\nresults. And from there you can find your way to the parent commit\nID.\n\nThe reason for the previous behaviour is that it had no memory, but I\nhad to give it one that so I can study flapping tests, log highlights,\nstatistical trends etc. 
Reminds me, I also need to teach it to track\nthe postgres/postgres master mirror's CI results, because it's still\n(rather stupidly) testing patches when master itself is failing (eg\nthe recent slapd commits), which ought to be easy enough to avoid\ngiven the data...\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:13:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 11:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I realised that part of Alvaro's complaint was probably caused by\n> cfbot's refusal to show any useful information just because it\n> couldn't apply a patch the last time it tried. A small improvement\n> today: now it shows a ♲ symbol (with hover text \"Rebase needed\") if it\n> doesn't currently apply, but you can still see the most recent CI test\n> results. And from there you can find your way to the parent commit\n> ID.\n\nAnd in the cases where it still shows no results, that'd be because\nthe patch set hasn't successfully applied in the past 46 days, ie\nsince 1 Feb, when cfbot started retaining history. That visible\namnesia should gradually disappear as those patches make progress and\nthe history window expands. I suppose then someone might complain\nthat it should be clearer if a patch hasn't applied for a very long\ntime; suggestions for how to show that are welcome. I wondered about\nmaking them gradually fade out to white, ghost memories that\neventually disappear completely after a few commitfests :-D\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:05:29 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> ... I suppose then someone might complain\n> that it should be clearer if a patch hasn't applied for a very long\n> time; suggestions for how to show that are welcome.\n\nCan you make the pop-up tooltip text read \"Rebase needed since\nYYYY-MM-DD\"?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 19 Mar 2023 20:10:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 1:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > ... I suppose then someone might complain\n> > that it should be clearer if a patch hasn't applied for a very long\n> > time; suggestions for how to show that are welcome.\n>\n> Can you make the pop-up tooltip text read \"Rebase needed since\n> YYYY-MM-DD\"?\n\nDone. It's the GMT date of the first failure to apply.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:53:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "The next level of this would be something like notifying the committer\nwith a list of patches in the CF that a commit broke. I don't\nimmediately see how to integrate that with our workflow but I have\nseen something like this work well in a previous job. When committing\ncode you often went and updated other unrelated projects to adapt to\nthe new API (or could adjust the code you were committing to cause\nless breakage).\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:14:47 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 3:15 AM Greg Stark <stark@mit.edu> wrote:\n> The next level of this would be something like notifying the committer\n> with a list of patches in the CF that a commit broke. I don't\n> immediately see how to integrate that with our workflow but I have\n> seen something like this work well in a previous job. When committing\n> code you often went and updated other unrelated projects to adapt to\n> the new API (or could adjust the code you were committing to cause\n> less breakage).\n\nI've been hesitant to make it send email. The most obvious message to\nsend would be \"hello, you posted a patch, but it fails on CI\" to the\nsubmitter. Cfbot has been running for about 5 years now, and I'd say\nthe rate of spurious/bogus failures has come down a lot over that time\nas we've chased down the flappers in master, but it's still enough\nthat you would quickly become desensitised/annoyed by the emails, I\nguess (one of the reasons I recently started keeping history is to be\nable to do some analysis of that so we can direct attention to chasing\ndown the rest, or have some smart detection of known but not yet\nresolved flappers, I dunno, something like that). Speaking as someone\nwho occasionally sends patches to other projects, it's confusing and\nunsettling when you get automated emails from github PRs telling you\nthat this broke, that broke and the other broke, but only the project\ninsiders know which things are newsworthy and which things are \"oh\nyeah that test is a bit noisy, ignore that one\".\n\n\n",
"msg_date": "Tue, 21 Mar 2023 09:07:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 16:08, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Tue, Mar 21, 2023 at 3:15 AM Greg Stark <stark@mit.edu> wrote:\n> > The next level of this would be something like notifying the committer\n> > with a list of patches in the CF that a commit broke. I don't\n> > immediately see how to integrate that with our workflow but I have\n> > seen something like this work well in a previous job. When committing\n> > code you often went and updated other unrelated projects to adapt to\n> > the new API (or could adjust the code you were committing to cause\n> > less breakage).\n>\n> I've been hesitant to make it send email. The most obvious message to\n> send would be \"hello, you posted a patch, but it fails on CI\" to the\n> submitter.\n\nYeah, even aside from flappers there's the problem that it's about as\ncommon for real commits to break some test as it is for patches to\nstart failing tests. So often it's a real failure but it's nothing to\ndo with the patch, it has to do with the commit that went into git.\n\nWhat I'm interested in is something like \"hey, your commit caused 17\npatches to start failing tests\". And it doesn't necessarily have to be\nan email. Having a historical page so when I look at a patch I can go\ncheck, \"hey did this start failing at the same time as 17 other\npatches on the same commit?\" would be the same question.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:22:18 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 9:22 AM Greg Stark <stark@mit.edu> wrote:\n> Yeah, even aside from flappers there's the problem that it's about as\n> common for real commits to break some test as it is for patches to\n> start failing tests. So often it's a real failure but it's nothing to\n> do with the patch, it has to do with the commit that went into git.\n\nThat is true. The best solution to that problem, I think, is to teach\ncfbot that it should test on top of the most recent master commit that\nsucceeded in CI here\nhttps://github.com/postgres/postgres/commits/master . It's on my\nlist...\n\n\n",
"msg_date": "Tue, 21 Mar 2023 09:34:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On 2023-Mar-20, Thomas Munro wrote:\n\n> I realised that part of Alvaro's complaint was probably caused by\n> cfbot's refusal to show any useful information just because it\n> couldn't apply a patch the last time it tried. A small improvement\n> today: now it shows a ♲ symbol (with hover text \"Rebase needed\") if it\n> doesn't currently apply, but you can still see the most recent CI test\n> results. And from there you can find your way to the parent commit\n> ID.\n\nThank you for improving and continue to think about further enhancements\nto the CF bot. It has clearly improved our workflow a lot.\n\nMy complaint wasn't actually targetted at the CF bot. It turns out that\nI gave a talk on Friday at a private EDB mini-conference about the\nPostgreSQL open source process; and while preparing for that one, I\nran some 'git log' commands to obtain the number of code contributors\nfor each release, going back to 9.4 (when we started using the\n'Authors:' tag more prominently). What I saw is a decline in the number\nof unique contributors, from its maximum at version 12, down to the\nnumbers we had in 9.5. We went back 4 years. That scared me a lot.\n\nSo I started a conversation about that and some people told me that it's\nvery easy to be discouraged by our process. I don't need to mention\nthat it's antiquated -- this in itself turns off youngsters. But in\naddition to that, I think newbies might be discouraged because their\ncontributions seem to go nowhere even after following the process.\n\nThis led me to suggesting that perhaps we need to be more lenient when\nit comes to new contributors. As I said, for seasoned contributors,\nit's not a problem to keep up with our requirements, however silly they\nare. 
But people who spend their evenings a whole week or month trying\nto understand how to patch for one thing that they want, to be received\nby six months of silence followed by a constant influx of \"please rebase\nplease rebase please rebase\", no useful feedback, and termination with\n\"eh, you haven't rebased for the 1001th time, your patch has been WoA\nfor X days, we're setting it RwF, feel free to return next year\" ...\nthey are most certainly off-put and will *not* try again next year.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n",
"msg_date": "Tue, 21 Mar 2023 10:59:20 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 05:59, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Mar-20, Thomas Munro wrote:\n>\n> This led me to suggesting that perhaps we need to be more lenient when\n> it comes to new contributors. As I said, for seasoned contributors,\n> it's not a problem to keep up with our requirements, however silly they\n> are. But people who spend their evenings a whole week or month trying\n> to understand how to patch for one thing that they want, to be received\n> by six months of silence followed by a constant influx of \"please rebase\n> please rebase please rebase\", no useful feedback, and termination with\n> \"eh, you haven't rebased for the 1001th time, your patch has been WoA\n> for X days, we're setting it RwF, feel free to return next year\" ...\n> they are most certainly off-put and will *not* try again next year.\n\nI feel like the \"no useful feedback\" is the real problem though. If\nthe patches had been reviewed in earlier commitfests the original\ncontributor would still have been around to finish it... Like, I think\nwhat would actually solve this problem would be if we kept a \"clean\"\nhouse where patches were committed within one or two commitfests\nrather than dragged forward until the final commitfest.\n\nI do agree though. It would be nice if it was easier for anyone to do\ntrivial merges and update the commitfest entry. That's the kind of\nthing gitlab/github are better positioned to solve when they can have\nintegral editors and built-in CI...\n\nI haven't been RwF or moving to the next commitfest when the merge\nlooked trivial. And in one case I actually did the merge myself :) But\nthat only goes so far. If the merge actually requires understanding\nthe patch in depth then the counter-argument is that the committer\nmight be spending a lot of time on a patch that won't get committed\nwhile others sit ignored entirely.\n\n--\ngreg\n\n\n",
"msg_date": "Tue, 21 Mar 2023 23:07:28 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "So a week later\n\n Status summary: March 15 March 22\n Needs review: 152 128\n Waiting on Author: 42 36\n Ready for Committer: 39 32\n Committed: 61 82\n Moved to next CF: 4 15\n Withdrawn: 17 16 (?)\n Rejected: 0 5\n Returned with Feedback: 4 5\n Total: 319.\n\n\nThese patches that are \"Needs Review\" and have received no comments at\nall since before March 1st are now below. There are about 20 fewer\nsuch patches than there were last week.\n\nNo emails since August-December 2022:\n\n* New hooks in the connection path\n* Add log messages when replication slots become active and inactive\n* Remove dead macro exec_subplan_get_plan\n* Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n* pg_rewind WAL deletion pitfall\n* Simplify find_my_exec by using realpath(3)\n* Move backup-related code to xlogbackup.c/.h\n* Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\nshowing metadata ...)\n* Fix bogus error emitted by pg_recvlogical when interrupted\n* Check consistency of GUC defaults between .sample.conf and\npg_settings.boot_val\n* Code checks for App Devs, using new options for transaction behavior\n* Lockless queue of waiters based on atomic operations for LWLock\n* Fix assertion failure with next_phase_at in snapbuild.c\n* Add sortsupport for range types and btree_gist\n* asynchronous execution support for Custom Scan\n* CREATE INDEX CONCURRENTLY on partitioned table\n* Partial aggregates push down\n* Non-replayable WAL records through overflows and >MaxAllocSize lengths\n\nNo emails since January 2023\n\n* Enable jitlink as an alternative jit linker of legacy Rtdyld and add\nriscv jitting support\n* basebackup: support zstd long distance matching\n* pgbench - adding pl/pgsql versions of tests\n* Function to log backtrace of postgres processes\n* More scalable multixacts buffers and locking\n* COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns\n* postgres_fdw: commit remote (sub)transactions in parallel during 
pre-commit\n* Add semi-join pushdown to postgres_fdw\n* Skip replicating the tables specified in except table option\n* Post-special Page Storage TDE support\n* Direct I/O (developer-only feature)\n* Improve doc for autovacuum on partitioned tables\n* Set arbitrary GUC options during initdb\n* An attempt to avoid\nlocally-committed-but-not-replicated-to-standby-transactions in\nsynchronous replication\n* Check lateral references within PHVs for memoize cache keys\n* monitoring usage count distribution\n* Reduce wakeup on idle for bgwriter & walwriter for >5s\n* Report the query string that caused a memory error under Valgrind\n\nNo emails since February 2023\n\n* New [relation] options engine\n* possibility to take name, signature and oid of currently executed\nfunction in GET DIAGNOSTICS statement\n* Named Operators\n* nbtree performance improvements through specialization on key shape\n* Fix assertion failure in SnapBuildInitialSnapshot()\n* Speed up releasing of locks\n* Improve pg_bsd_indent's handling of multiline initialization expressions\n* User functions for building SCRAM secrets\n* Refactoring postgres_fdw/connection.c\n* Add pg_stat_session\n* Doc: Improve note about copying into postgres_fdw foreign tables in batch\n* archive modules loose ends\n* Fix dsa_free() to re-bin segment\n* Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n* clean up permission checks after 599b33b94\n* Some revises in adding sorting path\n* ResourceOwner refactoring\n* Fix the description of GUC \"max_locks_per_transaction\" and\n\"max_pred_locks_per_transaction\" in guc_table.c\n* some namespace.c refactoring\n* Add function to_oct\n* Switching XLog source from archive to streaming when primary available\n* Dynamic result sets from procedures\n* BRIN - SK_SEARCHARRAY and scan key preprocessing\n* Reuse Workers and Replication Slots during Logical Replication\n\n\n",
"msg_date": "Wed, 22 Mar 2023 00:05:38 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 10:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I gave a talk on Friday at a private EDB mini-conference about the\n> PostgreSQL open source process; and while preparing for that one, I\n> ran some 'git log' commands to obtain the number of code contributors\n> for each release, going back to 9.4 (when we started using the\n> 'Authors:' tag more prominently). What I saw is a decline in the number\n> of unique contributors, from its maximum at version 12, down to the\n> numbers we had in 9.5. We went back 4 years. That scared me a lot.\n\nCan you share the subtotals?\n\nOne immediate thought about commit log-based data is that we're not\nusing git Author, and the Author footer convention is only used by\nsome committers. So I guess it must have been pretty laborious to\nread the prose-form data? We do have machine-readable Discussion\nfooters though. By scanning those threads for SMTP From headers on\nmessages that had patches attached, we can find the set of (distinct)\naddresses that contributed to each commit. (I understand that some\npeople are co-authors and may not send an email, but if you counted\nthose and I didn't then you counted more, not fewer, contributors I\nguess? 
On the other hand if someone posted a patch that wasn't used\nin the commit, or posted from two home/work/whatever accounts that's a\nfalse positive for my technique.)\n\nIn a quick and dirty attempt at this made from bits of Python I\nalready had lying around (which may of course later turn out to be\nflawed and need refinement), I extracted, for example:\n\npostgres=# select * from t where commit =\n'8d578b9b2e37a4d9d6f422ced5126acec62365a7';\n commit | time |\n address\n------------------------------------------+------------------------+----------------------------------------------\n 8d578b9b2e37a4d9d6f422ced5126acec62365a7 | 2023-03-21 14:29:34+13 |\nMelanie Plageman <melanieplageman@gmail.com>\n 8d578b9b2e37a4d9d6f422ced5126acec62365a7 | 2023-03-21 14:29:34+13 |\nThomas Munro <thomas.munro@gmail.com>\n(2 rows)\n\nYou can really only go back about 5-7 years before that technique runs\nout of steam, as the links run out. For what they're worth, these\nnumbers seem to suggests around ~260 distinct email addresses send\npatches to threads referenced by commits. Maybe we're in a 3-year\nlong plateau, but I don't see a peak back in r12:\n\npostgres=# select date_trunc('year', time), count(distinct address)\nfrom t group by 1 order by 1;\n date_trunc | count\n------------------------+-------\n 2015-01-01 00:00:00+13 | 13\n 2016-01-01 00:00:00+13 | 37\n 2017-01-01 00:00:00+13 | 144\n 2018-01-01 00:00:00+13 | 187\n 2019-01-01 00:00:00+13 | 225\n 2020-01-01 00:00:00+13 | 260\n 2021-01-01 00:00:00+13 | 256\n 2022-01-01 00:00:00+13 | 262\n 2023-01-01 00:00:00+13 | 119\n(9 rows)\n\nOf course 2023 is only just getting started. 
Zooming in closer, the\npeak period for this measurement is March/April, as I guess a lot of\nlittle things make it into the final push:\n\npostgres=# select date_trunc('month', time), count(distinct address)\nfrom t where time > '2021-01-01' group by 1 order by 1;\n date_trunc | count\n------------------------+-------\n 2021-01-01 00:00:00+13 | 83\n 2021-02-01 00:00:00+13 | 70\n 2021-03-01 00:00:00+13 | 100\n 2021-04-01 00:00:00+13 | 109\n 2021-05-01 00:00:00+12 | 54\n 2021-06-01 00:00:00+12 | 82\n 2021-07-01 00:00:00+12 | 86\n 2021-08-01 00:00:00+12 | 83\n 2021-09-01 00:00:00+12 | 73\n 2021-10-01 00:00:00+13 | 68\n 2021-11-01 00:00:00+13 | 66\n 2021-12-01 00:00:00+13 | 48\n 2022-01-01 00:00:00+13 | 68\n 2022-02-01 00:00:00+13 | 73\n 2022-03-01 00:00:00+13 | 110\n 2022-04-01 00:00:00+13 | 90\n 2022-05-01 00:00:00+12 | 47\n 2022-06-01 00:00:00+12 | 50\n 2022-07-01 00:00:00+12 | 72\n 2022-08-01 00:00:00+12 | 81\n 2022-09-01 00:00:00+12 | 105\n 2022-10-01 00:00:00+13 | 68\n 2022-11-01 00:00:00+13 | 74\n 2022-12-01 00:00:00+13 | 58\n 2023-01-01 00:00:00+13 | 65\n 2023-02-01 00:00:00+13 | 61\n 2023-03-01 00:00:00+13 | 64\n(27 rows)\n\nPerhaps the present March is looking a little light compared to the\nusual 100+ number, but actually if you take just the 1st to the 21st\nof previous Marches, they were similar sorts of numbers.\n\npostgres=# select date_trunc('month', time), count(distinct address)\n from t\n where (time >= '2022-03-01' and time <= '2022-03-21') or\n (time >= '2021-03-01' and time <= '2021-03-21') or\n (time >= '2020-03-01' and time <= '2020-03-21') or\n (time >= '2019-03-01' and time <= '2019-03-21')\n group by 1 order by 1;\n date_trunc | count\n------------------------+-------\n 2019-03-01 00:00:00+13 | 57\n 2020-03-01 00:00:00+13 | 57\n 2021-03-01 00:00:00+13 | 77\n 2022-03-01 00:00:00+13 | 72\n(4 rows)\n\nAnother thing we could count is distinct names in the Commitfest app.\nI count 162 names in Commitfest 42 today. 
Unfortunately I don't have\nthe data to hand to look at earlier Commitfests. That'd be\ninteresting. I've plotted that before back in 2018 for some\nconference talk, and it was at ~100 and climbing back then.\n\n> So I started a conversation about that and some people told me that it's\n> very easy to be discouraged by our process. I don't need to mention\n> that it's antiquated -- this in itself turns off youngsters. But in\n> addition to that, I think newbies might be discouraged because their\n> contributions seem to go nowhere even after following the process.\n\nI don't disagree with your sentiment, though.\n\n> This led me to suggesting that perhaps we need to be more lenient when\n> it comes to new contributors. As I said, for seasoned contributors,\n> it's not a problem to keep up with our requirements, however silly they\n> are. But people who spend their evenings a whole week or month trying\n> to understand how to patch for one thing that they want, to be received\n> by six months of silence followed by a constant influx of \"please rebase\n> please rebase please rebase\", no useful feedback, and termination with\n> \"eh, you haven't rebased for the 1001th time, your patch has been WoA\n> for X days, we're setting it RwF, feel free to return next year\" ...\n> they are most certainly off-put and will *not* try again next year.\n\nRight, that is pretty discouraging.\n\n\n",
"msg_date": "Wed, 22 Mar 2023 18:45:40 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Mar 21, 2023 at 10:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> This led me to suggesting that perhaps we need to be more lenient when\n>> it comes to new contributors. As I said, for seasoned contributors,\n>> it's not a problem to keep up with our requirements, however silly they\n>> are. But people who spend their evenings a whole week or month trying\n>> to understand how to patch for one thing that they want, to be received\n>> by six months of silence followed by a constant influx of \"please rebase\n>> please rebase please rebase\", no useful feedback, and termination with\n>> \"eh, you haven't rebased for the 1001th time, your patch has been WoA\n>> for X days, we're setting it RwF, feel free to return next year\" ...\n>> they are most certainly off-put and will *not* try again next year.\n\n> Right, that is pretty discouraging.\n\nIt is that. I think that the fundamental problem is that we don't have\nenough reviewing/committing manpower to deal with all this stuff in a\ntimely fashion. That doesn't seem to have an easy fix :-(.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Mar 2023 02:30:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On 21.03.23 10:59, Alvaro Herrera wrote:\n> This led me to suggesting that perhaps we need to be more lenient when\n> it comes to new contributors. As I said, for seasoned contributors,\n> it's not a problem to keep up with our requirements, however silly they\n> are. But people who spend their evenings a whole week or month trying\n> to understand how to patch for one thing that they want, to be received\n> by six months of silence followed by a constant influx of \"please rebase\n> please rebase please rebase\", no useful feedback, and termination with\n> \"eh, you haven't rebased for the 1001th time, your patch has been WoA\n> for X days, we're setting it RwF, feel free to return next year\" ...\n> they are most certainly off-put and will *not* try again next year.\n\nPersonally, if a patch isn't rebased up to the minute, it doesn't bother me \nat all. It's easy to check out as of when the email was sent (or extra \nbonus points for using git format-patch --base). Now, rebasing every \nmonth or so is nice, but daily rebases during a commit fest are almost \nmore distracting than just leaving it.\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:39:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "> On 22 Mar 2023, at 10:39, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> Personally, if a patch isn't rebased up to the minute, it doesn't bother me at all. It's easy to check out as of when the email was sent (or extra bonus points for using git format-patch --base). Now, rebasing every month or so is nice, but daily rebases during a commit fest are almost more distracting than just leaving it.\n\n+1. As long as the patch is rebased and builds/tests green when the CF starts\nI'm not too worried about not having it always rebased during the CF. If\nresolving the conflicts is non-trivial/obvious then of course, but if only to\nstay recent and avoid fuzz in applying then it's more distracting.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:22:44 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "So of the patches with no emails since August-December 2022:\n\n* New hooks in the connection path\n\n* Add log messages when replication slots become active and inactive\n - Peter Smith and Alvaro Herrera have picked up this one\n\n* Remove dead macro exec_subplan_get_plan\n - Minor cleanup\n\n* Consider parallel for LATERAL subqueries having LIMIT/OFFSET\n - No response since Sept 2022. There was quite a lot of discussion\nand Tom Lane and Robert Haas expressed some safety concerns which were\nresponded to but I guess people have put in as much time as they can\nafford on this. I'll mark this Returned with Feedback.\n\n* pg_rewind WAL deletion pitfall\n - I think this is a bug fix to pg_rewind that maybe should be Ready\nfor Committer?\n\n* Simplify find_my_exec by using realpath(3)\n - Tom Lane is author but I don't know if he intends to apply this in\nthis release\n\n* Move backup-related code to xlogbackup.c/.h\n - It looks like neither Alvaro Herrera nor Michael Paquier are\nparticularly convinced this is an improvement and nobody has put more\ntime in this since last October. I'm inclined to mark this Rejected?\n\n* Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\nshowing metadata ...)\n - According to the internet \"As part of their 39 month old\ndevelopment and milestones, your patch should be able to see like an\nadult (20/20 vision), be able to run, walk, hop and balance himself\nwhile standing with one foot quite confidently.\" Can it do all those\nthings yet?\n\n - Should this be broken up into smaller CF entries so at least some\nof them can be Ready for Committer and closed?\n\n> * Fix bogus error emitted by pg_recvlogical when interrupted\n - Is this a minor cleanup?\n\n> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n - It looks like this was pretty active until last October and might\nhave been ready to apply at least partially? 
But no further work or\nreview has happened since.\n\n> * Code checks for App Devs, using new options for transaction behavior\n - This is an interesting set of features from Simon Riggs to handle\n\"dry-run\" style SQL execution by changing the semantics of BEGIN and\nEND and COMMIT. It has feedback from Erik Rijkers and Dilip Kumar but\nI don't think it's gotten any serious design discussion. I posted a\nquick review myself just now but still the point remains.\n\nI think features supporting a \"dry-run\" mode would be great but\njudging by the lack of response this doesn't look like the version of\nthat people are interested in.\n\nI'm inclined to mark this Rejected, even if that's only by default. If\nsomeone is interested in running with this in the future maybe\nReturned with Feedback would be better even if there really wasn't much\nfeedback. In practice it amounts to the same thing I think.\n\n> * Lockless queue of waiters based on atomic operations for LWLock\n - Is this going in this CF? It looks like something we don't want to\nlose though\n\n> * Fix assertion failure with next_phase_at in snapbuild.c\n - It's a bug fix but it doesn't look like the bug has been entirely fixed?\n\n> * Add sortsupport for range types and btree_gist\n - It doesn't look like anyone is interested in reviewing this\npatch. It's been bouncing forward from CF to CF since last August. I'm\nnot sure what to do. Maybe we just have to say it's rejected for lack\nof reviewers interested/competent to review this area of the code.\n\n> * asynchronous execution support for Custom Scan\n - This is a pretty small isolated feature.\n\n> * CREATE INDEX CONCURRENTLY on partitioned table\n - I'm guessing this patch is too big and too late to go in this CF.\nAnd it sounds like there's still work to be done? 
Should this be\nmarked RwF?\n\n> * Partial aggregates push down\n\n - I'm not sure what the state of this is, it's had about a year and\na half of work and seems to have had real work going into it during\nall that time. It's just a big change. Is it ready for commit or are\nthere still open questions? Is it for this release or next release?\n\n> * Non-replayable WAL records through overflows and >MaxAllocSize lengths\n\n - Andres says it's a bug fix\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 23 Mar 2023 16:41:39 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> * Simplify find_my_exec by using realpath(3)\n> - Tom Lane is author but I don't know if he intends to apply this in\n> this release\n\nI'd like to get it done. It's currently stuck because Peter asked\nfor some behavioral changes that I was dubious about, and we're trying\nto come to a mutually-agreeable idea. However ... the only relation\nbetween that issue and what I actually want to do is that it's touching\nthe same code. Maybe we should put the behavioral-change ideas on the\nshelf and just get the realpath() reimplementation done for now.\nI'm getting antsy about letting that wait much longer, because it's\nentirely possible that there will be exciting portability problems;\nwaiting till almost feature freeze to find out doesn't seem wise.\n\n>> * Code checks for App Devs, using new options for transaction behavior\n\n> I think features supporting a \"dry-run\" mode would be great but\n> judging by the lack of response this doesn't look like the version of\n> that people are interested in.\n\nThis version of it seems quite unsafe. I wonder if we could have some\nsession-level property that says \"no changes made by this session will\nactually get committed, but it will look to the session as if they did\"\n(thus fixing the inability to test DDL that you noted). I'm not sure\nhow well that could work, or what leakiness there might be in the\nabstraction. There would probably be locking oddities at the least.\n\nI agree with RwF, or maybe Withdrawn given that Simon's retired?\nThe same for his other patches --- somebody else will need to push\nthem forward if anything's to come of them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Mar 2023 17:05:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 04:41:39PM -0400, Greg Stark wrote:\n> * Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\n> showing metadata ...)\n> - According to the internet \"As part of their 39 month old\n> development and milestones, your patch should be able to see like an\n> adult (20/20 vision), be able to run, walk, hop and balance himself\n> while standing with one foot quite confidently.\" Can it do all those\n> things yet?\n\nIt's not a large patch, and if you read the summaries that I've written,\nyou'll see that I've presented it as several patches specifically to\nallow the essential, early patches to progress; the later patches are\noptional - I stopped sending them since people were evidently distracted\nby the optional patches at the exclusion of the essential patches.\n\n> - Should this be broken up into smaller CF entries so at least some\n> of them can be Ready for Committer and closed?\n\nOpening and closing CF entries sounds like something which would\nmaximize the administrative overhead of the process rather than\nprogressing the patch.\n\n> > * CREATE INDEX CONCURRENTLY on partitioned table\n> - I'm guessing this patch is too big and too late to go in this CF.\n> And it sounds like there's still work to be done? Should this be\n> marked RwF?\n\nIf you look, you'll see that it's straightforward and *also* small.\nAs I wrote last week, it's very viable for v16.\n\nOn Fri, Mar 17, 2023 at 09:56:10AM -0500, Justin Pryzby wrote:\n> > * CREATE INDEX CONCURRENTLY on partitioned table\n> \n> My patch. I think there's agreement that this patch is ready, except\n> that it's waiting on the bugfix for the progress reporting patch. IDK\n> if there's interest in this, but it'd be a good candidate for v16.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 23 Mar 2023 17:19:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 5:42 AM Greg Stark <stark@mit.edu> wrote:\n>\n>\n> > * Fix assertion failure with next_phase_at in snapbuild.c\n> - It's a bug fix but it doesn't look like the bug has been entirely fixed?\n>\n\nWe have a patch fixing the issue and reproducible steps. Another bug\nwas reported late in the discussion but it was the same issue as CF\nitem \"Assertion failure in SnapBuildInitialSnapshot()\".\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:48:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 04:41:39PM -0400, Greg Stark wrote:\n> * Move backup-related code to xlogbackup.c/.h\n> - It looks like neither Alvaro Herrera nor Michael Paquier are\n> particularly convinced this is an improvement and nobody has put more\n> time in this since last October. I'm inclined to mark this Rejected?\n\nAgreed.\n\n>> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n> - It looks like this was pretty active until last October and might\n> have been ready to apply at least partially? But no further work or\n> review has happened since.\n\nFWIW, I don't find much appealing the addition of two GUC flags for\nonly the sole purpose of that, particularly as we get a stronger\ndependency between GUCs that can be switched dynamically at\ninitialization and at compile-time.\n\n>> * Non-replayable WAL records through overflows and >MaxAllocSize lengths\n> \n> - Andres says it's a bug fix\n\nIt is a bug fix. Something I would dare backpatch? Perhaps not. The\nhardcoded limit of MaxXLogRecordSize makes me feel a bit\nuncomfortable, though perhaps we could live with that.\n--\nMichael",
"msg_date": "Fri, 24 Mar 2023 10:24:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 10:24:43AM +0900, Michael Paquier wrote:\n> >> * Check consistency of GUC defaults between .sample.conf and pg_settings.boot_val\n> > - It looks like this was pretty active until last October and might\n> > have been ready to apply at least partially? But no further work or\n> > review has happened since.\n> \n> FWIW, I don't find much appealing the addition of two GUC flags for\n> only the sole purpose of that,\n\nThe flags seem independently interesting - adding them here follows\na suggestion Andres made in response to your complaint.\n20220713234900.z4rniuaerkq34s4v@awork3.anarazel.de\n\n> particularly as we get a stronger\n> dependency between GUCs that can be switched dynamically at\n> initialization and at compile-time.\n\nWhat do you mean by \"stronger dependency between GUCs\" ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 23 Mar 2023 20:59:57 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 5:42 AM Greg Stark <stark@mit.edu> wrote:\n> > * asynchronous execution support for Custom Scan\n> - This is a pretty small isolated feature.\n\nI was planning to review this patch, but unfortunately, I did not have\ntime for that, and I do not think I will for v16. So if anyone wants\nto work on this, please do so; if not, I want to in the next\ndevelopment cycle for v17.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Sat, 25 Mar 2023 18:24:57 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Status summary:\n Needs review: 116.\n Waiting on Author: 30.\n Ready for Committer: 32.\n Committed: 94.\n Moved to next CF: 17.\n Returned with Feedback: 6.\n Rejected: 6.\n Withdrawn: 18.\nTotal: 319.\n\n\n\nOk, here are the patches that have been stuck in \"Waiting\non Author\" for a while. I divided them into three groups.\n\n* The first group have been stuck for over a week and mostly look like\n they should be RwF. Some I guess just moved to next CF. But some of\n them I'm uncertain if I should leave or if they really should be RfC\n or NR.\n\n* The other two groups have had some updates in the last week\n (actually I used 10 days). But some of them still look like they're\n pretty much dead for this CF and should either be moved forward or\n RwF or Rejected.\n\nSo here's the triage list. I'm going to send emails and start clearing\nout the patches pretty much right away. Some of these are pretty\nclearcut.\n\n\nNothing in over a week:\n----------------------\n\n* Better infrastructure for automated testing of concurrency issues\n\n- Consensus that this is desirable. But it's not clear what it's\n actually waiting on Author for. RwF?\n\n* Provide the facility to set binary format output for specific OID's\nper session\n\n- I think Dave was looking for feedback and got it from Tom and\n Peter. I don't actually see a specific patch here but there are two\n patches linked in the original message. There seems to be enough\n feedback to proceed but nobody's working on it. RwF?\n\n* pg_visibility's pg_check_visible() yields false positive when\nworking in parallel with autovacuum\n\n- Bug, but tentatively a false positive...\n\n* CAST( ... ON DEFAULT)\n\n- \"it'll have to wait till there's something solid from the committee\"\n -- Rejected?\n\n* Fix tab completion MERGE\n\n- Partly committed but\n v9-0002-psql-Add-PartialMatches-macro-for-better-tab-complet.patch\n remains. There was a review from Dean Rasheed. 
Move to next CF?\n\n* Fix recovery conflict SIGUSR1 handling\n\n- This looks like a suite of bug fixes and looks like it should be\n Needs Review or Ready for Commit\n\n* Prefetch the next tuple's memory during seqscans\n\n- David Rowley said it was dependent on \"heapgettup() refactoring\"\n which has been refactored. So is it now Needs Review or Ready for\n Commit? Is it likely to happen this CF?\n\n* Pluggable toaster\n\n- This seems to have digressed from the original patch. There were\n patches early on and a lot of feedback. Is the result that the\n original patches are Rejected or are they still live?\n\n* psql - refactor echo code\n\n- \"I think this patch requires an up-to-date summary and explanation\"\n from Peter. But it seems like Tom was ok with it and just had some\n additional improvements he wanted that were added. It sounds like\n this might be \"Ready for Commit\" if someone familiar with the patch\n looked at it.\n\n* Push aggregation down to base relations and joins\n\n- Needs a significant rebase (since March 1).\n\n* Remove self join on a unique column\n\n- An offer of a Bounty! There was one failing test which was\n apparently fixed? But it looks like this should be in Needs Review\n or Ready for Commit.\n\n* Split index and table statistics into different types of stats\n\n- Was stuck on \"Generate pg_stat_get_xact*() functions with Macros\"\n which was committed. So \"Ready for Commit\" now?\n\n* Default to ICU during initdb\n\n- Partly committed, 0001 waiting until after CF\n\n* suppressing useless wakeups in logical/worker.c\n\n- Got feedback March 17. 
Doesn't look like it's going to be ready this CF.\n\n* explain analyze rows=%.0f\n\n- Patch updated January, but I think Tom still had some simple if\n tedious changes he asked for\n\n* Fix order of checking ICU options in initdb and create database\n\n- Feedback last November but no further questions or progress\n\n* Introduce array_shuffle() and array_sample() functions\n\n- Feedback from Tom last September. No further questions or progress\n\n\n\nStatus Updates in last week:\n----------------------------\n\n* Some revises in adding sorting path\n\n- Got feedback Feb 21 and author responded but doesn't look like it's\n going to be ready this CF\n\n* Add TAP tests for psql \\g piped into program\n\n- Peter Eisentraut asked for a one-line change, otherwise it looks\n like it's Ready for Commit?\n\n* Improvements to Meson docs\n\n- Some feedback March 15 but no response. I assume this is still in\n play\n\n\n\nEmails in last week:\n-------------------\n\n* RADIUS tests and improvements\n\n- last feedback March 20, last patch March 4. Should probably be moved\n to the next CF unless there's progress soon.\n\n* Direct SSL Connections\n\n- (This is mine) Code for SSL is pretty finished. The last patch for\n ALPN support needs a bit of polish. I'll be doing that momentarily.\n\n* Fix alter subscription concurrency errors\n\n- \"patch as-submitted is pretty uninteresting\" and \"patch that I don't\n care much about\" ... I guess this is Rejected or Withdrawn\n\n* Fix improper qual pushdown after applying outer join identity 3\n\n- Tom Lane's patch. Active discussion as of March 21.\n\n* Error \"initial slot snapshot too large\" in create replication slot\n\n- Active discussion as of March 24. Is this now Needs Review or Ready\n for Committer?\n\n* Transparent column encryption\n\n- Active discussion as of March 24\n\n* Make ON_ERROR_STOP stop on shell script failure\n\n- I rebased this but I think it needs a better review. 
I may have a\n chance to do that or someone else could. The original author\n bt22nakamorit@oss.nttdata.com seems to have disappeared but the\n patch seems to be perhaps committable?\n\n* pg_stats and range statistics\n\n- Updated patch as of March 24, should be Needs Review I guess?\n\n* TDE key management patches\n\n- Actively under discussion\n\n* Reconcile stats in find_tabstat_entry() and get rid of\nPgStat_BackendFunctionEntry\n\n- Actively under discussion\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:12:12 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Hi,\n\n> * Pluggable toaster\n\n> - This seems to have digressed from the original patch. There were\n> patches early on and a lot of feedback. Is the result that the\n> original patches are Rejected or are they still live?\n\nWe agreed to work on a completely new RFC which is currently discussed\nwithin the \"Compression dictionaries\" thread, see [1][2] and below. I\nguess it means either Rejected or RwF.\n\n[1]: https://www.postgresql.org/message-id/20230203095540.zutul5vmsbmantbm%40alvherre.pgsql\n[2]: https://www.postgresql.org/message-id/20230203095658.imkcw2sypawe3py3%40alvherre.pgsql\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 29 Mar 2023 12:03:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Only a few more days remain before feature freeze. We've now crossed\nover 100 patches committed according to the CF app:\n\n Status summary: March 15 March 22 March 28 April 4\n Needs review: 152 128 116 96\n Waiting on Author: 42 36 30 21\n Ready for Committer: 39 32 32 35\n Committed: 61 82 94 101\n Moved to next CF: 4 15 17 28\n Withdrawn: 17 16 18 20\n Rejected: 0 5 6 8\n Returned with Feedback: 4 5 6 10\n Total: 319.\n\nPerhaps more importantly we've crossed *under* 100 patches waiting for review.\n\nHowever I tried to do a pass of the Needs Review patches and found\nthat a lot of them were really waiting for comment and seemed to be\nuseful features or bug fixes that already had a significant amount of\nwork put into them and seemed likely to get committed if there was\ntime available to work on them.\n\nThere seems to be a bit of a mix of either\n\na) patches that just never got any feedback -- in some cases\npresumably because the patch required special competency in a niche\narea\n\nor\n\nb) patches that had active discussion and patches being updated until\ndiscussion died out. 
Presumably because the author either got busy\nelsewhere or perhaps the discussion seemed unproductive and exhausted\nthem.\n\nWhat I didn't see, that I expected to see, was patches that were just\nuninteresting to anyone other than the author but that people were\njust too polite to reject.\n\nSo I think these patches are actually useful patches that we would want\nto have but are likely, modulo some bug fixes, to get moved along to\nthe next CF again without any progress this CF:\n\n\n* Remove dead macro exec_subplan_get_plan\n* pg_rewind WAL deletion pitfall\n* Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\nshowing metadata ...)\n* Fix bogus error emitted by pg_recvlogical when interrupted\n* Lockless queue of waiters based on atomic operations for LWLock\n* Add sortsupport for range types and btree_gist\n* Enable jitlink as an alternative jit linker of legacy Rtdyld and add\nriscv jitting support\n* basebackup: support zstd long distance matching\n* Function to log backtrace of postgres processes\n* More scalable multixacts buffers and locking\n* COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns\n* Add semi-join pushdown to postgres_fdw\n* Skip replicating the tables specified in except table option\n* Post-special Page Storage TDE support\n* Direct I/O (developer-only feature)\n* Improve doc for autovacuum on partitioned tables\n* An attempt to avoid\nlocally-committed-but-not-replicated-to-standby-transactions in\nsynchronous replication\n* Check lateral references within PHVs for memoize cache keys\n* monitoring usage count distribution\n* New [relation] options engine\n* nbtree performance improvements through specialization on key shape\n* Fix assertion failure in SnapBuildInitialSnapshot()\n* Speed up releasing of locks\n* Improve pg_bsd_indent's handling of multiline initialization expressions\n* Refactoring postgres_fdw/connection.c\n* Add pg_stat_session\n* archive modules loose ends\n* Fix dsa_free() to re-bin segment\n* 
Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n* clean up permission checks after 599b33b94\n* Fix the description of GUC \"max_locks_per_transaction\" and\n\"max_pred_locks_per_transaction\" in guc_table.c\n* some namespace.c refactoring\n* Add function to_oct\n* Switching XLog source from archive to streaming when primary available\n* BRIN - SK_SEARCHARRAY and scan key preprocessing\n* Reuse Workers and Replication Slots during Logical Replication\n\n\n",
"msg_date": "Tue, 4 Apr 2023 11:04:00 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> However I tried to do a pass of the Needs Review patches and found\n> that a lot of them were really waiting for comment and seemed to be\n> useful features or bug fixes that already had a significant amount of\n> work put into them and seemed likely to get committed if there was\n> time available to work on them.\n\nYeah, we just don't have enough people ...\n\n> So I think these patches are actual useful patches that we would want\n> to have but are likely, modulo some bug fixes, to get moved along to\n> the next CF again without any progress this CF:\n\nI have comments on a few of these:\n\n> * Remove dead macro exec_subplan_get_plan\n\nTBH, I'd reject this one as not being worth the trouble.\n\n> * monitoring usage count distribution\n\nAnd I'm dubious how many people care about this, either.\n\n> * Improve pg_bsd_indent's handling of multiline initialization expressions\n\nThis is going to go in once the commit fest is done; we're just holding\noff to avoid creating merge issues during the CF time crunch.\n\n> * clean up permission checks after 599b33b94\n\nI believe that the actual bug fixes are in, and what's left is just a test\ncase that people weren't very excited about adding. So maybe this should\nget closed out as committed.\n\nPerhaps we'll get some of the others done by the end of the week.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Apr 2023 11:18:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Tue, 4 Apr 2023 at 11:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > * clean up permission checks after 599b33b94\n>\n> I believe that the actual bug fixes are in, and what's left is just a test\n> case that people weren't very excited about adding. So maybe this should\n> get closed out as committed.\n\nI'm not super convinced about this one. I'm not a big \"all tests are\ngood tests\" believer but this test seems like a pretty reasonable one.\nPermissions checks and user mappings are user-visible behaviour that\nare easy to overlook when making changes with unexpected side effects.\n\nIt seems like the test would be just as easy to commit as to not\ncommit and I don't see anything tricky about it that would necessitate\na more in depth review.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 4 Apr 2023 14:36:01 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "> On 4 Apr 2023, at 20:36, Greg Stark <stark@mit.edu> wrote:\n> \n> On Tue, 4 Apr 2023 at 11:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>>> * clean up permission checks after 599b33b94\n>> \n>> I believe that the actual bug fixes are in, and what's left is just a test\n>> case that people weren't very excited about adding. So maybe this should\n>> get closed out as committed.\n> \n> I'm not super convinced about this one. I'm not a big \"all tests are\n> good tests\" believer but this test seems like a pretty reasonable one.\n> Permissions checks and user mappings are user-visible behaviour that\n> are easy to overlook when making changes with unexpected side effects.\n> \n> It seems like the test would be just as easy to commit as to not\n> commit and I don't see anything tricky about it that would necessitate\n> a more in depth review.\n\nAgreed, I think this test has value and don't see a strong reason not to commit\nit.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 09:49:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "As announced on this list feature freeze is at 00:00 April 8 AoE.\nThat's less than 24 hours away. If you need to set your watches to AoE\ntimezone it's currently:\n\n$ TZ=AOE+12 date\nFri 07 Apr 2023 02:05:50 AM AOE\n\nAs we stand we have:\n\nStatus summary:\n Needs review: 82\n Waiting on Author: 16\n Ready for Committer: 27\n Committed: 115\n Moved to next CF: 38\n Returned with Feedback: 10\n Rejected: 9\n Withdrawn: 22\nTotal: 319.\n\nIn less than 24h most of the remaining patches will get rolled forward\nto the next CF. The 16 that are Waiting on Author might be RwF\nperhaps. The only exceptions would be non-features like Bug Fixes and\ncleanup patches that have been intentionally held until the end --\nthose become Open Issues for the release.\n\nSo if we move forward all the remaining patches (so these numbers are\nhigh by about half a dozen) the *next* CF would look like:\n\nCommitfest 2023-07: Now April 8\n Needs review: 46. 128\n Waiting on Author: 17. 33\n Ready for Committer: 3. 30\nTotal: 66 191\n\nI suppose that's better than the 319 we came into this CF with but\nthere's 3 months to accumulate more unreviewed patches...\n\nI had hoped to find lots of patches that I could bring the hammer down\non and say there's just no interest in or there's no author still\nmaintaining. But that wasn't the case. Nearly all the patches still\nhad actively interested authors and looked like they were legitimately\ninteresting and worthwhile features that people just haven't had the\ntime to review or commit.\n\n\n--\ngreg\n\n\n",
"msg_date": "Fri, 7 Apr 2023 10:20:56 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 10:21 AM Greg Stark <stark@mit.edu> wrote:\n\n> As announced on this list feature freeze is at 00:00 April 8 AoE.\n> That's less than 24 hours away. If you need to set your watches to AoE\n> timezone it's currently:\n>\n> $ TZ=AOE+12 date\n> Fri 07 Apr 2023 02:05:50 AM AOE\n>\n> As we stand we have:\n>\n> Status summary:\n> Needs review: 82\n> Waiting on Author: 16\n> Ready for Committer: 27\n> Committed: 115\n> Moved to next CF: 38\n> Returned with Feedback: 10\n> Rejected: 9\n> Withdrawn: 22\n> Total: 319.\n>\n> In less than 24h most of the remaining patches will get rolled forward\n> to the next CF. The 16 that are Waiting on Author might be RwF\n> perhaps. The only exceptions would be non-features like Bug Fixes and\n> cleanup patches that have been intentionally held until the end --\n> those become Open Issues for the release.\n>\n> So if we move forward all the remaining patches (so these numbers are\n> high by about half a dozen) the *next* CF would look like:\n>\n> Commitfest 2023-07: Now April 8\n> Needs review: 46. 128\n> Waiting on Author: 17. 33\n> Ready for Committer: 3. 30\n> Total: 66 191\n>\n> I suppose that's better than the 319 we came into this CF with but\n> there's 3 months to accumulate more unreviewed patches...\n>\n> I had hoped to find lots of patches that I could bring the hammer down\n> on and say there's just no interest in or there's no author still\n> maintaining. But that wasn't the case. Nearly all the patches still\n> had actively interested authors and looked like they were legitimately\n> interesting and worthwhile features that people just haven't had the\n> time to review or commit.\n>\n>\n> --\n> greg\n>\n> The %T added to the PSQL Prompt is about 5 lines of code. 
Reviewed and\nReady to commit.\nThat could knock one more off really quickly :-)\n\nExcellent work to everyone.\n\nThanks, Kirk\n\nOn Fri, Apr 7, 2023 at 10:21 AM Greg Stark <stark@mit.edu> wrote:As announced on this list feature freeze is at 00:00 April 8 AoE.\nThat's less than 24 hours away. If you need to set your watches to AoE\ntimezone it's currently:\n\n$ TZ=AOE+12 date\nFri 07 Apr 2023 02:05:50 AM AOE\n\nAs we stand we have:\n\nStatus summary:\n Needs review: 82\n Waiting on Author: 16\n Ready for Committer: 27\n Committed: 115\n Moved to next CF: 38\n Returned with Feedback: 10\n Rejected: 9\n Withdrawn: 22\nTotal: 319.\n\nIn less than 24h most of the remaining patches will get rolled forward\nto the next CF. The 16 that are Waiting on Author might be RwF\nperhaps. The only exceptions would be non-features like Bug Fixes and\ncleanup patches that have been intentionally held until the end --\nthose become Open Issues for the release.\n\nSo if we move forward all the remaining patches (so these numbers are\nhigh by about half a dozen) the *next* CF would look like:\n\nCommitfest 2023-07: Now April 8\n Needs review: 46. 128\n Waiting on Author: 17. 33\n Ready for Committer: 3. 30\nTotal: 66 191\n\nI suppose that's better than the 319 we came into this CF with but\nthere's 3 months to accumulate more unreviewed patches...\n\nI had hoped to find lots of patches that I could bring the hammer down\non and say there's just no interest in or there's no author still\nmaintaining. But that wasn't the case. Nearly all the patches still\nhad actively interested authors and looked like they were legitimately\ninteresting and worthwhile features that people just haven't had the\ntime to review or commit.\n\n\n--\ngregThe %T added to the PSQL Prompt is about 5 lines of code. Reviewed and Ready to commit.That could knock one more off really quickly :-)Excellent work to everyone.Thanks, Kirk",
"msg_date": "Fri, 7 Apr 2023 18:01:30 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Kirk Wolak <wolakk@gmail.com> writes:\n> The %T added to the PSQL Prompt is about 5 lines of code. Reviewed and\n> Ready to commit.\n> That could knock one more off really quickly :-)\n\nI'm still objecting to it, for the same reason as before.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 18:29:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Apr 7, 2023 at 6:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Kirk Wolak <wolakk@gmail.com> writes:\n> > The %T added to the PSQL Prompt is about 5 lines of code. Reviewed and\n> > Ready to commit.\n> > That could knock one more off really quickly :-)\n>\n> I'm still objecting to it, for the same reason as before.\n>\n> regards, tom lane\n>\n\nTom,\n I got no response to my point that the backquote solution is cumbersome\nbecause I have to use* psql in both windows*\n*and in linux environments* (realizing I am the odd duck in this group).\nBut my fall back was a common script file. Then I shared my\npsqlrc file with a co-worker, and they ran into the missing script file.\n[ie, the same command does not work in both systems].\n\n I won't argue beyond this point, I'd just like to hear that you\nconsidered this final point...\nand I can move on.\n\nThanks, Kirk\n\nOn Fri, Apr 7, 2023 at 6:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Kirk Wolak <wolakk@gmail.com> writes:\n> The %T added to the PSQL Prompt is about 5 lines of code. Reviewed and\n> Ready to commit.\n> That could knock one more off really quickly :-)\n\nI'm still objecting to it, for the same reason as before.\n\n regards, tom laneTom, I got no response to my point that the backquote solution is cumbersome because I have to use psql in both windowsand in linux environments (realizing I am the odd duck in this group). But my fall back was a common script file. Then I shared mypsqlrc file with a co-worker, and they ran into the missing script file. [ie, the same command does not work in both systems]. I won't argue beyond this point, I'd just like to hear that you considered this final point...and I can move on.Thanks, Kirk",
"msg_date": "Fri, 7 Apr 2023 22:40:01 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "So here we are at the end of the CF:\n\n Status summary: March 15 March 22 March 28 April 4 April 8\n Needs review: 152 128 116 96 74\n Waiting on Author: 42 36 30 21 14\n Ready for Committer: 39 32 32 35 27\n Committed: 61 82 94 101 124\n Moved to next CF: 4 15 17 28 39\n Withdrawn: 17 16 18 20 10\n Rejected: 0 5 6 8 9\n Returned with Feedback: 4 5 6 10 22\n Total: 319.\n\nI'm now going to go through and:\n\na) Mark Waiting on Author any patches that aren't building Waiting on\nAuthor. There was some pushback about asking authors to do trivial\nrebases but in three months it won't make sense to start a CF with\nalready-non-applying patches.\n\nb) Mark any patches that received solid feedback in the last week with\neither Returned with Feedback or Rejected. I think this was already\ndone though with 12 RwF and 1 Rejected in the past four days alone.\n\nc) Pick out the Bug Fixes, cleanup patches, and other non-feature\npatches that might be open issues for v16.\n\nd) Move to Next CF any patches that remain.\n\n-- \ngreg\n\n\n",
"msg_date": "Sat, 8 Apr 2023 11:37:56 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, 8 Apr 2023 at 11:37, Greg Stark <stark@mit.edu> wrote:\n>\n> c) Pick out the Bug Fixes, cleanup patches, and other non-feature\n> patches that might be open issues for v16.\n\nSo on further examination it seems there are multiple category of\npatches that are worth holding onto rather than shifting to the next\nrelease:\n\n* Bug Fixes\n* Documentation patches\n* Build system patches, especially meson-related patches\n* Testing patches, especially if they're testing new features\n* Patches that are altering new features that were committed in this release\n\nSome of these, especially the last category, are challenging for me to\ndetermine. If I move forward a patch of yours that you think makes\nsense to treat as an open issue that should be resolved in this\nrelease then feel free to say.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Sat, 8 Apr 2023 21:45:04 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> Some of these, especially the last category, are challenging for me to\n> determine. If I move forward a patch of yours that you think makes\n> sense to treat as an open issue that should be resolved in this\n> release then feel free to say.\n\nI think that's largely independent. We don't look back at closed-out CFs\nas a kind of TODO list; anything that's left behind there is basically\nnever going to be seen again, until the author makes a new CF entry.\n\nAnything that's to be treated as an open item for v16 should get added\nto the wiki page at\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\nIt's not necessary that a patch exist to do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Apr 2023 21:50:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 10:40:01PM -0400, Kirk Wolak wrote:\n> I got no response to my point that the backquote solution is cumbersome\n> because I have to use* psql in both windows*\n> *and in linux environments* (realizing I am the odd duck in this group).\n> But my fall back was a common script file. Then I shared my\n> psqlrc file with a co-worker, and they ran into the missing script file.\n> [ie, the same command does not work in both systems].\n> \n> I won't argue beyond this point, I'd just like to hear that you\n> considered this final point...\n> and I can move on.\n\nFYI, this specific patch has been moved to the next commit fest of\n2023-07:\nhttps://commitfest.postgresql.org/43/4227/\n\nThis implies that this discussion will be considered for the\ndevelopment cycle of v17, planned to begin in July.\n--\nMichael",
"msg_date": "Mon, 10 Apr 2023 08:46:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Sat, Apr 08, 2023 at 09:50:49PM -0400, Tom Lane wrote:\n> I think that's largely independent. We don't look back at closed-out CFs\n> as a kind of TODO list; anything that's left behind there is basically\n> never going to be seen again, until the author makes a new CF entry.\n> \n> Anything that's to be treated as an open item for v16 should get added\n> to the wiki page at\n> \n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n> \n> It's not necessary that a patch exist to do that.\n\nAnd patches that are marked as bug fixes in the CF app had better not\nbe lost in the long run, even if they are not listed as an open item\nas a live issue. Sometimes people apply \"Bug Fix\" as a category which\nwould imply a backpatch in most cases, and looking at the patch proves\nthat this can be wrong.\n--\nMichael",
"msg_date": "Mon, 10 Apr 2023 08:55:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "So here's the list of CF entries that I thought *might* still get\ncommitted either because they're an Open Issue or they're one of those\nother categories. I had intended to go through and add them all to the\nOpen Issues but it turns out I only feel confident in a couple of them\nqualifying for that:\n\nAlready added:\n* Default to ICU during initdb\n* Assertion failure with parallel full hash join\n* RecoveryConflictInterrupt() is unsafe in a signal handler\n* pg_visibility's pg_check_visible() yields false positive when\nworking in parallel with autovacuum\n\nNot added:\n* Fix bogus error emitted by pg_recvlogical when interrupted\n* clean up permission checks after 599b33b94\n* Fix assertion failure with next_phase_at in snapbuild.c\n* Fix assertion failure in SnapBuildInitialSnapshot()\n* Fix improper qual pushdown after applying outer join identity 3\n* Add document is_superuser\n* Improve doc for autovacuum on partitioned tables\n* Create visible links for HTML elements that have an id to make them\ndiscoverable via the web interface\n* Fix inconsistency in reporting checkpointer stats\n* pg_rewind WAL deletion pitfall\n* Update relfrozenxmin when truncating temp tables\n* Testing autovacuum wraparound\n* Add TAP tests for psql \\g piped into program\n\nI'll move these CF entries to the next CF now. I think they all are\narguably open issues though of varying severity.\n\n-- \ngreg\n\n\n",
"msg_date": "Sun, 9 Apr 2023 20:43:00 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": true,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Not added:\n> * Fix improper qual pushdown after applying outer join identity 3\n\nI already made an open item for that one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Apr 2023 20:49:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Hi,\n \nAfter catching up with this thread, where pending bugs are listed and discussed,\nI wonder if the current patches trying to lower the HashJoin memory explosion[1]\ncould be added to the \"Older bugs affecting stable branches\" list of\nhttps://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items as I think they\ndeserve some discussion/triage for v16?\n\nThe patch as it stands is not invasive. As I wrote previously[2], if we\ncouldn't handle such situation better in v16, and if this patch is not\nbackpatch-able in a minor release, then we will keep living another year, maybe\nmore, with this bad memory behavior. \n\nThank you,\n\n[1] https://www.postgresql.org/message-id/20230408020119.32a0841b%40karst \n[2] https://www.postgresql.org/message-id/20230320151234.38b2235e%40karst\n\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:17:15 +0200",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes:\n> After catching up with this thread, where pending bugs are listed and discussed,\n> I wonder if the current patches trying to lower the HashJoin memory explosion[1]\n> could be added to the \"Older bugs affecting stable branches\" list of\n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items as I think they\n> deserve some discussion/triage for v16?\n\nThey do not. That patch is clearly nowhere near ready to commit, and\neven if it was, I don't think we'd consider it post-feature-freeze.\nAny improvement in this space would be a feature, not a bug fix,\ndespite anyone's attempts to label it a bug fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Apr 2023 09:50:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 9:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> writes:\n> > After catching up with this thread, where pending bugs are listed and discussed,\n> > I wonder if the current patches trying to lower the HashJoin memory explosion[1]\n> > could be added to the \"Older bugs affecting stable branches\" list of\n> > https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items as I think they\n> > deserve some discussion/triage for v16?\n>\n> They do not. That patch is clearly nowhere near ready to commit, and\n> even if it was, I don't think we'd consider it post-feature-freeze.\n> Any improvement in this space would be a feature, not a bug fix,\n> despite anyone's attempts to label it a bug fix.\n\nSo, I think this may be a bit harsh. The second patch in the set only\nmoves hash join batch buffile creation into a more granular memory\ncontext to make it easier to identify instances of this bug (which\ncauses OOMs). It is missing a parallel hash join implementation and a\nbit more review. But it is not changing any behavior.\n\nIf using a separate memory context solely for the purpose of accounting\nis considered an anti-pattern, we could use some arithmetic like\nhash_agg_update_metrics() to calculate how much space is taken up by\nthese temporary file buffers. Ultimately, either method is a relatively\nsmall change (both LOC and impact AFAICT).\n\nCurrently, it isn't possible for a user to understand what is consuming\nso much memory when hash join batch file buffers substantially exceed\nthe size of the actual hashtable. This memory usage is not displayed in\nEXPLAIN ANALYZE or anywhere else. I think adding a debugging message\nwith some advice for is a reasonable concession to the user. This may\nnot constitute a bug \"fix\", but I don't really see how this is a\nfeature.\n\n- Melanie\n\n\n",
"msg_date": "Fri, 21 Apr 2023 12:49:40 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 12:49 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> If using a separate memory context solely for the purpose of accounting\n> is considered an anti-pattern, ...\n\nThis thread isn't the right place to argue about the merits of that\npatch, at least IMHO, but I don't think that's an anti-pattern. If we\nneed to keep track of how much memory is being used, it sure sounds\nlike a better idea to use a memory context for that than to invent\nsome bespoke infrastructure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:12:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Apr 21, 2023 at 12:49 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>> If using a separate memory context solely for the purpose of accounting\n>> is considered an anti-pattern, ...\n\n> This thread isn't the right place to argue about the merits of that\n> patch, at least IMHO, but I don't think that's an anti-pattern.\n\nI didn't say that either. If the proposal is to apply only that change,\nI could be on board with doing that post-feature-freeze. I just don't\nthink the parts of the patch that change the memory management heuristics\nare ready.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:41:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest 2023-03 starting tomorrow!"
}
] |
[
{
"msg_contents": "Today we have two fairly common patterns around extracting an attr from a\ncached tuple:\n\n a = SysCacheGetAttr(OID, tuple, Anum_pg_foo_bar, &isnull);\n Assert(!isnull);\n\n a = SysCacheGetAttr(OID, tuple, Anum_pg_foo_bar, &isnull);\n if (isnull)\n elog(ERROR, \"..\");\n\nThe error message in the elog() cases also vary quite a lot. I've been unable\nto find much in terms of guidelines for when to use en elog or an Assert, with\nthe likelyhood of a NULL value seemingly being the guiding principle (but not\nin all cases IIUC).\n\nThe attached refactoring introduce SysCacheGetAttrNotNull as a wrapper around\nSysCacheGetAttr where a NULL value triggers an elog(). This removes a lot of\nboilerplate error handling which IMO leads to increased readability as the\nerror handling *in these cases* don't add much (there are other cases where\nchecking isnull does a lot of valuable work of course). Personally I much\nprefer the error-out automatically style of APIs like how palloc saves a ton of\nchecking the returned allocation for null, this aims at providing a similar\nabstraction.\n\nThis will reduce granularity of error messages, and as the patch sits now it\ndoes so a lot since the message is left to work on - I wanted to see if this\nwas at all seen as a net positive before spending time on that part. I chose\nan elog since I as a user would prefer to hit an elog instead of a silent keep\ngoing with an assert, this is of course debateable.\n\nThoughts?\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 28 Feb 2023 21:14:19 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> The attached refactoring introduce SysCacheGetAttrNotNull as a wrapper around\n> SysCacheGetAttr where a NULL value triggers an elog().\n\n+1, seems like a good idea. (I didn't review the patch details.)\n\n> This will reduce granularity of error messages, and as the patch sits now it\n> does so a lot since the message is left to work on - I wanted to see if this\n> was at all seen as a net positive before spending time on that part. I chose\n> an elog since I as a user would prefer to hit an elog instead of a silent keep\n> going with an assert, this is of course debateable.\n\nI'd venture that the Assert cases are mostly from laziness, and\nthat once we centralize this it's plenty worthwhile to generate\na decent elog message. You ought to be able to look up the\ntable and column name from the info that is at hand.\n\nAlso ... at least in assert-enabled builds, maybe we could check that\nthe column being fetched this way is actually marked attnotnull?\nThat would help to catch misuse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Feb 2023 18:20:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On 28.02.23 21:14, Daniel Gustafsson wrote:\n> Today we have two fairly common patterns around extracting an attr from a\n> cached tuple:\n> \n> a = SysCacheGetAttr(OID, tuple, Anum_pg_foo_bar, &isnull);\n> Assert(!isnull);\n> \n> a = SysCacheGetAttr(OID, tuple, Anum_pg_foo_bar, &isnull);\n> if (isnull)\n> elog(ERROR, \"..\");\n\n> The attached refactoring introduce SysCacheGetAttrNotNull as a wrapper around\n> SysCacheGetAttr where a NULL value triggers an elog(). This removes a lot of\n> boilerplate error handling which IMO leads to increased readability as the\n> error handling *in these cases* don't add much (there are other cases where\n> checking isnull does a lot of valuable work of course). Personally I much\n> prefer the error-out automatically style of APIs like how palloc saves a ton of\n> checking the returned allocation for null, this aims at providing a similar\n> abstraction.\n\nYes please!\n\nI have occasionally wondered whether just passing the isnull argument as \nNULL would be sufficient, so we don't need a new function.\n\n\n\n",
"msg_date": "Wed, 1 Mar 2023 20:49:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Yes please!\n\n> I have occasionally wondered whether just passing the isnull argument as \n> NULL would be sufficient, so we don't need a new function.\n\nI thought about that too. I think I prefer Daniel's formulation\nwith the new function, but I'm not especially set on that.\n\nAn advantage of using a new function name is it'd be more obvious\nwhat's wrong if you try to back-patch such code into a branch that\nlacks the feature. (Or, of course, we could back-patch the feature.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 15:04:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 1 Mar 2023, at 21:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Yes please!\n> \n>> I have occasionally wondered whether just passing the isnull argument as \n>> NULL would be sufficient, so we don't need a new function.\n> \n> I thought about that too. I think I prefer Daniel's formulation\n> with the new function, but I'm not especially set on that.\n\nI prefer the new function since the name makes the code self documented rather\nthan developers not used to the API having to look up what the last NULL\nactually means.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 1 Mar 2023 21:34:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On 28.02.23 21:14, Daniel Gustafsson wrote:\n> The attached refactoring introduce SysCacheGetAttrNotNull as a wrapper around\n> SysCacheGetAttr where a NULL value triggers an elog(). This removes a lot of\n> boilerplate error handling which IMO leads to increased readability as the\n> error handling *in these cases* don't add much (there are other cases where\n> checking isnull does a lot of valuable work of course). Personally I much\n> prefer the error-out automatically style of APIs like how palloc saves a ton of\n> checking the returned allocation for null, this aims at providing a similar\n> abstraction.\n\nI looked through the patch. The changes look ok to me. In some cases, \nmore line breaks could be removed (that is, the whole call could be put \non one line now).\n\n> This will reduce granularity of error messages, and as the patch sits now it\n> does so a lot since the message is left to work on - I wanted to see if this\n> was at all seen as a net positive before spending time on that part. I chose\n> an elog since I as a user would prefer to hit an elog instead of a silent keep\n> going with an assert, this is of course debateable.\n\nI think an error message like\n\n \"unexpected null value in system cache %d column %d\"\n\nis sufficient. Since these are \"can't happen\" errors, we don't need to \nspend too much extra effort to make it prettier.\n\nI don't think the unlikely() is going to buy much. If you are worried \non that level, SysCacheGetAttrNotNull() ought to be made inline. \nLooking through the sites of the changes, I didn't find any callers \nwhere I'd be worried on that level.\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 10:59:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 1 Mar 2023, at 00:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Also ... at least in assert-enabled builds, maybe we could check that\n> the column being fetched this way is actually marked attnotnull?\n> That would help to catch misuse.\n\nWe could, but that would limit the API to attnotnull columns, rather than when\nthe caller knows that the attr cannot be NULL either due to attnotnull or due\nto intrinsic knowledge based on what is being extracted.\n\nAn example of the latter is build_function_result_tupdesc_t() which knows that\nproallargtypes cannot be NULL when calling SysCacheGetAttr.\n\nI think I prefer to allow those cases rather than the strict mode where\nattnotnull has to be true, do you think it's preferrable to align the API with\nattnotnull and keep the current coding for cases like the above?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 12:05:02 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 2 Mar 2023, at 10:59, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 28.02.23 21:14, Daniel Gustafsson wrote:\n>> The attached refactoring introduce SysCacheGetAttrNotNull as a wrapper around\n>> SysCacheGetAttr where a NULL value triggers an elog(). This removes a lot of\n>> boilerplate error handling which IMO leads to increased readability as the\n>> error handling *in these cases* don't add much (there are other cases where\n>> checking isnull does a lot of valuable work of course). Personally I much\n>> prefer the error-out automatically style of APIs like how palloc saves a ton of\n>> checking the returned allocation for null, this aims at providing a similar\n>> abstraction.\n> \n> I looked through the patch. \n\nThanks!\n\n> The changes look ok to me. In some cases, more line breaks could be removed (that is, the whole call could be put on one line now).\n\nI've tried to find those that would fit on a single line in the attached v2.\n\n>> This will reduce granularity of error messages, and as the patch sits now it\n>> does so a lot since the message is left to work on - I wanted to see if this\n>> was at all seen as a net positive before spending time on that part. I chose\n>> an elog since I as a user would prefer to hit an elog instead of a silent keep\n>> going with an assert, this is of course debateable.\n> \n> I think an error message like\n> \n> \"unexpected null value in system cache %d column %d\"\n> \n> is sufficient. Since these are \"can't happen\" errors, we don't need to spend too much extra effort to make it prettier.\n\nThey really should never happen, but since we have all the information we need\nit seems reasonable to ease debugging. I've made a slightly extended elog in\nthe attached patch.\n\nCallsites which had a detailed errormessage have been left passing isnull, like\nfor example statext_expressions_load().\n\n> I don't think the unlikely() is going to buy much. If you are worried on that level, SysCacheGetAttrNotNull() ought to be made inline. Looking through the sites of the changes, I didn't find any callers where I'd be worried on that level.\n\nFair enough, removed.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 2 Mar 2023 12:32:14 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 1 Mar 2023, at 00:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Also ... at least in assert-enabled builds, maybe we could check that\n>> the column being fetched this way is actually marked attnotnull?\n\n> We could, but that would limit the API to attnotnull columns, rather than when\n> the caller knows that the attr cannot be NULL either due to attnotnull or due\n> to intrinsic knowledge based on what is being extracted.\n> An example of the latter is build_function_result_tupdesc_t() which knows that\n> proallargtypes cannot be NULL when calling SysCacheGetAttr.\n\nOK, if there are counterexamples then never mind that. I don't think\nwe want to discourage call sites from using this function.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 09:24:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I think an error message like\n> \"unexpected null value in system cache %d column %d\"\n> is sufficient. Since these are \"can't happen\" errors, we don't need to \n> spend too much extra effort to make it prettier.\n\nI'd at least like to see it give the catalog's OID. That's easily\nconvertible to a name, and it doesn't tend to move around across PG\nversions, neither of which are true for syscache IDs.\n\nAlso, I'm fairly unconvinced that it's a \"can't happen\" --- this\nwould be very likely to fire as a result of catalog corruption,\nso it would be good if it's at least minimally interpretable\nby a non-expert. Given that we'll now have just one copy of the\ncode, ISTM there's a good case for doing the small extra work\nto report catalog and column by name.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 09:44:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 2 Mar 2023, at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I think an error message like\n>> \"unexpected null value in system cache %d column %d\"\n>> is sufficient. Since these are \"can't happen\" errors, we don't need to \n>> spend too much extra effort to make it prettier.\n> \n> I'd at least like to see it give the catalog's OID. That's easily\n> convertible to a name, and it doesn't tend to move around across PG\n> versions, neither of which are true for syscache IDs.\n> \n> Also, I'm fairly unconvinced that it's a \"can't happen\" --- this\n> would be very likely to fire as a result of catalog corruption,\n> so it would be good if it's at least minimally interpretable\n> by a non-expert. Given that we'll now have just one copy of the\n> code, ISTM there's a good case for doing the small extra work\n> to report catalog and column by name.\n\nRebased v3 on top of recent conflicting ICU changes causing the patch to not\napply anymore. Also took another look around the tree to see if there were\nmissed callsites but found none new.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 13 Mar 2023 14:19:07 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On 13.03.23 14:19, Daniel Gustafsson wrote:\n>> On 2 Mar 2023, at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> I think an error message like\n>>> \"unexpected null value in system cache %d column %d\"\n>>> is sufficient. Since these are \"can't happen\" errors, we don't need to\n>>> spend too much extra effort to make it prettier.\n>>\n>> I'd at least like to see it give the catalog's OID. That's easily\n>> convertible to a name, and it doesn't tend to move around across PG\n>> versions, neither of which are true for syscache IDs.\n>>\n>> Also, I'm fairly unconvinced that it's a \"can't happen\" --- this\n>> would be very likely to fire as a result of catalog corruption,\n>> so it would be good if it's at least minimally interpretable\n>> by a non-expert. Given that we'll now have just one copy of the\n>> code, ISTM there's a good case for doing the small extra work\n>> to report catalog and column by name.\n> \n> Rebased v3 on top of recent conflicting ICU changes causing the patch to not\n> apply anymore. Also took another look around the tree to see if there were\n> missed callsites but found none new.\n\nI think the only open question here was the granularity of the error \nmessage, which I think you had addressed in v2.\n\n+\tif (isnull)\n+\t{\n+\t\telog(ERROR,\n+\t\t\t \"unexpected NULL value in cached tuple for pg_catalog.%s.%s\",\n+\t\t\t get_rel_name(cacheinfo[cacheId].reloid),\n+\t\t\t NameStr(TupleDescAttr(SysCache[cacheId]->cc_tupdesc, \nattributeNumber - 1)->attname));\n+\t}\n\nI prefer to use \"null value\" for SQL null values, and NULL for the C symbol.\n\nI'm a bit hesitant about hardcoding pg_catalog here. That happens to be \ntrue, of course, but isn't actually enforced, I think. I think that \ncould be left off. It's not like people will be confused about which \nschema \"pg_class.relname\" is in.\n\nAlso, the cached tuple isn't really for the attribute, so maybe split \nthat up a bit, like\n\n\"unexpected null value in cached tuple for catalog %s column %s\"\n\n\n",
"msg_date": "Tue, 14 Mar 2023 08:00:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 02:19, Daniel Gustafsson <daniel@yesql.se> wrote:\n> Rebased v3 on top of recent conflicting ICU changes causing the patch to not\n> apply anymore. Also took another look around the tree to see if there were\n> missed callsites but found none new.\n\nI had a look at this. It generally seems like a good change.\n\nOne thing I thought about while looking is it stage 2 might do\nsomething similar for SearchSysCacheN. I then wondered if we're more\nlikely to want to keep the localised __FILE__, __LINE__ and __func__\nin the elog for those or not. It's probably less important that we're\nlosing those for this change, but worth noting here at least in case\nnobody else thought of it.\n\nI only noticed in a couple of places you have a few lines at 80 chars\nbefore the LF. Ideally those would wrap at 79 so that it's 80\nincluding LF. No big deal though.\n\nDavid\n\n\n",
"msg_date": "Thu, 23 Mar 2023 21:52:35 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On 23.03.23 09:52, David Rowley wrote:\n> One thing I thought about while looking is it stage 2 might do\n> something similar for SearchSysCacheN. I then wondered if we're more\n> likely to want to keep the localised __FILE__, __LINE__ and __func__\n> in the elog for those or not. It's probably less important that we're\n> losing those for this change, but worth noting here at least in case\n> nobody else thought of it.\n\nI don't follow what you are asking for here.\n\n\n",
"msg_date": "Fri, 24 Mar 2023 08:31:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On Fri, 24 Mar 2023 at 20:31, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 23.03.23 09:52, David Rowley wrote:\n> > One thing I thought about while looking is it stage 2 might do\n> > something similar for SearchSysCacheN. I then wondered if we're more\n> > likely to want to keep the localised __FILE__, __LINE__ and __func__\n> > in the elog for those or not. It's probably less important that we're\n> > losing those for this change, but worth noting here at least in case\n> > nobody else thought of it.\n>\n> I don't follow what you are asking for here.\n\nI had two points:\n\n1. Doing something similar for SearchSysCache1 and co might be a good\nphase two to this change.\n2. With the change Daniel is proposing here, \\set VERBOSITY verbose is\nnot going to print as useful information to tracking down where any\nunexpected nulls in the catalogue originates.\n\nFor #2, I don't think that's necessarily a problem. I can think of two\nreasons why SysCacheGetAttrNotNull might throw an ERROR:\n\na) We used SysCacheGetAttrNotNull() when we should have used SysCacheGetAttr().\nb) Catalogue corruption.\n\nA more localised ERROR message might just help more easily tracking\ndown type a) problems. I imagine it won't be too difficult to just\ngrep for all the SysCacheGetAttrNotNull calls for the particular\nnullable column to find the one causing the issue. For b), the error\nmessage in SysCacheGetAttrNotNull is sufficient without needing to\nknow where the SysCacheGetAttrNotNull call came from.\n\nDavid\n\n\n",
"msg_date": "Sat, 25 Mar 2023 09:59:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 24 Mar 2023, at 21:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 24 Mar 2023 at 20:31, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n>> On 23.03.23 09:52, David Rowley wrote:\n>>> One thing I thought about while looking is it stage 2 might do\n>>> something similar for SearchSysCacheN. I then wondered if we're more\n>>> likely to want to keep the localised __FILE__, __LINE__ and __func__\n>>> in the elog for those or not. It's probably less important that we're\n>>> losing those for this change, but worth noting here at least in case\n>>> nobody else thought of it.\n>> \n>> I don't follow what you are asking for here.\n> \n> I had two points:\n> \n> 1. Doing something similar for SearchSysCache1 and co might be a good\n> phase two to this change.\n\nQuite possibly yes, they do follow a pretty repeatable pattern.\n\n> 2. With the change Daniel is proposing here, \\set VERBOSITY verbose is\n> not going to print as useful information to tracking down where any\n> unexpected nulls in the catalogue originates.\n\nThats a fair point for the elog() removals, for the rather many assertions it\nmight be a net positive to get a non-local elog when failing in production.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 22:12:13 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "> On 14 Mar 2023, at 08:00, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> I prefer to use \"null value\" for SQL null values, and NULL for the C symbol.\n\nThats a fair point, I agree with that.\n\n> I'm a bit hesitant about hardcoding pg_catalog here. That happens to be true, of course, but isn't actually enforced, I think. I think that could be left off. It's not like people will be confused about which schema \"pg_class.relname\" is in.\n> \n> Also, the cached tuple isn't really for the attribute, so maybe split that up a bit, like\n> \n> \"unexpected null value in cached tuple for catalog %s column %s\"\n\nNo objections, so changed to that wording.\n\nWith these changes and a pgindent run across it per Davids comment downthread,\nI've pushed this now. Thanks for review!\n\nI'm keeping a watchful eye on the buildfarm; francolin has errored in\nrecoveryCheck which I'm looking into but at first glance I don't think it's\nrelated (other animals have since passed it and it works locally, but I'll keep\ndigging at it to make sure).\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 25 Mar 2023 23:10:46 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 02:19:07PM +0100, Daniel Gustafsson wrote:\n> > On 2 Mar 2023, at 15:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > \n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >> I think an error message like\n> >> \"unexpected null value in system cache %d column %d\"\n> >> is sufficient. Since these are \"can't happen\" errors, we don't need to \n> >> spend too much extra effort to make it prettier.\n> > \n> > I'd at least like to see it give the catalog's OID. That's easily\n> > convertible to a name, and it doesn't tend to move around across PG\n> > versions, neither of which are true for syscache IDs.\n> > \n> > Also, I'm fairly unconvinced that it's a \"can't happen\" --- this\n> > would be very likely to fire as a result of catalog corruption,\n> > so it would be good if it's at least minimally interpretable\n> > by a non-expert. Given that we'll now have just one copy of the\n> > code, ISTM there's a good case for doing the small extra work\n> > to report catalog and column by name.\n> \n> Rebased v3 on top of recent conflicting ICU changes causing the patch to not\n> apply anymore. Also took another look around the tree to see if there were\n> missed callsites but found none new.\n\n+++ b/src/backend/utils/cache/syscache.c\n@@ -77,6 +77,7 @@\n #include \"catalog/pg_user_mapping.h\"\n #include \"lib/qunique.h\"\n #include \"utils/catcache.h\"\n+#include \"utils/lsyscache.h\"\n #include \"utils/rel.h\"\n #include \"utils/syscache.h\"\n\n@@ -1099,6 +1100,32 @@ SysCacheGetAttr(int cacheId, HeapTuple tup,\n+ elog(ERROR,\n+ \"unexpected NULL value in cached tuple for pg_catalog.%s.%s\",\n+ get_rel_name(cacheinfo[cacheId].reloid),\n\nQuestion: Is it safe to be making catalog access inside an error\nhandler, when one of the most likely reason for hitting the error is\ncatalog corruption ?\n\nMaybe the answer is that it's not \"safe\" but \"safe enough\" - IOW, if\nyou're willing to throw an assertion, it's good enough to try to show\nthe table name, and if the error report crashes the server, that's \"not\nmuch worse\" than having Asserted().\n\n-- \nJustin",
"msg_date": "Sat, 25 Mar 2023 21:59:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Mar 13, 2023 at 02:19:07PM +0100, Daniel Gustafsson wrote:\n>> + elog(ERROR,\n>> + \"unexpected NULL value in cached tuple for pg_catalog.%s.%s\",\n>> + get_rel_name(cacheinfo[cacheId].reloid),\n\n> Question: Is it safe to be making catalog access inside an error\n> handler, when one of the most likely reason for hitting the error is\n> catalog corruption ?\n\nI don't see a big problem here. If there were catalog corruption\npreventing fetching the catalog's pg_class row, it's highly unlikely\nthat you'd have managed to retrieve a catalog row to complain about.\n(That is, corruption in this particular catalog entry probably does\nnot extend to the metadata about the catalog containing it.)\n\n> Maybe the answer is that it's not \"safe\" but \"safe enough\"\n\nRight.\n\nIf we got concerned about this we could dodge the extra catalog access\nby adding the catalog's name to CatCache entries. I doubt it's worth\nit though. We can always re-evaluate if we see actual evidence of\nproblems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Mar 2023 23:15:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring SysCacheGetAttr to know when attr cannot be NULL"
}
] |
[
{
"msg_contents": "When I designed the Bitmapset module, I set things up so that an empty\nBitmapset could be represented either by a NULL pointer, or by an\nallocated object all of whose bits are zero. I've recently come to\nthe conclusion that that was a bad idea and we should instead have\na convention like the longstanding invariant for Lists: an empty\nlist is represented by NIL and nothing else.\n\nTo do this, we need to fix bms_intersect, bms_difference, and a couple\nof other functions to check for having produced an empty result; but\nthen we can replace bms_is_empty() by a simple NULL test. I originally\nguessed that that would be a bad tradeoff, but now I think it likely\nis a win performance-wise, because we call bms_is_empty() many more\ntimes than those other functions put together.\n\nHowever, any performance gain would likely be marginal; the real\nreason why I'm pushing this is that we have various places that have\nhand-implemented a rule about \"this Bitmapset variable must be exactly\nNULL if empty\", so that they can use checks-for-null in place of\nbms_is_empty() calls in particularly hot code paths. That is a really\nfragile, mistake-prone way to do things, and I'm surprised that we've\nseldom been bitten by it. It's not well documented at all which\nvariables have this property, so you can't readily tell which code\nmight be violating those conventions.\n\nSo basically I'd like to establish that convention everywhere and\nget rid of these ad-hoc reduce-to-NULL checks. I put together the\nattached draft patch to do so. I've not done any hard performance\ntesting on it --- I did do one benchmark that showed maybe 0.8%\nspeedup, but I'd regard that as below the noise.\n\nI found just a few places that have issues with this idea. One thing\nthat is problematic is bms_first_member(): assuming you allow it to\nloop to completion, it ends with the passed Bitmapset being empty,\nwhich is now an invariant violation. I made it pfree the argument\nat that point, and fixed a couple of callers that would be broken\nthereby; but I wonder if it wouldn't be better to get rid of that\nfunction entirely and convert all its callers to use bms_next_member.\nThere are only about half a dozen.\n\nI also discovered that nodeAppend.c is relying on bms_del_members\nnot reducing a non-empty set to NULL, because it uses the nullness\nof appendstate->as_valid_subplans as a state boolean. That was\nprobably acceptable when it was written, but whoever added\nclassify_matching_subplans made a hash of the state invariants here,\nbecause that can set as_valid_subplans to empty. I added a separate\nboolean as an easy way out, but maybe that code could do with a more\nthorough revisit.\n\nI'll add this to the about-to-start CF.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 28 Feb 2023 16:59:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 04:59:48PM -0500, Tom Lane wrote:\n> When I designed the Bitmapset module, I set things up so that an empty\n> Bitmapset could be represented either by a NULL pointer, or by an\n> allocated object all of whose bits are zero. I've recently come to\n> the conclusion that that was a bad idea and we should instead have\n> a convention like the longstanding invariant for Lists: an empty\n> list is represented by NIL and nothing else.\n\n+1\n\n> I found just a few places that have issues with this idea. One thing\n> that is problematic is bms_first_member(): assuming you allow it to\n> loop to completion, it ends with the passed Bitmapset being empty,\n> which is now an invariant violation. I made it pfree the argument\n> at that point, and fixed a couple of callers that would be broken\n> thereby; but I wonder if it wouldn't be better to get rid of that\n> function entirely and convert all its callers to use bms_next_member.\n> There are only about half a dozen.\n\nUnless there is a way to avoid the invariant violation that doesn't involve\nscanning the rest of the words before bms_first_member returns, +1 to\nremoving it. Perhaps we could add a num_members variable to the struct so\nthat we know right away when the set becomes empty. But maintaining that\nmight be more trouble than it's worth.\n\n> I also discovered that nodeAppend.c is relying on bms_del_members\n> not reducing a non-empty set to NULL, because it uses the nullness\n> of appendstate->as_valid_subplans as a state boolean. That was\n> probably acceptable when it was written, but whoever added\n> classify_matching_subplans made a hash of the state invariants here,\n> because that can set as_valid_subplans to empty. I added a separate\n> boolean as an easy way out, but maybe that code could do with a more\n> thorough revisit.\n\nThe separate boolean certainly seems less fragile. That might even be\nworthwhile independent of the rest of the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 12:19:51 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 1:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I also discovered that nodeAppend.c is relying on bms_del_members\n> not reducing a non-empty set to NULL, because it uses the nullness\n> of appendstate->as_valid_subplans as a state boolean.\n\nI seem to recall that Deep and I tripped into this during the zedstore\ncolumn projection work. I think we started out using NULL as a\nsentinel value for our bitmaps, too, and it looked like it worked,\nuntil it didn't... so +1 to the simplification.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 1 Mar 2023 13:35:08 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, Feb 28, 2023 at 04:59:48PM -0500, Tom Lane wrote:\n>> When I designed the Bitmapset module, I set things up so that an empty\n>> Bitmapset could be represented either by a NULL pointer, or by an\n>> allocated object all of whose bits are zero. I've recently come to\n>> the conclusion that that was a bad idea and we should instead have\n>> a convention like the longstanding invariant for Lists: an empty\n>> list is represented by NIL and nothing else.\n\n> +1\n\nThanks for looking at this.\n\n> Unless there is a way to avoid the invariant violation that doesn't involve\n> scanning the rest of the words before bms_first_member returns, +1 to\n> removing it. Perhaps we could add a num_members variable to the struct so\n> that we know right away when the set becomes empty. But maintaining that\n> might be more trouble than it's worth.\n\nbms_first_member is definitely legacy code, so let's just get\nrid of it. Done like that in 0001 below. (This was slightly more\ncomplex than I foresaw, because some of the callers were modifying\nthe result variables. But they're probably cleaner this way anyway.)\n\n>> I also discovered that nodeAppend.c is relying on bms_del_members\n>> not reducing a non-empty set to NULL, because it uses the nullness\n>> of appendstate->as_valid_subplans as a state boolean.\n\n> The separate boolean certainly seems less fragile. That might even be\n> worthwhile independent of the rest of the patch.\n\nYeah. I split out those executor fixes as 0002; 0003 is the changes\nto bitmapsets proper, and then 0004 removes now-dead code.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 01 Mar 2023 17:59:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 05:59:45PM -0500, Tom Lane wrote:\n> bms_first_member is definitely legacy code, so let's just get\n> rid of it. Done like that in 0001 below. (This was slightly more\n> complex than I foresaw, because some of the callers were modifying\n> the result variables. But they're probably cleaner this way anyway.)\n\nYeah, it's nice that you don't have to subtract\nFirstLowInvalidHeapAttributeNumber in so many places. I think future\nchanges might end up using attidx when they ought to use attrnum (and vice\nversa), but you could just as easily forget to subtract\nFirstLowInvalidHeapAttributeNumber today, so it's probably fine.\n\n> + /* attidx is zero-based, attrnum is the normal attribute number */\n> + int attrnum = attidx + FirstLowInvalidHeapAttributeNumber;\n\nnitpick: Shouldn't this be an AttrNumber?\n\n> Yeah. I split out those executor fixes as 0002; 0003 is the changes\n> to bitmapsets proper, and then 0004 removes now-dead code.\n\nThese all looked reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:22:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Mar 01, 2023 at 05:59:45PM -0500, Tom Lane wrote:\n>> + /* attidx is zero-based, attrnum is the normal attribute number */\n>> + int attrnum = attidx + FirstLowInvalidHeapAttributeNumber;\n\n> nitpick: Shouldn't this be an AttrNumber?\n\nI stuck with the existing type choices for those variables,\nbut I don't mind changing to AttrNumber here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 19:26:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 6:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Yeah. I split out those executor fixes as 0002; 0003 is the changes\n> to bitmapsets proper, and then 0004 removes now-dead code.\n\n\n+1 to all these patches. Some minor comments from me.\n\n*0003\nIt seems that the Bitmapset checked by bms_is_empty_internal cannot be\nNULL from how it is computed by a function. So I wonder if we can\nremove the check of 'a' being NULL in that function, or reduce it to an\nAssert.\n\n- if (a == NULL)\n- return true;\n+ Assert(a != NULL);\n\n*0004\nIt seems that in create_lateral_join_info around line 689, the\nbms_is_empty check of lateral_relids is not necessary, since we've\nchecked that lateral_relids cannot be NULL several lines earlier.\n\n@@ -682,12 +682,6 @@ create_lateral_join_info(PlannerInfo *root)\n if (lateral_relids == NULL)\n continue;\n\n- /*\n- * We should not have broken the invariant that lateral_relids is\n- * exactly NULL if empty.\n- */\n- Assert(!bms_is_empty(lateral_relids));\n-\n\nThanks\nRichard\n\nOn Thu, Mar 2, 2023 at 6:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nYeah. I split out those executor fixes as 0002; 0003 is the changes\nto bitmapsets proper, and then 0004 removes now-dead code. +1 to all these patches. Some minor comments from me.*0003It seems that the Bitmapset checked by bms_is_empty_internal cannot beNULL from how it is computed by a function. So I wonder if we canremove the check of 'a' being NULL in that function, or reduce it to anAssert.- if (a == NULL)- return true;+ Assert(a != NULL);*0004It seems that in create_lateral_join_info around line 689, thebms_is_empty check of lateral_relids is not necessary, since we'vechecked that lateral_relids cannot be NULL several lines earlier.@@ -682,12 +682,6 @@ create_lateral_join_info(PlannerInfo *root) if (lateral_relids == NULL) continue;- /*- * We should not have broken the invariant that lateral_relids is- * exactly NULL if empty.- */- Assert(!bms_is_empty(lateral_relids));-ThanksRichard",
"msg_date": "Thu, 2 Mar 2023 11:13:39 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> It seems that the Bitmapset checked by bms_is_empty_internal cannot be\n> NULL from how it is computed by a function. So I wonder if we can\n> remove the check of 'a' being NULL in that function, or reduce it to an\n> Assert.\n\nYeah, I think just removing it is sufficient. The subsequent attempts\nto dereference the pointer will crash just fine if it's NULL; we don't\nneed an Assert to help things along.\n\n> It seems that in create_lateral_join_info around line 689, the\n> bms_is_empty check of lateral_relids is not necessary, since we've\n> checked that lateral_relids cannot be NULL several lines earlier.\n\nGood catch, I missed that one.\n\nPushed, thanks for reviewing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 12:04:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Wed, 1 Mar 2023 at 10:59, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> When I designed the Bitmapset module, I set things up so that an empty\n> Bitmapset could be represented either by a NULL pointer, or by an\n> allocated object all of whose bits are zero. I've recently come to\n> the conclusion that that was a bad idea and we should instead have\n> a convention like the longstanding invariant for Lists: an empty\n> list is represented by NIL and nothing else.\n\nI know I'm late to the party here, but I just wanted to add that I\nagree with this and also that I've never been a fan of having to\ndecide if it was safe to check for NULL when I needed performance or\nif I needed to use bms_is_empty() because the set might have all its\nwords set to 0.\n\nI suggest tightening the rule even further so instead of just empty\nsets having to be represented as NULL, the rule should be that sets\nshould never contain any trailing zero words, which is effectively a\nsuperset of the \"empty is NULL\" rule that you've just changed.\n\nIf we did this, then various functions can shake loose some crufty\ncode and various other functions become more optimal due to not having\nto loop over trailing zero words needlessly. For example.\n\n* bms_equal() and bms_compare() can precheck nwords match before\ntroubling themselves with looping over each member, and;\n* bms_is_subset() can return false early if 'a' has more words than 'b', and;\n* bms_subset_compare() becomes more simple as it does not have to look\nfor trailing 0 words, and;\n* bms_nonempty_difference() can return true early if 'a' has more\nwords than 'b', plus no need to check for trailing zero words at the\nend.\n\nWe can also chop the set down to size in; bms_intersect(),\nbms_difference(), bms_int_members(), bms_del_members() and\nbms_intersect() which saves looping needlessly over empty words when\nvarious other BMS operations are performed later on the set, for\nexample, bms_next_member(), bms_prev_member, bms_copy(), etc.\n\nThe only reason I can think of that this would be a bad idea is that\nif we want to add members again then we need to do repalloc(). If\nwe're only increasing the nwords back to what it had been on some\nprevious occasion then repalloc() is effectively a no-op, so I doubt\nthis will really be a net negative. I think the effort we'll waste by\ncarrying around needless trailing zero words in most cases is likely\nto outweigh the overhead of any no-op repallocs. Take\nbms_int_members(), for example, we'll purposefully 0 out all the\ntrailing words possibly having to read in new cachelines from RAM to\ndo so. It would be better to leave having to read those in again\nuntil we actually need to do something more useful with them, like\nadding some new members to the set again. We'll have to dirty those\ncachelines then anyway and we may have flushed those cachelines out of\nthe CPU cache by the time we get around to adding the new members\nagain.\n\nI've coded this up in the attached and followed the lead in list.c and\nadded a function named check_bitmapset_invariants() which ensures the\nabove rule is followed. I think the code as it stands today should\nreally have something like that anyway.\n\nThe patch also optimizes sub-optimal newly added code which calls\nbms_is_empty_internal() when we have other more optimal means to\ndetermine if the set is empty or not.\n\nDavid",
"msg_date": "Fri, 3 Mar 2023 14:52:22 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
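The "no trailing zero words" rule proposed above, and the cheap prechecks it enables, can be sketched in C. This is a deliberately simplified, hypothetical structure (fixed-capacity `words` array, invented `MiniBms`/`mbs_*` names), not the actual bitmapset.c code:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef uint64_t bitmapword;

/* Hypothetical, simplified set -- NOT the real PostgreSQL Bitmapset. */
typedef struct MiniBms
{
	int			nwords;			/* invariant: words[nwords - 1] != 0 */
	bitmapword	words[8];
} MiniBms;

/* Enforce the proposed invariant: no trailing zero words, empty == NULL. */
static MiniBms *
mbs_trim(MiniBms *a)
{
	while (a->nwords > 0 && a->words[a->nwords - 1] == 0)
		a->nwords--;
	if (a->nwords == 0)
	{
		free(a);
		return NULL;			/* an empty set is always NULL */
	}
	return a;
}

/* With the invariant, unequal lengths mean unequal sets: no member loop. */
static int
mbs_equal(const MiniBms *a, const MiniBms *b)
{
	if (a == NULL || b == NULL)
		return a == b;
	if (a->nwords != b->nwords)
		return 0;				/* cheap precheck enabled by the invariant */
	return memcmp(a->words, b->words, a->nwords * sizeof(bitmapword)) == 0;
}
```

Once `mbs_trim()` has enforced the invariant, `mbs_equal()` never scans the members of two sets whose word counts differ, which is the short-circuit the proposal adds to bms_equal(), bms_compare() and bms_is_subset().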
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I suggest tightening the rule even further so instead of just empty\n> sets having to be represented as NULL, the rule should be that sets\n> should never contain any trailing zero words, which is effectively a\n> superset of the \"empty is NULL\" rule that you've just changed.\n\nHmm, I'm not immediately a fan of that, because it'd mean more\ninteraction with aset.c to change the allocated size of results.\n(Is it worth carrying both \"allocated words\" and \"nonzero words\"\nfields to avoid useless memory-management effort? Dunno.)\n\nAnother point here is that I'm pretty sure that just about all\nbitmapsets we deal with are only one or two words, so I'm not\nconvinced you're going to get any performance win to justify\nthe added management overhead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 21:17:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 15:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (Is it worth carrying both \"allocated words\" and \"nonzero words\"\n> fields to avoid useless memory-management effort? Dunno.)\n\nIt would have been a more appealing thing to do before Bitmapset\nbecame a node type as we'd have had empty space in the struct to have\nanother 32-bit word on 64-bit builds.\n\n> Another point here is that I'm pretty sure that just about all\n> bitmapsets we deal with are only one or two words, so I'm not\n> convinced you're going to get any performance win to justify\n> the added management overhead.\n\nIt's true that the majority of Bitmapsets are going to be just 1 word,\nbut it's important to acknowledge that we do suffer in some more\nextreme cases when Bitmapsets become large. Partition with large\nnumbers of partitions is one such case.\n\ncreate table lp(a int) partition by list(a);\nselect 'create table lp'||x||' partition of lp for values\nin('||x||');' from generate_series(0,9999)x;\n\\gexec\n\n# cat bench.sql\nselect * from lp where a > 1 and a < 3;\n\n$ pgbench -n -T 15 -f bench.sql postgres | grep tps\n\nmaster:\ntps = 28055.619289 (without initial connection time)\ntps = 27819.235083 (without initial connection time)\ntps = 28486.099808 (without initial connection time)\n\nmaster + bms_no_trailing_zero_words.patch:\ntps = 30840.840266 (without initial connection time)\ntps = 29491.519705 (without initial connection time)\ntps = 29471.083938 (without initial connection time)\n\n(~6.45% faster)\n\nOf course, it's an extreme case, I'm merely trying to show that\ntrimming the Bitmapsets down can have an impact in some cases.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Mar 2023 16:22:01 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 3 Mar 2023 at 15:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Another point here is that I'm pretty sure that just about all\n>> bitmapsets we deal with are only one or two words, so I'm not\n>> convinced you're going to get any performance win to justify\n>> the added management overhead.\n\n> It's true that the majority of Bitmapsets are going to be just 1 word,\n> but it's important to acknowledge that we do suffer in some more\n> extreme cases when Bitmapsets become large. Partition with large\n> numbers of partitions is one such case.\n\nMaybe, but optimizing for that while pessimizing every other case\ndoesn't sound very attractive from here. I think we need some\nbenchmarks on normal-size bitmapsets before considering doing much\nin this area.\n\nAlso, if we're going to make any sort of changes here it'd probably\nbehoove us to make struct Bitmapset private in bitmapset.c, so that\nwe can have confidence that nobody is playing games with them.\nI had a go at that and was pleasantly surprised to find that\nactually nobody has; the attached passes check-world. It'd probably\nbe smart to commit this as a follow-on to 00b41463c, whether or not\nwe go any further.\n\nAlso, given that we do this, I don't think that check_bitmapset_invariants\nas you propose it is worth the trouble. The reason we've gone to such\nlengths with checking List invariants is that initially we had a large\nnumber of places doing creative and not-too-structured things with Lists,\nplus we've made several absolutely fundamental changes to that data\nstructure. Despite the far larger bug surface, I don't recall that those\ninvariant checks ever found anything after the initial rounds of changes.\nSo I don't buy that there's an argument for a similarly expensive set\nof checks here. bitmapset.c is small enough that we should be able to\npretty much prove it correct by eyeball.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 03 Mar 2023 17:08:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Sat, 4 Mar 2023 at 11:08, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Fri, 3 Mar 2023 at 15:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > It's true that the majority of Bitmapsets are going to be just 1 word,\n> > but it's important to acknowledge that we do suffer in some more\n> > extreme cases when Bitmapsets become large. Partition with large\n> > numbers of partitions is one such case.\n>\n> Maybe, but optimizing for that while pessimizing every other case\n> doesn't sound very attractive from here. I think we need some\n> benchmarks on normal-size bitmapsets before considering doing much\n> in this area.\n\nAfter thinking about this again and looking at the code, I'm not\nreally sure where the pessimism has been added. For the changes made\nto say bms_equal(), there was already a branch that checked the nwords\ncolumn so that we could find the shorter and longer sets out of the\ntwo input sets. That's been replaced with a comparison of both input\nset's nwords, which does not really seem any more expensive. For\nbms_compare() we needed to do Min(a->nwords, b->nwords) to find the\nshortest set, likewise for bms_nonempty_difference() and\nbms_is_subset(). That does not seem less expensive than the\nreplacement code.\n\nI think the times where we have sets that we do manage to trim down\nthe nword count with that we actually end up having to expand again\nare likely fairly rare.\n\nI also wondered if we could shave off a few instructions by utilising\nthe knowledge that nwords is never 0. 
That would mean that some of\nthe loops could be written as:\n\ni = 0; do { <stuff>; } while (++i < set->nwords);\n\ninstead of:\n\nfor (i = 0; i < set->nwords; i++) { <stuff>; }\n\nif we do assume that the vast majority of sets are nwords==1 sets,\nthen this reduces the loop condition checks by half for all those.\n\nI see that gcc manages to optimize: for (i = 0; i < set->nwords || i\n== 0; i++) { <stuff>; } into the same code as the do while loop. clang\ndoes not seem to manage that.\n\n> Also, if we're going to make any sort of changes here it'd probably\n> behoove us to make struct Bitmapset private in bitmapset.c, so that\n> we can have confidence that nobody is playing games with them.\n> I had a go at that and was pleasantly surprised to find that\n> actually nobody has; the attached passes check-world. It'd probably\n> be smart to commit this as a follow-on to 00b41463c, whether or not\n> we go any further.\n\nThat seems like a good idea. This will give us extra reassurance that\nnobody is making up their own Bitmapsets through some custom function\nthat don't follow the rules.\n\n> Also, given that we do this, I don't think that check_bitmapset_invariants\n> as you propose it is worth the trouble.\n\nI wondered if maybe just Assert(a == NULL || a->words[a->nwords - 1]\n!= 0); would be worthwhile anywhere. However, I don't see any\nparticular function that is more likely to catch those errors, so it's\nmaybe not worth doing anywhere if we're not doing it everywhere.\n\nI adjusted the patch to remove the invariant checks and fixed up a\ncouple of things I'd missed. The 0002 patch changes the for loops\ninto do while loops. 
I wanted to see if we could see any performance\ngains from doing this.\n\nThe performance numbers are nowhere near as stable as I'd like them to\nhave been, but testing the patch shows:\n\nTest 1:\n\nsetup:\ncreate table t1 (a int) partition by list(a);\nselect 'create table t1_'||x||' partition of t1 for values\nin('||x||');' from generate_series(0,9)x;\n\\gexec\n\nTest 1's sql:\nselect * from t1 where a > 1 and a < 3;\n\nfor i in {1..3}; do pgbench -n -f test1.sql -T 15 postgres | grep tps; done\n\nmaster (cf96907aad):\ntps = 29534.189309\ntps = 30465.722545\ntps = 30328.290553\n\nmaster + 0001:\ntps = 28915.174536\ntps = 29817.950994\ntps = 29387.084581\n\nmaster + 0001 + 0002:\ntps = 29438.216512\ntps = 29951.905408\ntps = 31445.191414\n\nTest 2:\n\nsetup:\ncreate table t2 (a int) partition by list(a);\nselect 'create table t2_'||x||' partition of t2 for values\nin('||x||');' from generate_series(0,9999)x;\n\\gexec\n\nTest 2's sql:\nselect * from t2 where a > 1 and a < 3;\n\nfor i in {1..3}; do pgbench -n -f test2.sql -T 15 postgres | grep tps; done\n\nmaster (cf96907aad):\ntps = 28470.504990\ntps = 29175.450905\ntps = 28123.699176\n\nmaster + 0001:\ntps = 28056.256805\ntps = 28380.401746\ntps = 28384.395217\n\nmaster + 0001 + 0002:\ntps = 29365.992219\ntps = 28418.374923\ntps = 28303.924129\n\nTest 3:\n\nsetup:\ncreate table t3a (a int primary key);\ncreate table t3b (a int primary key);\n\nTest 3's sql:\nselect * from t3a inner join t3b on t3a.a = t3b.a;\n\nfor i in {1..3}; do pgbench -n -f test3.sql -T 15 postgres | grep tps; done\n\nmaster (cf96907aad):\ntps = 20458.710550\ntps = 20527.898929\ntps = 20284.165277\n\nmaster + 0001:\ntps = 20700.340713\ntps = 20571.913956\ntps = 20541.771589\n\nmaster + 0001 + 0002:\ntps = 20046.674601\ntps = 20016.649536\ntps = 19487.999853\n\nI've attached a graph of this too. It shows that there might be a\nsmall increase in performance with tests 1 and 2. It seems like test 3\nregresses a bit. 
I suspect this might just be a code arrangement issue\nas master + 0001 is faster than 0001 + 0002 for that test.\n\nDavid",
"msg_date": "Tue, 7 Mar 2023 17:06:42 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
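The loop-shape change discussed in the message above — legal only once `nwords >= 1` is guaranteed, i.e. once empty sets are always NULL — can be illustrated with a hypothetical word-popcount helper (all names here are invented for illustration, not bitmapset.c functions):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t bitmapword;

/* Portable single-word popcount, for illustration only. */
static int
popcount_word(bitmapword w)
{
	int			n = 0;

	while (w)
	{
		n += (int) (w & 1);
		w >>= 1;
	}
	return n;
}

/* "for" form: the i < nwords condition is checked on entry and per word. */
static int
popcount_for(const bitmapword *words, int nwords)
{
	int			count = 0;

	for (int i = 0; i < nwords; i++)
		count += popcount_word(words[i]);
	return count;
}

/*
 * "do while" form: valid only because nwords >= 1 is guaranteed, so the
 * condition is checked once fewer -- halving the checks for 1-word sets.
 */
static int
popcount_do(const bitmapword *words, int nwords)
{
	int			count = 0;
	int			i = 0;

	do
	{
		count += popcount_word(words[i]);
	} while (++i < nwords);
	return count;
}
```

Both forms compute the same result; the benchmark numbers in the thread suggest the gain from this shape is small and workload-dependent.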
{
"msg_contents": "Hello,\n\nOn Tue, Mar 7, 2023 at 1:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I adjusted the patch to remove the invariant checks and fixed up a\n> couple of things I'd missed. The 0002 patch changes the for loops\n> into do while loops. I wanted to see if we could see any performance\n> gains from doing this.\n\nI noticed that these patches caused significant degradation while\nworking on improving planning performance in another thread [1].\n\nIn the experiment, I used the query attached to this email. This\nworkload consists of eight tables, each of which is split into n\npartitions. The \"query.sql\" measures the planning time of a query that\njoins these tables. You can quickly reproduce my experiment using the\nfollowing commands.\n\n=====\npsql -f create-tables.sql\npsql -f query.sql\n=====\n\nI show the result in the following tables. I refer to David's patches\nin [2] as the \"trailing-zero\" patch. When n was large, the\ntrailing-zero patch showed significant degradation. This is due to too\nmany calls of repalloc(). With this patch, we cannot reuse spaces\nafter the last non-zero bitmapword, so we need to call repalloc() more\nfrequently than before. 
When n is 256, repalloc() was called only 670\ntimes without the patch, while it was called 5694 times with the\npatch.\n\nTable 1: Planning time (ms)\n-----------------------------------------------------------------\n n | Master | Patched (trailing-zero) | Patched (bitwise-OR)\n-----------------------------------------------------------------\n 1 | 37.639 | 37.330 | 36.979\n 2 | 36.066 | 35.646 | 36.044\n 4 | 37.958 | 37.349 | 37.842\n 8 | 42.397 | 42.994 | 39.779\n 16 | 54.565 | 67.713 | 44.186\n 32 | 89.220 | 100.828 | 65.542\n 64 | 227.854 | 269.059 | 150.398\n 128 | 896.513 | 1279.965 | 577.671\n 256 | 4241.994 | 8220.508 | 2538.681\n-----------------------------------------------------------------\n\nTable 2: Planning time speedup (higher is better)\n------------------------------------------------------\n n | Patched (trailing-zero) | Patched (bitwise-OR)\n------------------------------------------------------\n 1 | 100.8% | 101.8%\n 2 | 101.2% | 100.1%\n 4 | 101.6% | 100.3%\n 8 | 98.6% | 106.6%\n 16 | 80.6% | 123.5%\n 32 | 88.5% | 136.1%\n 64 | 84.7% | 151.5%\n 128 | 70.0% | 155.2%\n 256 | 51.6% | 167.1%\n------------------------------------------------------\n\nOn Fri, Mar 3, 2023 at 10:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> The patch also optimizes sub-optimal newly added code which calls\n> bms_is_empty_internal() when we have other more optimal means to\n> determine if the set is empty or not.\n\nHowever, I agree with David's opinion regarding the\nbms_is_empty_internal() calls, which is quoted above. I have\nimplemented this optimization in a slightly different way than David.\nMy patch is attached to this email. The difference between my patch\nand David's is in the determination method of whether the result is\nempty: David's patch records the last index of non-zero bitmapword to\nminimize the Bitmapset. If the index is -1, we can conclude that the\nresult is empty. In contrast, my patch uses a more lightweight\noperation. 
I show my changes as follows.\n\n=====\n@@ -263,6 +261,7 @@ bms_intersect(const Bitmapset *a, const Bitmapset *b)\n const Bitmapset *other;\n int resultlen;\n int i;\n+ bitmapword bitwise_or = 0;\n\n /* Handle cases where either input is NULL */\n if (a == NULL || b == NULL)\n@@ -281,9 +280,17 @@ bms_intersect(const Bitmapset *a, const Bitmapset *b)\n /* And intersect the longer input with the result */\n resultlen = result->nwords;\n for (i = 0; i < resultlen; i++)\n- result->words[i] &= other->words[i];\n+ {\n+ bitmapword w = (result->words[i] &= other->words[i]);\n+\n+ /*\n+ * Compute bitwise OR of all bitmapwords to determine if the result\n+ * is empty\n+ */\n+ bitwise_or |= w;\n+ }\n /* If we computed an empty result, we must return NULL */\n- if (bms_is_empty_internal(result))\n+ if (bitwise_or == 0)\n {\n pfree(result);\n return NULL;\n@@ -711,30 +718,6 @@ bms_membership(const Bitmapset *a)\n return result;\n }\n=====\n\nMy idea is to compute the bitwise OR of all bitmapwords of the result\nBitmapset. The bitwise OR can be represented as a single operation in\nthe machine code and does not require any conditional branches. If the\nbitwise ORed value is zero, we can conclude the result Bitmapset is\nempty. The costs related to this operation can be almost negligible;\nit is significantly cheaper than calling bms_is_empty_internal() and\nless expensive than using a conditional branch such as 'if.'\n\nIn the tables above, I called my patch the \"bitwise-OR\" patch. The\npatch is much faster than the master when n is large. Its speed up\nreached 167.1%. I think just adopting this optimization is worth\nconsidering.\n\n[1] https://www.postgresql.org/message-id/CAJ2pMkY10J_PA2jpH5M-VoOo6BvJnTOO_-t_znu_pOaP0q10pA@mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAApHDvq9eq0W_aFUGrb6ba28ieuQN4zM5Uwqxy7+LMZjJc+VGg@mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Thu, 16 Mar 2023 10:30:57 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
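The bitwise-OR trick from the message above can be shown in isolation. The sketch below is a hypothetical stand-in for the patched bms_intersect()-style loop, not the real PostgreSQL code; it intersects two equal-length word arrays and reports emptiness in the same pass:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t bitmapword;

/*
 * Intersect b into a (both nwords long) and report emptiness in one pass.
 * Accumulating a bitwise OR of the result words is a single branch-free
 * instruction per word, so no separate bms_is_empty_internal()-style scan
 * over the result is needed afterwards.
 */
static int
intersect_in_place(bitmapword *a, const bitmapword *b, int nwords)
{
	bitmapword	bitwise_or = 0;

	for (int i = 0; i < nwords; i++)
	{
		a[i] &= b[i];
		bitwise_or |= a[i];
	}
	return bitwise_or == 0;		/* true => result is empty, caller frees */
}
```

A caller would pfree() the result and return NULL when this reports emptiness, matching the `if (bitwise_or == 0)` branch in the quoted diff.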
{
"msg_contents": "Hello,\n\nOn Thu, Mar 16, 2023 at 10:30 AM Yuya Watari <watari.yuya@gmail.com> wrote:\n> My idea is to compute the bitwise OR of all bitmapwords of the result\n> Bitmapset. The bitwise OR can be represented as a single operation in\n> the machine code and does not require any conditional branches. If the\n> bitwise ORed value is zero, we can conclude the result Bitmapset is\n> empty. The costs related to this operation can be almost negligible;\n> it is significantly cheaper than calling bms_is_empty_internal() and\n> less expensive than using a conditional branch such as 'if.'\n\nAfter posting the patch, I noticed that my patch had some bugs. My\nidea above is not applicable to bms_del_member(), and I missed some\nadditional operations required in bms_del_members(). I have attached\nthe fixed version to this email. I really apologize for making the\nmistakes. Should we add new regression tests to prevent this kind of\nbug?\n\nThe following tables illustrate the result of a re-run experiment. 
The\nsignificant improvement was a mistake, but a speedup of about 2% was\nstill obtained when the number of partitions, namely n, was large.\nThis result indicates that the optimization regarding\nbms_is_empty_internal() is effective on some workloads.\n\nTable 1: Planning time (ms)\n(n: the number of partitions of each table)\n-----------------------------------------------------------------\n n | Master | Patched (trailing-zero) | Patched (bitwise-OR)\n-----------------------------------------------------------------\n 1 | 36.903 | 36.621 | 36.731\n 2 | 35.842 | 35.031 | 35.704\n 4 | 37.756 | 37.457 | 37.409\n 8 | 42.069 | 42.578 | 42.322\n 16 | 53.670 | 67.792 | 53.618\n 32 | 88.412 | 100.605 | 89.147\n 64 | 229.734 | 271.259 | 225.971\n 128 | 889.367 | 1272.270 | 870.472\n 256 | 4209.312 | 8223.623 | 4129.594\n-----------------------------------------------------------------\n\nTable 2: Planning time speedup (higher is better)\n------------------------------------------------------\n n | Patched (trailing-zero) | Patched (bitwise-OR)\n------------------------------------------------------\n 1 | 100.8% | 100.5%\n 2 | 102.3% | 100.4%\n 4 | 100.8% | 100.9%\n 8 | 98.8% | 99.4%\n 16 | 79.2% | 100.1%\n 32 | 87.9% | 99.2%\n 64 | 84.7% | 101.7%\n 128 | 69.9% | 102.2%\n 256 | 51.2% | 101.9%\n------------------------------------------------------\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Thu, 16 Mar 2023 20:45:28 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Tue, Mar 7, 2023 at 1:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I adjusted the patch to remove the invariant checks and fixed up a\n> couple of things I'd missed. The 0002 patch changes the for loops\n> into do while loops. I wanted to see if we could see any performance\n> gains from doing this.\n\nIn March, I reported that David's patch caused a degradation in\nplanning performance. I have investigated this issue further and found\nsome bugs in the patch. Due to these bugs, Bitmapset operations in the\noriginal patch computed incorrect results. This incorrect computation\nresulted in unexpected behavior, which I observed as performance\ndegradation. After fixing the bugs, David's patch showed significant\nperformance improvements. In particular, it is worth noting that the\npatch obtained a good speedup even when most Bitmapsets have only one\nword.\n\n1.1. Wrong truncation that we should not do (fixed in v2-0003)\n\nThe first bug is in bms_difference() and bms_del_members(). At the end\nof these functions, the original patch truncated the result Bitmapset\nwhen lastnonzero was -1. However, we must not do this when the result\nBitmapset is longer than the other. In such a case, the last word of\nthe result was still non-zero, so we cannot shorten nwords. I fixed\nthis bug in v2-0003.\n\n1.2. Missing truncation that we should do (fixed in v2-0004)\n\nThe other bug is in bms_del_member(). As seen from v2-0004-*.patch,\nthe original patch missed the necessary truncation. I also fixed this\nbug.\n\n2. Experiments\n\nI conducted experiments to evaluate the performance of David's patch\nwith bug fixes. In the experiments, I used two queries attached to\nthis email. The first query, Query A (query-a.sql), joins three tables\nand performs an aggregation. This is quite a simple query. The second\nquery, Query B (query-b.sql), is more complicated because it joins\neight tables. 
In both queries, every table is split into n partitions.\nI issued these queries with varying n and measured their planning\ntimes. The following tables and attached figure show the results.\n\nTable 1: Planning time and its speedup of Query A\n(n: the number of partitions of each table)\n(Speedup: higher is better)\n---------------------------------------------\n n | Master (ms) | Patched (ms) | Speedup\n---------------------------------------------\n 1 | 0.722 | 0.682 | 105.8%\n 2 | 0.779 | 0.774 | 100.6%\n 4 | 0.977 | 0.958 | 101.9%\n 8 | 1.286 | 1.287 | 99.9%\n 16 | 1.993 | 1.986 | 100.4%\n 32 | 3.967 | 3.900 | 101.7%\n 64 | 7.783 | 7.310 | 106.5%\n 128 | 23.369 | 19.722 | 118.5%\n 256 | 108.723 | 75.149 | 144.7%\n 384 | 265.576 | 167.354 | 158.7%\n 512 | 516.468 | 301.100 | 171.5%\n 640 | 883.167 | 494.960 | 178.4%\n 768 | 1423.839 | 755.201 | 188.5%\n 896 | 2195.935 | 1127.786 | 194.7%\n 1024 | 3041.131 | 1444.145 | 210.6%\n---------------------------------------------\n\nTable 2: Planning time and its speedup of Query B\n--------------------------------------------\n n | Master (ms) | Patched (ms) | Speedup\n--------------------------------------------\n 1 | 36.038 | 35.455 | 101.6%\n 2 | 34.831 | 34.178 | 101.9%\n 4 | 36.537 | 35.998 | 101.5%\n 8 | 41.234 | 40.333 | 102.2%\n 16 | 52.427 | 50.596 | 103.6%\n 32 | 87.064 | 80.013 | 108.8%\n 64 | 228.050 | 187.762 | 121.5%\n 128 | 886.140 | 645.731 | 137.2%\n 256 | 4212.709 | 2853.072 | 147.7%\n--------------------------------------------\n\nYou can quickly reproduce my experiments by the following commands.\n\n== Query A ==\npsql -f create-tables-a.sql\npsql -f query-a.sql\n=============\n\n== Query B ==\npsql -f create-tables-b.sql\npsql -f query-b.sql\n=============\n\nThe above results indicate that David's patch demonstrated outstanding\nperformance. The speedup reached 210.6% for Query A and 147.7% for\nQuery B. Even when n is small, the patch reduced planning time. 
The\nmain concern about this patch was overheads for Bitmapsets with only\none or two words. My experiments imply that such overheads are\nnon-existent or negligible because some performance improvements were\nobtained even for small sizes.\n\nThe results of my experiments strongly support the effectiveness of\nDavid's patch. I think this optimization is worth considering.\n\nI am looking forward to your comments.\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Mon, 12 Jun 2023 21:31:47 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Tue, 13 Jun 2023 at 00:32, Yuya Watari <watari.yuya@gmail.com> wrote:\n> In March, I reported that David's patch caused a degradation in\n> planning performance. I have investigated this issue further and found\n> some bugs in the patch. Due to these bugs, Bitmapset operations in the\n> original patch computed incorrect results. This incorrect computation\n> resulted in unexpected behavior, which I observed as performance\n> degradation. After fixing the bugs, David's patch showed significant\n> performance improvements. In particular, it is worth noting that the\n> patch obtained a good speedup even when most Bitmapsets have only one\n> word.\n\nThank you for looking at this again and finding and fixing the two\nbugs and running some benchmarks.\n\nI've incorporated fixes for the bugs in the attached patch. I didn't\nquite use the same approach as you did. I did the fix for 0003\nslightly differently and added two separate paths. We've no need to\ntrack the last non-zero word when 'a' has more words than 'b' since we\ncan't truncate any zero-words off for that case. Not having to do\nthat makes the do/while loop pretty tight.\n\nFor the fix in the 0004 patch, I think we can do what you did more\nsimply. I don't think there's any need to perform the loop to find\nthe last non-zero word. We're only deleting a member from a single\nword here, so we only need to check if that word is the last word and\nremove it if it's become zero. If it's not the last word then we\ncan't remove it as there must be some other non-zero word after it.\n\nI also made a small adjustment to bms_get_singleton_member() and\nbms_singleton_member() to have them Assert fail if result is < 0 after\nlooping over the set. This should no longer happen so I thought it\nwould make more compact code if that check was just removed. 
We'd\nlikely do better if we got reports of Assert failures here than, in\nthe case of bms_get_singleton_member, some code accidentally doing the\nwrong thing based on a corrupt Bitmapset.\n\nDavid",
"msg_date": "Tue, 13 Jun 2023 23:07:31 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Tue, Jun 13, 2023 at 8:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've incorporated fixes for the bugs in the attached patch. I didn't\n> quite use the same approach as you did. I did the fix for 0003\n> slightly differently and added two separate paths. We've no need to\n> track the last non-zero word when 'a' has more words than 'b' since we\n> can't truncate any zero-words off for that case. Not having to do\n> that makes the do/while loop pretty tight.\n\nI really appreciate your quick response and incorporating those fixes\ninto your patch. The fix for 0003 looks good to me. I believe your\nchange improves performance more.\n\n> For the fix in the 0004 patch, I think we can do what you did more\n> simply. I don't think there's any need to perform the loop to find\n> the last non-zero word. We're only deleting a member from a single\n> word here, so we only need to check if that word is the last word and\n> remove it if it's become zero. If it's not the last word then we\n> can't remove it as there must be some other non-zero word after it.\n\nIf my thinking is correct, the do-while loop I added is still\nnecessary. Consider the following code. The Assertion in this code\npasses in the master but fails in the new patch.\n\n=====\nBitmapset *x = bms_make_singleton(1000);\n\nx = bms_del_member(x, 1000);\nAssert(x == NULL);\n=====\n\nIn the code above, we get a new Bitmapset by bms_make_singleton(1000).\nThis Bitmapset has many words. Only the last word is non-zero, and all\nthe rest are zero. If we call bms_del_member(x, 1000) for the\nBitmapset, all words of the result will be zero, including the last\nword, so we must return NULL. However, the new patch truncates only\nthe last word, leading to an incorrect result. 
Therefore, we need to\nperform the loop to find the actual non-zero word after the deletion.\nOf course, I agree that if we are not modifying the last word, we\ndon't have to truncate anything, so we can omit the loop.\n\n> I also made a small adjustment to bms_get_singleton_member() and\n> bms_singleton_member() to have them Assert fail if result is < 0 after\n> looping over the set. This should no longer happen so I thought it\n> would make more compact code if that check was just removed. We'd\n> likely do better if we got reports of Assert failures here than, in\n> the case of bms_get_singleton_member, some code accidentally doing the\n> wrong thing based on a corrupt Bitmapset.\n\nI agree with your change. I think failing by Assertion is better than\na runtime error or unexpected behavior.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 15 Jun 2023 17:56:54 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
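The bms_make_singleton(1000) counter-example above can be reproduced with a simplified, hypothetical deletion routine (invented `MiniBms`/`mbs_del_member` names, fixed 32-word capacity, not the real bms_del_member), showing why the backward scan over trailing zero words is needed when the last word is touched:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t bitmapword;
#define BITS_PER_WORD 64

typedef struct MiniBms
{
	int			nwords;			/* invariant: words[nwords - 1] != 0 */
	bitmapword	words[32];
} MiniBms;

/*
 * Delete one member.  If the *last* word becomes zero we must scan
 * backwards: for a set holding only member 1000, deleting it zeroes word
 * 15 while words 0..14 were already zero, so the whole set collapses to
 * NULL.  Trimming just the final word would leave trailing zero words.
 */
static MiniBms *
mbs_del_member(MiniBms *a, int x)
{
	int			wordnum = x / BITS_PER_WORD;

	if (a == NULL || wordnum >= a->nwords)
		return a;				/* member cannot be present */
	a->words[wordnum] &= ~((bitmapword) 1 << (x % BITS_PER_WORD));

	/* Only a deletion in the last word can shrink the set. */
	if (wordnum == a->nwords - 1)
	{
		while (a->nwords > 0 && a->words[a->nwords - 1] == 0)
			a->nwords--;		/* backward scan past all trailing zeros */
		if (a->nwords == 0)
		{
			free(a);
			return NULL;		/* set became empty */
		}
	}
	return a;
}
```

Deleting a member from a non-final word never triggers the scan, since some later non-zero word must still terminate the set — which is the shortcut kept from the earlier version of the fix.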
{
"msg_contents": "On Thu, 15 Jun 2023 at 20:57, Yuya Watari <watari.yuya@gmail.com> wrote:\n>\n> On Tue, Jun 13, 2023 at 8:07 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > For the fix in the 0004 patch, I think we can do what you did more\n> > simply. I don't think there's any need to perform the loop to find\n> > the last non-zero word. We're only deleting a member from a single\n> > word here, so we only need to check if that word is the last word and\n> > remove it if it's become zero. If it's not the last word then we\n> > can't remove it as there must be some other non-zero word after it.\n>\n> If my thinking is correct, the do-while loop I added is still\n> necessary. Consider the following code. The Assertion in this code\n> passes in the master but fails in the new patch.\n>\n> =====\n> Bitmapset *x = bms_make_singleton(1000);\n>\n> x = bms_del_member(x, 1000);\n> Assert(x == NULL);\n> =====\n\nI'm not sure what I was thinking there. Yeah, you're right, we do\nneed to do the backwards loop over the set to trim off the trailing\nzero words.\n\nI've adjusted the attached patch to do that.\n\nDavid",
"msg_date": "Tue, 20 Jun 2023 16:16:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Tue, Jun 20, 2023 at 1:17 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've adjusted the attached patch to do that.\n\nThank you for updating the patch. The v4 patch looks good to me.\n\nI ran another experiment. In the experiment, I issued queries of the\nJoin Order Benchmark [1] and measured its planning times. The\nfollowing table shows the result. The v4 patch obtained outstanding\nperformance improvements in planning time. This result supports the\neffectiveness of the patch in real workloads.\n\nTable 1: Planning time and its speedup of Join Order Benchmark\n(n: the number of partitions of each table)\n(Speedup: higher is better)\n--------------------\n n | Speedup (v4)\n--------------------\n 2 | 102.4%\n 4 | 101.0%\n 8 | 101.6%\n 16 | 103.1%\n 32 | 107.5%\n 64 | 115.7%\n 128 | 142.9%\n 256 | 187.7%\n--------------------\n\n[1] https://github.com/winkyao/join-order-benchmark\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Thu, 22 Jun 2023 17:59:13 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Thu, 22 Jun 2023 at 20:59, Yuya Watari <watari.yuya@gmail.com> wrote:\n> Table 1: Planning time and its speedup of Join Order Benchmark\n> (n: the number of partitions of each table)\n> (Speedup: higher is better)\n\n> 64 | 115.7%\n> 128 | 142.9%\n> 256 | 187.7%\n\nThanks for benchmarking. It certainly looks like a win for larger\nsets. Would you be able to profile the 256 partition case to see\nwhere exactly master is so slow? (I'm surprised this patch improves\nperformance that much.)\n\nI think it's also important to check we don't slow anything down for\nmore normal-sized sets. The vast majority of sets will contain just a\nsingle word, so we should probably focus on making sure we're not\nslowing anything down for those.\n\nTo get the ball rolling on that I used the attached plan_times.patch\nso that the planner writes the number of elapsed nanosecond from\ncalling standard_planner(). Patching with this then running make\ninstallcheck kicks out about 35k log lines with times on it.\n\nI ran this on a Linux AMD 3990x machine and also an Apple M2 pro\nmachine. Taking the sum of the nanoseconds and converting into\nseconds, I see:\n\nAMD 3990x\nmaster: 1.384267931 seconds\npatched 1.339178764 seconds (3.37% faster)\n\nM2 pro:\nmaster: 0.58293 seconds\npatched: 0.581483 seconds (0.25% faster)\n\nSo it certainly does not look any slower. Perhaps a little faster with\nthe zen2 machine.\n\n(The m2 only seems to have microsecond resolution on the timer code\nwhereas the zen2 has nanosecond. I don't think this matters much as\nthe planner takes enough microseconds to plan even for simple queries)\n\nI've also attached the v4 patch again as I'll add this patch to the\ncommitfest and if I don't do that then the CFbot will pick up Ranier's\npatch instead of mine.\n\nDavid",
"msg_date": "Sat, 24 Jun 2023 16:15:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nThank you for your reply and for creating the patch to measure planning times.\n\nOn Sat, Jun 24, 2023 at 1:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Thanks for benchmarking. It certainly looks like a win for larger\n> sets. Would you be able to profile the 256 partition case to see\n> where exactly master is so slow? (I'm surprised this patch improves\n> performance that much.)\n\nI have profiled the 256 partition case of the Join Order Benchmark\nusing the perf command. The attached figures are its frame graphs.\n From these figures, we can see that bms_equal() function calls in blue\ncircles were heavy, and their performance improved after applying the\npatch.\n\nTo investigate this further, I have created a patch\n(profile-patch-for-*.txt) that profiles the bms_equal() function in\nmore detail. This patch\n(1) prints what we are comparing by bms_equal, and\n(2) measures the number of loops executed within bms_equal.\n(1) is for debugging. (2) intends to see the effect of the\noptimization to remove trailing zero words. The guarantee that the\nlast word is always non-zero enables us to immediately determine two\nBitmapsets having different nwords are not the same. When the patch\nworks effectively, the (2) will be much smaller than the total number\nof the function calls. 
I will show the results as follows.\n\n=== Master ===\n[bms_equal] Comparing (b 335) and (b 35)\n[bms_equal] Comparing (b 1085) and (b 61)\n[bms_equal] Comparing (b 1208) and (b 86)\n[bms_equal] Comparing (b 781) and (b 111)\n[bms_equal] Comparing (b 361) and (b 135)\n...\n[bms_equal] Comparing (b 668) and (b 1773)\n[bms_equal] Comparing (b 651) and (b 1781)\n[bms_equal] Comparing (b 1191) and (b 1789)\n[bms_equal] Comparing (b 771) and (b 1797)\n[bms_equal] Total 3950748839 calls, 3944762037 loops executed\n==============\n\n=== Patched ===\n[bms_equal] Comparing (b 335) and (b 35)\n[bms_equal] Comparing (b 1085) and (b 61)\n[bms_equal] Comparing (b 1208) and (b 86)\n[bms_equal] Comparing (b 781) and (b 111)\n[bms_equal] Comparing (b 361) and (b 135)\n...\n[bms_equal] Comparing (b 668) and (b 1773)\n[bms_equal] Comparing (b 651) and (b 1781)\n[bms_equal] Comparing (b 1191) and (b 1789)\n[bms_equal] Comparing (b 771) and (b 1797)\n[bms_equal] Total 3950748839 calls, 200215204 loops executed\n===============\n\nThe above results reveal that the bms_equal() in this workload\ncompared two singleton Bitmapsets in most cases, and their members\nwere more than 64 apart. Therefore, we could have omitted 94.9% of\n3,950,748,839 loops with the patch, whereas the percentage was only\n0.2% in the master. This is why we obtained a significant performance\nimprovement and is evidence that the optimization of this patch worked\nvery well.\n\nThe attached figures show these bms_equal() function calls exist in\nmake_pathkey_from_sortinfo(). The actual location is\nget_eclass_for_sort_expr(). 
I quote the code below.\n\n=====\nEquivalenceClass *\nget_eclass_for_sort_expr(PlannerInfo *root,\n Expr *expr,\n List *opfamilies,\n Oid opcintype,\n Oid collation,\n Index sortref,\n Relids rel,\n bool create_it)\n{\n ...\n\n foreach(lc1, root->eq_classes)\n {\n EquivalenceClass *cur_ec = (EquivalenceClass *) lfirst(lc1);\n ...\n\n foreach(lc2, cur_ec->ec_members)\n {\n EquivalenceMember *cur_em = (EquivalenceMember *) lfirst(lc2);\n\n /*\n * Ignore child members unless they match the request.\n */\n if (cur_em->em_is_child &&\n !bms_equal(cur_em->em_relids, rel)) // <--- Here\n continue;\n\n ...\n }\n }\n ...\n}\n=====\n\nThe bms_equal() is used to find an EquivalenceMember satisfying some\nconditions. The above heavy loop was the bottleneck in the master.\nThis bottleneck is what I am trying to optimize in another thread [1]\nwith you. I hope the optimization in this thread will help [1]'s speed\nup. (Looking at CFbot, I noticed that [1]'s patch does not compile due\nto some compilation errors. I will send a fixed version soon.)\n\n> I think it's also important to check we don't slow anything down for\n> more normal-sized sets. The vast majority of sets will contain just a\n> single word, so we should probably focus on making sure we're not\n> slowing anything down for those.\n\nI agree with you and thank you for sharing the results. I ran\ninstallcheck with your patch. The result is as follows. The speedup\nwas 0.33%. At least in my environment, I did not observe any\nregression with this test. So, the patch looks very good.\n\nMaster: 2.559648 seconds\nPatched: 2.551116 seconds (0.33% faster)\n\n[1] https://www.postgresql.org/message-id/CAJ2pMkY10J_PA2jpH5M-VoOo6BvJnTOO_-t_znu_pOaP0q10pA@mail.gmail.com\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Tue, 27 Jun 2023 18:11:14 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Thank you for running the profiles.\n\nOn Tue, 27 Jun 2023 at 21:11, Yuya Watari <watari.yuya@gmail.com> wrote:\n> On Sat, Jun 24, 2023 at 1:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I think it's also important to check we don't slow anything down for\n> > more normal-sized sets. The vast majority of sets will contain just a\n> > single word, so we should probably focus on making sure we're not\n> > slowing anything down for those.\n>\n> I agree with you and thank you for sharing the results. I ran\n> installcheck with your patch. The result is as follows. The speedup\n> was 0.33%. At least in my environment, I did not observe any\n> regression with this test. So, the patch looks very good.\n>\n> Master: 2.559648 seconds\n> Patched: 2.551116 seconds (0.33% faster)\n\nI wondered if the common case could be made slightly faster by\nchecking the 0th word before checking the word count before going onto\ncheck the remaining words. For bms_equal(), that's something like:\n\nif (a->words[0] != b->words[0] || a->nwords != b->nwords)\n return false;\n\n/* check all the remaining words match */\nfor (int i = 1; i < a->nwords; i++) ...\n\nI wrote the patch and tried it out, but it seems slightly slower than\nthe v4 patch.\n\nLinux with AMD 3990x, again using the patch from [1] with make installcheck\n\nmaster: 1.41720145 seconds\nv4: 1.392969606 seconds (1.74% faster than master)\nv4 with 0th word check: 1.404199748 seconds (0.93% faster than master)\n\nI've attached a delta patch of what I used to test. Since it's not\nany faster, I don't think it's worth doing. It'll also produce\nslightly more compiled code.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvo68m_0JuTHnEHFNsdSJEb2uPphK6BWXStj93u_QEi2rg@mail.gmail.com",
"msg_date": "Wed, 28 Jun 2023 22:58:08 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nThank you for your reply and for creating a new patch.\n\nOn Wed, Jun 28, 2023 at 7:58 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Linux with AMD 3990x, again using the patch from [1] with make installcheck\n>\n> master: 1.41720145 seconds\n> v4: 1.392969606 seconds (1.74% faster than master)\n> v4 with 0th word check: 1.404199748 seconds (0.93% faster than master)\n\nI have tested these versions with installcheck. Since the planning\ntimes obtained by installcheck vary each time, it is important to run\nit repeatedly and examine its distribution. I ran installcheck 100\ntimes for each version. The following tables and the attached figure\nshow the results. From these results, we can conclude that the v4\npatch has no regression in the installcheck test. It seems to be\nslightly (0.31-0.38%) faster than the master. The difference between\nv4 and v4 with 0th word check is not so clear, but v4 may be faster.\n\nTable 1: Total Planning Time During installcheck (seconds)\n---------------------------------------------------------\n | Mean | Median | Stddev\n---------------------------------------------------------\n Master | 2.520865 | 2.521189 | 0.017651\n v4 | 2.511447 | 2.513369 | 0.018299\n v4 with 0th word check | 2.513393 | 2.515652 | 0.018391\n---------------------------------------------------------\n\nTable 2: Speedup (higher is better)\n------------------------------------------------------------\n | Speedup (Mean) | Speedup (Median)\n------------------------------------------------------------\n v4 | 0.38% | 0.31%\n v4 with 0th word check | 0.30% | 0.22%\n------------------------------------------------------------\n\n-- \nBest regards,\nYuya Watari",
"msg_date": "Fri, 30 Jun 2023 11:10:25 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Fri, 30 Jun 2023 at 14:11, Yuya Watari <watari.yuya@gmail.com> wrote:\n> I have tested these versions with installcheck. Since the planning\n> times obtained by installcheck vary each time, it is important to run\n> it repeatedly and examine its distribution. I ran installcheck 100\n> times for each version. The following tables and the attached figure\n> show the results. From these results, we can conclude that the v4\n> patch has no regression in the installcheck test. It seems to be\n> slightly (0.31-0.38%) faster than the master. The difference between\n> v4 and v4 with 0th word check is not so clear, but v4 may be faster.\n\nI did the same on the AMD 3990x machine and an Apple M2 Pro machine.\nOn the M2 over the 100 runs v4 came out 1.18% faster and the 3990x was\n1.25% faster than master. I've plotted the results in the attached\ngraphs.\n\nLooking over the patch again, the only thing I'm tempted into changing\nis to add Asserts like: Assert(a == NULL || a->words[a->nword - 1] !=\n0) to each function just as extra reassurance that nothing\naccidentally leaves trailing empty words.\n\nIf nobody else wants to take a look, then I plan to push the v4 + the\nasserts in the next day or so.\n\nDavid",
"msg_date": "Mon, 3 Jul 2023 09:27:25 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 09:27, David Rowley <dgrowleyml@gmail.com> wrote:\n> If nobody else wants to take a look, then I plan to push the v4 + the\n> asserts in the next day or so.\n\nHere's the patch which includes those Asserts. I also made some small\ntweaks to a comment.\n\nI understand that Tom thought that the Asserts were a step too far in\n[1], but per the bugs found in [2], I think having them is worthwhile.\n\nIn the attached, I only added Asserts to the locations where the code\nrelies on there being no trailing zero words. I didn't include them\nin places like bms_copy() since nothing there would do the wrong thing\nif there were trailing zero words.\n\nDavid\n\n[1] https://postgr.es/m/2686153.1677881312@sss.pgh.pa.us\n[2] https://postgr.es/m/CAJ2pMkYcKHFBD_OMUSVyhYSQU0-j9T6NZ0pL6pwbZsUCohWc7Q@mail.gmail.com",
"msg_date": "Mon, 3 Jul 2023 12:09:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Mon, Jul 3, 2023 at 9:10 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> Here's the patch which includes those Asserts. I also made some small\n> tweaks to a comment.\n\nThank you for your reply. I am +1 to your change. I think these\nassertions will help someone who changes the Bitmapset implementations\nin the future.\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Mon, 3 Jul 2023 15:10:16 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "On Mon, 3 Jul 2023 at 18:10, Yuya Watari <watari.yuya@gmail.com> wrote:\n> Thank you for your reply. I am +1 to your change. I think these\n> assertions will help someone who changes the Bitmapset implementations\n> in the future.\n\nI've now pushed the patch.\n\nThanks for all your reviews and detailed benchmarks.\n\nDavid\n\n\n",
"msg_date": "Tue, 4 Jul 2023 12:36:38 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
},
{
"msg_contents": "Hello,\n\nOn Tue, Jul 4, 2023 at 9:36 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've now pushed the patch.\n\nThanks for the commit!\n\n-- \nBest regards,\nYuya Watari\n\n\n",
"msg_date": "Tue, 4 Jul 2023 20:24:08 +0900",
"msg_from": "Yuya Watari <watari.yuya@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making empty Bitmapsets always be NULL"
}
] |
[
{
"msg_contents": "I cannot get the last email to show up for the commitfest.\nThis is version 2 of the original patch. [1]\nThanks Jim!\n\n[1]\nhttps://postgr.es/m/CACLU5mSRwHr_8z%3DenMj-nXF1tmC7%2BJn5heZQNiKuLyxYUtL2fg%40mail.gmail.com\n\nRegards Kirk.",
"msg_date": "Tue, 28 Feb 2023 19:59:48 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Proposal: %T Prompt parameter for psql for current time (like Oracle\n has)"
},
{
"msg_contents": "On 01.03.23 01:59, Kirk Wolak wrote:\n> I cannot get the last email to show up for the commitfest.\n> This is version 2 of the original patch. [1]\n> Thanks Jim!\n>\n> [1]https://postgr.es/m/CACLU5mSRwHr_8z%3DenMj-nXF1tmC7%2BJn5heZQNiKuLyxYUtL2fg%40mail.gmail.com\n>\n> Regards Kirk.\n\nThe patch didn't pass the SanityCheck:\n\nhttps://cirrus-ci.com/task/5445242183221248?logs=build#L1337\n\nmissing a header perhaps?\n\n#include \"time.h\"\n\nBest, Jim\n\n\n",
"msg_date": "Wed, 1 Mar 2023 10:41:35 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 4:41 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> On 01.03.23 01:59, Kirk Wolak wrote:\n> > I cannot get the last email to show up for the commitfest.\n> > This is version 2 of the original patch. [1]\n> > Thanks Jim!\n> >\n> > [1]\n> https://postgr.es/m/CACLU5mSRwHr_8z%3DenMj-nXF1tmC7%2BJn5heZQNiKuLyxYUtL2fg%40mail.gmail.com\n> >\n> > Regards Kirk.\n>\n> The patch didn't pass the SanityCheck:\n>\n> https://cirrus-ci.com/task/5445242183221248?logs=build#L1337\n>\n> missing a header perhaps?\n>\n> #include \"time.h\"\n>\n> Best, Jim\n>\n\nThanks, corrected, and confirmed Unix line endings.\nFWIW, the simplest way to test it is with this command (I usually get it\nwrong on the first guess)\n\n\\set PROMPT1 %T ' ' :PROMPT1\n\nKirk",
"msg_date": "Wed, 1 Mar 2023 11:13:46 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On 01.03.23 17:13, Kirk Wolak wrote:\n> Thanks, corrected, and confirmed Unix line endings.\n> FWIW, the simplest way to test it is with this command (I usually get \n> it wrong on the first guess)\n>\n> \\set PROMPT1 %T ' ' :PROMPT1\n>\n> Kirk\n\nNice. The patch applies clean and the cfbots seem much happier now - all \npassed.\n\n17:23:19 postgres=# SELECT now();\n now\n-------------------------------\n 2023-03-01 17:23:19.807339+01\n(1 row)\n\nThe docs render also just fine. I'm now wondering if HH24:MI:SS should \nbe formatted with, e.g. using <literal>\n\n\"The current time on the client in <literal>HH24:MI:SS</literal> format.\"\n\nBut that I'll leave to the docs experts to judge :)\n\nBest, Jim\n\n\n\n\n\n\nOn 01.03.23\n 17:13, Kirk Wolak wrote:\n\n\n\nThanks, corrected, and confirmed\n Unix line endings.\nFWIW, the simplest way to test it\n is with this command (I usually get it wrong on the first\n guess)\n\n\n\\set PROMPT1 %T ' ' :PROMPT1\n\n\nKirk \n\n\n\n\nNice. The patch applies clean and the\n cfbots seem much happier now - all passed.\n17:23:19 postgres=# SELECT now();\n now \n -------------------------------\n 2023-03-01 17:23:19.807339+01\n (1 row)\n\nThe docs render also just fine. I'm now\n wondering if HH24:MI:SS should be formatted with, e.g. using\n <literal>\n\"The current time on the client in\n <literal>HH24:MI:SS</literal> format.\"\nBut that I'll leave to the docs experts to\n judge :)\n\nBest, Jim",
"msg_date": "Wed, 1 Mar 2023 17:55:48 +0100",
"msg_from": "Jim Jones <jim.jones@uni-muenster.de>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 11:55 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\n> On 01.03.23 17:13, Kirk Wolak wrote:\n>\n> Thanks, corrected, and confirmed Unix line endings.\n> FWIW, the simplest way to test it is with this command (I usually get it\n> wrong on the first guess)\n>\n> \\set PROMPT1 %T ' ' :PROMPT1\n>\n> Kirk\n>\n> Nice. The patch applies clean and the cfbots seem much happier now - all\n> passed.\n>\n> 17:23:19 postgres=# SELECT now();\n> now\n> -------------------------------\n> 2023-03-01 17:23:19.807339+01\n> (1 row)\n>\n> The docs render also just fine. I'm now wondering if HH24:MI:SS should be\n> formatted with, e.g. using <literal>\n>\n> \"The current time on the client in <literal>HH24:MI:SS</literal> format.\"\n>\n> But that I'll leave to the docs experts to judge :)\n>\n> Best, Jim\n>\nThanks Jim.\n\nI hope one of the Docs experts chime in. It's easy enough to fix. Just\nnot sure if it's required.\nWhat a great learning experience!\n\nOn Wed, Mar 1, 2023 at 11:55 AM Jim Jones <jim.jones@uni-muenster.de> wrote:\n\nOn 01.03.23\n 17:13, Kirk Wolak wrote:\n\n\n\nThanks, corrected, and confirmed\n Unix line endings.\nFWIW, the simplest way to test it\n is with this command (I usually get it wrong on the first\n guess)\n\n\n\\set PROMPT1 %T ' ' :PROMPT1\n\n\nKirk \n\n\n\n\nNice. The patch applies clean and the\n cfbots seem much happier now - all passed.\n17:23:19 postgres=# SELECT now();\n now \n -------------------------------\n 2023-03-01 17:23:19.807339+01\n (1 row)\n\nThe docs render also just fine. I'm now\n wondering if HH24:MI:SS should be formatted with, e.g. using\n <literal>\n\"The current time on the client in\n <literal>HH24:MI:SS</literal> format.\"\nBut that I'll leave to the docs experts to\n judge :)\n\nBest, JimThanks Jim. I hope one of the Docs experts chime in. It's easy enough to fix. Just not sure if it's required.What a great learning experience!",
"msg_date": "Wed, 1 Mar 2023 16:30:45 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Wed, 2023-03-01 at 11:13 -0500, Kirk Wolak wrote:\n> Thanks, corrected, and confirmed Unix line endings.\n\nThe patch builds fine and works as intended.\n\nI leave it to the committers to decide whether the patch is worth the\neffort or not, given that you can get a similar effect with %`date`.\nIt adds some value by being simpler and uniform across all platforms.\n\nI'll mark the patch as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 02 Mar 2023 15:56:39 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 9:56 AM Laurenz Albe <laurenz.albe@cybertec.at>\nwrote:\n\n> On Wed, 2023-03-01 at 11:13 -0500, Kirk Wolak wrote:\n> > Thanks, corrected, and confirmed Unix line endings.\n>\n> The patch builds fine and works as intended.\n>\n> I leave it to the committers to decide whether the patch is worth the\n> effort or not, given that you can get a similar effect with %`date`.\n> It adds some value by being simpler and uniform across all platforms.\n>\n> I'll mark the patch as \"ready for committer\".\n>\n> Yours,\n> Laurenz Albe\n>\n\nThanks Laurenz.\n\nTo be clear, I use windows AND linux, and I share my file between them.\n\nin linux: `date +%H:%M:%S` is used\nin windows: `ECHO %time%`\n\nso, I wrote a ts.cmd and ts.sh so I could share one prompt: `ts`\nbut now every time I connect a new account to this file, I have to go\nfind/copy my ts file.\nSame when I share it with other developers.\n\nThis was the pain that started the quest.\nThanks to everyone for their support!\n\nOn Thu, Mar 2, 2023 at 9:56 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Wed, 2023-03-01 at 11:13 -0500, Kirk Wolak wrote:\n> Thanks, corrected, and confirmed Unix line endings.\n\nThe patch builds fine and works as intended.\n\nI leave it to the committers to decide whether the patch is worth the\neffort or not, given that you can get a similar effect with %`date`.\nIt adds some value by being simpler and uniform across all platforms.\n\nI'll mark the patch as \"ready for committer\".\n\nYours,\nLaurenz AlbeThanks Laurenz.To be clear, I use windows AND linux, and I share my file between them.in linux: `date +%H:%M:%S` is usedin windows: `ECHO %time%`so, I wrote a ts.cmd and ts.sh so I could share one prompt: `ts`but now every time I connect a new account to this file, I have to go find/copy my ts file.Same when I share it with other developers.This was the pain that started the quest.Thanks to everyone for their support!",
"msg_date": "Thu, 2 Mar 2023 10:40:54 -0500",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: %T Prompt parameter for psql for current time (like\n Oracle has)"
}
] |
[
{
"msg_contents": "Hello,\n\nWe are seeing an interesting STANDBY behavior, that’s happening once in 3-4 days.\n\nThe standby suddenly disconnects from the primary, and it throws the error “LOG: invalid record length at <LSN>: wanted 24, got0”.\n\nAnd then it tries to restore the WAL file from the archive. Due to low write activity on primary, the WAL file will be switched and archived only after 1 hr.\n\nSo, it stuck in a loop of switching the WAL sources from STREAM and ARCHIVE without replicating the primary.\n\nDue to this there will be write outage as the standby is synchronous standby.\n\nWe are using “wal_sync_method” as “fsync” assuming WAL file not getting flushed correctly.\n\nBut this is happening even after making it as “fsync” instead of “fdatasync”.\n\nRestarting the STANDBY sometimes fixes this problem, but detecting this automatically is a big problem as the postgres standby process will be still running fine, but WAL RECEIVER process is up and down continuously due to switching of WAL sources.\n\n\nHow can we fix this ? Any suggestions regarding this will be appreciated.\n\n\nPostgres Version: 13.6\nOS: RHEL Linux\n\n\nThank you,\n\n\nBest,\nHarinath.\n\n",
"msg_date": "Tue, 28 Feb 2023 21:21:12 -0800",
"msg_from": "Harinath Kanchu <hkanchu@apple.com>",
"msg_from_op": true,
"msg_subject": "LOG: invalid record length at <LSN> : wanted 24, got 0"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 10:51 AM Harinath Kanchu <hkanchu@apple.com> wrote:\n>\n> Hello,\n>\n> We are seeing an interesting STANDBY behavior, that’s happening once in 3-4 days.\n>\n> The standby suddenly disconnects from the primary, and it throws the error “LOG: invalid record length at <LSN>: wanted 24, got0”.\n\nFirstly, this isn't an error per se, especially for a standby as it\ncan get/retry the same WAL record from other sources. It's a bit hard\nto say anything further just by looking at this LOG message, one needs\nto look at what's happening around the same time. You mentioned that\nthe connection to primary was lost, so you need to dive deep as to why\nit got lost. If the connection was lost half-way through fetching the\nWAL record, the standby may emit such a LOG message.\n\nSecondly, you definitely need to understand why the connection to\nprimary keeps getting lost - network disruption, parameter changes or\nprimary going down, standby going down etc.?\n\n> And then it tries to restore the WAL file from the archive. 
Due to low write activity on primary, the WAL file will be switched and archived only after 1 hr.\n>\n> So, it stuck in a loop of switching the WAL sources from STREAM and ARCHIVE without replicating the primary.\n>\n> Due to this there will be write outage as the standby is synchronous standby.\n\nI understand this problem and there's a proposed patch to help with\nthis - https://www.postgresql.org/message-id/CALj2ACVryN_PdFmQkbhga1VeW10VgQ4Lv9JXO=3nJkvZT8qgfA@mail.gmail.com.\n\nIt basically allows one to set a timeout as to how much duration the\nstandby can restore from archive before switching to stream.\nTherefore, in your case, the standby doesn't have to wait for 1hr to\nconnect to primary, but it can connect before that.\n\n> We are using “wal_sync_method” as “fsync” assuming WAL file not getting flushed correctly.\n>\n> But this is happening even after making it as “fsync” instead of “fdatasync”.\n\nI don't think that's a problem, unless wal_sync_method isn't changed\nto something else in between.\n\n> Restarting the STANDBY sometimes fixes this problem, but detecting this automatically is a big problem as the postgres standby process will be still running fine, but WAL RECEIVER process is up and down continuously due to switching of WAL sources.\n\nYes, the standby after failure to connect to primary, it switches to\narchive and stays there until it exhausts all the WAL from the archive\nand then switches to stream. You can monitor the replication slot of\nthe standby on the primary, if it's inactive, then one needs to jump\nin. As mentioned above, there's an in-progress feature that helps in\nthese cases.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 12:05:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: LOG: invalid record length at <LSN> : wanted 24, got 0"
},
{
"msg_contents": "Thanks Bharath for your response,\n\n> You mentioned that\n> the connection to primary was lost, so you need to dive deep as to why\n> it got lost. If the connection was lost half-way through fetching the\n> WAL record, the standby may emit such a LOG message.\n\nThe connection was lost due to bad network. Currently we are okay with bad network, because applications in general has to always expect bad network in their design.\n\nTo simulate the bad network, we manually killed the wal-sender process in primary multiple times, this should be same as primary unable to send messages to standby due to bad network. And in these experiments, the standby is able to join the primary after checking WAL files in archive, within few seconds.\n\nBut, when the standby gets disconnected due to bad network, the standby is unable to join back, so we wanted to understand why this is happening.\n\n\n> I understand this problem and there's a proposed patch to help with\n> this - https://www.postgresql.org/message-id/CALj2ACVryN_PdFmQkbhga1VeW10VgQ4Lv9JXO=3nJkvZT8qgfA@mail.gmail.com.\n> \n> It basically allows one to set a timeout as to how much duration the\n> standby can restore from archive before switching to stream.\n> Therefore, in your case, the standby doesn't have to wait for 1hr to\n> connect to primary, but it can connect before that.\n\nGood to know about this patch, because we tried to do something similar to this.\n\nBut this patch will not solve our problem, because here the hot-standby is NOT downloading WAL files from the archive for very long time and wasting time, but, the standby simply switching the sources from STREAM to ARCHIVE in a tight loop doing nothing.\n\nIn fact, the standby is trying to connect to the primary for streaming back after it fails to pull the latest WAL file from archive. 
But trying to stream from the primary fails again immediately.\n\n\nTo explain this situation better, I am adding more logs.\n\nNote: the logs mentioned here includes the custom logging we enable for debugging purposes.\n\n\n\n2023-02-09 19:52:30.909 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Starting the state machine again from starting, changing current source to XLOG_FROM_ARCHIVE\n2023-02-09 19:52:30.909 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: switched WAL source from stream to archive after failure\nERROR: 2023/02/09 19:52:31.616383 Archive '00000006000000020000001C' does not exist.\n2023-02-09 19:52:31.618 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Successfully read the WAL file using XLogFileReadAnyTLI from archive or existing pg_wal, returning true and exit.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: last source archive failed, and now switching to new source.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Moving to XLOG_FROM_STREAM state and start walreceiver if necessary\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: switched WAL source from archive to stream after failure with WalStreamingPreferred false\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: changing the current file timeline to new tli.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: we have data from wal receiver.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: after streaming file, we dont have WAL file opened, so read now.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Due to WAL receiver, we have WAL file present and opened and hence returning true\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: invalid record length at 2/1C0324C8: wanted 24, got 0\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: last source stream failed, and now switching to new source.\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 
63e4244f.54]: LOG: Failure while streaming, we might have found invalid record in WAL streamed from master\n2023-02-09 19:52:31.619 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Stopping WAL receiver as we saw serious failure while streaming\nINFO: 2023/02/09 19:52:31.796489 WAL: 00000007.history will be downloaded from archive\nERROR: 2023/02/09 19:52:32.330491 Archive '00000007.history' does not exist.\n2023-02-09 19:52:32.333 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: We were requested to recover to latest timeline, but rescan is NOT needed.\n2023-02-09 19:52:32.333 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: We decided to sleep before retry the state-machine\n2023-02-09 19:52:35.910 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Starting the state machine again from starting, changing current source to XLOG_FROM_ARCHIVE\n2023-02-09 19:52:35.910 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: switched WAL source from stream to archive after failure\nERROR: 2023/02/09 19:52:36.607613 Archive '00000006000000020000001C' does not exist.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Successfully read the WAL file using XLogFileReadAnyTLI from archive or existing pg_wal, returning true and exit.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: last source archive failed, and now switching to new source.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Moving to XLOG_FROM_STREAM state and start walreceiver if necessary\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: switched WAL source from archive to stream after failure with WalStreamingPreferred false\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: changing the current file timeline to new tli.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: we have data from wal receiver.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: after streaming file, we dont have WAL file opened, so read 
now.\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: Due to WAL receiver, we have WAL file present and opened and hence returning true\n2023-02-09 19:52:36.610 GMT [ ] () [0 | 00000 | 63e4244f.54]: LOG: invalid record length at 2/1C0324C8: wanted 24, got 0\n\n\n\nSo, we are wondering, why killing of wal-sender in primary abruptly seems okay but a few network failures are not okay for the standby to recover.\n\nThank you.\n\nBest,\nHarinath",
"msg_date": "Wed, 01 Mar 2023 00:06:45 -0800",
"msg_from": "Harinath Kanchu <hkanchu@apple.com>",
"msg_from_op": true,
"msg_subject": "Re: LOG: invalid record length at <LSN> : wanted 24, got 0"
}
] |
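For readers following the log trace in the thread above, here is a minimal standalone sketch (hypothetical; not code from PostgreSQL's recovery module) of the source-switching behaviour those logs show: the standby alternates between the archive and streaming, and only pauses once both sources have failed in a row, which is why a persistent failure looks like a tight archive -> stream -> archive cycle with a sleep at the end of each round. The function and variable names are invented for illustration.

```c
/* Hypothetical sketch of the WAL-source retry round described in the
 * logs above.  Not PostgreSQL source; names are invented. */
#include <assert.h>

typedef enum
{
    XLOG_FROM_ARCHIVE,
    XLOG_FROM_STREAM
} WalSource;

/*
 * Simulate one round of the loop.  archive_ok/stream_ok say whether each
 * source can supply the next record.  Returns the number of source
 * switches made in the round and reports the source the round ends on.
 */
static int
retry_round(int archive_ok, int stream_ok, WalSource *final_source)
{
    WalSource src = XLOG_FROM_ARCHIVE;  /* each round starts from the archive */
    int switches = 0;

    if (!archive_ok)
    {
        /* "switched WAL source from archive to stream after failure" */
        src = XLOG_FROM_STREAM;
        switches++;
        if (!stream_ok)
        {
            /* both sources failed: sleep, then restart from the archive,
             * matching "We decided to sleep before retry the state-machine" */
            src = XLOG_FROM_ARCHIVE;
            switches++;
        }
    }
    *final_source = src;
    return switches;
}
```

In this simplified model, a dead archive plus an unreachable primary produces exactly the two switches per round seen in the logs, with progress only resuming once either source recovers.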
[
{
"msg_contents": "Hi,\n\nIn a recent discussion [1], Michael Paquier asked if we can combine\npg_walinspect till_end_of_wal functions with other functions\npg_get_wal_records_info and pg_get_wal_stats. The code currently looks\nmuch duplicated and the number of functions that pg_walinspect exposes\nto the users is bloated. The point was that the till_end_of_wal\nfunctions determine the end LSN and everything else that they do is\nthe same as their counterpart functions. Well, the idea then was to\nkeep things simple, not clutter the APIs, have better and consistent\nuser-inputted end_lsn validations at the cost of usability and code\nredundancy. However, now I tend to agree with the feedback received.\n\nI'm attaching a patch doing the $subject with the following behavior:\n1. If start_lsn is NULL, error out/return NULL.\n2. If end_lsn isn't specified, default to NULL, then determine the end_lsn.\n3. If end_lsn is specified as NULL, then determine the end_lsn.\n4. If end_lsn is specified as non-NULL, then determine if it is\ngreater than start_lsn if yes, go ahead do the job, otherwise error\nout.\n\nAnother idea is to convert till_end_of_wal flavors to SQL-only\nfunctions and remove the c code from pg_walinspect.c. However, I\nprefer $subject and completely remove till_end_of_wal flavors for\nbetter usability in the long term.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACV-WBN%3DEUgUPyYOGitp%2Brn163vMnQd%3DHcWrnKt-uqFYFA%40mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 13:00:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Combine pg_walinspect till_end_of_wal functions with others"
},
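The four rules proposed in the message above can be sketched as a small standalone helper. This is a hypothetical illustration, not the patch itself: `resolve_end_lsn` and the use of 0 to stand in for both SQL NULL and `InvalidXLogRecPtr` are simplifications made for the example.

```c
/* Hypothetical sketch of the proposed end_lsn handling rules; 0 stands
 * in for both SQL NULL and InvalidXLogRecPtr.  Returns the LSN to stop
 * at, or 0 to signal that the caller should raise an error. */
#include <assert.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;
#define InvalidXLogRecPtr ((XLogRecPtr) 0)

static XLogRecPtr
resolve_end_lsn(XLogRecPtr start_lsn, XLogRecPtr end_lsn,
                XLogRecPtr current_lsn)
{
    /* Rule 1: a NULL/invalid start LSN is an error. */
    if (start_lsn == InvalidXLogRecPtr)
        return InvalidXLogRecPtr;

    /* Rules 2 and 3: no end LSN given, so scan up to the current LSN
     * (flush LSN on a primary, replayed LSN on a standby). */
    if (end_lsn == InvalidXLogRecPtr)
        return current_lsn;

    /* Rule 4: an explicit end LSN must lie beyond the start LSN. */
    if (end_lsn <= start_lsn)
        return InvalidXLogRecPtr;

    return end_lsn;
}
```

The point of folding the till_end_of_wal behaviour in this way is that the same code path serves both the explicit-range and the open-ended case, which is what makes the separate functions redundant.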
{
"msg_contents": "On Wed, Mar 1, 2023 at 1:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In a recent discussion [1], Michael Paquier asked if we can combine\n> pg_walinspect till_end_of_wal functions with other functions\n> pg_get_wal_records_info and pg_get_wal_stats. The code currently looks\n> much duplicated and the number of functions that pg_walinspect exposes\n> to the users is bloated. The point was that the till_end_of_wal\n> functions determine the end LSN and everything else that they do is\n> the same as their counterpart functions. Well, the idea then was to\n> keep things simple, not clutter the APIs, have better and consistent\n> user-inputted end_lsn validations at the cost of usability and code\n> redundancy. However, now I tend to agree with the feedback received.\n>\n> I'm attaching a patch doing the $subject with the following behavior:\n> 1. If start_lsn is NULL, error out/return NULL.\n> 2. If end_lsn isn't specified, default to NULL, then determine the end_lsn.\n> 3. If end_lsn is specified as NULL, then determine the end_lsn.\n> 4. If end_lsn is specified as non-NULL, then determine if it is\n> greater than start_lsn if yes, go ahead do the job, otherwise error\n> out.\n>\n> Another idea is to convert till_end_of_wal flavors to SQL-only\n> functions and remove the c code from pg_walinspect.c. However, I\n> prefer $subject and completely remove till_end_of_wal flavors for\n> better usability in the long term.\n>\n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACV-WBN%3DEUgUPyYOGitp%2Brn163vMnQd%3DHcWrnKt-uqFYFA%40mail.gmail.com\n\nNeeded a rebase due to 019f8624664dbf1e25e2bd721c7e99822812d109.\nAttaching v2 patch. Sorry for the noise.\n\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 20:30:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 08:30:00PM +0530, Bharath Rupireddy wrote:\n> On Wed, Mar 1, 2023 at 1:00 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > In a recent discussion [1], Michael Paquier asked if we can combine\n> > pg_walinspect till_end_of_wal functions with other functions\n> > pg_get_wal_records_info and pg_get_wal_stats. The code currently looks\n> > much duplicated and the number of functions that pg_walinspect exposes\n> > to the users is bloated. The point was that the till_end_of_wal\n> > functions determine the end LSN and everything else that they do is\n> > the same as their counterpart functions. Well, the idea then was to\n> > keep things simple, not clutter the APIs, have better and consistent\n> > user-inputted end_lsn validations at the cost of usability and code\n> > redundancy. However, now I tend to agree with the feedback received.\n\n+1, especially since I really don't like the use of \"till\" in the function\nnames.\n\n> > I'm attaching a patch doing the $subject with the following behavior:\n> > 1. If start_lsn is NULL, error out/return NULL.\n\nMaybe naive and unrelated question, but is that really helpful? If for some\nreason I want to see information about *all available WAL*, I have to manually\ndig for a suitable LSN. The same action with pg_waldump is easier as I just\nneed to use the oldest available WAL that's present on disk.\n\n> > Another idea is to convert till_end_of_wal flavors to SQL-only\n> > functions and remove the c code from pg_walinspect.c. 
However, I\n> > prefer $subject and completely remove till_end_of_wal flavors for\n> > better usability in the long term.\n\nI agree that using default arguments is a way better API.\n\nNitpicking:\n\nMaybe we could group the kept unused exported C function at the end of the\nfile?\n\nAlso:\n\n/*\n- * Get info and data of all WAL records from start LSN till end of WAL.\n+ * NB: This function does nothing and stays here for backward compatibility.\n+ * Without it, the extension fails to install.\n *\n- * This function emits an error if a future start i.e. WAL LSN the database\n- * system doesn't know about is specified.\n+ * Try using pg_get_wal_records_info() for the same till_end_of_wal\n+ * functionaility.\n */\n Datum\n pg_get_wal_records_info_till_end_of_wal(PG_FUNCTION_ARGS)\n {\n- XLogRecPtr start_lsn;\n- XLogRecPtr end_lsn = InvalidXLogRecPtr;\n-\n- start_lsn = PG_GETARG_LSN(0);\n-\n- end_lsn = ValidateInputLSNs(true, start_lsn, end_lsn);\n-\n- GetWALRecordsInfo(fcinfo, start_lsn, end_lsn);\n-\n- PG_RETURN_VOID();\n+ PG_RETURN_NULL();\n }\n\nI don't like much this chunk (same for the other kept function). Apart from\nthe obvious typo in \"functionaility\", I don't think that the comment is really\naccurate.\n\nAlso, are we actually helping users if we simply return NULL there? It's quite\npossible that people will start to use the new shared lib while still having\nthe 1.1 SQL definition of the extension installed. In that case, they will\nsimply retrieve a NULL row and may spend some time wondering why until they\neventually realize that their only option is to upgrade the extension first and\nthen use another function. Why not make their life easier and explicity raise\na suitable error at the SQL level if users try to use those functions?\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:52:18 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 2:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > > I'm attaching a patch doing the $subject with the following behavior:\n> > > 1. If start_lsn is NULL, error out/return NULL.\n>\n> Maybe naive and unrelated question, but is that really helpful? If for some\n> reason I want to see information about *all available WAL*, I have to manually\n> dig for a suitable LSN. The same action with pg_waldump is easier as I just\n> need to use the oldest available WAL that's present on disk.\n\nAre you saying that the pg_walinspect functions should figure out the\noldest available WAL file and LSN, and start from there if start_lsn\nspecified as NULL or invalid? Note that pg_waldump requires either\nexplicit startlsn and/or startseg (WAL file name), it can't search for\nthe oldest WAL file available and start from there automatically.\n\nIf the user wants to figure it out, they can do something like below:\n\npostgres=# select * from pg_ls_waldir() order by name;\n name | size | modification\n--------------------------+----------+------------------------\n 000000010000000000000001 | 16777216 | 2023-03-06 14:54:55+00\n 000000010000000000000002 | 16777216 | 2023-03-06 14:54:55+00\n\nIf we try to make these functions figure out the oldest WAL file and\nstart from there, then it'll unnecessarily complicate the APIs and\nfunctions. If we still think we need a better function for the users\nto figure out the oldest WAL file, perhaps, add a SQL-only\nview/function to pg_walinspect that returns \"select name from\npg_ls_waldir() order by name limit 1;\", but honestly, that's so\ntrivial.\n\n> > > Another idea is to convert till_end_of_wal flavors to SQL-only\n> > > functions and remove the c code from pg_walinspect.c. However, I\n> > > prefer $subject and completely remove till_end_of_wal flavors for\n> > > better usability in the long term.\n>\n> I agree that using default arguments is a way better API.\n\nThanks. 
Yes, that's true.\n\n> Nitpicking:\n>\n> Maybe we could group the kept unused exported C function at the end of the\n> file?\n\nWill do.\n\n> Also:\n>\n> /*\n> - * Get info and data of all WAL records from start LSN till end of WAL.\n> + * NB: This function does nothing and stays here for backward compatibility.\n> + * Without it, the extension fails to install.\n> *\n> - * This function emits an error if a future start i.e. WAL LSN the database\n> - * system doesn't know about is specified.\n> + * Try using pg_get_wal_records_info() for the same till_end_of_wal\n> + * functionaility.\n>\n> I don't like much this chunk (same for the other kept function). Apart from\n> the obvious typo in \"functionaility\", I don't think that the comment is really\n> accurate.\n\nCan you be more specific what's inaccurate about the comment?\n\n> Also, are we actually helping users if we simply return NULL there? It's quite\n> possible that people will start to use the new shared lib while still having\n> the 1.1 SQL definition of the extension installed. In that case, they will\n> simply retrieve a NULL row and may spend some time wondering why until they\n> eventually realize that their only option is to upgrade the extension first and\n> then use another function. Why not make their life easier and explicity raise\n> a suitable error at the SQL level if users try to use those functions?\n\nI thought about it initially, but wanted to avoid more errors. An\nerror would make them use the new version easily. I will change it\nthat way.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 20:36:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 16:06, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If we try to make these functions figure out the oldest WAl file and\n> start from there, then it'll unnecessarily complicate the APIs and\n> functions. If we still think we need a better function for the users\n> to figure out the oldest WAL file, perhaps, add a SQL-only\n> view/function to pg_walinspect that returns \"select name from\n> pg_ls_waldir() order by name limit 1;\", but honestly, that's so\n> trivial.\n\nThat \"order by name limit 1\" has subtle bugs when you're working on a\nsystem that has experienced timeline switches. It is entirely possible\nthat the first file (as sorted by the default collation) is not the\nfirst record you can inspect, or even in your timeline's history.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:21:47 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
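The pitfall Matthias raises can be seen directly from how WAL segment file names are built: three 8-hex-digit fields (timeline ID, "log" number, segment within that log), so an alphabetical sort orders by timeline first. The sketch below (hypothetical helper names; it assumes the default 16MB `wal_segment_size`, i.e. 256 segments per log file) shows how a leftover segment from an older timeline can sort before the current timeline's segments while covering a later LSN range.

```c
/* Sketch: parse a WAL segment file name into timeline + global segment
 * number.  Helper names are invented; assumes 16MB segments. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct
{
    uint32_t tli;    /* timeline ID */
    uint64_t segno;  /* global segment number */
} WalSeg;

static int
parse_walseg(const char *name, WalSeg *out)
{
    unsigned int tli, hi, lo;

    /* Names are exactly 24 hex digits: TTTTTTTTLLLLLLLLSSSSSSSS. */
    if (strlen(name) != 24 ||
        sscanf(name, "%08X%08X%08X", &tli, &hi, &lo) != 3)
        return 0;
    out->tli = tli;
    out->segno = (uint64_t) hi * 256 + lo;  /* 16MB wal_segment_size assumed */
    return 1;
}
```

For example, a pg_rewind leftover like `000000010000000000000005` sorts alphabetically before `000000020000000000000003`, yet its segment number (5) is higher than the timeline-2 file's (3), so "order by name limit 1" can hand back a segment that is not the oldest point in the current timeline's history.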
{
"msg_contents": "On Mon, Mar 6, 2023 at 8:52 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Mon, 6 Mar 2023 at 16:06, Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > If we try to make these functions figure out the oldest WAl file and\n> > start from there, then it'll unnecessarily complicate the APIs and\n> > functions. If we still think we need a better function for the users\n> > to figure out the oldest WAL file, perhaps, add a SQL-only\n> > view/function to pg_walinspect that returns \"select name from\n> > pg_ls_waldir() order by name limit 1;\", but honestly, that's so\n> > trivial.\n>\n> That \"order by name limit 1\" has subtle bugs when you're working on a\n> system that has experienced timeline switches. It is entirely possible\n> that the first file (as sorted by the default collation) is not the\n> first record you can inspect, or even in your timeline's history.\n\nHm. Note that pg_walinspect currently searches WAL on insertion\ntimeline; it doesn't care about the older timelines. The idea of\nmaking it look at WAL on an older timeline was discussed, but for the\nsake of simplicity we kept the functions simple. If needed, I can try\nadding the timeline as input parameters to all the functions (with\ndefault -1 meaning current insertion timeline; if specified, look for\nWAL on that timeline).\n\nAre you saying that a pg_walinspect function that traverses the pg_wal\ndirectory and figures out the old valid WAL on a given timeline is\nstill useful? Or make the functions look for older WAL if start_lsn is\ngiven as NULL or invalid?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 21:06:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 16:37, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, Mar 6, 2023 at 8:52 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Mon, 6 Mar 2023 at 16:06, Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > If we try to make these functions figure out the oldest WAl file and\n> > > start from there, then it'll unnecessarily complicate the APIs and\n> > > functions. If we still think we need a better function for the users\n> > > to figure out the oldest WAL file, perhaps, add a SQL-only\n> > > view/function to pg_walinspect that returns \"select name from\n> > > pg_ls_waldir() order by name limit 1;\", but honestly, that's so\n> > > trivial.\n> >\n> > That \"order by name limit 1\" has subtle bugs when you're working on a\n> > system that has experienced timeline switches. It is entirely possible\n> > that the first file (as sorted by the default collation) is not the\n> > first record you can inspect, or even in your timeline's history.\n>\n> Hm. Note that pg_walinspect currently searches WAL on insertion\n> timeline; it doesn't care about the older timelines. The idea of\n> making it look at WAL on an older timeline was discussed, but for the\n> sake of simplicity we kept the functions simple. If needed, I can try\n> adding the timeline as input parameters to all the functions (with\n> default -1 meaning current insertion timeline; if specified, look for\n> WAL on that timeline).\n>\n> Are you saying that a pg_walinspect function that traverses the pg_wal\n> directory and figures out the old valid WAL on a given timeline is\n> still useful? Or make the functions look for older WAL if start_lsn is\n> given as NULL or invalid?\n\nThe specific comment I made was only regarding the following issue: An\ninstance may still have WAL segments from before the latest timeline\nswitch. 
These segments may have a higher LSN and lower timeline number\nthan your current running timeline+LSN (because of e.g. pg_rewind).\nThis will then result in unwanted behaviour when you sort the segments\nnumerically/alphabetically and then assume that the first file's LSN\nis valid (or available) in your current timeline.\n\nThat is why \"order by name limit 1\" isn't a good solution, and that's\nwhat I was commenting on: you need to parse the timeline hierarchy to\ndetermine which timelines you can use which WAL segments of.\n\nTo answer your question on whether I'd like us to traverse timeline\nswitches: Yes, I'd really like it if we were able to decode the\ncurrent timeline's hierarchical WAL of a PG instance in one go, from\nthe start at (iirc) 0x10000 all the way to the current LSN, assuming\nthe segments are available.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:56:52 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 08:36:17PM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 6, 2023 at 2:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> > Also:\n> >\n> > /*\n> > - * Get info and data of all WAL records from start LSN till end of WAL.\n> > + * NB: This function does nothing and stays here for backward compatibility.\n> > + * Without it, the extension fails to install.\n> > *\n> > - * This function emits an error if a future start i.e. WAL LSN the database\n> > - * system doesn't know about is specified.\n> > + * Try using pg_get_wal_records_info() for the same till_end_of_wal\n> > + * functionaility.\n> >\n> > I don't like much this chunk (same for the other kept function). Apart from\n> > the obvious typo in \"functionaility\", I don't think that the comment is really\n> > accurate.\n>\n> Can you be more specific what's inaccurate about the comment?\n\nIt's problematic to install the extension if we rely on upgrade scripts only.\nWe could also provide a pg_walinspect--1.2.sql file and it would just work, and\nthat may have been a good idea if there wasn't also the problem of people still\nhaving the version 1.1 locally installed, as we don't want them to see random\nfailures like \"could not find function ... in file ...\", or keeping the ability\nto install the former 1.1 version (with those functions bypassed).\n\n\n",
"msg_date": "Tue, 7 Mar 2023 09:13:46 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 09:13:46AM +0800, Julien Rouhaud wrote:\n> It's problematic to install the extension if we rely on upgrade scripts only.\n> We could also provide a pg_walinspect--1.2.sql file and it would just work, and\n> that may have been a good idea if there wasn't also the problem of people still\n> having the version 1.1 locally installed, as we don't want them to see random\n> failures like \"could not find function ... in file ...\", or keeping the ability\n> to install the former 1.1 version (with those functions bypassed).\n\nWhy would we need a 1.2? HEAD is the only branch with pg_walinspect\n1.1, and it has not been released yet.\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 13:36:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, 7 Mar 2023, 12:36 Michael Paquier, <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 07, 2023 at 09:13:46AM +0800, Julien Rouhaud wrote:\n> > It's problematic to install the extension if we rely on upgrade scripts\n> only.\n> > We could also provide a pg_walinspect--1.2.sql file and it would just\n> work, and\n> > that may have been a good idea if there wasn't also the problem of\n> people still\n> > having the version 1.1 locally installed, as we don't want them to see\n> random\n> > failures like \"could not find function ... in file ...\", or keeping the\n> ability\n> > to install the former 1.1 version (with those functions bypassed).\n>\n> Why would we need a 1.2? HEAD is the only branch with pg_walinspect\n> 1.1, and it has not been released yet.\n>\n\nah right I should have checked. but the same ABI compatibility concern\nstill exists for version 1.0 of the extension.\n\n>\n",
"msg_date": "Tue, 7 Mar 2023 12:42:20 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 12:42:20PM +0800, Julien Rouhaud wrote:\n> ah right I should have checked. but the same ABI compatibility concern\n> still exists for version 1.0 of the extension.\n\nYes, we'd better make sure that the past code is able to run, at\nleast. Now I am not really convinced that we have the need to enforce\nan error with the new code even if 1.0 is still installed, so as it is\npossible to remove all the traces of the code that triggers errors if\nan end LSN is higher than the current insert LSN for primaries or\nreplayed LSN for standbys.\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 13:56:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 01:56:24PM +0900, Michael Paquier wrote:\n> On Tue, Mar 07, 2023 at 12:42:20PM +0800, Julien Rouhaud wrote:\n> > ah right I should have checked. but the same ABI compatibility concern\n> > still exists for version 1.0 of the extension.\n>\n> Yes, we'd better make sure that the past code is able to run, at\n> least. Now I am not really convinced that we have the need to enforce\n> an error with the new code even if 1.0 is still installed,\n\nSo keep this \"deprecated\" C function working, as it would only be a few lines\nof code?\n\n> so as it is\n> possible to remove all the traces of the code that triggers errors if\n> an end LSN is higher than the current insert LSN for primaries or\n> replayed LSN for standbys.\n\n+1 for that\n\n\n",
"msg_date": "Tue, 7 Mar 2023 13:47:01 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 11:17 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, Mar 07, 2023 at 01:56:24PM +0900, Michael Paquier wrote:\n> > On Tue, Mar 07, 2023 at 12:42:20PM +0800, Julien Rouhaud wrote:\n> > > ah right I should have checked. but the same ABI compatibility concern\n> > > still exists for version 1.0 of the extension.\n> >\n> > Yes, we'd better make sure that the past code is able to run, at\n> > least. Now I am not really convinced that we have the need to enforce\n> > an error with the new code even if 1.0 is still installed,\n>\n> So keep this \"deprecated\" C function working, as it would only be a few lines\n> of code?\n>\n> > so as it is\n> > possible to remove all the traces of the code that triggers errors if\n> > an end LSN is higher than the current insert LSN for primaries or\n> > replayed LSN for standbys.\n>\n> +1 for that\n\nI understand that we want to keep till_end_of_wal functions defined\naround in .c file so that if someone does CREATE EXTENSION\npg_walinspect WITH VERSION '1.0'; on the latest extension shared\nlibrary (with 1.1 version), the till_end_of_wal functions should work\nfor them.\n\nAlso, I noticed that there's some improvement needed for the input\nvalidations, especially for the end_lsn.\n\nHere I'm with the v3 patch addressing the above comments. Please\nreview it further.\n\n1. When start_lsn is NULL or invalid ('0/0'), emit an error. There was\na comment on the functions automatically determining start_lsn to be\nthe oldest WAL LSN. I'm not implementing this change now, as it\nrequires extra work to traverse the pg_wal directory. I'm planning to\ndo it in the next set of improvements where I'm planning to make the\nfunctions timeline-aware, introduce functions for inspecting\nwal_buffers and so on.\n2. When end_lsn is NULL or invalid ('0/0') IOW end_lsn is not\nspecified, deduce end_lsn to be the current flush LSN when not in\nrecovery, current replayed LSN when in recovery. 
This is the main\nchange that avoids till_end_of_wal functions in version 1.1.\n3. When end_lsn is specified but greater than or equal to the\nstart_lsn, return NULL. Given the above review comments on more errors\nbeing reported, I chose to return NULL for better usability.\n4. When end_lsn is specified but less than the start_lsn, get\ninfo/stats up until end_lsn.\n5. Retained pg_get_wal_records_info_till_end_of_wal and\npg_get_wal_stats_till_end_of_wal for backward compatibility.\n6. Piggybacked these functions and behaviour under the new HEAD-only\nextension version 1.1 introduced recently, instead of bumping to 1.2.\nWhen PG16 is out, users will have 1.1 with all of these new\nfunctionality.\n7. Added tests to verify the extension update path in\noldextversions.sql similar to other extensions'. (suggested by Michael\nPaquier).\n8. Added a note in the pg_walinspect documentation about removal of\npg_get_wal_records_info_till_end_of_wal and\npg_get_wal_stats_till_end_of_wal in version 1.1 and how the other\nfunctions can be used to achieve the same functionality and how these\ntill_end_of_wal functions can work if extension is installed\nexplicitly with version 1.0.\n9. Refactored the tests according to the new behaviours.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Mar 2023 13:40:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 01:47:01PM +0800, Julien Rouhaud wrote:\n> So keep this \"deprecated\" C function working, as it would only be a few lines\n> of code?\n\nYes, I guess that this would be the final picture, moving forward I'd\nlike to think that we should just remove the SQL declaration of the\ntill_end_of_wal() functions to keep a clean interface.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 12:58:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 01:40:46PM +0530, Bharath Rupireddy wrote:\n> 1. When start_lsn is NULL or invalid ('0/0'), emit an error. There was\n> a comment on the functions automatically determining start_lsn to be\n> the oldest WAL LSN. I'm not implementing this change now, as it\n> requires extra work to traverse the pg_wal directory. I'm planning to\n> do it in the next set of improvements where I'm planning to make the\n> functions timeline-aware, introduce functions for inspecting\n> wal_buffers and so on.\n>\n> [.. long description ..]\n>\n> 9. Refactored the tests according to the new behaviours.\n\nHmm. I think this patch ought to have a result simpler than what's\nproposed here.\n\nFirst, do we really have to begin marking the functions as non-STRICT\nto abide with the treatment of NULL as a special value? The part that\nI've found personally the most annoying with these functions is that\nan incorrect bound leads to a random failure, particularly when such\nqueries are used for monitoring. I would simplify the whole with two\nsimple rules, as of:\n- Keeping all the functions strict.\n- When end_lsn is a LSN in the future of the current LSN inserted or\nreplayed, adjust its value to be the exactly GetXLogReplayRecPtr() or\nGetFlushRecPtr(). 
This way, monitoring tools can use a value ahead,\nat will.\n- Failing if start_lsn > end_lsn.\n- Failing if start_lsn refers to a position older than what exists is\nstill fine by me.\n\nI would also choose to remove\npg_get_wal_records_info_till_end_of_wal() from the SQL interface in\n1.1 to limit the confusion arount it, but keep a few lines of code so\nas we are still able to use it when pg_walinspect 1.0 is the version\nenabled with CREATE EXTENSION.\n\nIn short, pg_get_wal_records_info_till_end_of_wal() should be able to \nuse exactly the same code as pg_get_wal_records_info(), still you need\nto keep *two* functions for their prosrc with PG_FUNCTION_ARGS as\narguments so as 1.0 would work when dropped in place. The result, it\nseems to me, mostly comes to simplify ValidateInputLSNs() and remove\nits till_end_of_wal argument.\n\n+-- Removed function\n+SELECT pg_get_functiondef('pg_get_wal_records_info_till_end_of_wal'::regproc);\n+ERROR: function \"pg_get_wal_records_info_till_end_of_wal\" does not exist\n+LINE 1: SELECT pg_get_functiondef('pg_get_wal_records_info_till_end_...\n\nIt seems to me that you should just replace all that and anything\ndepending on pg_get_functiondef() with a \\dx+ pg_walinspect, that\nwould list all the objects part of the extension for the specific\nversion you want to test. Not sure that there is a need to list the\nfull function definitions, either. That just bloats the tests.\n\nI think, however, that it is critical to test in oldextversions.out\nthe *executions* of the functions, so as we make sure that they don't\ncrash. The patch is missing that.\n\n+-- Invalid input LSNs\n+SELECT * FROM pg_get_wal_record_info('0/0'); -- ERROR\n+ERROR: invalid input LSN\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 13:24:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 1:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 7, 2023 at 11:17 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Here I'm with the v3 patch addressing the above comments. Please\n> review it further.\n\nNeeded a rebase. v4 patch is attached. I'll address the latest review\ncomments in a bit.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Mar 2023 10:49:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 12:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 08, 2023 at 01:40:46PM +0530, Bharath Rupireddy wrote:\n> > 1. When start_lsn is NULL or invalid ('0/0'), emit an error. There was\n> > a comment on the functions automatically determining start_lsn to be\n> > the oldest WAL LSN. I'm not implementing this change now, as it\n> > requires extra work to traverse the pg_wal directory. I'm planning to\n> > do it in the next set of improvements where I'm planning to make the\n> > functions timeline-aware, introduce functions for inspecting\n> > wal_buffers and so on.\n> >\n> > [.. long description ..]\n> >\n> > 9. Refactored the tests according to the new behaviours.\n>\n> Hmm. I think this patch ought to have a result simpler than what's\n> proposed here.\n>\n> First, do we really have to begin marking the functions as non-STRICT\n> to abide with the treatment of NULL as a special value? The part that\n> I've found personally the most annoying with these functions is that\n> an incorrect bound leads to a random failure, particularly when such\n> queries are used for monitoring.\n\nAs long as we provide a sensible default value (so I guess '0/0' to\nmean \"no upper bound\") and that we therefore don't have to manually\nspecify an upper bound if we don't want one I'm fine with keeping the\nfunctions marked as STRICT.\n\n> I would simplify the whole with two\n> simple rules, as of:\n> - Keeping all the functions strict.\n> - When end_lsn is a LSN in the future of the current LSN inserted or\n> replayed, adjust its value to be the exactly GetXLogReplayRecPtr() or\n> GetFlushRecPtr(). 
This way, monitoring tools can use a value ahead,\n> at will.\n> - Failing if start_lsn > end_lsn.\n> - Failing if start_lsn refers to a position older than what exists is\n> still fine by me.\n\n+1\n\n> I would also choose to remove\n> pg_get_wal_records_info_till_end_of_wal() from the SQL interface in\n> 1.1 to limit the confusion arount it, but keep a few lines of code so\n> as we are still able to use it when pg_walinspect 1.0 is the version\n> enabled with CREATE EXTENSION.\n\nYeah the SQL function should be removed no matter what.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 16:04:15 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 04:04:15PM +0800, Julien Rouhaud wrote:\n> As long as we provide a sensible default value (so I guess '0/0' to\n> mean \"no upper bound\") and that we therefore don't have to manually\n> specify an upper bound if we don't want one I'm fine with keeping the\n> functions marked as STRICT.\n\nFWIW, using also InvalidXLogRecPtr as a shortcut to say \"Don't fail,\njust do the job\" is fine by me. Something like a FFF/FFFFFFFF should\njust mean the same on a fresh cluster, still it gets risky the longer\nthe WAL is generated.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 17:14:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, 10 Mar 2023, 16:14 Michael Paquier, <michael@paquier.xyz> wrote:\n\n> On Fri, Mar 10, 2023 at 04:04:15PM +0800, Julien Rouhaud wrote:\n> > As long as we provide a sensible default value (so I guess '0/0' to\n> > mean \"no upper bound\") and that we therefore don't have to manually\n> > specify an upper bound if we don't want one I'm fine with keeping the\n> > functions marked as STRICT.\n>\n> FWIW, using also InvalidXLogRecPtr as a shortcut to say \"Don't fail,\n> just do the job\" is fine by me.\n\n\nisn't '0/0' the same as InvalidXLogRecPtr? but my point is that we\nshouldn't require to spell it explicitly, just rely on the default value.\n\nSomething like a FFF/FFFFFFFF should\n> just mean the same on a fresh cluster, still it gets risky the longer\n> the WAL is generated.\n>\n\nyeah, it would be handy to accept 'infinity' in that context.\n\n>\n\nOn Fri, 10 Mar 2023, 16:14 Michael Paquier, <michael@paquier.xyz> wrote:On Fri, Mar 10, 2023 at 04:04:15PM +0800, Julien Rouhaud wrote:\n> As long as we provide a sensible default value (so I guess '0/0' to\n> mean \"no upper bound\") and that we therefore don't have to manually\n> specify an upper bound if we don't want one I'm fine with keeping the\n> functions marked as STRICT.\n\nFWIW, using also InvalidXLogRecPtr as a shortcut to say \"Don't fail,\njust do the job\" is fine by me.isn't '0/0' the same as InvalidXLogRecPtr? but my point is that we shouldn't require to spell it explicitly, just rely on the default value.Something like a FFF/FFFFFFFF should\njust mean the same on a fresh cluster, still it gets risky the longer\nthe WAL is generated.yeah, it would be handy to accept 'infinity' in that context.",
"msg_date": "Fri, 10 Mar 2023 16:37:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 04:37:23PM +0800, Julien Rouhaud wrote:\n> isn't '0/0' the same as InvalidXLogRecPtr? but my point is that we\n> shouldn't require to spell it explicitly, just rely on the default value.\n\nPerhaps. Still the addition of a DEFAULT to the function definitions\nand its value looks like a second patch to me. The first should just\nlift the bound restrictions currently in place while cleaning up the \ntill_* functions.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 19:21:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 9:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hmm. I think this patch ought to have a result simpler than what's\n> proposed here.\n>\n> First, do we really have to begin marking the functions as non-STRICT\n> to abide with the treatment of NULL as a special value? The part that\n> I've found personally the most annoying with these functions is that\n> an incorrect bound leads to a random failure, particularly when such\n> queries are used for monitoring. I would simplify the whole with two\n> simple rules, as of:\n> - Keeping all the functions strict.\n> - When end_lsn is a LSN in the future of the current LSN inserted or\n> replayed, adjust its value to be the exactly GetXLogReplayRecPtr() or\n> GetFlushRecPtr(). This way, monitoring tools can use a value ahead,\n> at will.\n> - Failing if start_lsn > end_lsn.\n> - Failing if start_lsn refers to a position older than what exists is\n> still fine by me.\n\nDone this way in the attached v5 patch.\n\n> I would also choose to remove\n> pg_get_wal_records_info_till_end_of_wal() from the SQL interface in\n> 1.1 to limit the confusion arount it, but keep a few lines of code so\n> as we are still able to use it when pg_walinspect 1.0 is the version\n> enabled with CREATE EXTENSION.\n>\n> In short, pg_get_wal_records_info_till_end_of_wal() should be able to\n> use exactly the same code as pg_get_wal_records_info(), still you need\n> to keep *two* functions for their prosrc with PG_FUNCTION_ARGS as\n> arguments so as 1.0 would work when dropped in place. The result, it\n> seems to me, mostly comes to simplify ValidateInputLSNs() and remove\n> its till_end_of_wal argument.\n\nThis has already been taken care of in the previous patches, e.g. 
v3\nand v4 and so in the latest v5 patch.\n\n> +-- Removed function\n> +SELECT pg_get_functiondef('pg_get_wal_records_info_till_end_of_wal'::regproc);\n> +ERROR: function \"pg_get_wal_records_info_till_end_of_wal\" does not exist\n> +LINE 1: SELECT pg_get_functiondef('pg_get_wal_records_info_till_end_...\n>\n> It seems to me that you should just replace all that and anything\n> depending on pg_get_functiondef() with a \\dx+ pg_walinspect, that\n> would list all the objects part of the extension for the specific\n> version you want to test. Not sure that there is a need to list the\n> full function definitions, either. That just bloats the tests.\n\nAgreed and used \\dx+. One can anyways look at the function definitions\nand compare for knowing what's changed.\n\n> I think, however, that it is critical to test in oldextversions.out\n> the *executions* of the functions, so as we make sure that they don't\n> crash. The patch is missing that.\n\nYou mean, we need to test the till_end_of_wal functions that were\nremoved in the latest version 1.1 but they must work if the extension\nis installed with 1.0? If yes, I now added them.\n\n> +-- Invalid input LSNs\n> +SELECT * FROM pg_get_wal_record_info('0/0'); -- ERROR\n> +ERROR: invalid input LSN\n\nRemoved InvalidRecPtr checks for input/start LSN because anyways the\nfunctions will fail with ERROR: could not read WAL at LSN 0/0.\n\nAny comments on the attached v5 patch?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Mar 2023 16:45:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 04:45:06PM +0530, Bharath Rupireddy wrote:\n> Any comments on the attached v5 patch?\n\nI have reviewed the patch, and found it pretty messy. The tests\nshould have been divided into their own patch, I think. This is\nrather straight-forward once the six functions have their checks\ngrouped together. The result was pretty good, so I have begun by\napplying that as 1f282c2. This also includes most of the refinements\nyou have proposed for the whitespaces in the tests. Note that we were\nmissing a few spots with the bound checks for the six functions, so\nnow the coverage should be full.\n\nAfter that comes the rest of the patch, and I have found a couple of\nmistakes.\n\n- pg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn)\n+ pg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn DEFAULT NULL)\n returns setof record\n[...]\n- pg_get_wal_stats(start_lsn pg_lsn, end_lsn pg_lsn, per_record boolean DEFAULT false)\n+ pg_get_wal_stats(start_lsn pg_lsn, end_lsn pg_lsn DEFAULT NULL, per_record boolean DEFAULT false)\n\nThis part of the documentation is now incorrect.\n\n+-- Make sure checkpoints don't interfere with the test.\n+SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_walinspect_slot', true, false);\n\nAdding a physical slot is better for stability of course, but the test\nalso has to drop it or installcheck would cause an existing cluster to\nhave that still around. The third argument could be true, as well, so\nas we'd use a temporary slot.\n\n- If <replaceable>start_lsn</replaceable>\n- or <replaceable>end_lsn</replaceable> are not yet available, the\n- function will raise an error. For example:\n+ If a future <replaceable>end_lsn</replaceable> (i.e. the LSN server\n+ doesn't know about) is specified, it returns stats till end of WAL. 
It\n+ will raise an error, if the server doesn't have WAL available at given\n+ <replaceable>start_lsn</replaceable> or if the\n+ <replaceable>start_lsn</replaceable> is in future or is past the\n+ <replaceable>end_lsn</replaceable>. For example, usage of the function is\n+ as follows:\n\nHmm. I would simplify that, and just mention that an error is raised\nwhen the start LSN is available, without caring about the rest (valid\nend LSN being past the current insert LSN, and error if start > end,\nthe second being obvious).\n\n+ <note>\n+ <para>\n+ Note that <function>pg_get_wal_records_info_till_end_of_wal</function> and\n+ <function>pg_get_wal_stats_till_end_of_wal</function> functions have been\n+ removed in the <filename>pg_walinspect</filename> version\n+ <literal>1.1</literal>. The same functionality can be achieved with\n+ <function>pg_get_wal_records_info</function> and\n+ <function>pg_get_wal_stats</function> functions by specifying a future\n+ <replaceable>end_lsn</replaceable>. However, <function>till_end_of_wal</function>\n+ functions will still work if the extension is installed explicitly with\n+ version <literal>1.0</literal>.\n+ </para>\n+ </note>\n\nNot convinced that this is necessary.\n\n+ GetInputLSNs(fcinfo, &start_lsn, &end_lsn, till_end_of_wal);\n+\n+ stats_per_record = PG_GETARG_BOOL(2);\n\nThis code in GetWalStats() is incorrect.\npg_get_wal_stats_till_end_of_wal() has a stats_per_record, but as\n*second* argument, so this would be broken.\n\n+ GetInputLSNs(fcinfo, &start_lsn, &end_lsn, till_end_of_wal); \n\nComing from the last point, I think that this interface is confusing,\nand actually incorrect. 
From what I can see, we should be doing what\n~15 has by grepping the argument values within the main function\ncalls, and just pass them down to the internal routines GetWalStats()\nand GetWALRecordsInfo().\n\n-static bool\n-IsFutureLSN(XLogRecPtr lsn, XLogRecPtr *curr_lsn)\n+static XLogRecPtr\n+GetCurrentLSN(void)\n\nThis wrapper is actually a good idea.\n\nAt the end, I am finishing with the attached. ValidateInputLSNs()\nought to be called, IMO, when the caller of the SQL functions can\ndirectly specify an end_lsn. This means that there is no point to do\nthis check in the two till_end_* functions. This has as cost two\nextra checks to make sure that the start_lsn is not higher than the\ncurrent LSN, but I am fine to live with that. It seemed rather\nnatural to me to let ValidateInputLSNs() do a refresh of the end_lsn\nif it sees that it is higher than the current LSN. And if you look\nclosely, you will see that we only call *once* GetCurrentLSN() for\neach function call, so the maths are more precise. \n\nI have cleaned up the comments of the modules, while on it, as there\nwas not much value in copy-pasting how a function fails while there is\na centralized validation code path. The tests for the till_end()\nfunctions have been moved to the test path where we install 1.0.\n\nWith all these cleanups done, there is less code than at the\nbeginning, which comes from the docs, so the current code does not\nchange in size:\n 7 files changed, 173 insertions(+), 206 deletions(-)\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 15:56:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 12:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 10, 2023 at 04:45:06PM +0530, Bharath Rupireddy wrote:\n>\n> After that comes the rest of the patch, and I have found a couple of\n> mistakes.\n>\n> - pg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn)\n> + pg_get_wal_records_info(start_lsn pg_lsn, end_lsn pg_lsn DEFAULT NULL)\n> returns setof record\n> [...]\n> - pg_get_wal_stats(start_lsn pg_lsn, end_lsn pg_lsn, per_record boolean DEFAULT false)\n> + pg_get_wal_stats(start_lsn pg_lsn, end_lsn pg_lsn DEFAULT NULL, per_record boolean DEFAULT false)\n>\n> This part of the documentation is now incorrect.\n\nOh, yeah. Thanks for fixing it.\n\n> +-- Make sure checkpoints don't interfere with the test.\n> +SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_walinspect_slot', true, false);\n>\n> Adding a physical slot is better for stability of course, but the test\n> also has to drop it or installcheck would cause an existing cluster to\n> have that still around. The third argument could be true, as well, so\n> as we'd use a temporary slot.\n\n# Disabled because these tests require \"wal_level=replica\", which\n# some installcheck users do not have (e.g. buildfarm clients).\nNO_INSTALLCHECK = 1\n\npg_walinspect can't be run under installcheck. I don't think dropping\nthe slot at the end is needed, it's unnecessary. I saw\noldextversions.sql using the same replication slot name, well no\nproblem, but I changed it to a unique name.\n\n> Hmm. 
I would simplify that, and just mention that an error is raised\n> when the start LSN is available, without caring about the rest (valid\n> end LSN being past the current insert LSN, and error if start > end,\n> the second being obvious).\n\nOkay.\n\n> + <note>\n> + <para>\n> + Note that <function>pg_get_wal_records_info_till_end_of_wal</function> and\n> + <function>pg_get_wal_stats_till_end_of_wal</function> functions have been\n> + removed in the <filename>pg_walinspect</filename> version\n> + <literal>1.1</literal>. The same functionality can be achieved with\n> + <function>pg_get_wal_records_info</function> and\n> + <function>pg_get_wal_stats</function> functions by specifying a future\n> + <replaceable>end_lsn</replaceable>. However, <function>till_end_of_wal</function>\n> + functions will still work if the extension is installed explicitly with\n> + version <literal>1.0</literal>.\n> + </para>\n> + </note>\n>\n> Not convinced that this is necessary.\n\nAs hackers we know that these functions have been removed and how to\nachieve till_end_of_wal with the other functions. I noticed that\nyou've removed my changes (see below) from the docs that were saying\nhow to get info/stats till_end_of_wal. That leaves end users confused\nas to how they can achieve till_end_of_wal functionality. All users\ncan't look for commit history/message but they can easily read the\ndocs. I prefer to have the following (did so in the attached v7) and\nget rid of the above note if you don't feel strongly about it.\n\n+ If a future <replaceable>end_lsn</replaceable>\n+ (i.e. the LSN server doesn't know about) is specified, it returns\n+ informaton till end of WAL.\n\n> + GetInputLSNs(fcinfo, &start_lsn, &end_lsn, till_end_of_wal);\n> +\n> + stats_per_record = PG_GETARG_BOOL(2);\n>\n> This code in GetWalStats() is incorrect.\n> pg_get_wal_stats_till_end_of_wal() has a stats_per_record, but as\n> *second* argument, so this would be broken.\n\nOh, yeah. 
Thanks for fixing it.\n\n> + GetInputLSNs(fcinfo, &start_lsn, &end_lsn, till_end_of_wal);\n>\n> Coming from the last point, I think that this interface is confusing,\n> and actually incorrect. From what I can see, we should be doing what\n> ~15 has by grepping the argument values within the main function\n> calls, and just pass them down to the internal routines GetWalStats()\n> and GetWALRecordsInfo().\n\nHm, what you have in v6 works for me.\n\n> At the end, I am finishing with the attached. ValidateInputLSNs()\n> ought to be called, IMO, when the caller of the SQL functions can\n> directly specify an end_lsn. This means that there is no point to do\n> this check in the two till_end_* functions. This has as cost two\n> extra checks to make sure that the start_lsn is not higher than the\n> current LSN, but I am fine to live with that. It seemed rather\n> natural to me to let ValidateInputLSNs() do a refresh of the end_lsn\n> if it sees that it is higher than the current LSN. And if you look\n> closely, you will see that we only call *once* GetCurrentLSN() for\n> each function call, so the maths are more precise.\n>\n> I have cleaned up the comments of the modules, while on it, as there\n> was not much value in copy-pasting how a function fails while there is\n> a centralized validation code path. The tests for the till_end()\n> functions have been moved to the test path where we install 1.0.\n\nI have some comments and fixed them in the attached v7 patch:\n\n1.\n+ * pg_get_wal_records_info\n *\n+ * pg_get_wal_stats\n *\nI think you wanted to be consistent with function comments with\nfunction names atop, but missed adding for all functions. 
Actually, I\ndon't have a strong opinion on these changes as they unnecessarily\nbloat the changes, so I removed them.\n\n2.\n+ if (start_lsn > curr_lsn)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"cannot accept future start LSN\"),\n- errdetail(\"Last known WAL LSN on the database system\nis at %X/%X.\",\n- LSN_FORMAT_ARGS(curr_lsn))));\n- }\n+ errmsg(\"WAL start LSN must be less than current LSN\")));\n\nI don't like this inconsistency much, especially when\npg_get_wal_record_info emits \"cannot accept future input LSN\" with the\ncurrent LSN details (this current LSN will give a bit more information\nto the user). Also, let's be consistent across returning NULLs if\ninput LSN/start LSN equal to the current LSN. I've done these changes\nin the attached v7 patch.\n\n3. I wanted COUNT(*) >= 0 for successful function execution to be\nCOUNT(*) >= 1 so that we will check for at least the functions\nreturning 1 record. And failures to be SELECT * FROM. This was my\nintention but I don't see that in this patch or in the previous\ntest-refactoring commit. I added that in the attached v7 patch again.\nAlso, made test comments conssitent.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Mar 2023 15:53:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 03:53:37PM +0530, Bharath Rupireddy wrote:\n> On Mon, Mar 13, 2023 at 12:26 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> +-- Make sure checkpoints don't interfere with the test.\n>> +SELECT 'init' FROM pg_create_physical_replication_slot('regress_pg_walinspect_slot', true, false);\n>>\n>> Adding a physical slot is better for stability of course, but the test\n>> also has to drop it or installcheck would cause an existing cluster to\n>> have that still around. The third argument could be true, as well, so\n>> as we'd use a temporary slot.\n> \n> # Disabled because these tests require \"wal_level=replica\", which\n> # some installcheck users do not have (e.g. buildfarm clients).\n> NO_INSTALLCHECK = 1\n> \n> pg_walinspect can't be run under installcheck. I don't think dropping\n> the slot at the end is needed, it's unnecessary. I saw\n> oldextversions.sql using the same replication slot name, well no\n> problem, but I changed it to a unique name.\n\n-SELECT pg_drop_replication_slot('regress_pg_walinspect_slot');\n-\n+-- Clean up\n\nIn my opinion, it is an incorrect practice to assume that nobody will\never run these tests on a running instance. FWIW, I have managed\nQE/QA flows in the past that did exactly that. I cannot say for\nalready-deployed clusters that could be used for production still I\ndon't feel comfortable with the idea to assume that nobody would do\never that, and calls of pg_drop_replication_slot() are not a\nbottleneck. So let's be clean and drop these slots to keep the tests\nself-contained. pg_walinspect in REL_15_STABLE gets that right, IMV,\nand that's no different from the role cleanup, as one example.\n\n> As hackers we know that these functions have been removed and how to\n> achieve till_end_of_wal with the other functions. I noticed that\n> you've removed my changes (see below) from the docs that were saying\n> how to get info/stats till_end_of_wal. 
That leaves end users confused\n> as to how they can achieve till_end_of_wal functionality. All users\n> can't look for commit history/message but they can easily read the\n> docs. I prefer to have the following (did so in the attached v7) and\n> get rid of the above note if you don't feel strongly about it.\n> \n> + If a future <replaceable>end_lsn</replaceable>\n> + (i.e. the LSN server doesn't know about) is specified, it returns\n> + informaton till end of WAL.\n\nFWIW, I don't see a strong need for that, because this documents a\nbehavior where we would not fail. And FWIW, it just feel natural to\nme because the process stops the scan up to where it can. In short,\nit should be enough for the docs to mention the error patterns,\nnothing else.\n\n> I have some comments and fixed them in the attached v7 patch:\n> \n> 1.\n> + * pg_get_wal_records_info\n> *\n> + * pg_get_wal_stats\n> *\n> I think you wanted to be consistent with function comments with\n> function names atop, but missed adding for all functions. Actually, I\n> don't have a strong opinion on these changes as they unnecessarily\n> bloat the changes, so I removed them.\n\nEither is fine if you feel strongly on this one, I am just used to\ndoing that.\n\n> 2.\n> + if (start_lsn > curr_lsn)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"cannot accept future start LSN\"),\n> - errdetail(\"Last known WAL LSN on the database system\n> is at %X/%X.\",\n> - LSN_FORMAT_ARGS(curr_lsn))));\n> - }\n> + errmsg(\"WAL start LSN must be less than current LSN\")));\n> \n> I don't like this inconsistency much, especially when\n> pg_get_wal_record_info emits \"cannot accept future input LSN\" with the\n> current LSN details (this current LSN will give a bit more information\n> to the user). Also, let's be consistent across returning NULLs if\n> input LSN/start LSN equal to the current LSN. 
I've done these changes\n> in the attached v7 patch.\n\nNo arguments against that, consistency is good.\n\n> 3. I wanted COUNT(*) >= 0 for successful function execution to be\n> COUNT(*) >= 1 so that we will check for at least the functions\n> returning 1 record. And failures to be SELECT * FROM. This was my\n> intention but I don't see that in this patch or in the previous\n> test-refactoring commit. I added that in the attached v7 patch again.\n> Also, made test comments conssitent.\n\nNoticed that as well, still it feels to me that these had better be\nseparated from the rest, and be in their own patch, perhaps *after*\nthe main patch discussed on this thread, or just moved into their own\nthreads. If a commit finishes with a list of bullet points referring\nto a list of completely different things than the subject, there may\nbe a problem. In this v7, we have:\n- Change the behavior of the functions for end LSNs, tweaking the\ntests to do so.\n- Adjust more comments and formats in the tests.\n- Adjust some tests to be pickier with detection of generated WAL\nrecords.\n- Remove the drop slot calls.\nBut what we need to care most here is the first point.\n\nI am not arguing that none of that should not be changed, but it\nshould not be inside a patch that slightly tweaks the behaviors of\nsome existing functions. First, it creates a lot of noise in the\ndiffs, making it harder for anybody reading this change to find the\ncore of what's happening. Second, it increases the odds of mistakes\nand bugs (if a revert is done, the work to-be-done gets greater at the\nend). When it comes to this patch, the changes should only involve\nthe calls of till_end_of_wal() being moved around from\npg_walinspect.sql to oldextversions.sql. If you look at v6, the tests\nare only focusing on this part, and nothing else.\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 08:32:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 5:02 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> So let's be clean and drop these slots to keep the tests\n> self-contained. pg_walinspect in REL_15_STABLE gets that right, IMV,\n> and that's no different from the role cleanup, as one example.\n\nHm, added replication slot drop back.\n\n> > As hackers we know that these functions have been removed and how to\n> > achieve till_end_of_wal with the other functions. I noticed that\n> > you've removed my changes (see below) from the docs that were saying\n> > how to get info/stats till_end_of_wal. That leaves end users confused\n> > as to how they can achieve till_end_of_wal functionality. All users\n> > can't look for commit history/message but they can easily read the\n> > docs. I prefer to have the following (did so in the attached v7) and\n> > get rid of the above note if you don't feel strongly about it.\n> >\n> > + If a future <replaceable>end_lsn</replaceable>\n> > + (i.e. the LSN server doesn't know about) is specified, it returns\n> > + informaton till end of WAL.\n>\n> FWIW, I don't see a strong need for that, because this documents a\n> behavior where we would not fail. And FWIW, it just feel natural to\n> me because the process stops the scan up to where it can. In short,\n> it should be enough for the docs to mention the error patterns,\n> nothing else.\n\nMy thoughts are simple here - how would one (an end user, not me and\nnot you) figure out how to get info/stats till the end of WAL? I'm\nsure it would be difficult to find that out without looking at the\ncode or commit history. Search for till end of WAL behaviour with new\nversion will be more given the 1.0 version has explicit functions to\ndo that. IMO, there's no harm in being explicit in how to achieve till\nend of WAL functionality around in the docs.\n\n> > 3. 
I wanted COUNT(*) >= 0 for successful function execution to be\n> > COUNT(*) >= 1 so that we will check for at least the functions\n> > returning 1 record. And failures to be SELECT * FROM. This was my\n> > intention but I don't see that in this patch or in the previous\n> > test-refactoring commit. I added that in the attached v7 patch again.\n> > Also, made test comments conssitent.\n>\n> Noticed that as well, still it feels to me that these had better be\n> separated from the rest, and be in their own patch, perhaps *after*\n> the main patch discussed on this thread, or just moved into their own\n> threads. If a commit finishes with a list of bullet points referring\n> to a list of completely different things than the subject, there may\n> be a problem. In this v7, we have:\n> - Change the behavior of the functions for end LSNs, tweaking the\n> tests to do so.\n> - Adjust more comments and formats in the tests.\n> - Adjust some tests to be pickier with detection of generated WAL\n> records.\n> - Remove the drop slot calls.\n> But what we need to care most here is the first point.\n\nI get it. I divided the patches to 0001 and 0002 with 0001 focussing\non the change of behaviour around future end LSNs, dropping till end\nof WAL functions and tests tweakings related to it. 0002 has all other\ntests tidy up things.\n\nPlease find the attached v8 patch set for further review.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Mar 2023 10:35:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "Hi,\n\nI just rebased a patch over\n\ncommit 1f282c24e46\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: 2023-03-13 13:03:29 +0900\n \n Refactor and improve tests of pg_walinspect\n\nand got a test failure:\n\nhttps://cirrus-ci.com/task/5693041982308352\nhttps://api.cirrus-ci.com/v1/artifact/task/5693041982308352/testrun/build/testrun/pg_walinspect/regress/regression.diffs\n\ndiff -w -U3 C:/cirrus/contrib/pg_walinspect/expected/oldextversions.out C:/cirrus/build/testrun/pg_walinspect/regress/results/oldextversions.out\n--- C:/cirrus/contrib/pg_walinspect/expected/oldextversions.out\t2023-03-14 21:19:01.399716500 +0000\n+++ C:/cirrus/build/testrun/pg_walinspect/regress/results/oldextversions.out\t2023-03-14 21:26:27.504876700 +0000\n@@ -8,10 +8,10 @@\n Object description\n -----------------------------------------------------------\n function pg_get_wal_record_info(pg_lsn)\n- function pg_get_wal_records_info(pg_lsn,pg_lsn)\n function pg_get_wal_records_info_till_end_of_wal(pg_lsn)\n- function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n+ function pg_get_wal_records_info(pg_lsn,pg_lsn)\n function pg_get_wal_stats_till_end_of_wal(pg_lsn,boolean)\n+ function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n (5 rows)\n\n -- Make sure checkpoints don't interfere with the test.\n\nLooks like it's missing an ORDER BY.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:54:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 02:54:40PM -0700, Andres Freund wrote:\n> Object description\n> -----------------------------------------------------------\n> function pg_get_wal_record_info(pg_lsn)\n> - function pg_get_wal_records_info(pg_lsn,pg_lsn)\n> function pg_get_wal_records_info_till_end_of_wal(pg_lsn)\n> - function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n> + function pg_get_wal_records_info(pg_lsn,pg_lsn)\n> function pg_get_wal_stats_till_end_of_wal(pg_lsn,boolean)\n> + function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n> (5 rows)\n> \n> -- Make sure checkpoints don't interfere with the test.\n> \n> Looks like it's missing an ORDER BY.\n\nInteresting. This is \"\\dx+ pg_walinspect\".\nlistOneExtensionContents() uses pg_describe_object() for that, and\nthere is already an ORDER BY based on it. I would not have expected\nthis part to be that much sensitive. Is this using a specific ICU\ncollation, because this is a side-effect of switching ICU as the\ndefault in initdb?\n\nAs a solution, this could use pg_identify_object(classid, objid, 0) in\nthe ORDER BY clause to enforce a better ordering of the objects dealt\nwith as it decomposes the object name and the object type. That\nshould be enough, I assume, as it looks to be parenthesis vs\nunderscore that switch the order.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 09:56:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 10:35:43AM +0530, Bharath Rupireddy wrote:\n> My thoughts are simple here - how would one (an end user, not me and\n> not you) figure out how to get info/stats till the end of WAL? I'm\n> sure it would be difficult to find that out without looking at the\n> code or commit history. Search for till end of WAL behaviour with new\n> version will be more given the 1.0 version has explicit functions to\n> do that. IMO, there's no harm in being explicit in how to achieve till\n> end of WAL functionality around in the docs.\n\nOkay. I have kept these notes, but tweaked the wording to be a bit\ncleaner, replacing the term \"till\" by \"until\". To my surprise while\nstudying this point, \"till\" is a term older than \"until\" in English\nliteracy, but it is rarely used. \n\n> I get it. I divided the patches to 0001 and 0002 with 0001 focussing\n> on the change of behaviour around future end LSNs, dropping till end\n> of WAL functions and tests tweakings related to it. 0002 has all other\n> tests tidy up things.\n> \n> Please find the attached v8 patch set for further review.\n\nThe tests of 0001 were still too complex IMO. The changes can be much\nsimpler as it requires only to move the till_end_of_wal() calls from\npg_walinspect.sql to oldextversions.sql. Nothing more.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 10:02:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-15 09:56:10 +0900, Michael Paquier wrote:\n> On Tue, Mar 14, 2023 at 02:54:40PM -0700, Andres Freund wrote:\n> > Object description\n> > -----------------------------------------------------------\n> > function pg_get_wal_record_info(pg_lsn)\n> > - function pg_get_wal_records_info(pg_lsn,pg_lsn)\n> > function pg_get_wal_records_info_till_end_of_wal(pg_lsn)\n> > - function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n> > + function pg_get_wal_records_info(pg_lsn,pg_lsn)\n> > function pg_get_wal_stats_till_end_of_wal(pg_lsn,boolean)\n> > + function pg_get_wal_stats(pg_lsn,pg_lsn,boolean)\n> > (5 rows)\n> > \n> > -- Make sure checkpoints don't interfere with the test.\n> > \n> > Looks like it's missing an ORDER BY.\n> \n> Interesting. This is \"\\dx+ pg_walinspect\".\n> listOneExtensionContents() uses pg_describe_object() for that, and\n> there is already an ORDER BY based on it. I would not have expected\n> this part to be that much sensitive. Is this using a specific ICU\n> collation, because this is a side-effect of switching ICU as the\n> default in initdb?\n\nIt's using ICU, but not a specific collation. The build I linked to is WIP\nhackery to add ICU support to windows CI. 
Here's the initdb output:\nhttps://api.cirrus-ci.com/v1/artifact/task/6288336663347200/testrun/build/testrun/pg_walinspect/regress/log/initdb.log\n\nThe database cluster will be initialized with this locale configuration:\n provider: icu\n ICU locale: en_US\n LC_COLLATE: English_United States.1252\n LC_CTYPE: English_United States.1252\n LC_MESSAGES: English_United States.1252\n LC_MONETARY: English_United States.1252\n LC_NUMERIC: English_United States.1252\n LC_TIME: English_United States.1252\nThe default database encoding has accordingly been set to \"WIN1252\".\nThe default text search configuration will be set to \"english\".\n\nFor comparison, here's a recent CI run (which also failed on windows, but for\nunrelated reasons), without ICU:\nhttps://api.cirrus-ci.com/v1/artifact/task/6478925920993280/testrun/build/testrun/pg_walinspect/regress/log/initdb.log\n\nThe database cluster will be initialized with locale \"English_United States.1252\".\nThe default database encoding has accordingly been set to \"WIN1252\".\nThe default text search configuration will be set to \"english\".\n\n\n> As a solution, this could use pg_identify_object(classid, objid, 0) in\n> the ORDER BY clause to enforce a better ordering of the objects dealt\n> with as it decomposes the object name and the object type. That\n> should be enough, I assume, as it looks to be parenthesis vs\n> underscore that switch the order.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Mar 2023 19:05:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 07:05:20PM -0700, Andres Freund wrote:\n> It's using ICU, but not a specific collation. The build I linked to is WIP\n> hackery to add ICU support to windows CI. Here's the initdb output:\n> https://api.cirrus-ci.com/v1/artifact/task/6288336663347200/testrun/build/testrun/pg_walinspect/regress/log/initdb.log\n\nHmm. Thanks. At the end, I think that I would be tempted to just\nremove this \\dx query and move on. I did not anticipate that this\nordering would be that much sensitive, and the solution of using a\nCOLLATE C at the end of a describe.c query does not sound much\nappealing, either.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 15:57:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 12:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 14, 2023 at 07:05:20PM -0700, Andres Freund wrote:\n> > It's using ICU, but not a specific collation. The build I linked to is WIP\n> > hackery to add ICU support to windows CI. Here's the initdb output:\n> > https://api.cirrus-ci.com/v1/artifact/task/6288336663347200/testrun/build/testrun/pg_walinspect/regress/log/initdb.log\n>\n> Hmm. Thanks. At the end, I think that I would be tempted to just\n> remove this \\dx query and move on. I did not anticipate that this\n> ordering would be that much sensitive, and the solution of using a\n> COLLATE C at the end of a describe.c query does not sound much\n> appealing, either.\n\n-1 for removing \\dx+ for pg_walinspect version 1.0, because we wanted\nto show the diff of functions along with testing the upgrade path in\nthe oldextversions.sql. Therefore, I prefer something like [1]:\n\n[1]\ndiff --git a/contrib/pg_walinspect/sql/oldextversions.sql\nb/contrib/pg_walinspect/sql/oldextversions.sql\nindex 258a009888..32a059c72d 100644\n--- a/contrib/pg_walinspect/sql/oldextversions.sql\n+++ b/contrib/pg_walinspect/sql/oldextversions.sql\n@@ -5,8 +5,17 @@ CREATE EXTENSION pg_walinspect WITH VERSION '1.0';\n -- Mask DETAIL messages as these could refer to current LSN positions.\n \\set VERBOSITY terse\n\n+-- \\dx+ will give locale-sensitive results, so we can't use it here.\n+CREATE VIEW list_pg_walinspect_objects AS\n+ SELECT pg_describe_object(classid, objid, 0) AS \"Object description\"\n+ FROM pg_depend\n+ WHERE refclassid = 'pg_extension'::regclass AND\n+ refobjid = (SELECT oid FROM pg_extension WHERE\nextname = 'pg_walinspect') AND\n+ deptype = 'e'\n+ ORDER BY pg_describe_object(classid, objid, 0) COLLATE \"C\";\n+\n -- List what version 1.0 contains\n-\\dx+ pg_walinspect\n+SELECT * FROM list_pg_walinspect_objects;\n\n -- Make sure checkpoints don't interfere with the test.\n SELECT 'init' 
FROM\npg_create_physical_replication_slot('regress_pg_walinspect_slot',\ntrue, false);\n@@ -25,7 +34,7 @@ SELECT COUNT(*) >= 1 AS ok FROM\npg_get_wal_stats_till_end_of_wal('FFFFFFFF/FFFFF\n ALTER EXTENSION pg_walinspect UPDATE TO '1.1';\n\n -- List what version 1.1 contains\n-\\dx+ pg_walinspect\n+SELECT * FROM list_pg_walinspect_objects;\n\n SELECT pg_drop_replication_slot('regress_pg_walinspect_slot');\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Mar 2023 12:40:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 12:40:17PM +0530, Bharath Rupireddy wrote:\n> -1 for removing \\dx+ for pg_walinspect version 1.0, because we wanted\n> to show the diff of functions along with testing the upgrade path in\n> the oldextversions.sql. Therefore, I prefer something like [1]:\n\nThis is a duplicate of what describe.c uses, with a COLLATE clause.\nThe main goal was to have a simple check, so I'd still stand by the\nsimplest choice and move on.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 18:50:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 06:50:01PM +0900, Michael Paquier wrote:\n> This is a duplicate of what describe.c uses, with a COLLATE clause.\n> The main goal was to have a simple check, so I'd still stand by the\n> simplest choice and move on.\n\nPlease note that I have done something about that with e643a31 by\nreplacing the problematic \\dx with a SELECT query, but left the second\none as it should not be a problem.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 16:18:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 12:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 15, 2023 at 06:50:01PM +0900, Michael Paquier wrote:\n> > This is a duplicate of what describe.c uses, with a COLLATE clause.\n> > The main goal was to have a simple check, so I'd still stand by the\n> > simplest choice and move on.\n>\n> Please note that I have done something about that with e643a31 by\n> replacing the problematic \\dx with a SELECT query, but left the second\n> one as it should not be a problem.\n\nThanks.\n\nFWIW, I rebased the tests tweaking patch and attached it here as v9.\nThis should keep the pg_walinspect tests consistent across comments,\nspaces, new lines and using count(*) >= 1 for all successful function\nexecutions. Thoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Mar 2023 13:17:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 01:17:59PM +0530, Bharath Rupireddy wrote:\n> FWIW, I rebased the tests tweaking patch and attached it here as v9.\n> This should keep the pg_walinspect tests consistent across comments,\n> spaces, new lines and using count(*) >= 1 for all successful function\n> executions. Thoughts?\n\nMostly OK by me, so applied after tweaking a few tiny things. The\nrewrites of the queries where we should have more than one record and\nthe removal of count() for the failure cases have been kept as\nproposed, as are most of the comments.\n--\nMichael",
"msg_date": "Thu, 23 Mar 2023 11:52:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Combine pg_walinspect till_end_of_wal functions with others"
}
] |
[
{
"msg_contents": "\nHi all,\n\nIn pgql-general, I reported that the queue order changed in\nthe following cases. [1]\n・Multiple sessions request row locks for the same tuple\n・Update occurs for target tuple\n\nI would like to hear the opinion of experts on whether it is a\nspecification or a bug.\nI think row locking is a FIFO specification, using tuple header\nand lock managers. Therefore, I think that the above is a bug.\n\n[1]\nhttps://www.postgresql.org/message-id/TYAPR01MB6073506ECCD7B8F51DA807F68A0F9@TYAPR01MB6073.jpnprd01.prod.outlook.com\n\n\nRegerds.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 08:12:46 +0000",
"msg_from": "\"Ryo Yamaji (Fujitsu)\" <yamaji.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "The order of queues in row lock is changed (not FIFO)"
},
{
"msg_contents": "\"Ryo Yamaji (Fujitsu)\" <yamaji.ryo@fujitsu.com> writes:\n> In pgql-general, I reported that the queue order changed in\n> the following cases. [1]\n> \u001b$B!&\u001b(BMultiple sessions request row locks for the same tuple\n> \u001b$B!&\u001b(BUpdate occurs for target tuple\n\n> I would like to hear the opinion of experts on whether it is a\n> specification or a bug.\n> I think row locking is a FIFO specification, using tuple header\n> and lock managers. Therefore, I think that the above is a bug.\n\n> [1]\n> https://www.postgresql.org/message-id/TYAPR01MB6073506ECCD7B8F51DA807F68A0F9@TYAPR01MB6073.jpnprd01.prod.outlook.com\n\nI don't see a bug here, or at least I'm not willing to move the\ngoalposts to where you want them to be. I believe that we do guarantee\narrival-order locking of individual tuple versions. However, in the\nexample you show, a single row is being updated over and over. So,\ninitially we have a single \"winner\" transaction that got the tuple lock\nfirst and updated the row. 
When it commits, each other transaction\nserially comes off the wait queue for that tuple lock and discovers\nthat it now needs a lock on a different tuple version than it has got.\nSo it tries to get lock on whichever is the latest tuple version.\nThat might still appear serial as far as the original 100 sessions\ngo, because they were all queued on the same tuple lock to start with.\nBut when the new sessions come in, they effectively line-jump because\nthey will initially try to lock whichever tuple version is committed\nlive at that instant, and thus they get ahead of whichever remain of\nthe original 100 sessions for the lock on that tuple version (since\nthose are all still blocked on some older tuple version, whose lock is\nheld by whichever session is performing the next-to-commit update).\n\nI don't see any way to make that more stable that doesn't involve\nrequiring sessions to take locks on already-dead-to-them tuples;\nwhich sure seems like a nonstarter, not least because we don't even have\na way to find such tuples. The update chains only link forward not back.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 12:00:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: The order of queues in row lock is changed (not FIFO)"
},
{
"msg_contents": "\r\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\r\n> I don't see a bug here, or at least I'm not willing to move the goalposts to where you want them to be.\r\n> I believe that we do guarantee arrival-order locking of individual tuple versions. However, in the \r\n> example you show, a single row is being updated over and over. So, initially we have a single \"winner\" \r\n> transaction that got the tuple lock first and updated the row. When it commits, each other transaction \r\n> serially comes off the wait queue for that tuple lock and discovers that it now needs a lock on a \r\n> different tuple version than it has got.\r\n> So it tries to get lock on whichever is the latest tuple version.\r\n> That might still appear serial as far as the original 100 sessions go, because they were all queued on the \r\n> same tuple lock to start with.\r\n> But when the new sessions come in, they effectively line-jump because they will initially try to lock \r\n> whichever tuple version is committed live at that instant, and thus they get ahead of whichever remain of \r\n> the original 100 sessions for the lock on that tuple version (since those are all still blocked on some older \r\n> tuple version, whose lock is held by whichever session is performing the next-to-commit update).\r\n\r\n> I don't see any way to make that more stable that doesn't involve requiring sessions to take locks on \r\n> already-dead-to-them tuples; which sure seems like a nonstarter, not least because we don't even have a \r\n> way to find such tuples. 
The update chains only link forward not back.\r\n\r\nThank you for your reply.\r\nWhen I was doing this test, I confirmed the following two actions.\r\n(1) The first 100 sessions are overtaken by the last 10.\r\n(2) the order of the preceding 100 sessions changes\r\n\r\n(1) I was concerned from the user's point of view that the lock order for the same tuple was not preserved.\r\nHowever, as you pointed out, in many cases the order of arrival is guaranteed from the perspective of the tuple.\r\nYou understand the PostgreSQL architecture and understand that you need to use it.\r\n\r\n(2) This behavior is rare. Typically, the first session gets AccessExclusiveLock to the tuple and ShareLock to the\r\ntransaction ID. Subsequent sessions will wait for AccessExclusiveLock to the tuple. However, we ignored\r\nAccessExclusiveLock in the tuple from the log and observed multiple sessions waiting for ShareLock to the\r\ntransaction ID. The log shows that the order of the original 100 sessions has been changed due to the above\r\nmovement.\r\n\r\nAt first, I thought both (1) and (2) were obstacles. However, I understood from your indication that (1) is not a bug.\r\nI would be grateful if you could also give me your opinion on (2).\r\n\r\nShare the following logs:\r\n\r\n[Log]\r\n1. ShareLock has one wait, the rest is in AccessExclusiveLock\r\n\r\n1-1. Only 1369555 is aligned with ShareLock, the transaction ID obtained by 1369547, and the rest with\r\n AccessExclusiveLock, the tuple obtained by 1369555.\r\n This is similar to a pattern in which no updates have occurred to the tuple.\r\n--------------------------------------------------------------\r\n2022-10-26 01:20:08.881 EDT [1369555:19:0] LOG: process 1369555 still waiting for ShareLock on transaction 2501 after 10.072 ms\r\n2022-10-26 01:20:08.881 EDT [1369555:20:0] DETAIL: Process holding the lock: 1369547. 
Wait queue: 1369555.\r\n〜\r\n2022-10-26 01:21:58.918 EDT [1369898:17:0] LOG: process 1369898 acquired AccessExclusiveLock on tuple (1, 0) of relation 16546 of database 13779 after 10.321 ms\r\n2022-10-26 01:21:58.918 EDT [1369898:18:0] DETAIL: Process holding the lock: 1369555. Wait queue: 1369558, 1369561, 1369564, 1369567, 1369570, 1369573, 1369576, ...\r\n--------------------------------------------------------------\r\n\r\n\r\n2. All processes wait with ShareLock\r\n\r\n2-1. With 1369558 holding the t1 (0, 4) lock, the queue head is 1369561.\r\n--------------------------------------------------------------\r\n2022-10-26 01:22:27.230 EDT [1369623:46:2525] LOG: process 1369623 still waiting for ShareLock on transaction 2504 after 10.133 msprocess 1369623 still waiting for ShareLock on transaction 2504 after 10.133 ms\r\n2022-10-26 01:22:27.242 EDT [1369877:47:2604] DETAIL: Process holding the lock: 1369558. Wait queue: 1369561, 1369623, 1369626, ...\r\n--------------------------------------------------------------\r\n\r\n2-2. When 1369558 locks are released, the first 1369561 in the Wait queue was expected to acquire the lock,\r\n but the process actually acquired 1369787\r\n--------------------------------------------------------------\r\n2022-10-26 01:22:28.237 EDT [1369623:63:2525] LOG: process 1369623 still waiting for ShareLock on transaction 2577 after 10.028 ms\r\n2022-10-26 01:22:28.237 EDT [1369623:64:2525] DETAIL: Process holding the lock: 1369787. Wait queue: 1369623, 1369610, 1369614, 1369617, 1369620.\r\n--------------------------------------------------------------\r\n\r\n2-3. Checking that the 1369561 is rearranging.\r\n--------------------------------------------------------------\r\n2022-10-26 01:22:28.237 EDT [1369629:64:2527] DETAIL: Process holding the lock: 1369623. Wait queue: 1369629, 1369821, 1369644, ... 1369561, ...\r\n--------------------------------------------------------------\r\n\r\n\r\n\r\nRegards, ryo\r\n",
"msg_date": "Tue, 7 Mar 2023 01:48:21 +0000",
"msg_from": "\"Ryo Yamaji (Fujitsu)\" <yamaji.ryo@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: The order of queues in row lock is changed (not FIFO)"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 4:49 PM Ryo Yamaji (Fujitsu)\n<yamaji.ryo@fujitsu.com> wrote:\n>\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> > I don't see a bug here, or at least I'm not willing to move the goalposts to where you want them to be.\n> > I believe that we do guarantee arrival-order locking of individual tuple versions. However, in the\n> > example you show, a single row is being updated over and over. So, initially we have a single \"winner\"\n> > transaction that got the tuple lock first and updated the row. When it commits, each other transaction\n> > serially comes off the wait queue for that tuple lock and discovers that it now needs a lock on a\n> > different tuple version than it has got.\n> > So it tries to get lock on whichever is the latest tuple version.\n> > That might still appear serial as far as the original 100 sessions go, because they were all queued on the\n> > same tuple lock to start with.\n> > But when the new sessions come in, they effectively line-jump because they will initially try to lock\n> > whichever tuple version is committed live at that instant, and thus they get ahead of whichever remain of\n> > the original 100 sessions for the lock on that tuple version (since those are all still blocked on some older\n> > tuple version, whose lock is held by whichever session is performing the next-to-commit update).\n>\n> > I don't see any way to make that more stable that doesn't involve requiring sessions to take locks on\n> > already-dead-to-them tuples; which sure seems like a nonstarter, not least because we don't even have a\n> > way to find such tuples. 
The update chains only link forward not back.\n>\n> Thank you for your reply.\n> When I was doing this test, I confirmed the following two actions.\n> (1) The first 100 sessions are overtaken by the last 10.\n> (2) the order of the preceding 100 sessions changes\n>\n> (1) I was concerned from the user's point of view that the lock order for the same tuple was not preserved.\n> However, as you pointed out, in many cases the order of arrival is guaranteed from the perspective of the tuple.\n> You understand the PostgreSQL architecture and understand that you need to use it.\n>\n> (2) This behavior is rare. Typically, the first session gets AccessExclusiveLock to the tuple and ShareLock to the\n> transaction ID. Subsequent sessions will wait for AccessExclusiveLock to the tuple. However, we ignored\n> AccessExclusiveLock in the tuple from the log and observed multiple sessions waiting for ShareLock to the\n> transaction ID. The log shows that the order of the original 100 sessions has been changed due to the above\n> movement.\n>\n\nI think for (2), the test is hitting the case of walking the update\nchain via heap_lock_updated_tuple() where we don't acquire the lock on\nthe tuple. See comments atop heap_lock_updated_tuple(). You can verify\nif that is the case by adding some DEBUG logs in that function.\n\n> At first, I thought both (1) and (2) were obstacles. However, I understood from your indication that (1) is not a bug.\n> I would be grateful if you could also give me your opinion on (2).\n>\n\nIf my above observation is correct then it is not a bug as it is\nbehaving as per the current design.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Apr 2023 15:59:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: The order of queues in row lock is changed (not FIFO)"
}
] |
[
{
"msg_contents": "SQL:2023 should be published within the next 2 months, so I want to \nupdate our SQL conformance information for our PostgreSQL release later \nthis year.\n\nAttached are patches that update the keywords list and the features list \nas usual. (Some of the new features in the JSON area are still being \nworked on. I have just set them all to NO for now, to be revisited later.)\n\nI'm also proposing to get rid of the tracking of subfeatures. This has \nbeen de-facto deprecated: All the subfeatures for optional features have \nbeen removed (replaced by top-level feature codes), and the subfeatures \nfor mandatory features aren't very interesting. The TODO is to remove \nthe columns for the subfeatures in src/backend/catalog/sql_features.txt. \n That is a mechanical change that I did not include in the patch.\n\nI'll leave this patch set in the commit fest, to let those concurrent \ndevelopments shake out and as a reminder to address this when the time \ncomes.",
"msg_date": "Wed, 1 Mar 2023 10:12:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "documentation updates for SQL:2023"
},
{
"msg_contents": "On 01.03.23 10:12, Peter Eisentraut wrote:\n> SQL:2023 should be published within the next 2 months, so I want to \n> update our SQL conformance information for our PostgreSQL release later \n> this year.\n> \n> Attached are patches that update the keywords list and the features list \n> as usual. (Some of the new features in the JSON area are still being \n> worked on. I have just set them all to NO for now, to be revisited later.)\n\nI have committed these patches.\n\n> I'm also proposing to get rid of the tracking of subfeatures. This has \n> been de-facto deprecated: All the subfeatures for optional features have \n> been removed (replaced by top-level feature codes), and the subfeatures \n> for mandatory features aren't very interesting. The TODO is to remove \n> the columns for the subfeatures in src/backend/catalog/sql_features.txt. \n> That is a mechanical change that I did not include in the patch.\n\nI have dropped this for now. This got a little bit more complicated \nthan I had hoped, since the sql_features.txt file is also loaded into \nthe information schema in initdb, and I didn't want to reorganize that \nright now. Something to revisit some other time, perhaps.\n\n\n\n",
"msg_date": "Wed, 5 Apr 2023 11:41:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: documentation updates for SQL:2023"
}
] |
[
{
"msg_contents": "The SQL standard defines several standard collations. Most of them are \nonly of legacy interest (IMO), but two are currently relevant: UNICODE \nand UCS_BASIC. UNICODE sorts by the default Unicode collation algorithm \nspecifications and UCS_BASIC sorts by codepoint.\n\nWhen collation support was added to PostgreSQL, we added UCS_BASIC, \nsince that could easily be mapped to the C locale. But there was no \nstraightforward way to provide the UNICODE collation. (Recall that \ncollation support came several releases before ICU support.)\n\nWith ICU support, we can provide the UNICODE collation, since it's just \nthe root locale. I suppose one hesitation was that ICU was not a \nstandard feature, so this would create variations in the default catalog \ncontents, or something like that. But I think now that we are drifting \nto make ICU more prominent, we can just add that anyway. I think being \nable to say\n\n COLLATE UNICODE\n\ninstead of\n\n COLLATE \"und-x-icu\"\n\nor whatever it is, is pretty useful.\n\nSo, attached is a small patch to add this.",
"msg_date": "Wed, 1 Mar 2023 11:09:52 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Add standard collation UNICODE"
},
{
"msg_contents": "On 3/1/23 11:09, Peter Eisentraut wrote:\n> The SQL standard defines several standard collations. Most of them are \n> only of legacy interest (IMO), but two are currently relevant: UNICODE \n> and UCS_BASIC. UNICODE sorts by the default Unicode collation algorithm \n> specifications and UCS_BASIC sorts by codepoint.\n> \n> When collation support was added to PostgreSQL, we added UCS_BASIC, \n> since that could easily be mapped to the C locale. But there was no \n> straightforward way to provide the UNICODE collation. (Recall that \n> collation support came several releases before ICU support.)\n> \n> With ICU support, we can provide the UNICODE collation, since it's just \n> the root locale. I suppose one hesitation was that ICU was not a \n> standard feature, so this would create variations in the default catalog \n> contents, or something like that. But I think now that we are drifting \n> to make ICU more prominent, we can just add that anyway. I think being \n> able to say\n> \n> COLLATE UNICODE\n> \n> instead of\n> \n> COLLATE \"und-x-icu\"\n> \n> or whatever it is, is pretty useful.\n> \n> So, attached is a small patch to add this.\n\nI don't feel competent to review the patch (simple as it is), but +1 on \nthe principle.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 01:05:34 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Wed, 2023-03-01 at 11:09 +0100, Peter Eisentraut wrote:\n\n> When collation support was added to PostgreSQL, we added UCS_BASIC, \n> since that could easily be mapped to the C locale.\n\nSorting by codepoint should be encoding-independent (i.e. decode to\ncodepoint first); but the C collation is just strcmp, which is\nencoding-dependent. So is UCS_BASIC wrong today?\n\n(Aside: I wonder whether we should differentiate between the libc\nprovider, which uses strcoll(), and the provider of non-localized\ncomparisons that just use strcmp(). That would be a better reflection\nof what the code actually does.)\n\n> With ICU support, we can provide the UNICODE collation, since it's\n> just \n> the root locale.\n\n+1\n\n> I suppose one hesitation was that ICU was not a \n> standard feature, so this would create variations in the default\n> catalog \n> contents, or something like that.\n\nIt looks like the way you've handled this is by inserting the collation\nwith collprovider=icu even if built without ICU support. I think that's\na new case, so we need to make sure it throws reasonable user-facing\nerrors.\n\nI do like your approach though because, if someone is using a standard\ncollation, I think \"not built with ICU\" (feature not supported) is a\nbetter error than \"collation doesn't exist\". It also effectively\nreserves the name \"unicode\".\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sat, 04 Mar 2023 10:29:54 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Sun, Mar 5, 2023 at 7:30 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Sorting by codepoint should be encoding-independent (i.e. decode to\n> codepoint first); but the C collation is just strcmp, which is\n> encoding-dependent. So is UCS_BASIC wrong today?\n\nIt's created for UTF-8 only, and UTF-8 sorts the same way as the\nencoded code points, when interpreted as a sequence of unsigned char\nby memcmp(), strcmp() etc. Seems right?\n\n\n",
"msg_date": "Sun, 5 Mar 2023 08:27:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Sun, 2023-03-05 at 08:27 +1300, Thomas Munro wrote:\n> It's created for UTF-8 only, and UTF-8 sorts the same way as the\n> encoded code points, when interpreted as a sequence of unsigned char\n> by memcmp(), strcmp() etc. Seems right?\n\nRight, makes sense.\n\nThough in principle, shouldn't someone using another encoding also be\nable to use ucs_basic? I'm not sure if that's a practical problem or\nnot; I'm just curious. Does ICU provide a locale for sorting by code\npoint?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 04 Mar 2023 15:56:48 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Sun, 2023-03-05 at 08:27 +1300, Thomas Munro wrote:\n>> It's created for UTF-8 only, and UTF-8 sorts the same way as the\n>> encoded code points, when interpreted as a sequence of unsigned char\n>> by memcmp(), strcmp() etc. Seems right?\n\n> Right, makes sense.\n\n> Though in principle, shouldn't someone using another encoding also be\n> able to use ucs_basic? I'm not sure if that's a practical problem or\n> not; I'm just curious. Does ICU provide a locale for sorting by code\n> point?\n\nISTM we could trivially allow it in LATIN1 encoding as well;\nstrcmp would still have the effect of sorting by unicode code points.\n\nGiven the complete lack of field demand for making it work in\nother encodings, I'm unexcited about spending more effort than that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Mar 2023 19:10:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 04.03.23 19:29, Jeff Davis wrote:\n> It looks like the way you've handled this is by inserting the collation\n> with collprovider=icu even if built without ICU support. I think that's\n> a new case, so we need to make sure it throws reasonable user-facing\n> errors.\n\nIt would look like this:\n\n=> select * from t1 order by b collate unicode;\nERROR: 0A000: ICU is not supported in this build\n\n> I do like your approach though because, if someone is using a standard\n> collation, I think \"not built with ICU\" (feature not supported) is a\n> better error than \"collation doesn't exist\". It also effectively\n> reserves the name \"unicode\".\n\nright\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 07:21:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 04.03.23 19:29, Jeff Davis wrote:\n> I do like your approach though because, if someone is using a standard\n> collation, I think \"not built with ICU\" (feature not supported) is a\n> better error than \"collation doesn't exist\". It also effectively\n> reserves the name \"unicode\".\n\nBy the way, speaking of reserving names, I don't remember the reason for \nthis bit in initdb.c:\n\n/*\n * Add SQL-standard names. We don't want to pin these, so they don't go\n * in pg_collation.h. But add them before reading system collations, so\n * that they win if libc defines a locale with the same name.\n */\n\nWhy don't we want them pinned?\n\nIf we add them instead as entries into pg_collation.dat, it seems to \nwork for me.\n\nAnother question: What is our current thinking on using BCP 47 names? \nThe documentation says for example\n\n\"\"\"\nThe first example selects the ICU locale using a “language tag” per BCP \n47. The second example uses the traditional ICU-specific locale syntax. \nThe first style is preferred going forward, but it is not supported by \nolder ICU versions.\n\"\"\"\n\nMy patch uses 'und' [BCP 47 style], which appears to be in conflict with \nthat statement.\n\nBut we have had some discussions on how correct that statement is, but I \ndon't remember the outcome.\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 08:10:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Wed, 2023-03-08 at 07:21 +0100, Peter Eisentraut wrote:\n> On 04.03.23 19:29, Jeff Davis wrote:\n> > It looks like the way you've handled this is by inserting the\n> > collation\n> > with collprovider=icu even if built without ICU support. I think\n> > that's\n> > a new case, so we need to make sure it throws reasonable user-\n> > facing\n> > errors.\n> \n> It would look like this:\n> \n> => select * from t1 order by b collate unicode;\n> ERROR: 0A000: ICU is not supported in this build\n\nRight, the error looks good. I'm just pointing out that before this\npatch, having provider='i' in a build without ICU was a configuration\nmistake; whereas afterward every database will have a collation with\nprovider='i' whether it has ICU support or not. I think that's fine,\nI'm just double-checking.\n\nWhy is \"unicode\" only provided for the UTF-8 encoding? For \"ucs_basic\"\nthat makes some sense, because the implementation only works in UTF-8.\nBut here we are using ICU, and the \"und\" locale should work for any\nICU-supported encoding. I suggest that we use collencoding=-1 for\n\"unicode\", and the docs can just add a note next to \"ucs_basic\" that it\nonly works for UTF-8, because that's the weird case.\n\nFor the docs, I suggest that you clarify that \"ucs_basic\" has the same\nbehavior as the C locale does *in the UTF-8 encoding*. Not all users\nmight pick up on the subtlety that the C locale has different behaviors\nin different encodings.\n\nOther than that, it looks good.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 08 Mar 2023 10:25:42 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 08.03.23 19:25, Jeff Davis wrote:\n> Why is \"unicode\" only provided for the UTF-8 encoding? For \"ucs_basic\"\n> that makes some sense, because the implementation only works in UTF-8.\n> But here we are using ICU, and the \"und\" locale should work for any\n> ICU-supported encoding. I suggest that we use collencoding=-1 for\n> \"unicode\", and the docs can just add a note next to \"ucs_basic\" that it\n> only works for UTF-8, because that's the weird case.\n\nmake sense\n\n> For the docs, I suggest that you clarify that \"ucs_basic\" has the same\n> behavior as the C locale does *in the UTF-8 encoding*. Not all users\n> might pick up on the subtlety that the C locale has different behaviors\n> in different encodings.\n\nOk, word-smithed a bit more.\n\nHow about this patch version?",
"msg_date": "Thu, 9 Mar 2023 11:21:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Thu, 2023-03-09 at 11:21 +0100, Peter Eisentraut wrote:\n> How about this patch version?\n\nLooks good to me.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 09 Mar 2023 11:23:35 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 09.03.23 20:23, Jeff Davis wrote:\n> On Thu, 2023-03-09 at 11:21 +0100, Peter Eisentraut wrote:\n>> How about this patch version?\n> \n> Looks good to me.\n\nCommitted, after adding a test.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 13:43:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Thu, 2023-03-09 at 11:23 -0800, Jeff Davis wrote:\n> Looks good to me.\n\nAnother thought: for ICU, do we want the default collation to be\nUNICODE (root collation)? What we have now gets the default from the\nenvironment, which is consistent with the libc provider.\n\nBut now that we have the UNICODE collation, it makes me wonder if we\nshould just default to that. The server's environment doesn't\nnecessarily say much about the locale of the data stored in it or the\nlocale of the applications accessing it.\n\nI don't have a strong opinion here, but I thought I'd raise the issue.\n\nBy my count, >50% of locales are actually just the root locale. I'm not\nsure if that should matter or not -- we don't want to weigh some\nlocales over others -- but I found it interesting.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 23 Mar 2023 13:16:35 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Thu, 2023-03-23 at 13:16 -0700, Jeff Davis wrote:\n> Another thought: for ICU, do we want the default collation to be\n> UNICODE (root collation)? What we have now gets the default from the\n> environment, which is consistent with the libc provider.\n> \n> But now that we have the UNICODE collation, it makes me wonder if we\n> should just default to that. The server's environment doesn't\n> necessarily say much about the locale of the data stored in it or the\n> locale of the applications accessing it.\n> \n> I don't have a strong opinion here, but I thought I'd raise the issue.\n> \n> By my count, >50% of locales are actually just the root locale. I'm not\n> sure if that should matter or not -- we don't want to weigh some\n> locales over others -- but I found it interesting.\n\nI second that. Most people don't pay attention to that when creating a\ncluster, so having a locale-agnostic collation is often better than\ninheriting whatever default happened to be set in your shell.\nFor example, the Debian/Ubuntu binary packages create a cluster when\nyou install the server package, and most people just go on using that.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 28 Mar 2023 08:50:45 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 23.03.23 21:16, Jeff Davis wrote:\n> Another thought: for ICU, do we want the default collation to be\n> UNICODE (root collation)? What we have now gets the default from the\n> environment, which is consistent with the libc provider.\n> \n> But now that we have the UNICODE collation, it makes me wonder if we\n> should just default to that. The server's environment doesn't\n> necessarily say much about the locale of the data stored in it or the\n> locale of the applications accessing it.\n\nAs long as we still have to initialize the libc locale fields to some \nlanguage, I think it would be less confusing to keep the ICU locale on \nthe same language.\n\nIf we ever manage to get rid of that, then I would also support making \nthe ICU locale the root collation by default.\n\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:07:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 3/28/23 06:07, Peter Eisentraut wrote:\n> On 23.03.23 21:16, Jeff Davis wrote:\n>> Another thought: for ICU, do we want the default collation to be\n>> UNICODE (root collation)? What we have now gets the default from the\n>> environment, which is consistent with the libc provider.\n>> \n>> But now that we have the UNICODE collation, it makes me wonder if we\n>> should just default to that. The server's environment doesn't\n>> necessarily say much about the locale of the data stored in it or the\n>> locale of the applications accessing it.\n> \n> As long as we still have to initialize the libc locale fields to some\n> language, I think it would be less confusing to keep the ICU locale on\n> the same language.\n\nI definitely agree with that.\n\n> If we ever manage to get rid of that, then I would also support making\n> the ICU locale the root collation by default.\n\n+1\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 08:46:17 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On Tue, 2023-03-28 at 08:46 -0400, Joe Conway wrote:\n> > As long as we still have to initialize the libc locale fields to\n> > some\n> > language, I think it would be less confusing to keep the ICU locale\n> > on\n> > the same language.\n> \n> I definitely agree with that.\n\nSounds good -- no changes then.\n\nRegards,\n\tJeff Davis\n\n> \n\n\n",
"msg_date": "Tue, 28 Mar 2023 06:30:00 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> COLLATE UNICODE\n> \n> instead of\n> \n> COLLATE \"und-x-icu\"\n> \n> or whatever it is, is pretty useful.\n> \n> So, attached is a small patch to add this.\n\nThis collation has an empty pg_collation.collversion column, instead\nof being set to the same value as \"und-x-icu\" to track its version.\n\n\npostgres=# select * from pg_collation where collname='unicode' \\gx\n-[ RECORD 1 ]-------+--------\noid\t\t | 963\ncollname\t | unicode\ncollnamespace\t | 11\ncollowner\t | 10\ncollprovider\t | i\ncollisdeterministic | t\ncollencoding\t | -1\ncollcollate\t | \ncollctype\t | \ncolliculocale\t | und\ncollicurules\t | \ncollversion\t | \n\nThe original patch implements this as an INSERT in which it would be easy to\nfix I guess, but in current HEAD it comes as an entry in\ninclude/catalog/pg_collation.dat:\n\n{ oid => '963',\n descr => 'sorts using the Unicode Collation Algorithm with default\nsettings',\n collname => 'unicode', collprovider => 'i', collencoding => '-1',\n colliculocale => 'und' },\n\nShould it be converted back into an INSERT or better left\nin this file and collversion being updated afterwards?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Thu, 27 Apr 2023 13:44:55 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 27.04.23 13:44, Daniel Verite wrote:\n> This collation has an empty pg_collation.collversion column, instead\n> of being set to the same value as \"und-x-icu\" to track its version.\n\n> The original patch implements this as an INSERT in which it would be easy to\n> fix I guess, but in current HEAD it comes as an entry in\n> include/catalog/pg_collation.dat:\n> \n> { oid => '963',\n> descr => 'sorts using the Unicode Collation Algorithm with default\n> settings',\n> collname => 'unicode', collprovider => 'i', collencoding => '-1',\n> colliculocale => 'und' },\n> \n> Should it be converted back into an INSERT or better left\n> in this file and collversion being updated afterwards?\n\nHow about we do it with an UPDATE command. We already do this for \npg_database in a similar way. See attached patch.",
"msg_date": "Mon, 8 May 2023 17:48:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
},
{
"msg_contents": "On 08.05.23 17:48, Peter Eisentraut wrote:\n> On 27.04.23 13:44, Daniel Verite wrote:\n>> This collation has an empty pg_collation.collversion column, instead\n>> of being set to the same value as \"und-x-icu\" to track its version.\n> \n>> The original patch implements this as an INSERT in which it would be \n>> easy to\n>> fix I guess, but in current HEAD it comes as an entry in\n>> include/catalog/pg_collation.dat:\n>>\n>> { oid => '963',\n>> descr => 'sorts using the Unicode Collation Algorithm with default\n>> settings',\n>> collname => 'unicode', collprovider => 'i', collencoding => '-1',\n>> colliculocale => 'und' },\n>>\n>> Should it be converted back into an INSERT or better left\n>> in this file and collversion being updated afterwards?\n> \n> How about we do it with an UPDATE command. We already do this for \n> pg_database in a similar way. See attached patch.\n\nThis has been committed.\n\n\n",
"msg_date": "Fri, 12 May 2023 10:04:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add standard collation UNICODE"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen using pg_walinspect, and calling functions like\npg_get_wal_records_info(), I often wish that the various information in\nthe block_ref column was separated out into columns so that I could\neasily access them and pass them to various other functions to add\ninformation -- like getting the relname from pg_class like this:\n\nSELECT n.nspname, c.relname, wal_info.*\n FROM pg_get_wal_records_extended_info(:start_lsn, :end_lsn) wal_info\n JOIN pg_class c\n ON wal_info.relfilenode = pg_relation_filenode(c.oid) AND\n wal_info.reldatabase IN (0, (SELECT oid FROM pg_database\n WHERE datname = current_database()))\n JOIN pg_namespace n ON n.oid = c.relnamespace;\n\n\nThis has been mentioned in [1] amongst other places.\n\nSo, attached is a patch with pg_get_wal_records_extended_info(). I\nsuspect the name is not very good. Also, it is nearly a direct copy of\npg_get_wal_fpi_infos() except for the helper called to fill in the\ntuplestore, so it might be worth doing something about that.\n\nHowever, I am mainly looking for feedback about whether or not others\nwould find this useful, and, if so, what columns they would like to see\nin the returned tuplestore.\n\nNote that I didn't include the cumulative fpi_len for all the pages\nsince pg_get_wal_fpi_info() now exists. I noticed that\npg_get_wal_fpi_info() doesn't list compression information (which is in\nthe block_ref column of pg_get_wal_records_info()). I don't know if this\nis worth including in my proposed function\npg_get_wal_records_extended_info().\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAH2-Wz%3DacGKoP8cZ%2B6Af2inoai0N5cZKCY13DaqXCwQNupK8qg%40mail.gmail.com",
"msg_date": "Wed, 1 Mar 2023 12:51:16 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:51 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> When using pg_walinspect, and calling functions like\n> pg_get_wal_records_info(), I often wish that the various information in\n> the block_ref column was separated out into columns so that I could\n> easily access them and pass them to various other functions to add\n> information -- like getting the relname from pg_class like this:\n>\n> SELECT n.nspname, c.relname, wal_info.*\n> FROM pg_get_wal_records_extended_info(:start_lsn, :end_lsn) wal_info\n> JOIN pg_class c\n> ON wal_info.relfilenode = pg_relation_filenode(c.oid) AND\n> wal_info.reldatabase IN (0, (SELECT oid FROM pg_database\n> WHERE datname = current_database()))\n> JOIN pg_namespace n ON n.oid = c.relnamespace;\n>\n>\n> This has been mentioned in [1] amongst other places.\n>\n> So, attached is a patch with pg_get_wal_records_extended_info(). I\n> suspect the name is not very good. Also, it is nearly a direct copy of\n> pg_get_wal_fpi_infos() except for the helper called to fill in the\n> tuplestore, so it might be worth doing something about that.\n>\n> However, I am mainly looking for feedback about whether or not others\n> would find this useful, and, if so, what columns they would like to see\n> in the returned tuplestore.\n>\n> Note that I didn't include the cumulative fpi_len for all the pages\n> since pg_get_wal_fpi_info() now exists. I noticed that\n> pg_get_wal_fpi_info() doesn't list compression information (which is in\n> the block_ref column of pg_get_wal_records_info()). I don't know if this\n> is worth including in my proposed function\n> pg_get_wal_records_extended_info().\n\nThinking about this more, it could make sense to have a function which\ngives you this extended block information and has a parameter like\nwith_fpi which would include the information returned by\npg_get_wal_fpi_info(). 
It might be nice to have it still include the\ninformation about the record itself as well.\n\nI don't know if it would be instead of pg_get_wal_fpi_info(), though.\n\nThe way I would use this is when I want to see the record level\ninformation but with some additional information aggregated across the\nrelevant blocks. For example, I could group by the record information\nand relfilenode and using the query in my example above, see all the\ninformation for the record along with the relname (when possible).\n\n- Melanie\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:17:05 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 11:17:05AM -0500, Melanie Plageman wrote:\n> Thinking about this more, it could make sense to have a function which\n> gives you this extended block information and has a parameter like\n> with_fpi which would include the information returned by\n> pg_get_wal_fpi_info(). It might be nice to have it still include the\n> information about the record itself as well.\n\nHmm. I am OK if you want to include more information about the\nblocks, and it may be nicer to not bloat the interface with more\nfunctions than necessary.\n\n> I don't know if it would be instead of pg_get_wal_fpi_info(), though.\n> \n> The way I would use this is when I want to see the record level\n> information but with some additional information aggregated across the\n> relevant blocks. For example, I could group by the record information\n> and relfilenode and using the query in my example above, see all the\n> information for the record along with the relname (when possible).\n\nAs far as I know, a block reference could have some data or a FPW, so\nit is true that pg_get_wal_fpi_info() is not extensive enough if you\nwant to get more information about the blocks in use for each record,\nespecially if there is some data, and grouping the information about\nwhole set of blocks into a single function call can some time.\n\nIn order to satisfy your case, why not having one function that does\neverything, looping over the blocks of a single record as long as\nXLogRecHasBlockRef() is satisfied, returning the FPW if the block\nincludes an image (or NULL if !XLogRecHasBlockImage()), as well as its\ndata in bytea if XLogRecGetData() gives something (?).\n\nI am not sure that this should return anything about the record itself\nexcept its ReadRecPtr, though, as ReadRecPtr would be enough to\ncross-check with the information provided by GetWALRecordInfo() with a\njoin. 
Hence, I guess that we could update the existing FPI function\nwith:\n- the addition of some of the flags of bimg_info, like the compression\ntype, if they apply, with a text[].\n- the addition of bimg_len, if the block has a FPW, or NULL if none.\n- the addition of apply_image, if the block has a FPW, or NULL if\nnone.\n- the addition of the block data, if any, or NULL if there is no\ndata.\n- an update for the FPW handling, where we would return NULL if there\nis no FPW references in the block, but still return the full,\ndecompressed 8kB image if it is there.\n--\nMichael",
"msg_date": "Fri, 3 Mar 2023 14:36:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 9:47 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> > However, I am mainly looking for feedback about whether or not others\n> > would find this useful, and, if so, what columns they would like to see\n> > in the returned tuplestore.\n\nIMO, pg_get_wal_records_extended_info as proposed doesn't look good to\nme as it outputs most of the columns that are already given by\npg_get_wal_records_info.What I think the best way at this point is to\nmake it return the following:\nlsn pg_lsn\nblock_id int8\nspcOid oid\ndbOid oid\nrelNumber oid\nforkNames text\nfpi bytea\nfpi_info text\n\nSo, there can be multiple columns for the same record LSN, which\nmeans, essentially (lsn, block_id) can be a unique value for the row.\nIf a block has FPI, fpi and fpi_info are non-null, otherwise, nulls.\nIf needed, this output can be joined with pg_get_wal_records_info on\nlsn, to get all the record level details.What do you think? Will this\nserve your purpose?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 20:10:14 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 15:40, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 9:47 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > > However, I am mainly looking for feedback about whether or not others\n> > > would find this useful, and, if so, what columns they would like to see\n> > > in the returned tuplestore.\n>\n> IMO, pg_get_wal_records_extended_info as proposed doesn't look good to\n> me as it outputs most of the columns that are already given by\n> pg_get_wal_records_info.What I think the best way at this point is to\n> make it return the following:\n> lsn pg_lsn\n> block_id int8\n> spcOid oid\n> dbOid oid\n> relNumber oid\n> forkNames text\n> fpi bytea\n> fpi_info text\n\nWouldn't it be more useful to have\n\ntype PgXlogRecordBlock\n( block_id int\n ...\n , blockimage bytea\n , data bytea\n)\n\ntype PgXlogRecord\n( start lsn\n , ...\n , blocks PgXlogRecordBlock[] -- array of record's registered blocks,\nc.q. DecodedBkpBlock->blocks\n , main_data bytea\n)\n\nwhich is returned by one sql function, and then used and processed\n(unnest()ed) in the relevant views? It would allow anyone to build\ntheir own processing on pg_walinspect where they want or need it,\nwithout us having to decide what the user wants, and without having to\nassociate blocks with the main xlog record data through the joining of\nseveral (fairly expensive) xlog decoding passes.\n\nThe basic idea is to create a single entrypoint to all relevant data\nfrom DecodedXLogRecord in SQL, not multiple.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:08:28 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 04:08:28PM +0100, Matthias van de Meent wrote:\n> On Mon, 6 Mar 2023 at 15:40, Bharath Rupireddy\n>> IMO, pg_get_wal_records_extended_info as proposed doesn't look good to\n>> me as it outputs most of the columns that are already given by\n>> pg_get_wal_records_info.What I think the best way at this point is to\n>> make it return the following:\n>> lsn pg_lsn\n>> block_id int8\n>> spcOid oid\n>> dbOid oid\n>> relNumber oid\n>> forkNames text\n>> fpi bytea\n>> fpi_info text\n\nI would add the length of the block data (without the hole and\ncompressed, as the FPI data should always be presented as\nuncompressed), and the block data if any (without the block data\nlength as one can guess it based on the bytea data length). Note that \na block can have both a FPI and some data assigned to it, as far as I\nrecall.\n\n> The basic idea is to create a single entrypoint to all relevant data\n> from DecodedXLogRecord in SQL, not multiple.\n\nWhile I would agree with this principle on simplicity's ground in\nterms of minimizing the SQL interface and the pg_wal/ lookups, I\ndisagree about it on usability ground, because we can avoid extra SQL\ntweaks with more functions. One recent example I have in mind is\npartitionfuncs.c, which can actually be achieved with a WITH RECURSIVE\non the catalogs. There are of course various degrees of complexity,\nand perhaps unnest() cannot qualify as one, but having two functions\nreturning normalized records (one for the record information, and a\nsecond for the block information), is a rather good balance between\nusability and interface complexity, in my experience. If you have two\nfunctions, a JOIN is enough to cross-check the block data and the\nrecord data, while an unnest() heavily bloats the main function output\n(aka byteas of FPIs in a single array).\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 09:34:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Tue, 7 Mar 2023 09:34:24 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Mar 06, 2023 at 04:08:28PM +0100, Matthias van de Meent wrote:\n> > On Mon, 6 Mar 2023 at 15:40, Bharath Rupireddy\n> >> IMO, pg_get_wal_records_extended_info as proposed doesn't look good to\n> >> me as it outputs most of the columns that are already given by\n> >> pg_get_wal_records_info.What I think the best way at this point is to\n> >> make it return the following:\n> >> lsn pg_lsn\n> >> block_id int8\n> >> spcOid oid\n> >> dbOid oid\n> >> relNumber oid\n> >> forkNames text\n> >> fpi bytea\n> >> fpi_info text\n> \n> I would add the length of the block data (without the hole and\n> compressed, as the FPI data should always be presented as\n> uncompressed), and the block data if any (without the block data\n> length as one can guess it based on the bytea data length). Note that \n> a block can have both a FPI and some data assigned to it, as far as I\n> recall.\n\n+1\n\n> > The basic idea is to create a single entrypoint to all relevant data\n> > from DecodedXLogRecord in SQL, not multiple.\n> \n> While I would agree with this principle on simplicity's ground in\n> terms of minimizing the SQL interface and the pg_wal/ lookups, I\n> disagree about it on unsability ground, because we can avoid extra SQL\n> tweaks with more functions. One recent example I have in mind is\n> partitionfuncs.c, which can actually be achieved with a WITH RECURSIVE\n> on the catalogs. There are of course various degrees of complexity,\n> and perhaps unnest() cannot qualify as one, but having two functions\n> returning normalized records (one for the record information, and a\n> second for the block information), is a rather good balance between\n> usability and interface complexity, in my experience. If you have two\n> functions, a JOIN is enough to cross-check the block data and the\n> record data, while an unnest() heavily bloats the main function output\n> (aka byteas of FPIs in a single array).\n\nFWIW, my initial thought about the proposal was similar to Matthias,\nand tried a function that would convert (for simplicity) the block_ref\nstring to a json object. Although this approach did work, I was not\nsatisfied with its limited usability and poor performance (mainly the\npoor performance is due to text->json conversion, though)..\n\nFinally, I realized that the initial discomfort I experienced stemmed\nfrom the name of the function, which suggests that it returns\ninformation of \"records\". This discomfort would disappear if the\nfunction were instead named pg_get_wal_blockref_info() or something\nsimilar.\n\nThus I'm inclined to agree with Michael's suggestion of creating a new\nnormalized set-returning function that returns information of\n\"blocks\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Mar 2023 11:17:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 11:17:45AM +0900, Kyotaro Horiguchi wrote:\n> Thus I'm inclined to agree with Michael's suggestion of creating a new\n> normalized set-returning function that returns information of\n> \"blocks\".\n\nJust to be clear here, I am not suggesting to add a new function for\nonly the block information, just a rename of the existing\npg_get_wal_fpi_info() to something like pg_get_wal_block_info() that\nincludes both the FPI (if any or NULL if none) and the block data (if\nany or NULL if none) so as all of them are governed by the same lookup\nat pg_wal/. The fpi information (aka compression type) is displayed\nif there is a FPI in the block.\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 14:44:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Tue, 7 Mar 2023 14:44:49 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 07, 2023 at 11:17:45AM +0900, Kyotaro Horiguchi wrote:\n> > Thus I'm inclined to agree with Michael's suggestion of creating a new\n> > normalized set-returning function that returns information of\n> > \"blocks\".\n> \n> Just to be clear here, I am not suggesting to add a new function for\n> only the block information, just a rename of the existing\n> pg_get_wal_fpi_info() to something like pg_get_wal_block_info() that\n> includes both the FPI (if any or NULL if none) and the block data (if\n> any or NULL is none) so as all of them are governed by the same lookup\n> at pg_wal/. The fpi information (aka compression type) is displayed\n> if there is a FPI in the block.\n\nAh. Yes, that expansion sounds sensible.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Mar 2023 15:49:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 03:49:02PM +0900, Kyotaro Horiguchi wrote:\n> Ah. Yes, that expansion sounds sensible.\n\nOkay, so, based on this idea, I have hacked on this stuff and finish\nwith the attached that shows block data if it exists, as well as FPI\nstuff if any. bimg_info is showed as a text[] for its flags.\n\nI guess that I'd better add a test that shows correctly a record with\nsome block data attached to it, on top of the existing one for FPIs..\nAny suggestions? Perhaps just a heap/heap2 record?\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 16:18:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Tue, 7 Mar 2023 16:18:21 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 07, 2023 at 03:49:02PM +0900, Kyotaro Horiguchi wrote:\n> > Ah. Yes, that expansion sounds sensible.\n> \n> Okay, so, based on this idea, I have hacked on this stuff and finish\n> with the attached that shows block data if it exists, as well as FPI\n> stuff if any. bimg_info is showed as a text[] for its flags.\n\n# The naming convention looks inconsistent between\n# pg_get_wal_records_info and pg_get_wal_block_info but it's not an\n# issue of this patch..\n\n> I guess that I'd better add a test that shows correctly a record with\n> some block data attached to it, on top of the existing one for FPIs..\n> Any suggestions? Perhaps just a heap/heap2 record?\n> \n> Thoughts?\n\nI thought that we needed a test for block data when I saw the patch.\nI don't have great idea but a single insert should work.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Mar 2023 18:07:58 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 12:48 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 07, 2023 at 03:49:02PM +0900, Kyotaro Horiguchi wrote:\n> > Ah. Yes, that expansion sounds sensible.\n>\n> Okay, so, based on this idea, I have hacked on this stuff and finish\n> with the attached that shows block data if it exists, as well as FPI\n> stuff if any. bimg_info is showed as a text[] for its flags.\n\n+1.\n\n> I guess that I'd better add a test that shows correctly a record with\n> some block data attached to it, on top of the existing one for FPIs..\n> Any suggestions? Perhaps just a heap/heap2 record?\n>\n> Thoughts?\n\nThat would be a lot better. Not just the test, but also the\ndocumentation can have it. Simple way to generate such a record (both\nblock data and FPI) is to just change the wal_level to logical in\nwalinspect.conf [1], see code around REGBUF_KEEP_DATA and\nRelationIsLogicallyLogged in heapam.c\n\nI had the following comments and fixed them in the attached v2 patch:\n\n1. Still a trace of pg_get_wal_fpi_info in docs, removed it.\n\n2. Used int4 instead of int for fpilen just to be in sync with\nfpi_length of pg_get_wal_record_info.\n\n3. Changed to be consistent and use just FPI or \"F/full page\".\n    /* FPI flags */\n    /* No full page image, so store NULLs for all its fields */\n    /* Full-page image */\n    /* Full page exists, so let's save it. */\n * and end LSNs. This produces information about the full page images with\n * to a record. Decompression is applied to the full-page images, if\n\n4. I think we need to free raw_data, raw_page and flags as we loop\nover multiple blocks (XLR_MAX_BLOCK_ID) and will leak memory which can\nbe a problem if we have many blocks associated with a single WAL\nrecord.\n    flags = (Datum *) palloc0(sizeof(Datum) * bitcnt);\nAlso, we will leak all CStringGetTextDatum memory in the block_id for loop.\nAnother way is to use and reset temp memory context in the for loop\nover block_ids. I prefer this approach over multiple pfree()s in\nblock_id for loop.\n\n5. I think it'd be good to say if the FPI is for WAL_VERIFICATION, so\nI changed it to the following. Feel free to ignore this if you think\nit's not required.\n    if (blk->apply_image)\n        flags[cnt++] = CStringGetTextDatum(\"APPLY\");\n    else\n        flags[cnt++] = CStringGetTextDatum(\"WAL_VERIFICATION\");\n\n6. Did minor wordsmithing.\n\n7. Added test case which shows both block data and fpi in the documentation.\n\n8. Changed wal_level to logical in walinspect.conf to test case with block data.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Mar 2023 15:56:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, 7 Mar 2023 at 01:34, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 06, 2023 at 04:08:28PM +0100, Matthias van de Meent wrote:\n> > On Mon, 6 Mar 2023 at 15:40, Bharath Rupireddy\n> >> IMO, pg_get_wal_records_extended_info as proposed doesn't look good to\n> >> me as it outputs most of the columns that are already given by\n> >> pg_get_wal_records_info.What I think the best way at this point is to\n> >> make it return the following:\n> >> lsn pg_lsn\n> >> block_id int8\n> >> spcOid oid\n> >> dbOid oid\n> >> relNumber oid\n> >> forkNames text\n> >> fpi bytea\n> >> fpi_info text\n>\n> I would add the length of the block data (without the hole and\n> compressed, as the FPI data should always be presented as\n> uncompressed), and the block data if any (without the block data\n> length as one can guess it based on the bytea data length). Note that\n> a block can have both a FPI and some data assigned to it, as far as I\n> recall.\n>\n> > The basic idea is to create a single entrypoint to all relevant data\n> > from DecodedXLogRecord in SQL, not multiple.\n>\n> While I would agree with this principle on simplicity's ground in\n> terms of minimizing the SQL interface and the pg_wal/ lookups, I\n> disagree about it on unsability ground, because we can avoid extra SQL\n> tweaks with more functions. One recent example I have in mind is\n> partitionfuncs.c, which can actually be achieved with a WITH RECURSIVE\n> on the catalogs.\n\nCorrect, but in that case the user would build the same query (or at\nleast with the same complexity) as what we're executing under the\nhood, right?\n\n> There are of course various degrees of complexity,\n> and perhaps unnest() cannot qualify as one, but having two functions\n> returning normalized records (one for the record information, and a\n> second for the block information), is a rather good balance between\n> usability and interface complexity, in my experience.\n\nI would agree, if it weren't for the reasons written below.\n\n> If you have two\n> functions, a JOIN is enough to cross-check the block data and the\n> record data,\n\nJoins are expensive on large datasets; and because WAL is one of the\nlargest datasets in the system, why would we want to force the user to\nJOIN them if we can produce the data in one pre-baked data structure\nwithout a need to join?\n\n> while an unnest() heavily bloats the main function output\n> (aka byteas of FPIs in a single array).\n\nI don't see how that would be bad. You can select a subset of columns\nwithout much issue, which can allow you to ignore any and all bloat.\nIt is also not easy to imagine that we'd have arguments in the\nfunction that determine whether it includes the largest fields (main\ndata, blocks, block data, and block images) or leaves them NULL so\nthat we need to pass less data around if the user doesn't want the\ndata.\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 7 Mar 2023 13:08:26 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 03:56:22PM +0530, Bharath Rupireddy wrote:\n> That would be a lot better. Not just the test, but also the\n> documentation can have it. Simple way to generate such a record (both\n> block data and FPI) is to just change the wal_level to logical in\n> walinspect.conf [1], see code around REGBUF_KEEP_DATA and\n> RelationIsLogicallyLogged in heapam.c\n\nI don't agree that we need to go down to wal_level=logical for this.\nThe important part is to check that the non-NULL and NULL paths for\nthe block data and FPI data are both taken, making 4 paths to check.\nSo we need two tests at minimum, which would be either:\n- One SQL generating no FPI with no block data and a second generating\na FPI with block data. v2 was doing that but did not cover the first\ncase.\n- One SQL generating a FPI with no block data and a second generating\nno FPI with block data.\n\nSo let's just generate a heap record on an UPDATE, for example, like\nin the version attached.\n\n> 2. Used int4 instead of int for fpilen just to be in sync with\n> fpi_length of pg_get_wal_record_info.\n\nOkay.\n\n> 3. Changed to be consistent and use just FPI or \"F/full page\".\n> /* FPI flags */\n> /* No full page image, so store NULLs for all its fields */\n> /* Full-page image */\n> /* Full page exists, so let's save it. */\n> * and end LSNs. This produces information about the full page images with\n> * to a record. Decompression is applied to the full-page images, if\n\nFine by me.\n\n> 4. I think we need to free raw_data, raw_page and flags as we loop\n> over multiple blocks (XLR_MAX_BLOCK_ID) and will leak memory which can\n> be a problem if we have many blocks assocated with a single WAL\n> record.\n> flags = (Datum *) palloc0(sizeof(Datum) * bitcnt);\n> Also, we will leak all CStringGetTextDatum memory in the block_id for loop.\n> Another way is to use and reset temp memory context in the for loop\n> over block_ids. I prefer this approach over multiple pfree()s in\n> block_id for loop.\n\nI disagree, this was on purpose in the last version. This version\nfinishes by calling AllocSetContextCreate() and MemoryContextDelete()\nonce per *record*, which will not be free, and we are arguing about\nresetting the memory context after scanning up to XLR_MAX_BLOCK_ID\nblocks, or 32 blocks which would go up to 32kB per page in the worst\ncase. That's not going to matter in a large scan for each record, but\nthe extra AllocSet*() calls could. And we basically do the same thing\non HEAD.\n\n> 5. I think it'd be good to say if the FPI is for WAL_VERIFICATION, so\n> I changed it to the following. Feel free to ignore this if you think\n> it's not required.\n> if (blk->apply_image)\n> flags[cnt++] = CStringGetTextDatum(\"APPLY\");\n> else\n> flags[cnt++] = CStringGetTextDatum(\"WAL_VERIFICATION\");\n\nDisagreed here as well. WAL_VERIFICATION does not map with any of the\ninternal flags, and actually it may be finished by not being used\nat replay if the LSN of the page read is higher than what the WAL\nstores.\n\n> 7. Added test case which shows both block data and fpi in the\n> documentation.\n\nOkay on that.\n\n> 8. Changed wal_level to logical in walinspect.conf to test case with block data.\n\nThis change is not necessary, per the argument above.\n\nAny comments?\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 16:28:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 12:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 07, 2023 at 03:56:22PM +0530, Bharath Rupireddy wrote:\n> > That would be a lot better. Not just the test, but also the\n> > documentation can have it. Simple way to generate such a record (both\n> > block data and FPI) is to just change the wal_level to logical in\n> > walinspect.conf [1], see code around REGBUF_KEEP_DATA and\n> > RelationIsLogicallyLogged in heapam.c\n>\n> I don't agree that we need to go down to wal_level=logical for this.\n> The important part is to check that the non-NULL and NULL paths for\n> the block data and FPI data are both taken, making 4 paths to check.\n> So we need two tests at minimum, which would be either:\n> - One SQL generating no FPI with no block data and a second generating\n> a FPI with block data. v2 was doing that but did not cover the first\n> case.\n> - One SQL generating a FPI with no block data and a second generating\n> no FPI with block data.\n>\n> So let's just geenrate a heap record on an UPDATE, for example, like\n> in the version attached.\n\nYup, that should work too because block data gets logged [1].\n\n> > 4. I think we need to free raw_data, raw_page and flags as we loop\n> > over multiple blocks (XLR_MAX_BLOCK_ID) and will leak memory which can\n> > be a problem if we have many blocks assocated with a single WAL\n> > record.\n> > flags = (Datum *) palloc0(sizeof(Datum) * bitcnt);\n> > Also, we will leak all CStringGetTextDatum memory in the block_id for loop.\n> > Another way is to use and reset temp memory context in the for loop\n> > over block_ids. I prefer this approach over multiple pfree()s in\n> > block_id for loop.\n>\n> I disagree, this was on purpose in the last version. This version\n> finishes by calling AllocSetContextCreate() and MemoryContextDelete()\n> once per *record*, which will not be free, and we are arguing about\n> resetting the memory context after scanning up to XLR_MAX_BLOCK_ID\n> blocks, or 32 blocks which would go up to 32kB per page in the worst\n> case. That's not going to matter in a large scan for each record, but\n> the extra AllocSet*() calls could. And we basically do the same thing\n> on HEAD.\n\nIt's not just 32kB per page right? 32*8KB on HEAD (no block data,\nflags and CStringGetTextDatum there). With the patch, the number of\npallocs for each block_id = 6 CStringGetTextDatum + BLCKSZ (8KB) +\nflags (5*size of ptr) + block data_len. In the worst case, all\nXLR_MAX_BLOCK_ID can have both FPIs and block data. Furthermore,\nimagine if someone initialized their cluster with a higher BLCKSZ (>=\n8KB), then the memory leak happens noticeably on a lower-end system.\n\nI understand that performance is critical here but we need to ensure\nmemory is used wisely. Therefore, I'd still vote to free at least the\nmajor contributors here, that is, pfree(raw_data);, pfree(raw_page);\nand pfree(flags); right after they are done using. I'm sure pfree()s\ndon't hurt more than resetting memory context for every block_id.\n\n> Any comments?\n\nI think we need to output block data length (blk->data_len) similar to\nfpilen to save users from figuring out how to get the length of a\nbytea column. This will also keep block data in sync with FPI info.\n\n[1]\nneeds_backup = (page_lsn <= RedoRecPtr);\n\n(gdb) p page_lsn\n$2 = 21581544\n(gdb) p RedoRecPtr\n$3 = 21484808\n(gdb) p needs_backup\n$4 = false\n(gdb)\n(gdb) bt\n#0 XLogRecordAssemble (rmid=10 '\\n', info=64 '@',\nRedoRecPtr=21484808, doPageWrites=true, fpw_lsn=0x7ffde118d640,\n num_fpi=0x7ffde118d634, topxid_included=0x7ffde118d633) at xloginsert.c:582\n#1 0x00005598cd9c3ef7 in XLogInsert (rmid=10 '\\n', info=64 '@') at\nxloginsert.c:497\n#2 0x00005598cd930452 in log_heap_update (reln=0x7f4a4c7cd808,\noldbuf=136, newbuf=136, oldtup=0x7ffde118d820,\n newtup=0x5598d00cb098, old_key_tuple=0x0,\nall_visible_cleared=false, new_all_visible_cleared=false)\n at heapam.c:8473\n#3 0x00005598cd92876e in heap_update (relation=0x7f4a4c7cd808,\notid=0x7ffde118dab2, newtup=0x5598d00cb098, cid=0,\n crosscheck=0x0, wait=true, tmfd=0x7ffde118db60,\nlockmode=0x7ffde118da74) at heapam.c:3741\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:01:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 04:01:56PM +0530, Bharath Rupireddy wrote:\n> I understand that performance is critical here but we need to ensure\n> memory is used wisely. Therefore, I'd still vote to free at least the\n> major contributors here, that is, pfree(raw_data);, pfree(raw_page);\n> and pfree(flags); right after they are done using. I'm sure pfree()s\n> don't hurt more than resetting memory context for every block_id.\n\nOkay by me to have intermediate pfrees between each block scanned if\nyou feel strongly about it.\n\n> I think we need to output block data length (blk->data_len) similar to\n> fpilen to save users from figuring out how to get the length of a\n> bytea column. This will also keep block data in sync with FPI info.\n\nlength() works fine on bytea, so it can be used on the block data.\nfpilen is a very different matter as it would be the length of a page\nwithout a hole, or just something compressed.\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 19:53:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 4:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Mar 08, 2023 at 04:01:56PM +0530, Bharath Rupireddy wrote:\n> > I understand that performance is critical here but we need to ensure\n> > memory is used wisely. Therefore, I'd still vote to free at least the\n> > major contributors here, that is, pfree(raw_data);, pfree(raw_page);\n> > and pfree(flags); right after they are done using. I'm sure pfree()s\n> > don't hurt more than resetting memory context for every block_id.\n>\n> Okay by me to have intermediate pfrees between each block scanned if\n> you feel strongly about it.\n\nThanks. Attached v4 with that change.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Mar 2023 20:18:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Wed, 8 Mar 2023 20:18:06 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \r\n> On Wed, Mar 8, 2023 at 4:23 PM Michael Paquier <michael@paquier.xyz> wrote:\r\n> >\r\n> > On Wed, Mar 08, 2023 at 04:01:56PM +0530, Bharath Rupireddy wrote:\r\n> > > I understand that performance is critical here but we need to ensure\r\n> > > memory is used wisely. Therefore, I'd still vote to free at least the\r\n> > > major contributors here, that is, pfree(raw_data);, pfree(raw_page);\r\n> > > and pfree(flags); right after they are done using. I'm sure pfree()s\r\n> > > don't hurt more than resetting memory context for every block_id.\r\n> >\r\n> > Okay by me to have intermediate pfrees between each block scanned if\r\n> > you feel strongly about it.\r\n> \r\n> Thanks. Attached v4 with that change.\r\n\r\nAlthough I'm not strongly opposed to pfreeing them, I'm not sure I\r\nlike the way the patch frees them. The life times of all of raw_data,\r\nraw_page and flags are within a block. They can be freed\r\nunconditionally after they are actually used and the scope of the\r\npointer variables can be properly narrowed.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 09 Mar 2023 09:46:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 09:46:12AM +0900, Kyotaro Horiguchi wrote:\n> Although I'm not strongly opposed to pfreeing them, I'm not sure I\n> like the way the patch frees them. The life times of all of raw_data,\n> raw_page and flags are within a block. They can be freed\n> unconditionally after they are actually used and the scope of the\n> pointer variables can be properly narowed.\n\nThe thing is that you cannot keep them inside each individual blocks\nbecause they have to be freed once their values are stored in the\ntuplestore, which is why I guess Bharath has done things this way.\nAfter sleeping on that, I tend to prefer the simplicity of v3 where we\nkeep track of the block and fpi data in each of their respective\nblocks. It means that we lose track of them each time we go to a\ndifferent block, but the memory context reset done after each record\nmeans that scanning through a large WAL history will not cause a leak\nacross the function call.\n\nThe worst scenario with v3 is a record that makes use of all the 32\nblocks with a hell lot of block data in each one of them, which is\npossible in theory, but very unlikely in practice except if someone\nuses a custom RMGR to generate crazily-shaped WAL records. I am aware\nof the fact that it is possible to generate such records if you are\nreally willing to do so, aka this thread:\nhttps://www.postgresql.org/message-id/flat/CAEze2WgGiw+LZt+vHf8tWqB_6VxeLsMeoAuod0N=ij1q17n5pw@mail.gmail.com\n\nIn short, my choice would still be simplicity here with v3, I guess.\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 10:15:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Thu, 9 Mar 2023 10:15:39 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 09, 2023 at 09:46:12AM +0900, Kyotaro Horiguchi wrote:\n> > Although I'm not strongly opposed to pfreeing them, I'm not sure I\n> > like the way the patch frees them. The life times of all of raw_data,\n> > raw_page and flags are within a block. They can be freed\n> > unconditionally after they are actually used and the scope of the\n> > pointer variables can be properly narowed.\n> \n> The thing is that you cannot keep them inside each individual blocks\n> because they have to be freed once their values are stored in the\n> tuplestore, which is why I guess Bharath has done things this way.\n\nUgh.. Right.\n\n> After sleeping on that, I tend to prefer the simplicity of v3 where we\n> keep track of the block and fpi data in each of their respective\n> blocks. It means that we lose track of them each time we go to a\n> different block, but the memory context reset done after each record\n> means that scanning through a large WAL history will not cause a leak\n> across the function call.\n> \n> The worst scenario with v3 is a record that makes use of all the 32\n> blocks with a hell lot of block data in each one of them, which is\n> possible in theory, but very unlikely in practice except if someone\n> uses a custom RGMR to generate crazily-shaped WAL records. I am aware\n> of the fact that it is possible to generate such records if you are\n> really willing to do so, aka this thread:\n> https://www.postgresql.org/message-id/flat/CAEze2WgGiw+LZt+vHf8tWqB_6VxeLsMeoAuod0N=ij1q17n5pw@mail.gmail.com\n\nI agree to the view that that \"leakage\" for at-most 32 blocks and\ntypically 0 to 2 blocks won't be a matter.\n\n> In short, my choice would still be simplicity here with v3, I guess.\n\nFWIW, I slightly prefer v3 for the reason I mentioned above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Mar 2023 11:04:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 7:34 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > In short, my choice would still be simplicity here with v3, I guess.\n>\n> FWIW, I slightly prefer v3 for the reason I mentioned above.\n\nHm, then, +1 for v3.\n\nFWIW, I quickly tried to hit that case where a single WAL record has\nmax_block_id = XLR_MAX_BLOCK_ID with both FPIs and block data, but I\ncouldn't. I could generate WAL records with 45K FPIs, 10mn block data\nand the total palloc'd length in the block_id for loop has not crossed\n8K.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 09:52:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 09:52:57AM +0530, Bharath Rupireddy wrote:\n> Hm, then, +1 for v3.\n\nOkay, thanks. Let's use that, then.\n--\nMichael",
"msg_date": "Thu, 9 Mar 2023 15:37:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 03:37:21PM +0900, Michael Paquier wrote:\n> Okay, thanks. Let's use that, then.\n\nI have done one pass over that today, and applied it. Thanks!\n\nI'd really like to do something about the errors we raise in the\nmodule when specifying LSNs in the future for this release, now. I\ngot annoyed by it again this morning while doing \\watch queries that\nkept failing randomly while stressing this patch.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 10:13:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 6:43 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> I'd really like to do something about the errors we raise in the\n> module when specifying LSNs in the future for this release, now. I\n> got annoyed by it again this morning while doing \\watch queries that\n> kept failing randomly while stressing this patch.\n\nPerhaps what is proposed here\nhttps://www.postgresql.org/message-id/CALj2ACWqJ+m0HoQj9qkAV2uQfq97yk5jN2MOdfKcXusXsyptKQ@mail.gmail.com\nmight help and avoid many errors around input LSN validations. Let's\ndiscuss that in that thread.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 10 Mar 2023 06:50:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 06:50:05AM +0530, Bharath Rupireddy wrote:\n> Perhaps what is proposed here\n> https://www.postgresql.org/message-id/CALj2ACWqJ+m0HoQj9qkAV2uQfq97yk5jN2MOdfKcXusXsyptKQ@mail.gmail.com\n> might help and avoid many errors around input LSN validations. Let's\n> discuss that in that thread.\n\nYep, I am going to look at your proposal.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 10:30:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "I'm excited to see that pg_get_wal_block_info() was merged. Thanks for\nworking on this!\n\nApologies for jumping back in here a bit late. I've been playing around\nwith it and wanted to comment on the performance of JOINing to\npg_get_wal_records_info().\n\nOn Tue, Mar 7, 2023 at 7:08 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Tue, 7 Mar 2023 at 01:34, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Mar 06, 2023 at 04:08:28PM +0100, Matthias van de Meent wrote:\n> > > On Mon, 6 Mar 2023 at 15:40, Bharath Rupireddy\n> > >> IMO, pg_get_wal_records_extended_info as proposed doesn't look good to\n> > >> me as it outputs most of the columns that are already given by\n> > >> pg_get_wal_records_info.What I think the best way at this point is to\n> > >> make it return the following:\n> > >> lsn pg_lsn\n> > >> block_id int8\n> > >> spcOid oid\n> > >> dbOid oid\n> > >> relNumber oid\n> > >> forkNames text\n> > >> fpi bytea\n> > >> fpi_info text\n> >\n> > I would add the length of the block data (without the hole and\n> > compressed, as the FPI data should always be presented as\n> > uncompressed), and the block data if any (without the block data\n> > length as one can guess it based on the bytea data length). Note that\n> > a block can have both a FPI and some data assigned to it, as far as I\n> > recall.\n> >\n> > > The basic idea is to create a single entrypoint to all relevant data\n> > > from DecodedXLogRecord in SQL, not multiple.\n> >\n> > While I would agree with this principle on simplicity's ground in\n> > terms of minimizing the SQL interface and the pg_wal/ lookups, I\n> > disagree about it on unsability ground, because we can avoid extra SQL\n> > tweaks with more functions. 
One recent example I have in mind is\n> > partitionfuncs.c, which can actually be achieved with a WITH RECURSIVE\n> > on the catalogs.\n>\n> Correct, but in that case the user would build the same query (or at\n> least with the same complexity) as what we're executing under the\n> hood, right?\n>\n> > There are of course various degrees of complexity,\n> > and perhaps unnest() cannot qualify as one, but having two functions\n> > returning normalized records (one for the record information, and a\n> > second for the block information), is a rather good balance between\n> > usability and interface complexity, in my experience.\n>\n> I would agree, if it weren't for the reasons written below.\n>\n> > If you have two\n> > functions, a JOIN is enough to cross-check the block data and the\n> > record data,\n>\n> Joins are expensive on large datasets; and because WAL is one of the\n> largest datasets in the system, why would we want to force the user to\n> JOIN them if we can produce the data in one pre-baked data structure\n> without a need to join?\n\nI wanted to experiment to see how much slower it is to do the join\nbetween pg_get_wal_block_info() and pg_get_wal_records_info() and\nprofile where the time was spent.\n\nI saved the wal lsn before and after a bunch of inserts (generates\n~1,000,000 records).\n\nOn master, I did a join like this:\n\n SELECT count(*) FROM\n pg_get_wal_block_info(:start_lsn, :end_lsn) b JOIN\n pg_get_wal_records_info(:start_lsn, :end_lsn) w ON w.start_lsn = b.lsn;\n\nwhich took 1191 ms.\n\nAfter patching master to add in the columns from\npg_get_wal_records_info() which are not returned by\npg_get_wal_block_info() (except block_ref column of course), this query:\n\n SELECT COUNT(*) FROM pg_get_wal_block_info(:start_lsn, :end_lsn);\n\ntook 467 ms.\n\nPerhaps this difference isn't important, but I found it noticeable.\n\nA large chunk of the time is spent joining the tuplestores.\n\nThe second largest chunk of time seems to be in 
GetWalRecordInfo()'s\ncalls to XLogRecGetBlockRefInfo(), which spends quite a bit of time in\nstring construction -- mainly for strings we wouldn't end up needing\nafter joining to the block info function.\n\nSurprisingly, the string construction seemed to overshadow the\nperformance impact of doubling the decoding passes over the WAL records.\n\nSo maybe it is worth including more record-level info?\n\nOn an unrelated note, I had thought that we should have some kind of\nparameter to pg_get_wal_block_info() to control whether or not we output\nthe FPI to save us from wasting time decompressing the FPIs.\n\nAFAIK, we don't have access to projection information from inside the\nfunction, so a parameter like \"output_fpi\" or the like would have to do.\n\nI wanted to share what I found in trying this in case someone else had\nhad that thought.\n\nTL;DR, it doesn't seem to matter from a performance perspective if we\nskip decompressing the FPIs in pg_get_wal_block_info().\n\nI hacked \"output_fpi\" into pg_get_wal_block_info(), enabled pglz wal\ncompression, and generated a boatload of FPIs by dirtying buffers, doing\na checkpoint and then updating those pages again right after the\ncheckpoint. With output_fpi = true (same as master), my call to\npg_get_wal_block_info() took around 7 seconds and with output_fpi =\nfalse, it took around 6 seconds. Not an impressive difference.\n\nI noticed that only around 2% of the time is spent in pglz_decompress().\nMost of the time (about half) is spent in building tuples for the\ntuplestore, copying memory around, and writing the tuplestore out to a\nfile. Another 10-20% is spent decoding the records--which has to be done\nregardless.\n\nI wonder if there are cases where the decompression overhead would matter.\nMy conclusion is that it isn't worth bothering with such a parameter.\n\n- Melanie\n\n\n",
"msg_date": "Tue, 14 Mar 2023 18:34:09 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 3:34 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> After patching master to add in the columns from\n> pg_get_wal_records_info() which are not returned by\n> pg_get_wal_block_info() (except block_ref column of course), this query:\n>\n> SELECT COUNT(*) FROM pg_get_wal_block_info(:start_lsn, :end_lsn);\n>\n> took 467 ms.\n>\n> Perhaps this difference isn't important, but I found it noticeable.\n\nThis seems weird to me too. It's not so much the performance overhead\nthat bothers me (though that's not great either). It seems *illogical*\nto me. The query you end up writing must do two passes over the WAL\nrecords, but its structure almost suggests that it's necessary to do\ntwo separate passes over distinct \"streams\".\n\nWhy doesn't it already work like this? Why do we need a separate\npg_get_wal_block_info() function at all?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 14 Mar 2023 15:56:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 6:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Mar 14, 2023 at 3:34 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > After patching master to add in the columns from\n> > pg_get_wal_records_info() which are not returned by\n> > pg_get_wal_block_info() (except block_ref column of course), this query:\n> >\n> > SELECT COUNT(*) FROM pg_get_wal_block_info(:start_lsn, :end_lsn);\n> >\n> > took 467 ms.\n> >\n> > Perhaps this difference isn't important, but I found it noticeable.\n>\n> This seems weird to me too. It's not so much the performance overhead\n> that bothers me (though that's not great either). It seems *illogical*\n> to me. The query you end up writing must do two passes over the WAL\n> records, but its structure almost suggests that it's necessary to do\n> two separate passes over distinct \"streams\".\n>\n> Why doesn't it already work like this? Why do we need a separate\n> pg_get_wal_block_info() function at all?\n\nWell, I think if you only care about the WAL record-level information\nand not the block-level information, having the WAL record information\ndenormalized like that with all the block information would be a\nnuisance.\n\nBut, perhaps you are suggesting a parameter to pg_get_wal_records_info()\nlike \"with_block_info\" or something, which produces the full\ndenormalized block + record output?\n\n- Melanie\n\n\n",
"msg_date": "Tue, 14 Mar 2023 20:34:19 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 5:34 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Tue, Mar 14, 2023 at 6:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Why doesn't it already work like this? Why do we need a separate\n> > pg_get_wal_block_info() function at all?\n>\n> Well, I think if you only care about the WAL record-level information\n> and not the block-level information, having the WAL record information\n> denormalized like that with all the block information would be a\n> nuisance.\n\nI generally care about both. When I want to look at things at the\npg_get_wal_records_info() level (as opposed to a summary), the\nblock_ref information is *always* of primary importance. I don't want\nto have to write my own bug-prone parser for block_ref, but why should\nthe only alternative be joining against pg_get_wal_block_info()? The\ninformation that I'm interested in is \"close at hand\" to\npg_get_wal_records_info() already.\n\nI understand that in the general case there might be quite a few\nblocks associated with a WAL record. For complicated cases,\npg_get_wal_block_info() does make sense. However, the vast majority of\nindividual WAL records (and possibly most WAL record types) are\nrelated to one block only. One block that is generally from the\nrelation's main fork.\n\n> But, perhaps you are suggesting a parameter to pg_get_wal_records_info()\n> like \"with_block_info\" or something, which produces the full\n> denormalized block + record output?\n\nI was thinking of something like that, yes -- though it wouldn't\nnecessarily have to be the *full* denormalized block_ref info, the FPI\nitself, etc. 
Just the more useful stuff.\n\nIt occurs to me that my concern about the information that\npg_get_wal_records_info() lacks could be restated as a concern about\nwhat pg_get_wal_block_info() lacks: pg_get_wal_block_info() fails to\nshow basic information about the WAL record whose blocks it reports\non, even though it could easily show all of the\npg_get_wal_records_info() info once per block (barring block_ref). So\naddressing my concern by adjusting pg_get_wal_block_info() might be\nthe best approach. I'd probably be happy with that -- I'd likely just\nstop using pg_get_wal_records_info() completely under this scheme.\n\nOverall, I'm concerned that we may have missed the opportunity to make\nsimple things easier. Again, wanting to see (say) all of the PRUNE\nrecords and VACUUM records with an \"order by relfilenode,\nblock_number, lsn\" seems likely to be a very common requirement to me.\nIt's exactly the kind of thing that you'd expect an SQL interface to\nmake easy.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 14 Mar 2023 18:50:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:20 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > But, perhaps you are suggesting a parameter to pg_get_wal_records_info()\n> > like \"with_block_info\" or something, which produces the full\n> > denormalized block + record output?\n>\n> I was thinking of something like that, yes -- though it wouldn't\n> necessarily have to be the *full* denormalized block_ref info, the FPI\n> itself, etc. Just the more useful stuff.\n>\n> It occurs to me that my concern about the information that\n> pg_get_wal_records_info() lacks could be restated as a concern about\n> what pg_get_wal_block_info() lacks: pg_get_wal_block_info() fails to\n> show basic information about the WAL record whose blocks it reports\n> on, even though it could easily show all of the\n> pg_get_wal_records_info() info once per block (barring block_ref). So\n> addressing my concern by adjusting pg_get_wal_block_info() might be\n> the best approach. I'd probably be happy with that -- I'd likely just\n> stop using pg_get_wal_records_info() completely under this scheme.\n\nHow about something like the attached? It adds the per-record columns\nto pg_get_wal_block_info() avoiding \"possibly expensive\" joins with\npg_get_wal_records_info().\n\nWith this, pg_get_wal_records_info() too will be useful for users\nscanning WAL at record level. That is to say that we can retain both\npg_get_wal_records_info() and pg_get_wal_block_info().\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Mar 2023 12:13:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 06:50:15PM -0700, Peter Geoghegan wrote:\n> On Tue, Mar 14, 2023 at 5:34 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>> Well, I think if you only care about the WAL record-level information\n>> and not the block-level information, having the WAL record information\n>> denormalized like that with all the block information would be a\n>> nuisance.\n> \n> I generally care about both. When I want to look at things at the\n> pg_get_wal_records_info() level (as opposed to a summary), the\n> block_ref information is *always* of primary importance. I don't want\n> to have to write my own bug-prone parser for block_ref, but why should\n> the only alternative be joining against pg_get_wal_block_info()? The\n> information that I'm interested in is \"close at hand\" to\n> pg_get_wal_records_info() already.\n>\n\nI am not sure to get the concern here. As long as one is smart enough\nwith SQL, there is no need to perform a double scan of the contents of\npg_wal with a large scan on the start LSN. If one wishes to only\nextract some block for a given record type, or for a filter of your\nchoice, it is possible to use a LATERAL on pg_get_wal_block_info(),\nsay:\nSELECT r.start_lsn, b.blockid\n FROM pg_get_wal_records_info('0/01000028', '0/1911AA8') AS r,\n LATERAL pg_get_wal_block_info(start_lsn, end_lsn) as b\n WHERE r.resource_manager = 'Heap2';\n\nThis will extract the block information that you'd want for a given\nrecord type.\n\n> I understand that in the general case there might be quite a few\n> blocks associated with a WAL record. For complicated cases,\n> pg_get_wal_block_info() does make sense. However, the vast majority of\n> individual WAL records (and possibly most WAL record types) are\n> related to one block only. One block that is generally from the\n> relation's main fork.\n\nSure, though there may be more complicated scenarios, like custom\nRMGRs. 
At the end it comes to how much normalization should be\napplied to the data extracted. FWIW, I think that the current\ninterface is a pretty good balance in usability.\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 15:50:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 12:13:56PM +0530, Bharath Rupireddy wrote:\n> How about something like the attached? It adds the per-record columns\n> to pg_get_wal_block_info() avoiding \"possibly expensive\" joins with\n> pg_get_wal_records_info().\n> \n> With this, pg_get_wal_records_info() too will be useful for users\n> scanning WAL at record level. That is to say that we can retain both\n> pg_get_wal_records_info() and pg_get_wal_block_info().\n\nFWIW, I am not convinced that there is any need to bloat more the\nattributes of these functions, as filters for records could basically\ntouch all the fields returned by pg_get_wal_records_info(). What\nabout adding an example in the docs with the LATERAL query I mentioned\npreviously?\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 16:00:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 12:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 14, 2023 at 06:50:15PM -0700, Peter Geoghegan wrote:\n> > On Tue, Mar 14, 2023 at 5:34 PM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> >> Well, I think if you only care about the WAL record-level information\n> >> and not the block-level information, having the WAL record information\n> >> denormalized like that with all the block information would be a\n> >> nuisance.\n> >\n> > I generally care about both. When I want to look at things at the\n> > pg_get_wal_records_info() level (as opposed to a summary), the\n> > block_ref information is *always* of primary importance. I don't want\n> > to have to write my own bug-prone parser for block_ref, but why should\n> > the only alternative be joining against pg_get_wal_block_info()? The\n> > information that I'm interested in is \"close at hand\" to\n> > pg_get_wal_records_info() already.\n> >\n>\n> I am not sure to get the concern here. As long as one is smart enough\n> with SQL, there is no need to perform a double scan of the contents of\n> pg_wal with a large scan on the start LSN. If one wishes to only\n> extract some block for a given record type, or for a filter of your\n> choice, it is possible to use a LATERAL on pg_get_wal_block_info(),\n> say:\n> SELECT r.start_lsn, b.blockid\n> FROM pg_get_wal_records_info('0/01000028', '0/1911AA8') AS r,\n> LATERAL pg_get_wal_block_info(start_lsn, end_lsn) as b\n> WHERE r.resource_manager = 'Heap2';\n>\n> This will extract the block information that you'd want for a given\n> record type.\n\nIt looks like nested-loop join is chosen for LATERAL query [1], that\nis, for every start_lsn and end_lsn that we get from\npg_get_wal_records_info, pg_get_wal_block_info gets called. Whereas,\nfor non-LATERAL join [2], hash/merge join is chosen which is pretty\nfast (5x) over 5mn WAL records. 
Therefore, I'm not sure if adding the\nLATERAL query as an example is a better idea.\n\nIIUC, the concern raised so far in this thread is not just on the\nperformance of JOIN queries to get both block info and record level\ninfo, but on ease of using pg_walinspect functions. If\npg_get_wal_block_info emits the record level information too (which\nturns out to be 50 LOC more), one doesn't have to be expert at writing\nJOIN queries or such, but just can run the function, which actually\ntakes way less time (3sec) to scan the same 5mn WAL records [3].\n\n[1]\npostgres=# EXPLAIN (ANALYZE) SELECT * FROM\npg_get_wal_records_info(:'start_lsn', :'end_lsn') AS r,\n LATERAL pg_get_wal_block_info(start_lsn, end_lsn) AS b WHERE\nr.resource_manager = 'Heap';\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.01..112.50 rows=5000 width=330) (actual\ntime=3175.114..49596.749 rows=5000019 loops=1)\n -> Function Scan on pg_get_wal_records_info r (cost=0.00..12.50\nrows=5 width=168) (actual time=3175.058..4142.507 rows=4000019\nloops=1)\n Filter: (resource_manager = 'Heap'::text)\n Rows Removed by Filter: 52081\n -> Function Scan on pg_get_wal_block_info b (cost=0.00..10.00\nrows=1000 width=162) (actual time=0.011..0.011 rows=1 loops=4000019)\n Planning Time: 0.076 ms\n Execution Time: 49998.850 ms\n(7 rows)\n\nTime: 49999.203 ms (00:49.999)\n\n[2]\npostgres=# EXPLAIN (ANALYZE) SELECT * FROM\npg_get_wal_block_info(:'start_lsn', :'end_lsn') AS b\n JOIN pg_get_wal_records_info(:'start_lsn', :'end_lsn') AS w ON\nw.start_lsn = b.lsn WHERE w.resource_manager = 'Heap';\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=12.57..26.57 rows=25 width=330) (actual\ntime=6241.449..9901.715 rows=5000019 loops=1)\n Hash Cond: (b.lsn = 
w.start_lsn)\n -> Function Scan on pg_get_wal_block_info b (cost=0.00..10.00\nrows=1000 width=162) (actual time=1415.815..1870.522 rows=5067960\nloops=1)\n -> Hash (cost=12.50..12.50 rows=5 width=168) (actual\ntime=4665.292..4665.292 rows=4000019 loops=1)\n Buckets: 65536 (originally 1024) Batches: 128 (originally 1)\n Memory Usage: 7681kB\n -> Function Scan on pg_get_wal_records_info w\n(cost=0.00..12.50 rows=5 width=168) (actual time=3160.010..3852.332\nrows=4000019 loops=1)\n Filter: (resource_manager = 'Heap'::text)\n Rows Removed by Filter: 52081\n Planning Time: 0.082 ms\n Execution Time: 10159.066 ms\n(10 rows)\n\nTime: 10159.465 ms (00:10.159)\n\n[3]\npostgres=# EXPLAIN ANALYZE SELECT * FROM\npg_get_wal_block_info(:'start_lsn', :'end_lsn');\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Function Scan on pg_get_wal_block_info (cost=0.00..10.00 rows=1000\nwidth=286) (actual time=2617.755..3081.526 rows=5004478 loops=1)\n Planning Time: 0.039 ms\n Execution Time: 3301.217 ms\n(3 rows)\n\nTime: 3301.817 ms (00:03.302)\npostgres=#\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:49:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 2:19 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Wed, Mar 15, 2023 at 12:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > I am not sure to get the concern here. As long as one is smart enough\n> > with SQL, there is no need to perform a double scan of the contents of\n> > pg_wal with a large scan on the start LSN. If one wishes to only\n> > extract some block for a given record type, or for a filter of your\n> > choice, it is possible to use a LATERAL on pg_get_wal_block_info(),\n> > say:\n> > SELECT r.start_lsn, b.blockid\n> > FROM pg_get_wal_records_info('0/01000028', '0/1911AA8') AS r,\n> > LATERAL pg_get_wal_block_info(start_lsn, end_lsn) as b\n> > WHERE r.resource_manager = 'Heap2';\n> >\n> > This will extract the block information that you'd want for a given\n> > record type.\n\nThe same information *already* appears in pg_get_wal_records_info()'s\nblock_ref output! Why should the user be expected to use a LATERAL\njoin (or any type of join) to get _the same information_, just in a\nusable form?\n\n> IIUC, the concern raised so far in this thread is not just on the\n> performance of JOIN queries to get both block info and record level\n> info, but on ease of using pg_walinspect functions. If\n> pg_get_wal_block_info emits the record level information too (which\n> turns out to be 50 LOC more), one doesn't have to be expert at writing\n> JOIN queries or such, but just can run the function, which actually\n> takes way less time (3sec) to scan the same 5mn WAL records [3].\n\nThat's exactly my concern, yes. As you say, it's not just the\nperformance aspect. Requiring users to write a needlessly ornamental\nquery is actively misleading. It suggests that block_ref is distinct\ninformation from the blocks output by pg_get_wal_block_info().\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Mar 2023 19:03:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 7:33 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > IIUC, the concern raised so far in this thread is not just on the\n> > performance of JOIN queries to get both block info and record level\n> > info, but on ease of using pg_walinspect functions. If\n> > pg_get_wal_block_info emits the record level information too (which\n> > turns out to be 50 LOC more), one doesn't have to be expert at writing\n> > JOIN queries or such, but just can run the function, which actually\n> > takes way less time (3sec) to scan the same 5mn WAL records [3].\n>\n> That's exactly my concern, yes. As you say, it's not just the\n> performance aspect. Requiring users to write a needlessly ornamental\n> query is actively misleading. It suggests that block_ref is distinct\n> information from the blocks output by pg_get_wal_block_info().\n\n+1 for pg_get_wal_block_info emitting per-record WAL info too along\nwith block info, attached v2 patch does that. IMO, usability wins the\nrace here.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 17 Mar 2023 12:50:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 12:20 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> +1 for pg_get_wal_block_info emitting per-record WAL info too along\n> with block info, attached v2 patch does that. IMO, usability wins the\n> race here.\n\nI think that this direction makes a lot of sense. Under this scheme,\nwe still have pg_get_wal_records_info(), which is more or less an SQL\ninterface to the information that pg_waldump presents by default.\nThat's important when the record view of things is of primary\nimportance. But now you also have a \"block oriented\" view of WAL\npresented by pg_get_wal_block_info(), which is useful when particular\nblocks are of primary interest. I think that I'll probably end up\nusing both, while primarily using pg_get_wal_block_info() for more\nadvanced analysis that focuses on what happened to particular blocks\nover time.\n\nIt makes sense to present pg_get_wal_block_info() immediately after\npg_get_wal_records_info() in the documentation under this scheme,\nsince they're closely related. It would make sense to explain the\nrelationship directly: pg_get_wal_block_info() doesn't have the\nblock_ref column because it breaks that same information out by block\ninstead, occasionally showing multiple rows for particular record\ntypes (which is what its \"extra\" columns describe). And,\npg_get_wal_block_info() won't show anything for those records whose\nblock_ref column is null according to pg_get_wal_records_info(), such\nas commit records.\n\n(Checks pg_walinspect once more...)\n\nActually, I now see that block_ref won't be NULL for those records\nthat have no block references at all -- it just outputs an empty\nstring. But wouldn't it be better if it actually output NULL? Better\nfor its own sake, but also better because doing so enables describing\nthe relationship between the two functions with reference to\nblock_ref. 
It seems particularly helpful to me to be able to say that\npg_get_wal_block_info() doesn't show anything for precisely those WAL\nrecords whose block_ref is NULL according to\npg_get_wal_records_info().\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 Mar 2023 12:36:04 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 12:36:04PM -0700, Peter Geoghegan wrote:\n> I think that this direction makes a lot of sense. Under this scheme,\n> we still have pg_get_wal_records_info(), which is more or less an SQL\n> interface to the information that pg_waldump presents by default.\n> That's important when the record view of things is of primary\n> importance. But now you also have a \"block oriented\" view of WAL\n> presented by pg_get_wal_block_info(), which is useful when particular\n> blocks are of primary interest. I think that I'll probably end up\n> using both, while primarily using pg_get_wal_block_info() for more\n> advanced analysis that focuses on what happened to particular blocks\n> over time.\n\nFWIW, I am not sure that it is a good idea and that we'd better not\nencourage too much the use of block_info() across a large range of\nWAL, which is what this function will make users eager to do in this\ncase as it is possible to apply directly more filters to it. This is\na community, so, anyway, if you feel strongly about doing this change,\nfeel free to.\n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 08:11:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 4:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> FWIW, I am not sure that it is a good idea and that we'd better not\n> encourage too much the use of block_info() across a large range of\n> WAL, which is what this function will make users eager to do in this\n> case as it is possible to apply directly more filters to it.\n\nI'm sure that they will do that much more than they would have\notherwise. Since we'll have made pg_get_wal_block_info() so much more\nuseful than pg_get_wal_records_info() for many important use cases.\nWhy is that a bad thing? Are you concerned about the overhead of\npulling in FPIs when pg_get_wal_block_info() is run, if Bharath's\npatch is committed? That could be a problem, I suppose -- but it would\nbe good to get more data on that. Do you think that this will be much\nof an issue, Bharath?\n\nI have pushed pg_walinspect to its limits myself (which is how I found\nthat memory leak). Performance matters a great deal when you're doing\nan analysis of how blocks change over time, on a system that has\nwritten a realistically large amount of WAL over minutes or even\nhours. Why shouldn't that be a priority for pg_walinspect? My concerns\nhave little to do with aesthetics, and everything to do with making\nthose kinds of queries feasible.\n\nIf the FPI thing is a problem then it seems to me that it should be\naddressed directly. For example, perhaps it would make sense to add a\nway to not incur the overhead of decompressing FPIs uselessly in cases\nwhere they're of no interest to us (likely the majority of cases once\nthe patch is committed). It also might well make sense to rename\npg_get_wal_block_info() to something more general, to reflect its more\ngeneral purpose once it is expanded by Bharath's patch. As I said, it\nwill become a lot closer to pg_get_wal_records_info(). We should be\nclear on that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 Mar 2023 16:36:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 04:36:58PM -0700, Peter Geoghegan wrote:\n> I'm sure that they will do that much more than they would have\n> otherwise. Since we'll have made pg_get_wal_block_info() so much more\n> useful than pg_get_wal_records_info() for many important use cases.\n> Why is that a bad thing? Are you concerned about the overhead of\n> pulling in FPIs when pg_get_wal_block_info() is run, if Bharath's\n> patch is committed? That could be a problem, I suppose -- but it would\n> be good to get more data on that. Do you think that this will be much\n> of an issue, Bharath?\n\nYes. The CPU cost is one thing, but I am also worrying about the\nI/O cost with a tuplestore spilling to disk a large number of FPIs,\nand some workloads can generate WAL so as FPIs is what makes for most\nof the contents stored in the WAL. (wal_compression is very effective\nin such cases, for example.)\n\nIt is true that it is possible to tweak SQLs that exactly do that with\na large amount of data materialized, or just eat so much CPU that they\nbasically DoS the backend. Still I'd rather keep a minimalistic\ndesign for each function with block_info having only one field able to\ntrack back to which record a block information refers to, and I'd like\nto think one able to look at WAL internals will be smart enough to\nwrite SQL in such a way that they avoid that on a production machine.\nThe current design allows to do that in this view, but that's just one\nway I see how to represent structures at SQL level. Extending\nblock_info() with more record-level attributes allows that as well,\nstill it bloats its interface unnecessarily. Designing software is\nhard, and it looks like our point of view on that is different. If\nyou wish to change the current interface of block_info, feel free to\ndo so. 
It does not mean that it cannot be changed, just that I\nrecommend not to do that, and that's just one opinion.\n\nThis said, your point about having rec_blk_ref reported as an empty\nstring rather than NULL if there are no block references does not feel\nnatural to me, either.. Reporting NULL would be better.\n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 09:51:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 5:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Yes. The CPU cost is one thing, but I am also worrying about the\n> I/O cost with a tuplestore spilling to disk a large number of FPIs,\n> and some workloads can generate WAL so as FPIs is what makes for most\n> of the contents stored in the WAL. (wal_compression is very effective\n> in such cases, for example.)\n>\n> It is true that it is possible to tweak SQLs that exactly do that with\n> a large amount of data materialized, or just eat so much CPU that they\n> basically DoS the backend. Still I'd rather keep a minimalistic\n> design for each function with block_info having only one field able to\n> track back to which record a block information refers to, and I'd like\n> to think one able to look at WAL internals will be smart enough to\n> write SQL in such a way that they avoid that on a production machine.\n> The current design allows to do that in this view, but that's just one\n> way I see how to represent structures at SQL level.\n\nNot really. It has nothing to do with some abstract ideal about how\nthe data should be logically structured. It is about how the actual\nunderlying physical data structures work, and are accessed in\npractice, during query execution. And its about the constraints placed\non us by the laws of physics. Some ways of doing this are measurably,\nprovably much faster than other ways. It's very much not like we're\nquerying tables whose general structure is under our control, via\nschema design, where the optimizer could reasonably be expected to\nmake better choices as the data distribution changes. So why treat it\nlike that?\n\nRight now, you're still basically standing by a design that is\n*fundamentally* less efficient for certain types of queries -- queries\nthat I am very interested in, that I'm sure many of us will be\ninterested in. It's not a matter of opinion. It is very much in\nevidence from Bharath's analysis. 
If a similar analysis reached the\nopposite conclusion, then you would be right and I would be wrong. It\nreally is that simple.\n\n> This said, your point about having rec_blk_ref reported as an empty\n> string rather than NULL if there are no block references does not feel\n> natural to me, either.. Reporting NULL would be better.\n\nYou have it backwards. It outputs an empty string right now. I want to\nchange that, so that it outputs NULLs instead.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 17 Mar 2023 18:09:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 06:09:05PM -0700, Peter Geoghegan wrote:\n>> This said, your point about having rec_blk_ref reported as an empty\n>> string rather than NULL if there are no block references does not feel\n>> natural to me, either.. Reporting NULL would be better.\n> \n> You have it backwards. It outputs an empty string right now. I want to\n> change that, so that it outputs NULLs instead.\n\nMy previous paragraph means exactly the same? I have just read what I\nwrote again. Twice. So, yes, I agree with this point. Sorry if my\nwords meant the contrary to you. :)\n--\nMichael",
"msg_date": "Sat, 18 Mar 2023 10:18:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 1:06 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Mar 17, 2023 at 12:20 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > +1 for pg_get_wal_block_info emitting per-record WAL info too along\n> > with block info, attached v2 patch does that. IMO, usability wins the\n> > race here.\n>\n> I think that this direction makes a lot of sense. Under this scheme,\n> we still have pg_get_wal_records_info(), which is more or less an SQL\n> interface to the information that pg_waldump presents by default.\n> That's important when the record view of things is of primary\n> importance. But now you also have a \"block oriented\" view of WAL\n> presented by pg_get_wal_block_info(), which is useful when particular\n> blocks are of primary interest. I think that I'll probably end up\n> using both, while primarily using pg_get_wal_block_info() for more\n> advanced analysis that focuses on what happened to particular blocks\n> over time.\n\nHm.\n\n> It makes sense to present pg_get_wal_block_info() immediately after\n> pg_get_wal_records_info() in the documentation under this scheme,\n> since they're closely related.\n\n-1. I don't think we need that and even if we did, it's hard to\nmaintain that ordering in future. One who knows to use these functions\nwill anyway get to know how they're related.\n\n> (Checks pg_walinspect once more...)\n>\n> Actually, I now see that block_ref won't be NULL for those records\n> that have no block references at all -- it just outputs an empty\n> string.\n\nYes, that's unnecessary.\n\n> But wouldn't it be better if it actually output NULL?\n\n+1 done so in the attached 0001 patch.\n\n> Better\n> for its own sake, but also better because doing so enables describing\n> the relationship between the two functions with reference to\n> block_ref. 
It seems particularly helpful to me to be able to say that\n> pg_get_wal_block_info() doesn't show anything for precisely those WAL\n> records whose block_ref is NULL according to\n> pg_get_wal_records_info().\n\nHm.\n\nAttaching v3 patch set - 0001 optimizations around block references,\n0002 enables pg_get_wal_block_info() to emit per-record info. Any\nthoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 18 Mar 2023 10:08:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Sat, 18 Mar 2023 10:08:53 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \r\n> On Sat, Mar 18, 2023 at 1:06 AM Peter Geoghegan <pg@bowt.ie> wrote:\r\n> >\r\n> > On Fri, Mar 17, 2023 at 12:20 AM Bharath Rupireddy\r\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> > > +1 for pg_get_wal_block_info emitting per-record WAL info too along\r\n> > > with block info, attached v2 patch does that. IMO, usability wins the\r\n> > > race here.\r\n> >\r\n> > I think that this direction makes a lot of sense. Under this scheme,\r\n> > we still have pg_get_wal_records_info(), which is more or less an SQL\r\n> > interface to the information that pg_waldump presents by default.\r\n> > That's important when the record view of things is of primary\r\n> > importance. But now you also have a \"block oriented\" view of WAL\r\n> > presented by pg_get_wal_block_info(), which is useful when particular\r\n> > blocks are of primary interest. I think that I'll probably end up\r\n> > using both, while primarily using pg_get_wal_block_info() for more\r\n> > advanced analysis that focuses on what happened to particular blocks\r\n> > over time.\r\n> \r\n> Hm.\r\n\r\nEven though I haven't explored every aspect of the view, I believe it\r\nmakes sense to have at least the record_type in the block data. I\r\ndon't know how often it will be used, but considering that, I have no\r\nobjections to adding all the record information (apart from the block\r\ndata itself) to the block data.\r\n\r\n> > It makes sense to present pg_get_wal_block_info() immediately after\r\n> > pg_get_wal_records_info() in the documentation under this scheme,\r\n> > since they're closely related.\r\n> \r\n> -1. I don't think we need that and even if we did, it's hard to\r\n> maintain that ordering in future. 
One who knows to use these functions\r\n> will anyway get to know how they're related.\r\n\r\nThe documentation has just one section titled \"General Functions\"\r\nwhich directly contains detailed explation of four functions, making\r\nit hard to get clear understanding of the available functions. I\r\nconsidered breaking it down into a few subsections, but that wouldn't\r\nlook great since most of them would only contain one function.\r\nHowever, I feel it would be helpful to add a list of all functions at\r\nthe beginning of the section.\r\n\r\n> > (Checks pg_walinspect once more...)\r\n> >\r\n> > Actually, I now see that block_ref won't be NULL for those records\r\n> > that have no block references at all -- it just outputs an empty\r\n> > string.\r\n> \r\n> Yes, that's unnecessary.\r\n> \r\n> > But wouldn't it be better if it actually output NULL?\r\n> \r\n> +1 done so in the attached 0001 patch.\r\n> \r\n> > Better\r\n> > for its own sake, but also better because doing so enables describing\r\n> > the relationship between the two functions with reference to\r\n> > block_ref. It seems particularly helpful to me to be able to say that\r\n> > pg_get_wal_block_info() doesn't show anything for precisely those WAL\r\n> > records whose block_ref is NULL according to\r\n> > pg_get_wal_records_info().\r\n> \r\n> Hm.\r\n\r\nI agree that adding a note about the characteristics would helpful to\r\navoid the misuse of pg_get_wal_block_info(). How about something like,\r\n\"Note that pg_get_wal_block_info() omits records that contains no\r\nblock references.\"?\r\n\r\n> Attaching v3 patch set - 0001 optimizations around block references,\r\n> 0002 enables pg_get_wal_block_info() to emit per-record info. Any\r\n> thoughts?\r\n\r\n+\t\t/* Get block references, if any, otherwise continue. 
*/\r\n+\t\tif (!XLogRecHasAnyBlockRefs(xlogreader))\r\n+\t\t\tcontinue;\r\n\r\nI'm not sure, but the \"continue\" might be confusing since the code\r\n\"continue\"s if the condition is true and continues the process\r\notherwise.. And it seems like a kind of \"explaination of what the\r\ncode does\". I feel we don't need the a comment there.\r\n\r\nIt is not an issue with this patch, but as I look at this version, I'm\r\nstarting to feel uneasy about the subtle differences between what\r\nGetWALRecordsInfo and GetWALBlockInfo do. One solution might be to\r\nhave GetWALBlockInfo return a values array for each block, but that\r\ncould make things more complex than needed. Alternatively, could we\r\nget GetWALRecordsInfo to call tuplestore_putvalues() internally? This\r\nway, both functions can manage the temporary memory context within\r\nthemselves.\r\n\r\n\r\n\r\nAbout 0002:\r\n\r\n+\t\t/* Reset only per-block output columns, keep per-record info as-is. */\r\n+\t\tmemset(&nulls[PG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS], 0,\r\n+\t\t\t PG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS * sizeof(bool));\r\n+\t\tmemset(&values[PG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS], 0,\r\n+\t\t\t PG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS * sizeof(bool));\r\n\r\nsizeof(*values) is not sizeof(bool), but sizeof(Datum).\r\n\r\nIt seems to me that the starting elemnt of the arrays is\r\n(PG_GET_WAL_BLOCK_INFO_COLS -\r\nPG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS). But I don't think simply\r\nrewriting that way is great.\r\n\r\n #define PG_GET_WAL_RECORD_INFO_COLS 11\r\n...\r\n+#define PG_GET_WAL_BLOCK_INFO_PER_RECORD_COLS 9\r\n\r\nThis means GetWALBlockInfo overwrites the last two columns generated\r\nby GetWalRecordInfo, but I don't think this approach is clean and\r\nstable. 
I agree we don't want the final columns in a block info tuple\r\nbut we don't want to duplicate the common code path.\r\n\r\nI initially thought we could devide the function into\r\nGetWALCommonInfo(), GetWALRecordInfo() and GetWALBlockInfo(), but it\r\ndoesn't seem that simple.. In the end, I think we should have separate\r\nGetWALRecordInfo() and GetWALBlockInfo() that have duplicate\r\n\"values[i++] = ..\" lines.\r\n\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Mon, 20 Mar 2023 12:20:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Sun, Mar 19, 2023 at 8:21 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The documentation has just one section titled \"General Functions\"\n> which directly contains detailed explation of four functions, making\n> it hard to get clear understanding of the available functions. I\n> considered breaking it down into a few subsections, but that wouldn't\n> look great since most of them would only contain one function.\n> However, I feel it would be helpful to add a list of all functions at\n> the beginning of the section.\n\nI like the idea of sections, even if there is only one function per\nsection in some cases.\n\nI also think that we should add a \"Tip\" that advises users that they\nmay use an \"end LSN\" that is the largest possible LSN,\n'FFFFFFFF/FFFFFFFF' to get information about records up until the\ncurrent LSN of the cluster (per commit 5c1b6628).\n\nIs there a straightforward way to get a usable LSN constant for this\npurpose? The simplest way I could come up with quickly is \"SELECT\npg_lsn(2^64.-1)\" -- which still isn't very simple. Actually, it might\nbe even worse than 'FFFFFFFF/FFFFFFFF', so perhaps we should just use\nthat in the docs new \"Tip\".\n\n> I agree that adding a note about the characteristics would helpful to\n> avoid the misuse of pg_get_wal_block_info(). How about something like,\n> \"Note that pg_get_wal_block_info() omits records that contains no\n> block references.\"?\n\nThis should be a strict invariant. In other words, it should be part\nof the documented contract of pg_get_wal_block_info and\npg_get_wal_records_info. The two functions should be defined in terms\nof each other. 
Their relationship is important.\n\nUsers should be able to safely assume that the records that have a\nNULL block_ref according to pg_get_wal_records_info are *precisely*\nthose records that won't have any entries within pg_get_wal_block_info\n(assuming that the same LSN range is used with both functions).\npg_walinspect should explicitly promise this, and promise the\ncorollary condition around non-NULL block_ref records. It is a useful\npromise from the point of view of users. It also makes it easier to\nunderstand what's really going on here without any ambiguity.\n\nI don't completely disagree with Michael about the redundancy. I just\nthink that it's worth it on performance grounds. We might want to say\nthat directly in the docs, too.\n\n> > Attaching v3 patch set - 0001 optimizations around block references,\n> > 0002 enables pg_get_wal_block_info() to emit per-record info. Any\n> > thoughts?\n>\n> + /* Get block references, if any, otherwise continue. */\n> + if (!XLogRecHasAnyBlockRefs(xlogreader))\n> + continue;\n>\n> I'm not sure, but the \"continue\" might be confusing since the code\n> \"continue\"s if the condition is true and continues the process\n> otherwise.. And it seems like a kind of \"explaination of what the\n> code does\". I feel we don't need the a comment there.\n\n+1.\n\nAlso, if GetWALBlockInfo() is now supposed to only be called when\nXLogRecHasAnyBlockRefs() now then it should probably have an assertion\nto verify the precondition.\n\n> It is not an issue with this patch, but as I look at this version, I'm\n> starting to feel uneasy about the subtle differences between what\n> GetWALRecordsInfo and GetWALBlockInfo do. One solution might be to\n> have GetWALBlockInfo return a values array for each block, but that\n> could make things more complex than needed. Alternatively, could we\n> get GetWALRecordsInfo to call tuplestore_putvalues() internally? 
This\n> way, both functions can manage the temporary memory context within\n> themselves.\n\n Agreed. I'm also not sure what to do about it, though.\n\n> This means GetWALBlockInfo overwrites the last two columns generated\n> by GetWalRecordInfo, but I don't think this approach is clean and\n> stable. I agree we don't want the final columns in a block info tuple\n> but we don't want to duplicate the common code path.\n\n> I initially thought we could devide the function into\n> GetWALCommonInfo(), GetWALRecordInfo() and GetWALBlockInfo(), but it\n> doesn't seem that simple.. In the end, I think we should have separate\n> GetWALRecordInfo() and GetWALBlockInfo() that have duplicate\n> \"values[i++] = ..\" lines.\n\nI agree. A little redundancy is better when the alternative is fragile\ncode, and I'm pretty sure that that applies here -- there won't be\nvery many duplicated lines, and the final code will be significantly\nclearer. There can be a comment about keeping GetWALRecordInfo and\nGetWALBlockInfo in sync.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:34:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 4:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree. A little redundancy is better when the alternative is fragile\n> code, and I'm pretty sure that that applies here -- there won't be\n> very many duplicated lines, and the final code will be significantly\n> clearer. There can be a comment about keeping GetWALRecordInfo and\n> GetWALBlockInfo in sync.\n\nThe new pg_get_wal_block_info outputs columns in an order that doesn't\nseem like the most useful possible order to me. This gives us another\nreason to have separate GetWALRecordInfo and GetWALBlockInfo utility\nfunctions rather than sharing logic for building output tuples.\n\nSpecifically, I think that pg_get_wal_block_info should ouput the\n\"primary key\" columns first:\n\nreltablespace, reldatabase, relfilenode, blockid, start_lsn, end_lsn\n\nNext comes the columns that duplicate the columns output by\npg_get_wal_records_info, in the same order as they appear in\npg_get_wal_records_info. (Obviously this won't include block_ref).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:51:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 4:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The new pg_get_wal_block_info outputs columns in an order that doesn't\n> seem like the most useful possible order to me. This gives us another\n> reason to have separate GetWALRecordInfo and GetWALBlockInfo utility\n> functions rather than sharing logic for building output tuples.\n\nOne more piece of feedback for Bharath:\n\nI think that we should also make the description output column display\nNULLs for those records that don't output any description string. This\nat least includes the \"FPI\" record type from the \"XLOG\" rmgr.\nAlternatively, we could find a way of making it show a description.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 20 Mar 2023 17:00:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 05:00:25PM -0700, Peter Geoghegan wrote:\n> I think that we should also make the description output column display\n> NULLs for those records that don't output any description string. This\n> at least includes the \"FPI\" record type from the \"XLOG\" rmgr.\n> Alternatively, we could find a way of making it show a description.\n\nAn empty StringInfo is sent to rm_desc, so hardcoding something in\npg_walinspect to show some data does not look right to me. Saying\nthat, using NULL when there is no description is OK by me, as much as\nis using an empty string because it maps with the reality of the empty\nStringInfo sent to the rm_desc callback.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 15:22:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 04:51:19PM -0700, Peter Geoghegan wrote:\n> On Mon, Mar 20, 2023 at 4:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I agree. A little redundancy is better when the alternative is fragile\n>> code, and I'm pretty sure that that applies here -- there won't be\n>> very many duplicated lines, and the final code will be significantly\n>> clearer. There can be a comment about keeping GetWALRecordInfo and\n>> GetWALBlockInfo in sync.\n\nWell. Vox populi, Vox dei. I am deadly outnumbered.\n\n> The new pg_get_wal_block_info outputs columns in an order that doesn't\n> seem like the most useful possible order to me. This gives us another\n> reason to have separate GetWALRecordInfo and GetWALBlockInfo utility\n> functions rather than sharing logic for building output tuples.\n> \n> Specifically, I think that pg_get_wal_block_info should ouput the\n> \"primary key\" columns first:\n> \n> reltablespace, reldatabase, relfilenode, blockid, start_lsn, end_lsn\n\nIt seems to me that this is up to the one using this SQL? I am not\nsure to follow why this is important. For the cases you have poked\nat, I guess it is, but is strikes me that it is just natural to shape\nthat to match the C structures we use for the WAL records\nthemselves, so the other way around.\n\n> Next comes the columns that duplicate the columns output by\n> pg_get_wal_records_info, in the same order as they appear in\n> pg_get_wal_records_info. (Obviously this won't include block_ref).\n\nHmm. Once you add more record data into pg_get_wal_block_info(), I am\nnot sure to agree with this statement, actually. I would think that\nthe most logical order would be the start_lsn, the end_lsn, the record\ninformation that you want for your quals, the block ID for the block\nregistered in the record, and finally then the rest of the block\ninformation. So this puts the record-level data first, and the block\ninfo after. 
This would be a bit closer with the order of the WAL\nrecord structure itself, aka XLogRecord and such.\n\nHence, it seems to me that 0002 has the order pretty much right.\nWhat's the point in adding the description, by the way? Only\nconsistency with the other function? Is that really useful if you\nwant to apply more quals when retrieving some block data?\n\nCalling GetWALRecordInfo() as part of GetWALBlockInfo() leads to a\nrather confusing result, IMO...\n\n@@ -377,6 +385,12 @@ pg_get_wal_block_info(PG_FUNCTION_ARGS)\n while (ReadNextXLogRecord(xlogreader) &&\n xlogreader->EndRecPtr <= end_lsn)\n {\n+ CHECK_FOR_INTERRUPTS();\n+\n+ /* Get block references, if any, otherwise continue. */\n+ if (!XLogRecHasAnyBlockRefs(xlogreader))\n+ continue;\n\nThis early shortcut in 0001 is a good idea.\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 15:33:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 8:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Mar 17, 2023 at 04:36:58PM -0700, Peter Geoghegan wrote:\n> > I'm sure that they will do that much more than they would have\n> > otherwise. Since we'll have made pg_get_wal_block_info() so much more\n> > useful than pg_get_wal_records_info() for many important use cases.\n> > Why is that a bad thing? Are you concerned about the overhead of\n> > pulling in FPIs when pg_get_wal_block_info() is run, if Bharath's\n> > patch is committed? That could be a problem, I suppose -- but it would\n> > be good to get more data on that. Do you think that this will be much\n> > of an issue, Bharath?\n>\n> Yes. The CPU cost is one thing, but I am also worrying about the\n> I/O cost with a tuplestore spilling to disk a large number of FPIs,\n> and some workloads can generate WAL so as FPIs is what makes for most\n> of the contents stored in the WAL. (wal_compression is very effective\n> in such cases, for example.)\n\nI had done some analysis about CPU costs for decompressing FPI upthread\nin [1], finding that adding a parameter to allow skipping outputting FPI\nwould not have much impact when FPI are compressed, as decompressing the\nimages comprised very little of the overall time.\n\nAfter reading what you said, I was interested to see how substantial the\nI/O cost with non-compressed FPI would be.\n\nUsing a patch with a parameter to pg_get_wal_block_info() to skip\noutputting FPI, I found that on a fast local nvme ssd, the timing\ndifference between doing so and not still isn't huge -- 9 seconds when\noutputting the FPI vs 8.5 seconds when skipping outputting FPI. 
(with\n~50,000 records all with non-compressed FPIs).\n\nHowever, perhaps obviously, the I/O cost is worse.\nDoing nothing but\n\n SELECT * FROM pg_get_wal_block_info(:start_lsn, :end_lsn, true)\nwhere fpi is not null;\n\nper iostat, the write latency was double for the query which output fpi\nfrom the one that didn't and the wkB/s was much higher. This is probably\nobvious, but I'm just wondering if it makes sense to have such a\nparameter to avoid impacting a system which is doing concurrent I/O with\nwalinspect.\n\nI have had use for block info without seeing the FPIs, personally.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAAKRu_bJvbcYBRj2cN6G2xV7B7-Ja%2BpjTO1nEnEhRR8OXYiABA%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:35:46 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 11:35 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Fri, Mar 17, 2023 at 8:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Mar 17, 2023 at 04:36:58PM -0700, Peter Geoghegan wrote:\n> > > I'm sure that they will do that much more than they would have\n> > > otherwise. Since we'll have made pg_get_wal_block_info() so much more\n> > > useful than pg_get_wal_records_info() for many important use cases.\n> > > Why is that a bad thing? Are you concerned about the overhead of\n> > > pulling in FPIs when pg_get_wal_block_info() is run, if Bharath's\n> > > patch is committed? That could be a problem, I suppose -- but it would\n> > > be good to get more data on that. Do you think that this will be much\n> > > of an issue, Bharath?\n> >\n> > Yes. The CPU cost is one thing, but I am also worrying about the\n> > I/O cost with a tuplestore spilling to disk a large number of FPIs,\n> > and some workloads can generate WAL so as FPIs is what makes for most\n> > of the contents stored in the WAL. (wal_compression is very effective\n> > in such cases, for example.)\n>\n> I had done some analysis about CPU costs for decompressing FPI upthread\n> in [1], finding that adding a parameter to allow skipping outputting FPI\n> would not have much impact when FPI are compressed, as decompressing the\n> images comprised very little of the overall time.\n>\n> After reading what you said, I was interested to see how substantial the\n> I/O cost with non-compressed FPI would be.\n>\n> Using a patch with a parameter to pg_get_wal_block_info() to skip\n> outputting FPI, I found that on a fast local nvme ssd, the timing\n> difference between doing so and not still isn't huge -- 9 seconds when\n> outputting the FPI vs 8.5 seconds when skipping outputting FPI. 
(with\n> ~50,000 records all with non-compressed FPIs).\n>\n> However, perhaps obviously, the I/O cost is worse.\n> Doing nothing but\n>\n> SELECT * FROM pg_get_wal_block_info(:start_lsn, :end_lsn, true)\n> where fpi is not null;\n\nSorry, I should have been more clear: similar results with a select list\nsimply excluding fpi and no where clause.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 22 Mar 2023 12:25:23 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 7:34 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sun, Mar 19, 2023 at 8:21 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > It is not an issue with this patch, but as I look at this version, I'm\n> > starting to feel uneasy about the subtle differences between what\n> > GetWALRecordsInfo and GetWALBlockInfo do. One solution might be to\n> > have GetWALBlockInfo return a values array for each block, but that\n> > could make things more complex than needed. Alternatively, could we\n> > get GetWALRecordsInfo to call tuplestore_putvalues() internally? This\n> > way, both functions can manage the temporary memory context within\n> > themselves.\n>\n> Agreed. I'm also not sure what to do about it, though.\n>\n> > This means GetWALBlockInfo overwrites the last two columns generated\n> > by GetWalRecordInfo, but I don't think this approach is clean and\n> > stable. I agree we don't want the final columns in a block info tuple\n> > but we don't want to duplicate the common code path.\n>\n> > I initially thought we could devide the function into\n> > GetWALCommonInfo(), GetWALRecordInfo() and GetWALBlockInfo(), but it\n> > doesn't seem that simple.. In the end, I think we should have separate\n> > GetWALRecordInfo() and GetWALBlockInfo() that have duplicate\n> > \"values[i++] = ..\" lines.\n>\n> I agree. A little redundancy is better when the alternative is fragile\n> code, and I'm pretty sure that that applies here -- there won't be\n> very many duplicated lines, and the final code will be significantly\n> clearer. There can be a comment about keeping GetWALRecordInfo and\n> GetWALBlockInfo in sync.\n\nSo, I also agree that it is better to have the two separate functions\ninstead of overwriting the last two columns. 
As for keeping them in\nsync, we could define the number of common columns as a macro like:\n\n#define WALINSPECT_INFO_NUM_COMMON_COLS 10\n\nand use that to calculate the size of the values/nulls array in\nGetWalRecordInfo() and GetWALBlockInfo() (assuming a new version where\nthose two functions duplicate the setting of values[x] = y).\n\nThat way, if a new column of information is added and one of the two\nfunctions forgets to set it in the values array, it would still cause an\nempty column and it will be easier for the programmer to see it needs to\nbe added.\n\nWe could even define an enum like:\n typedef enum walinspect_common_col\n {\n WALINSPECT_START_LSN,\n WALINSPECT_END_LSN,\n WALINSPECT_PREV_LSN,\n WALINSPECT_XID,\n WALINSPECT_RMGR,\n WALINSPECT_REC_TYPE,\n WALINSPECT_REC_LENGTH,\n WALINSPECT_MAIN_DATA_LENGTH,\n WALINSPECT_FPILEN,\n WALINSPECT_DESC,\n WALINSPECT_NUM_COMMON_COL,\n } walinspect_common_col;\n\nand set values in both functions like\n values[WALINSPECT_FPILEN] = y\nif we kept the order of common columns the same and as the first N\ncolumns for both functions. This would keep us from having to manually\nupdate a macro like WALINSPECT_INFO_NUM_COMMON_COLS.\n\nThough, I'm not sure how much value that actually adds.\n\n- Melanie\n\n\n",
"msg_date": "Wed, 22 Mar 2023 13:20:51 -0400",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 8:35 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> After reading what you said, I was interested to see how substantial the\n> I/O cost with non-compressed FPI would be.\n>\n> Using a patch with a parameter to pg_get_wal_block_info() to skip\n> outputting FPI, I found that on a fast local nvme ssd, the timing\n> difference between doing so and not still isn't huge -- 9 seconds when\n> outputting the FPI vs 8.5 seconds when skipping outputting FPI. (with\n> ~50,000 records all with non-compressed FPIs).\n>\n> However, perhaps obviously, the I/O cost is worse.\n> Doing nothing but\n>\n> SELECT * FROM pg_get_wal_block_info(:start_lsn, :end_lsn, true)\n> where fpi is not null;\n>\n> per iostat, the write latency was double for the query which output fpi\n> from the one that didn't and the wkB/s was much higher.\n\nI think that we should also have something like the patch that you\nwrote to skip FPIs. It's not something that I feel as strongly about\nas the main point about including all the fields from\npg_get_wal_records_info. but it does seem worth doing.\n\n> I have had use for block info without seeing the FPIs, personally.\n\nI'd go further than that myself: I haven't had any use for FPIs at\nall. If I was going to do something with FPIs then I'd just use\npg_waldump, since I'd likely want to get them onto the filesystem for\nanalysis anyway. (Just my experience.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Mar 2023 17:05:10 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 11:33 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > The new pg_get_wal_block_info outputs columns in an order that doesn't\n> > seem like the most useful possible order to me. This gives us another\n> > reason to have separate GetWALRecordInfo and GetWALBlockInfo utility\n> > functions rather than sharing logic for building output tuples.\n> >\n> > Specifically, I think that pg_get_wal_block_info should ouput the\n> > \"primary key\" columns first:\n> >\n> > reltablespace, reldatabase, relfilenode, blockid, start_lsn, end_lsn\n>\n> It seems to me that this is up to the one using this SQL?\n\nIf you see it that way, then why does it matter what I may want to do\nwith the declared order?\n\n> I am not\n> sure to follow why this is important. For the cases you have poked\n> at, I guess it is, but is strikes me that it is just natural to shape\n> that to match the C structures we use for the WAL records\n> themselves, so the other way around.\n\nI don't feel very strongly about it, but it seems better to highlight\nthe difference that exists between this and pg_get_wal_records_info.\n\n> Hence, it seems to me that 0002 has the order pretty much right.\n> What's the point in adding the description, by the way? Only\n> consistency with the other function? Is that really useful if you\n> want to apply more quals when retrieving some block data?\n\nI don't understand. It's useful to include the description for the\nsame reason as it's useful to include it in pg_get_wal_records_info.\nWhy wouldn't it be useful?\n\nMost individual records that have any block_ref blocks have exactly\none. Most individual WAL records are very simple record types. So\npg_get_wal_block_info just isn't going to look that different to\npg_get_wal_records_info, once they share most of the same columns. The\nway that pg_get_wal_block_info disaggregates on block won't make the\noutput look all that different. 
So each distinct \"description\" will\nusually only appear once in pg_get_wal_block_info anyway.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Mar 2023 17:13:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 05:05:10PM -0700, Peter Geoghegan wrote:\n> I'd go further than that myself: I haven't had any use for FPIs at\n> all. If I was going to do something with FPIs then I'd just use\n> pg_waldump, since I'd likely want to get them onto the filesystem for\n> analysis anyway. (Just my experience.)\n\nFWIW, being able to get access to raw FPIs with a SQL interface is\nuseful if you cannot log into the host, which is something that a lot\nof cloud providers don't allow, and not everybody is able to have\naccess to archived WAL segments in a different host.\n--\nMichael",
"msg_date": "Thu, 23 Mar 2023 09:14:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 5:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Mar 22, 2023 at 05:05:10PM -0700, Peter Geoghegan wrote:\n> > I'd go further than that myself: I haven't had any use for FPIs at\n> > all. If I was going to do something with FPIs then I'd just use\n> > pg_waldump, since I'd likely want to get them onto the filesystem for\n> > analysis anyway. (Just my experience.)\n>\n> FWIW, being able to get access to raw FPIs with a SQL interface is\n> useful if you cannot log into the host, which is something that a lot\n> of cloud providers don't allow, and not everybody is able to have\n> access to archived WAL segments in a different host.\n\nI'm not saying that it's not ever useful. Just that finding FPIs\ninteresting isn't necessarily all that common when using\npg_get_wal_block_info (or won't be, once it has those additional\ncolumns from pg_get_wal_records_info).\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Mar 2023 17:16:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 8:51 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> + /* Get block references, if any, otherwise continue. */\n> + if (!XLogRecHasAnyBlockRefs(xlogreader))\n>\n> code does\". I feel we don't need the a comment there.\n\nRemoved.\n\n> This means GetWALBlockInfo overwrites the last two columns generated\n> by GetWalRecordInfo, but I don't think this approach is clean and\n> stable. I agree we don't want the final columns in a block info tuple\n> but we don't want to duplicate the common code path.\n>\n> I initially thought we could devide the function into\n> GetWALCommonInfo(), GetWALRecordInfo() and GetWALBlockInfo(), but it\n> doesn't seem that simple.. In the end, I think we should have separate\n> GetWALRecordInfo() and GetWALBlockInfo() that have duplicate\n> \"values[i++] = ..\" lines.\n\nDone as per Peter's suggestion (keeping primary key columns first and\nhaving a bit of code duplicated instead of making it complex in the\nname of deduplication). Please see the attached v4 patch set.\n\nOn Tue, Mar 21, 2023 at 5:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Mar 19, 2023 at 8:21 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The documentation has just one section titled \"General Functions\"\n> > which directly contains detailed explation of four functions, making\n> > it hard to get clear understanding of the available functions. I\n> > considered breaking it down into a few subsections, but that wouldn't\n> > look great since most of them would only contain one function.\n> > However, I feel it would be helpful to add a list of all functions at\n> > the beginning of the section.\n>\n> I like the idea of sections, even if there is only one function per\n> section in some cases.\n\nHm, -1 for now. Most of the extensions that I have seen don't have\nanything like that. 
If needed, someone can start a separate thread for\nsuch a proposal for all of the extensions.\n\n> I also think that we should add a \"Tip\" that advises users that they\n> may use an \"end LSN\" that is the largest possible LSN,\n> 'FFFFFFFF/FFFFFFFF' to get information about records up until the\n> current LSN of the cluster (per commit 5c1b6628).\n>\n> Is there a straightforward way to get a usable LSN constant for this\n> purpose? The simplest way I could come up with quickly is \"SELECT\n> pg_lsn(2^64.-1)\" -- which still isn't very simple. Actually, it might\n> be even worse than 'FFFFFFFF/FFFFFFFF', so perhaps we should just use\n> that in the docs new \"Tip\".\n\nDone.\n\n> > I agree that adding a note about the characteristics would helpful to\n> > avoid the misuse of pg_get_wal_block_info(). How about something like,\n> > \"Note that pg_get_wal_block_info() omits records that contains no\n> > block references.\"?\n>\n> This should be a strict invariant. In other words, it should be part\n> of the documented contract of pg_get_wal_block_info and\n> pg_get_wal_records_info. The two functions should be defined in terms\n> of each other. Their relationship is important.\n>\n> Users should be able to safely assume that the records that have a\n> NULL block_ref according to pg_get_wal_records_info are *precisely*\n> those records that won't have any entries within pg_get_wal_block_info\n> (assuming that the same LSN range is used with both functions).\n> pg_walinspect should explicitly promise this, and promise the\n> corollary condition around non-NULL block_ref records. It is a useful\n> promise from the point of view of users. It also makes it easier to\n> understand what's really going on here without any ambiguity.\n>\n> I don't completely disagree with Michael about the redundancy. I just\n> think that it's worth it on performance grounds. 
We might want to say\n> that directly in the docs, too.\n\nAdded a note in the docs.\n\n> Also, if GetWALBlockInfo() is now supposed to only be called when\n> XLogRecHasAnyBlockRefs() now then it should probably have an assertion\n> to verify the precondition.\n\nDone.\n\n> > I initially thought we could devide the function into\n> > GetWALCommonInfo(), GetWALRecordInfo() and GetWALBlockInfo(), but it\n> > doesn't seem that simple.. In the end, I think we should have separate\n> > GetWALRecordInfo() and GetWALBlockInfo() that have duplicate\n> > \"values[i++] = ..\" lines.\n>\n> I agree. A little redundancy is better when the alternative is fragile\n> code, and I'm pretty sure that that applies here -- there won't be\n> very many duplicated lines, and the final code will be significantly\n> clearer. There can be a comment about keeping GetWALRecordInfo and\n> GetWALBlockInfo in sync.\n\nDone.\n\nOn Tue, Mar 21, 2023 at 5:21 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> The new pg_get_wal_block_info outputs columns in an order that doesn't\n> seem like the most useful possible order to me. This gives us another\n> reason to have separate GetWALRecordInfo and GetWALBlockInfo utility\n> functions rather than sharing logic for building output tuples.\n>\n> Specifically, I think that pg_get_wal_block_info should ouput the\n> \"primary key\" columns first:\n>\n> reltablespace, reldatabase, relfilenode, blockid, start_lsn, end_lsn\n>\n> Next comes the columns that duplicate the columns output by\n> pg_get_wal_records_info, in the same order as they appear in\n> pg_get_wal_records_info. (Obviously this won't include block_ref).\n\nDone.\n\nOn Tue, Mar 21, 2023 at 5:30 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> I think that we should also make the description output column display\n> NULLs for those records that don't output any description string. 
This\n> at least includes the \"FPI\" record type from the \"XLOG\" rmgr.\n> Alternatively, we could find a way of making it show a description.\n\nDone.\n\nPlease see the attached v4 patch set addressing all the review comments.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Mar 2023 22:54:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 10:54:40PM +0530, Bharath Rupireddy wrote:\n> Please see the attached v4 patch set addressing all the review comments.\n\n- desc = GetRmgr(XLogRecGetRmid(record));\n- id = desc.rm_identify(XLogRecGetInfo(record));\n-\n- if (id == NULL)\n- id = psprintf(\"UNKNOWN (%x)\", XLogRecGetInfo(record) & ~XLR_INFO_MASK);\n-\n- initStringInfo(&rec_desc);\n- desc.rm_desc(&rec_desc, record);\n-\n- /* Block references. */\n- initStringInfo(&rec_blk_ref);\n- XLogRecGetBlockRefInfo(record, false, true, &rec_blk_ref, &fpi_len);\n-\n- main_data_len = XLogRecGetDataLen(record);\n\nI don't see any need to move this block of code? This leads to\nunnecessary diffs, potentially making backpatch a bit harder. Either\nway is not a big deal, still.. Except for this bit, 0001 looks fine\nby me.\n\n OUT reltablespace oid,\n OUT reldatabase oid,\n OUT relfilenode oid,\n OUT relblocknumber int8,\n+ OUT blockid int2,\n+ OUT start_lsn pg_lsn,\n+ OUT end_lsn pg_lsn,\n+ OUT prev_lsn pg_lsn,\n\nI'd still put the LSN data before the three OIDs for consistency with\nthe structures, though my opinion does not seem to count much..\n--\nMichael",
"msg_date": "Sat, 25 Mar 2023 12:12:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "At Sat, 25 Mar 2023 12:12:50 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Mar 23, 2023 at 10:54:40PM +0530, Bharath Rupireddy wrote:\n> OUT reltablespace oid,\n> OUT reldatabase oid,\n> OUT relfilenode oid,\n> OUT relblocknumber int8,\n> + OUT blockid int2,\n> + OUT start_lsn pg_lsn,\n> + OUT end_lsn pg_lsn,\n> + OUT prev_lsn pg_lsn,\n> \n> I'd still put the LSN data before the three OIDs for consistency with\n> the structures, though my opinion does not seem to count much..\n\nI agree with Michael on this point. Also, although it may not be\nsignificant for SQL, the rows are output in lsn order from the\nfunction.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 27 Mar 2023 12:41:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 9:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 25 Mar 2023 12:12:50 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n> > On Thu, Mar 23, 2023 at 10:54:40PM +0530, Bharath Rupireddy wrote:\n> > OUT reltablespace oid,\n> > OUT reldatabase oid,\n> > OUT relfilenode oid,\n> > OUT relblocknumber int8,\n> > + OUT blockid int2,\n> > + OUT start_lsn pg_lsn,\n> > + OUT end_lsn pg_lsn,\n> > + OUT prev_lsn pg_lsn,\n> >\n> > I'd still put the LSN data before the three OIDs for consistency with\n> > the structures, though my opinion does not seem to count much..\n>\n> I agree with Michael on this point. Also, although it may not be\n> significant for SQL, the rows are output in lsn order from the\n> function.\n\nDone that way.\n\nOn Sat, Mar 25, 2023 at 8:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Mar 23, 2023 at 10:54:40PM +0530, Bharath Rupireddy wrote:\n> > Please see the attached v4 patch set addressing all the review comments.\n>\n> - desc = GetRmgr(XLogRecGetRmid(record));\n> - id = desc.rm_identify(XLogRecGetInfo(record));\n> -\n> - if (id == NULL)\n> - id = psprintf(\"UNKNOWN (%x)\", XLogRecGetInfo(record) & ~XLR_INFO_MASK);\n> -\n> - initStringInfo(&rec_desc);\n> - desc.rm_desc(&rec_desc, record);\n> -\n> - /* Block references. */\n> - initStringInfo(&rec_blk_ref);\n> - XLogRecGetBlockRefInfo(record, false, true, &rec_blk_ref, &fpi_len);\n> -\n> - main_data_len = XLogRecGetDataLen(record);\n>\n> I don't see any need to move this block of code? This leads to\n> unnecessary diffs, potentially making backpatch a bit harder. Either\n> way is not a big deal, still.. Except for this bit, 0001 looks fine\n> by me.\n\nIt's a cosmetic change - I wanted to keep the calculation of column\nvalues closer to where they're assigned to Datum values. 
I agree to\nnot cause too much diff and removed them.\n\nPlease see the attached v5 patch set.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Mar 2023 09:41:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 12:12:50PM +0900, Michael Paquier wrote:\n> I don't see any need to move this block of code? This leads to\n> unnecessary diffs, potentially making backpatch a bit harder. Either\n> way is not a big deal, still.. Except for this bit, 0001 looks fine\n> by me.\n\nFYI, I have gone through 0001 and applied it, after tweaking a bit the\npart about block references so as we have only one\nXLogRecHasAnyBlockRefs, with its StringInfoData used only locally.\n--\nMichael",
"msg_date": "Mon, 27 Mar 2023 13:18:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 9:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Mar 25, 2023 at 12:12:50PM +0900, Michael Paquier wrote:\n> > I don't see any need to move this block of code? This leads to\n> > unnecessary diffs, potentially making backpatch a bit harder. Either\n> > way is not a big deal, still.. Except for this bit, 0001 looks fine\n> > by me.\n>\n> FYI, I have gone through 0001 and applied it, after tweaking a bit the\n> part about block references so as we have only one\n> XLogRecHasAnyBlockRefs, with its StringInfoData used only locally.\n\nThanks. Here's the v6 patch (last patch that I have with me for\npg_walinspect) for adding per-record info to pg_get_wal_block_info.\nNote that I addressed all review comments received so far. Any\nthoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Mar 2023 13:12:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Sun, Mar 26, 2023 at 8:41 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > I'd still put the LSN data before the three OIDs for consistency with\n> > the structures, though my opinion does not seem to count much..\n>\n> I agree with Michael on this point. Also, although it may not be\n> significant for SQL, the rows are output in lsn order from the\n> function.\n\nI guess that it makes sense to have the LSN data first, but to have\nthe other columns after that. I certainly don't dislike that approach.\n\nI just noticed is that \"forkname\" appears towards the end among\ndeclared output parameters, even in Bharath's v6. I think that it\nshould be after \"relblocknumber\" instead, because it is conceptually\npart of the \"composite primary key\", and so belongs right next to\n\"relblocknumber\". I failed to mention this detail upthread, I think.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 15:38:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 12:42 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. Here's the v6 patch (last patch that I have with me for\n> pg_walinspect) for adding per-record info to pg_get_wal_block_info.\n> Note that I addressed all review comments received so far. Any\n> thoughts?\n\nLooking at this now, with the intention of committing it for 16.\n\nIn addition to what I said a little while ago about the forknum\nparameter and parameter ordering, I have a concern about the data\ntype: perhaps the forknum paramater should be declared as\n\"relforknumber smallint\", instead of using text? That would match the\napproach taken by pg_buffercache, and would be more efficient.\n\nI don't think that using a text column with the fork name adds too\nmuch, since this is after all supposed to be a tool used by experts.\nPlus it's usually pretty clear what it is from context. Not that many\nWAL records touch the visibility map, and those that do make it\nrelatively obvious which block is from the VM based on other details.\nDetails such as blockid and relblocknumber (the VM is approximately\n32k times smaller than the heap). Once I see that the record is (say)\na VISIBLE record, I'm already looking at the order of each block\nreference, and maybe at relblocknumber -- I'm not likely to visually\nscan the forknum column at all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:59:18 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 4:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Looking at this now, with the intention of committing it for 16.\n\nI see a bug on HEAD, following yesterday's commit 0276ae42dd.\n\nGetWALRecordInfo() will now output the value of the fpi_len variable\nbefore it has actually been set by our call to XXXX. So it'll always\nbe 0.\n\nCan you post a bugfix patch for this, Bharath?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 27 Mar 2023 18:07:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 4:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Looking at this now, with the intention of committing it for 16.\n\nAttached revision v7 adjusts the column order. This is still WIP, but\ngives a good idea of the direction I'm going in.\n\nv7 makes the column output look like this:\n\npg@regression:5432 [1209761]=# select * from\npg_get_wal_block_info('0/10D8280', '0/10D82B8');\n┌─[ RECORD 1 ]─────┬──────────────────────────────────────────────────┐\n│ start_lsn │ 0/10D8280 │\n│ end_lsn │ 0/10D82B8 │\n│ prev_lsn │ 0/10D8208 │\n│ blockid │ 0 │\n│ reltablespace │ 1,663 │\n│ reldatabase │ 1 │\n│ relfilenode │ 2,610 │\n│ relforknumber │ 0 │\n│ relblocknumber │ 0 │\n│ xid │ 0 │\n│ resource_manager │ Heap2 │\n│ record_type │ PRUNE │\n│ record_length │ 56 │\n│ main_data_length │ 8 │\n│ block_fpi_length │ 0 │\n│ block_fpi_info │ ∅ │\n│ description │ snapshotConflictHorizon 10 nredirected 0 ndead 1 │\n│ block_data │ \\x2b00 │\n│ fpi_data │ ∅ │\n└──────────────────┴──────────────────────────────────────────────────┘\n\nFew things to note here:\n\n* blockid is now given more prominence, in that it appears just after\nthe LSN related output params.\n\nThe idea is that the blockid is conceptually closer to the LSN stuff.\n\n* There is now a smallint relblocknumber, for consistency with\npg_buffercache. This replaces the previous text column.\n\nAs I mentioned earlier on, I don't think that a text based output\nparam adds much.\n\n* The integer fields record_length, main_data_length, block_fpi_length\nall now all appear together. This is for consistency with the similar\noutput params from the other function.\n\nv7 allows block_fpi_length to be 0 instead of NULL, for consistency\nwith the fpi_length param from the other function. 
The intention is to\nmake it relatively obvious which information \"comes from the record\"\nand which information \"comes from the block reference\".\n\n* The block_fpi_info output param appears right after block_fpi_length.\n\nThis is not very verbose, and I find that it is hard to find by\nscrolling horizontally in pspg if it gets placed after either\nblock_data or fpi_data, which tend to have at least some huge/wide\noutputs. It seemed sensible to place block_fpi_info next to the param\nI'm now calling block_fpi_length, after it was moved next to the other\n\"length\" fields.\n\nHow do people feel about this approach? I'll need to write\ndocumentation to help the user to understand what's really going on\nhere.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 27 Mar 2023 19:40:13 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 5:29 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 27, 2023 at 12:42 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks. Here's the v6 patch (last patch that I have with me for\n> > pg_walinspect) for adding per-record info to pg_get_wal_block_info.\n> > Note that I addressed all review comments received so far. Any\n> > thoughts?\n>\n> Looking at this now, with the intention of committing it for 16.\n>\n> In addition to what I said a little while ago about the forknum\n> parameter and parameter ordering, I have a concern about the data\n> type: perhaps the forknum paramater should be declared as\n> \"relforknumber smallint\", instead of using text? That would match the\n> approach taken by pg_buffercache, and would be more efficient.\n>\n> I don't think that using a text column with the fork name adds too\n> much, since this is after all supposed to be a tool used by experts.\n> Plus it's usually pretty clear what it is from context. Not that many\n> WAL records touch the visibility map, and those that do make it\n> relatively obvious which block is from the VM based on other details.\n> Details such as blockid and relblocknumber (the VM is approximately\n> 32k times smaller than the heap). Once I see that the record is (say)\n> a VISIBLE record, I'm already looking at the order of each block\n> reference, and maybe at relblocknumber -- I'm not likely to visually\n> scan the forknum column at all.\n\nHm, agreed. Changed in the attached v7-0002 patch. 
We can as well\nwrite a case statement in the create function SQL to output forkname\ninstead of forknumber, but I'd stop short of doing that to keep in sync with\npg_buffercache.\n\nOn Tue, Mar 28, 2023 at 6:37 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 27, 2023 at 4:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Looking at this now, with the intention of committing it for 16.\n>\n> I see a bug on HEAD, following yesterday's commit 0276ae42dd.\n>\n> GetWALRecordInfo() will now output the value of the fpi_len variable\n> before it has actually been set by our call to XXXX. So it'll always\n> be 0.\n>\n> Can you post a bugfix patch for this, Bharath?\n\nOh, thanks for finding it out. Fixed in the attached v7-0001 patch. I\nalso removed the \"invalid fork number\" error as users can figure that\nout if at all the fork number is wrong.\n\nOn the ordering of the columns, I kept start_lsn, end_lsn and prev_lsn\nfirst and then the rel** columns (this rel** columns order follows\npg_buffercache) and then block data related columns. Michael and\nKyotaro are of the opinion that it's better to keep LSNs first to be\nconsistent and also given that this function is WAL related, it makes\nsense to have LSNs first.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 28 Mar 2023 08:17:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 06:07:09PM -0700, Peter Geoghegan wrote:\n> I see a bug on HEAD, following yesterday's commit 0276ae42dd.\n> \n> GetWALRecordInfo() will now output the value of the fpi_len variable\n> before it has actually been set by our call to XXXX. So it'll always\n> be 0.\n\nIndeed, good catch. It looks like I was not careful enough with the\nblock controlled by XLogRecHasAnyBlockRefs().\n--\nMichael",
"msg_date": "Tue, 28 Mar 2023 12:37:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 7:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Hm, agreed. Changed in the attached v7-0002 patch. We can as well\n> write a case statement in the create function SQL to output forkname\n> instead forknumber, but I'd stop doing that to keep in sync with\n> pg_buffercache.\n\nI just don't see much value in any textual representation of fork\nname, however generated. In practice it's just not adding very much\nuseful information. It is mostly useful as a way of filtering block\nreferences, which makes simple integers more natural.\n\n> Oh, thanks for finding it out. Fixed in the attached v7-0001 patch. I\n> also removed the \"invalid fork number\" error as users can figure that\n> out if at all the fork number is wrong.\n\nPushed just now.\n\n> On the ordering of the columns, I kept start_lsn, end_lsn and prev_lsn\n> first and then the rel** columns (this rel** columns order follows\n> pg_buffercache) and then block data related columns. Michael and\n> Kyotaro are of the opinion that it's better to keep LSNs first to be\n> consistent and also given that this function is WAL related, it makes\n> sense to have LSNs first.\n\nRight, but I didn't change that part in the revision of the patch I\nposted. Those columns still came first, and were totally consistent\nwith the pg_get_wal_record_info function.\n\nI think that there was a \"mid air collision\" here, where we both\nposted patches that we each called v7 within minutes of each other.\nJust to be clear, I ended up with a column order as described here in\nmy revision:\n\nhttps://postgr.es/m/CAH2-WzmzO-AU4QSbnzzANBkrpg=4CuOd3scVtv+7x65e+QKBZw@mail.gmail.com\n\nIt now occurs to me that \"fpi_data\" should perhaps be called\n\"block_fpi_data\". What do you think?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 28 Mar 2023 11:15:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 11:15:17AM -0700, Peter Geoghegan wrote:\n> On Mon, Mar 27, 2023 at 7:47 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Hm, agreed. Changed in the attached v7-0002 patch. We can as well\n>> write a case statement in the create function SQL to output forkname\n>> instead forknumber, but I'd stop doing that to keep in sync with\n>> pg_buffercache.\n> \n> I just don't see much value in any textual representation of fork\n> name, however generated. In practice it's just not adding very much\n> useful information. It is mostly useful as a way of filtering block\n> references, which makes simple integers more natural.\n\nI disagree with this argument. Personally, I have a *much* better\nexperience with textual representation because there is no need to\ncross-check the internals of the code in case you don't remember what\na given number means in an enum or in a set of bits, especially if\nyou're in a hurry of looking at a production or customer deployment.\nIn short, it makes for less mistakes because you don't have to think\nabout some extra mapping between some integers and what they actually\nmean through text. The clauses you'd apply for a group by on the\nforks, or for a filter with IN clauses don't change, they're just made\neasier to understand for the common user, and that includes\nexperienced people. We'd better think about that like the text[]\narrays we use for the flag values, like the FPI flags, or why we've\nintroduced text[] for the HEAP_* flags in the heap functions of\npageinspect.\n\nThere's even more consistency with pageinspect in using a fork name,\nwhere we can pass down a fork name to get a raw page.\n--\nMichael",
"msg_date": "Wed, 29 Mar 2023 07:34:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 3:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n> I disagree with this argument. Personally, I have a *much* better\n> experience with textual representation because there is no need to\n> cross-check the internals of the code in case you don't remember what\n> a given number means in an enum or in a set of bits, especially if\n> you're in a hurry of looking at a production or customer deployment.\n\nI couldn't tell you which fork number is which right off the top of my\nhead, either (except that main fork is 0). But it doesn't matter.\nThere are only ever two relevant fork numbers. And even if that\nchanges, it's perfectly clear which is which from context. There will\nbe two block references for a given (say) VISIBLE record, one of which\nis obviously for the main fork, the other of which is obviously for\nthe VM fork.\n\nPlus it's just more consistent that way. The existing\npg_get_wal_block_info() output parameters look like they were directly\ncopied from pg_buffercache (the same names are already used), so why\nnot do the same with relforknumber?\n\n> In short, it makes for less mistakes because you don't have to think\n> about some extra mapping between some integers and what they actually\n> mean through text. 
The clauses you'd apply for a group by on the\n> forks, or for a filter with IN clauses don't change, they're just made\n> easier to understand for the common user, and that includes\n> experienced people.\n\nIt's slightly harder to write a query that filters on text, and it\nwon't perform as well.\n\n> We'd better think about that like the text[]\n> arrays we use for the flag values, like the FPI flags, or why we've\n> introduced text[] for the HEAP_* flags in the heap functions of\n> pageinspect.\n\nI think that those other things are fine, because they're much less\nobvious, and are very unlikely to ever appear in a query predicate.\n\n> There's even more consistency with pageinspect in using a fork name,\n> where we can pass down a fork name to get a raw page.\n\npageinspect provides a way of mapping raw infomask/informask2 fields\nto status flags, through its heap_tuple_infomask_flags function. But\nthe actual function that returns information from tuples\n(heap_page_items) faithfully represents the raw on-disk format\ndirectly -- the heap_tuple_infomask_flags part is totally optional. So\nthat doesn't seem like an example that supports your argument -- quite\nthe contrary.\n\nI wouldn't mind adding something to the docs about fork number. In\nfact it's definitely going to be necessary to have an explanation that\nat least matches the one from the pg_buffercache docs.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 29 Mar 2023 11:29:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 7:40 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision v7 adjusts the column order. This is still WIP, but\n> gives a good idea of the direction I'm going in.\n\nA couple of small tweaks to this appear in the attached revision, v8.\nNow it looks like this:\n\npg@regression:5432 [1294231]=# select * from\npg_get_wal_block_info('0/10E9D80' , '0/10E9DC0');\n┌─[ RECORD 1 ]──────┬────────────────────────────────────┐\n│ start_lsn │ 0/10E9D80 │\n│ end_lsn │ 0/10E9DC0 │\n│ prev_lsn │ 0/10E9860 │\n│ blockid │ 0 │\n│ reltablespace │ 1,663 │\n│ reldatabase │ 1 │\n│ relfilenode │ 2,690 │\n│ relforknumber │ 0 │\n│ relblocknumber │ 5 │\n│ xid │ 117 │\n│ resource_manager │ Btree │\n│ record_type │ INSERT_LEAF │\n│ record_length │ 64 │\n│ main_data_length │ 2 │\n│ block_data_length │ 16 │\n│ block_fpi_length │ 0 │\n│ block_fpi_info │ ∅ │\n│ description │ off 14 │\n│ block_data │ \\x00005400020010001407000000000000 │\n│ block_fpi_data │ ∅ │\n└───────────────────┴────────────────────────────────────┘\n\nThis is similar to what I showed recently for v7. Just two changes:\n\n* The parameter formerly called fpi_data is now called block_fpi_data,\nto enforce the idea that it's block specific (and for consistency with\nthe related block_fpi_length param).\n\n* There is now a new column, which makes the size of block_data\nexplicit: block_data_length\n\nThis made sense on consistency grounds, since we already had a\nblock_fpi_length. But it also seems quite useful. In this example, I\ncan immediately see that this INSERT_LEAF record needed 2 bytes for\nthe block offset number (indicating off 14), and 16 bytes of block\ndata for the IndexTuple data itself. There is a more recognizable\npattern to things, since the size of tuples for a given relation tends\nto be somewhat homogenous. block_data_length also seems like it could\nprovide users with a handy way of filtering out definitely-irrelevant\nblock references.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 29 Mar 2023 12:47:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 12:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> A couple of small tweaks to this appear in the attached revision, v8.\n\nI spent some time on the documentation today, too. Attached is v9,\nwhich seems pretty close to being committable. I hope to commit what I\nhave here (or something very close to it) in the next couple of days.\n\nNote that I've relocated the documentation for pg_get_wal_block_info()\nright after pg_get_wal_records_info(), despite getting some push back\non that before now. It just doesn't make sense to leave it where it\nis, since the documentation now explains the new functionality by\ndirectly comparing the two functions.\n\nI also noticed that the docs were never updated following the end_lsn\nchanges in commit 5c1b6628 (they still said that you needed an end_lsn\nbefore the server's current LSN). I've fixed that in passing, and\nadded a new \"Tip\" that advertises the permissive interpretation around\nend_lsn values in a general sort of way (since it applies equally to\nall but one of the pg_walinspect functions). I've also done a little\nbit of restructuring of some of the other functions, to keep things\nconsistent with what I want to do with pg_get_wal_block_info.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 29 Mar 2023 16:44:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 5:15 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Mar 29, 2023 at 12:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > A couple of small tweaks to this appear in the attached revision, v8.\n>\n> I spent some time on the documentation today, too. Attached is v9,\n> which seems pretty close to being committable. I hope to commit what I\n> have here (or something very close to it) in the next couple of days.\n>\n> Note that I've relocated the documentation for pg_get_wal_block_info()\n> right after pg_get_wal_records_info(), despite getting some push back\n> on that before now. It just doesn't make sense to leave it where it\n> is, since the documentation now explains the new functionality by\n> directly comparing the two functions.\n>\n> I also noticed that the docs were never updated following the end_lsn\n> changes in commit 5c1b6628 (they still said that you needed an end_lsn\n> before the server's current LSN). I've fixed that in passing, and\n> added a new \"Tip\" that advertises the permissive interpretation around\n> end_lsn values in a general sort of way (since it applies equally to\n> all but one of the pg_walinspect functions). I've also done a little\n> bit of restructuring of some of the other functions, to keep things\n> consistent with what I want to do with pg_get_wal_block_info.\n\nI took a look at v9 and LGTM.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 08:58:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 8:28 PM Bharath Rupireddy\r\n<bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> I took a look at v9 and LGTM.\r\n\r\nPushed, thanks.\r\n\r\nThere is still an outstanding question around the overhead of\r\noutputting FPIs and even block data from pg_get_wal_block_info(). At\r\none point Melanie suggested that we'd need to do something about that,\r\nand I tend to agree. Attached patch provides an optional parameter\r\nthat will make pg_get_wal_block_info return NULLs for both block_data\r\nand block_fpi_data, no matter whether or not there is something to\r\nshow. Note that this only affects those two bytea columns; we'll still\r\nshow everything else, including valid block_data_length and\r\nblock_fpi_length values (so the metadata describing the on-disk size\r\nof block_data and block_fpi_data is unaffected).\r\n\r\nTo test this patch, I ran pgbench for about 5 minutes, using a fairly\r\nstandard configuration with added indexes and with wal_log_hints\r\nenabled. 
I ended up with the following WAL records afterwards:\r\n\r\npg@regression:5432 [1402115]=# SELECT\r\n \"resource_manager/record_type\" t,\r\n pg_size_pretty(combined_size) s,\r\n fpi_size_percentage perc_fpi\r\nFROM\r\n pg_get_wal_Stats ('0/10E9D80', 'FFFFFFFF/FFFFFFFF', FALSE) where\r\ncombined_size > 0;\r\n┌─[ RECORD 1 ]──────────────────┐\r\n│ t │ XLOG │\r\n│ s │ 1557 MB │\r\n│ perc_fpi │ 22.029466865781302 │\r\n├─[ RECORD 2 ]──────────────────┤\r\n│ t │ Transaction │\r\n│ s │ 49 MB │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 3 ]──────────────────┤\r\n│ t │ Storage │\r\n│ s │ 13 kB │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 4 ]──────────────────┤\r\n│ t │ CLOG │\r\n│ s │ 1380 bytes │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 5 ]──────────────────┤\r\n│ t │ Database │\r\n│ s │ 118 bytes │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 6 ]──────────────────┤\r\n│ t │ RelMap │\r\n│ s │ 565 bytes │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 7 ]──────────────────┤\r\n│ t │ Standby │\r\n│ s │ 30 kB │\r\n│ perc_fpi │ 0 │\r\n├─[ RECORD 8 ]──────────────────┤\r\n│ t │ Heap2 │\r\n│ s │ 4235 MB │\r\n│ perc_fpi │ 0.6731388657682449 │\r\n├─[ RECORD 9 ]──────────────────┤\r\n│ t │ Heap │\r\n│ s │ 4482 MB │\r\n│ perc_fpi │ 54.46811493602934 │\r\n├─[ RECORD 10 ]─────────────────┤\r\n│ t │ Btree │\r\n│ s │ 1786 MB │\r\n│ perc_fpi │ 22.829279332421116 │\r\n└──────────┴────────────────────┘\r\n\r\nTime: 3618.693 ms (00:03.619)\r\n\r\nSo about 12GB of WAL -- certainly enough to be a challenge for pg_walinspect.\r\n\r\nI then ran the following query several times over the same LSN range\r\nas before, with my patch applied, but with behavior equivalent to\r\ncurrent git HEAD (this is with outputting block_data and\r\nblock_fpi_data values still turned on):\r\n\r\npg@regression:5432 [1402115]=# SELECT\r\n count(*)\r\nFROM\r\n pg_get_wal_block_info ('0/10E9D80', 'FFFFFFFF/FFFFFFFF', false);\r\n┌─[ RECORD 1 ]───────┐\r\n│ count │ 17,031,979 │\r\n└───────┴────────────┘\r\n\r\nTime: 35171.463 ms (00:35.171)\r\n\r\nThe time shown 
here is typical of what I saw.\r\n\r\nAnd now the same query, but without any overhead for outputting\r\nblock_data and block_fpi_data values:\r\n\r\npg@regression:5432 [1402115]=# SELECT\r\n count(*)\r\nFROM\r\n pg_get_wal_block_info ('0/10E9D80', 'FFFFFFFF/FFFFFFFF', true);\r\n┌─[ RECORD 1 ]───────┐\r\n│ count │ 17,031,979 │\r\n└───────┴────────────┘\r\n\r\nTime: 15235.499 ms (00:15.235)\r\n\r\nThis time is also typical of what I saw. The variance was fairly low,\r\nso I won't bother describing it.\r\n\r\nI think that this is a compelling reason to apply the patch. It would\r\nbe possible to get about 75% of the benefit shown here by just\r\nsuppressing block_fpi_data output, without suppressing block_data, but\r\nI think that it makes sense to either suppress both or neither. Things\r\nlike page split records can write a fairly large amount of WAL in a\r\nway that resembles an FPI, even though technically no FPI is involved.\r\n\r\nIf there are no objections, I'll move ahead with committing something\r\nalong the lines of this patch in the next couple of days.\r\n\r\n-- \r\nPeter Geoghegan",
"msg_date": "Thu, 30 Mar 2023 14:41:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 2:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> pg@regression:5432 [1402115]=# SELECT\n> count(*)\n> FROM\n> pg_get_wal_block_info ('0/10E9D80', 'FFFFFFFF/FFFFFFFF', true);\n> ┌─[ RECORD 1 ]───────┐\n> │ count │ 17,031,979 │\n> └───────┴────────────┘\n>\n> Time: 15235.499 ms (00:15.235)\n>\n> This time is also typical of what I saw. The variance was fairly low,\n> so I won't bother describing it.\n\nIf I rerun the same test case with pg_get_wal_records_info (same WAL\nrecords, same system) then I find that it takes about 16 and a half\nseconds. So my patch makes pg_get_wal_block_info a little bit faster\nthan pg_get_wal_records_info for this test case, and likely many\ninteresting cases (assuming that the user opts out of fetching\nblock_data and block_fpi_data values when running\npg_get_wal_block_info, per the patch).\n\nThis result closely matches what I was expecting. We're doing almost\nthe same amount of work when each function is called, so naturally the\nruntime almost matches. Note that pg_get_wal_records_info does\nslightly *more* work here, since it alone must output rows for commit\nrecords. Unlike pg_get_wal_block_info, which (by design) never outputs\nrows for WAL records that lack block references.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Mar 2023 15:25:45 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 2:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is still an outstanding question around the overhead of\n> outputting FPIs and even block data from pg_get_wal_block_info(). At\n> one point Melanie suggested that we'd need to do something about that,\n> and I tend to agree. Attached patch provides an optional parameter\n> that will make pg_get_wal_block_info return NULLs for both block_data\n> and block_fpi_data, no matter whether or not there is something to\n> show. Note that this only affects those two bytea columns; we'll still\n> show everything else, including valid block_data_length and\n> block_fpi_length values (so the metadata describing the on-disk size\n> of block_data and block_fpi_data is unaffected).\n\nI pushed this patch just now. Except that the final commited version\nhad the \"suppress_block_data\" output parameter name flipped. It was\ninverted and renamed to \"show_data\" (and made \"DEFAULT true\"). This is\ncloser to how the pg_stat_statements() function handles a similar\nissue with overly large query texts.\n\nI'm very happy with the end result of the work on this thread. It\nworks a lot better for the sorts of queries I am interested in. Thanks\nto all involved, particularly Bharath.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 31 Mar 2023 14:03:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_walinspect function with block info columns"
}
] |
[
{
"msg_contents": "Hi,\n\nDuring a recent code review, I noticed a lot of 'struct\nLogicalDecodingContext' usage.\n\nThere are many function prototypes where the params are (for no\napparent reason to me) a mixture of structs and typedef structs.\n\nAFAICT just by pre-declaring the typedef struct\nLogicalDecodingContext, all of those 'struct LogicalDecodingContext'\ncan be culled, resulting in cleaner and more consistent function\nsignatures.\n\nThe PG Docs were similarly modified.\n\nPSA patch for this. It passes make check-world.\n\n(I recognize this is potentially the tip of an iceberg. If this patch\nis deemed OK, I can hunt down similar underuse of typedefs for other\nstructs)\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 2 Mar 2023 09:13:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "typedef struct LogicalDecodingContext"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> AFAICT just by pre-declaring the typedef struct\n> LogicalDecodingContext, all of those 'struct LogicalDecodingContext'\n> can be culled, resulting in cleaner and more consistent function\n> signatures.\n\nSadly, this is almost certainly going to cause bitching on the part of\nsome compilers, because depending on the order of header inclusions\nthey are going to see multiple typedefs for the same name. Redundant\n\"struct foo\" declarations are portable C, but redundant \"typedef foo\"\nnot so much.\n\nI also wonder if this passes headerscheck and cpluspluscheck.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 18:04:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > AFAICT just by pre-declaring the typedef struct\n> > LogicalDecodingContext, all of those 'struct LogicalDecodingContext'\n> > can be culled, resulting in cleaner and more consistent function\n> > signatures.\n>\n> Sadly, this is almost certainly going to cause bitching on the part of\n> some compilers, because depending on the order of header inclusions\n> they are going to see multiple typedefs for the same name. Redundant\n> \"struct foo\" declarations are portable C, but redundant \"typedef foo\"\n> not so much.\n>\n\nBah, I should not have used that tip-of-an-iceberg metaphor; it turns\nout I was actually standing on the ship...\n\nSo does your reply mean there is no way really to be sure if such\nchanges are OK or not, other than to push them and then revert them\nif/when one of the BF animals complains? If that is the case, then\nit's not worth the hassle to pursue this any further.\n\n> I also wonder if this passes headerscheck and cpluspluscheck.\n\nThanks for pointing me to those - I didn't know about them.\n\nAside: Is there missing documentation for those targets here:\nhttps://www.postgresql.org/docs/devel/regress.html\n\n~\n\nFWIW, both those tests passed OK. 
What does \"pass\" even mean -- does\nit confirm this patch doesn't suffer the multiple typedef problem you\nanticipated after all?\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ make headerscheck\nmake -C ./src/backend generated-headers\nmake[1]: Entering directory `/home/postgres/oss_postgres_misc/src/backend'\nmake -C catalog distprep generated-header-symlinks\nmake[2]: Entering directory\n`/home/postgres/oss_postgres_misc/src/backend/catalog'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory\n`/home/postgres/oss_postgres_misc/src/backend/catalog'\nmake -C nodes distprep generated-header-symlinks\nmake[2]: Entering directory `/home/postgres/oss_postgres_misc/src/backend/nodes'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend/nodes'\nmake -C utils distprep generated-header-symlinks\nmake[2]: Entering directory `/home/postgres/oss_postgres_misc/src/backend/utils'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend/utils'\nmake[1]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend'\n./src/tools/pginclude/headerscheck . 
/home/postgres/oss_postgres_misc\n[postgres@CentOS7-x64 oss_postgres_misc]$\n\n\n[postgres@CentOS7-x64 oss_postgres_misc]$ make cpluspluscheck\nmake -C ./src/backend generated-headers\nmake[1]: Entering directory `/home/postgres/oss_postgres_misc/src/backend'\nmake -C catalog distprep generated-header-symlinks\nmake[2]: Entering directory\n`/home/postgres/oss_postgres_misc/src/backend/catalog'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory\n`/home/postgres/oss_postgres_misc/src/backend/catalog'\nmake -C nodes distprep generated-header-symlinks\nmake[2]: Entering directory `/home/postgres/oss_postgres_misc/src/backend/nodes'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend/nodes'\nmake -C utils distprep generated-header-symlinks\nmake[2]: Entering directory `/home/postgres/oss_postgres_misc/src/backend/utils'\nmake[2]: Nothing to be done for `distprep'.\nmake[2]: Nothing to be done for `generated-header-symlinks'.\nmake[2]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend/utils'\nmake[1]: Leaving directory `/home/postgres/oss_postgres_misc/src/backend'\n./src/tools/pginclude/cpluspluscheck . /home/postgres/oss_postgres_misc\n[postgres@CentOS7-x64 oss_postgres_misc]$\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:54:22 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> On Thu, Mar 2, 2023 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Sadly, this is almost certainly going to cause bitching on the part of\n>> some compilers, because depending on the order of header inclusions\n>> they are going to see multiple typedefs for the same name. Redundant\n>> \"struct foo\" declarations are portable C, but redundant \"typedef foo\"\n>> not so much.\n\n> So does your reply mean there is no way really to be sure if such\n> changes are OK or not, other than to push them and then revert them\n> if/when one of the BF animals complains?\n\nWe know which compilers don't like that, I believe, but you'd have\nto dig in the commit log or mail archives to find out.\n\n[ ... pokes around ... ] The commit log entries I could find about\nthis suggest that (at least) older gcc versions complain. Maybe\nthose are all gone from the buildfarm now, but I wouldn't bet on it.\nWe were fixing this sort of thing as recently as aa3ac6453.\n\n>> I also wonder if this passes headerscheck and cpluspluscheck.\n\n> FWIW, both those tests passed OK. What does \"pass\" even mean -- does\n> it confirm this patch doesn't suffer the multiple typedef problem you\n> anticipated after all?\n\nNo, those have nothing to do with duplicate typedefs. headerscheck is\nabout whether anything is dependent on inclusion order, which I wondered\nabout for this patch. cpluspluscheck is about whether C++ compilers will\nspit up on any of our headers (due to, eg, identifiers that are C++\nkeywords); we try to keep them clean for the benefit of people who write\nextensions in C++. I wouldn't have expected cpluspluscheck to show\nanything new with this patch, but people tend to always run these tools\ntogether.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 20:15:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "I wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n>> On Thu, Mar 2, 2023 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Sadly, this is almost certainly going to cause bitching on the part of\n>>> some compilers, because depending on the order of header inclusions\n>>> they are going to see multiple typedefs for the same name.\n\n>> So does your reply mean there is no way really to be sure if such\n>> changes are OK or not, other than to push them and then revert them\n>> if/when one of the BF animals complains?\n\n> We know which compilers don't like that, I believe, but you'd have\n> to dig in the commit log or mail archives to find out.\n\nI looked into the C standard to see what I could find about this.\nC99 specifically describes the use of \"struct foo\" to forward-declare\na struct type whose meaning will be provided later. It also says\n\n [#8] If a type specifier of the form\n struct-or-union identifier\n or\n enum identifier\n occurs other than as part of one of the above forms, and a\n declaration of the identifier as a tag is visible, then it\n specifies the same type as that other declaration, and does\n not redeclare the tag.\n\nwhich appears to me to specifically authorize the appearance of\nmultiple forward declarations. On the other hand, no such wording\nappears for typedefs; they're just plain identifiers with the same\nscope rules as other identifiers. Maybe later versions of the C\nspec clarify this, but I think duplicate typedefs are pretty\nclearly not OK per C99. Perhaps with sufficiently tight warning\nor language-version options, you could get modern gcc or clang to\ncomplain about it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 20:40:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "I wrote:\n> Maybe later versions of the C\n> spec clarify this, but I think duplicate typedefs are pretty\n> clearly not OK per C99.\n\nFurther research shows that C11 allows this, but it's definitely\nnot okay in C99, which is still our reference standard.\n\n> Perhaps with sufficiently tight warning\n> or language-version options, you could get modern gcc or clang to\n> complain about it.\n\nclang seems to do so as soon as you restrict it to C99:\n\n$ cat dup.c\ntypedef int foo;\ntypedef int foo;\n$ clang -c -std=gnu99 dup.c\ndup.c:2:13: warning: redefinition of typedef 'foo' is a C11 feature [-Wtypedef-redefinition]\ntypedef int foo;\n ^\ndup.c:1:13: note: previous definition is here\ntypedef int foo;\n ^\n1 warning generated.\n\nI couldn't get gcc to issue a similar warning without resorting\nto -Wpedantic, which of course whines about a ton of other stuff.\n\nI'm a little inclined to see if I can turn on -std=gnu99 on my\nclang-based buildfarm animals. I use that with gcc for my\nnormal development activities, but now that I see that clang\ncatches some things gcc doesn't ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 21:16:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 12:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> >> On Thu, Mar 2, 2023 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Sadly, this is almost certainly going to cause bitching on the part of\n> >>> some compilers, because depending on the order of header inclusions\n> >>> they are going to see multiple typedefs for the same name.\n>\n> >> So does your reply mean there is no way really to be sure if such\n> >> changes are OK or not, other than to push them and then revert them\n> >> if/when one of the BF animals complains?\n>\n> > We know which compilers don't like that, I believe, but you'd have\n> > to dig in the commit log or mail archives to find out.\n>\n> I looked into the C standard to see what I could find about this.\n> C99 specifically describes the use of \"struct foo\" to forward-declare\n> a struct type whose meaning will be provided later. It also says\n>\n> [#8] If a type specifier of the form\n> struct-or-union identifier\n> or\n> enum identifier\n> occurs other than as part of one of the above forms, and a\n> declaration of the identifier as a tag is visible, then it\n> specifies the same type as that other declaration, and does\n> not redeclare the tag.\n>\n> which appears to me to specifically authorize the appearance of\n> multiple forward declarations. On the other hand, no such wording\n> appears for typedefs; they're just plain identifiers with the same\n> scope rules as other identifiers. Maybe later versions of the C\n> spec clarify this, but I think duplicate typedefs are pretty\n> clearly not OK per C99. Perhaps with sufficiently tight warning\n> or language-version options, you could get modern gcc or clang to\n> complain about it.\n\nI was reading this post [1], and more specifically, this specification\nnote [2] which seems to explain things\n\nApparently, not all C99 compilers can be assumed to work using the\nstrict C99 rules. 
So I will abandon this idea.\n\nThanks for your replies.\n\n------\n[1] https://stackoverflow.com/questions/26240370/why-are-typedef-identifiers-allowed-to-be-declared-multiple-times/26240595#26240595\n[2] https://www.open-std.org/jtc1/sc22/wg14/www/docs/n1360.htm\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Mar 2023 13:17:18 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> Apparently, not all C99 compilers can be assumed to work using the\n> strict C99 rules.\n\nWhile googling this issue I came across a statement that clang currently\ndefaults to C17 rules. Even relatively old compilers might default to\nC11. But considering how long we held on to C89, I doubt we'll want\nto move the project minimum to C11 for some years yet.\n\n> So I will abandon this idea.\n\nThere might still be room to do something here, just not quite\nthat way. Maybe some actual header refactoring is called for?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 21:46:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "I wrote:\n> I'm a little inclined to see if I can turn on -std=gnu99 on my\n> clang-based buildfarm animals. I use that with gcc for my\n> normal development activities, but now that I see that clang\n> catches some things gcc doesn't ...\n\nFTR: done on sifaka and longfin.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Mar 2023 22:00:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "On 02.03.23 04:00, Tom Lane wrote:\n> I wrote:\n>> I'm a little inclined to see if I can turn on -std=gnu99 on my\n>> clang-based buildfarm animals. I use that with gcc for my\n>> normal development activities, but now that I see that clang\n>> catches some things gcc doesn't ...\n> \n> FTR: done on sifaka and longfin.\n\nmylodon already does something similar.\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:45:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
},
{
"msg_contents": "On 02.03.23 03:46, Tom Lane wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n>> Apparently, not all C99 compilers can be assumed to work using the\n>> strict C99 rules.\n> \n> While googling this issue I came across a statement that clang currently\n> defaults to C17 rules. Even relatively old compilers might default to\n> C11. But considering how long we held on to C89, I doubt we'll want\n> to move the project minimum to C11 for some years yet.\n\nWe need to wait until we de-support Visual Studio older than 2019. \n(Current minimum is 2015 (changed from 2013 for PG16).)\n\n\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:49:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: typedef struct LogicalDecodingContext"
}
] |
[
{
"msg_contents": "Greetings,\n\nIn [1] I proposed a patch that used a GUC to request a list of OID's to be\nreturned in binary format.\nIn [2] Peter Eisentraut proposed a very similar solution to the problem.\n\nIn [2] there was some discussion regarding whether this should be set via\nGUC or a new protocol message.\n\nI'd like to open up this discussion again so that we can move forward. I\nprefer the GUC as it is relatively simple and as Peter mentioned it works,\nbut I'm not married to the idea.\n\nRegards,\nDave\n\n[1] PostgreSQL: Proposal to provide the facility to set binary format\noutput for specific OID's per session\n<https://www.postgresql.org/message-id/CADK3HHJxQ8ydLj98u7M0NGFh3x%3DrgoG9MVx8T6AanMbor2HTzw%40mail.gmail.com>\n[2] PostgreSQL: default result formats setting\n<https://www.postgresql.org/message-id/40cbb35d-774f-23ed-3079-03f938aacdae%402ndquadrant.com>\nDave Cramer\n\nGreetings,In [1] I proposed a patch that used a GUC to request a list of OID's to be returned in binary format. In [2] Peter Eisentraut proposed a very similar solution to the problem.In [2] there was some discussion regarding whether this should be set via GUC or a new protocol message. I'd like to open up this discussion again so that we can move forward. I prefer the GUC as it is relatively simple and as Peter mentioned it works, but I'm not married to the idea. Regards,Dave[1] PostgreSQL: Proposal to provide the facility to set binary format output for specific OID's per session[2] PostgreSQL: default result formats settingDave Cramer",
"msg_date": "Thu, 2 Mar 2023 09:13:36 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Thu, 2023-03-02 at 09:13 -0500, Dave Cramer wrote:\n> I'd like to open up this discussion again so that we can\n> move forward. I prefer the GUC as it is relatively simple and as\n> Peter mentioned it works, but I'm not married to the idea. \n\nIt's not very friendly to extensions, where the types are not\nguaranteed to have stable OIDs. Did you consider any proposals that\nwork with type names?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 04 Mar 2023 08:35:11 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sat, 4 Mar 2023 at 11:35, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Thu, 2023-03-02 at 09:13 -0500, Dave Cramer wrote:\n> > I'd like to open up this discussion again so that we can\n> > move forward. I prefer the GUC as it is relatively simple and as\n> > Peter mentioned it works, but I'm not married to the idea.\n>\n> It's not very friendly to extensions, where the types are not\n> guaranteed to have stable OIDs. Did you consider any proposals that\n> work with type names?\n>\n\nI had not.\nMost of the clients know how to decode the builtin types. I'm not sure\nthere is a use case for binary encode types that the clients don't have a\npriori knowledge of.\n\nDave\n\n>\n> Regards,\n> Jeff Davis\n>\n>\n\nDave CramerOn Sat, 4 Mar 2023 at 11:35, Jeff Davis <pgsql@j-davis.com> wrote:On Thu, 2023-03-02 at 09:13 -0500, Dave Cramer wrote:\n> I'd like to open up this discussion again so that we can\n> move forward. I prefer the GUC as it is relatively simple and as\n> Peter mentioned it works, but I'm not married to the idea. \n\nIt's not very friendly to extensions, where the types are not\nguaranteed to have stable OIDs. Did you consider any proposals that\nwork with type names?I had not. Most of the clients know how to decode the builtin types. I'm not sure there is a use case for binary encode types that the clients don't have a priori knowledge of.Dave \n\nRegards,\n Jeff Davis",
"msg_date": "Sat, 4 Mar 2023 18:04:22 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n> Most of the clients know how to decode the builtin types. I'm not\n> sure there is a use case for binary encode types that the clients\n> don't have a priori knowledge of.\n\nThe client could, in theory, have a priori knowledge of a non-builtin\ntype.\n\nI don't have a great solution for that, though. Maybe it's only\npractical for builtin types.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 04 Mar 2023 15:58:19 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> Most of the clients know how to decode the builtin types. I'm not\n>> sure there is a use case for binary encode types that the clients\n>> don't have a priori knowledge of.\n\n> The client could, in theory, have a priori knowledge of a non-builtin\n> type.\n\nI don't see what's \"in theory\" about that. There seems plenty of\nuse for binary I/O of, say, PostGIS types. Even for built-in types,\ndo we really want to encourage people to hard-wire their OIDs into\napplications?\n\nI don't see a big problem with driving this off a GUC, but I think\nit should be a list of type names not OIDs. We already have plenty\nof precedent for dealing with that sort of thing; see search_path\nfor the canonical example. IIRC, there's similar caching logic\nfor temp_tablespaces.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Mar 2023 19:06:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sat, Mar 4, 2023 at 5:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n> >> Most of the clients know how to decode the builtin types. I'm not\n> >> sure there is a use case for binary encode types that the clients\n> >> don't have a priori knowledge of.\n>\n> > The client could, in theory, have a priori knowledge of a non-builtin\n> > type.\n>\n> I don't see what's \"in theory\" about that. There seems plenty of\n> use for binary I/O of, say, PostGIS types. Even for built-in types,\n> do we really want to encourage people to hard-wire their OIDs into\n> applications?\n>\n> I don't see a big problem with driving this off a GUC, but I think\n> it should be a list of type names not OIDs. We already have plenty\n> of precedent for dealing with that sort of thing; see search_path\n> for the canonical example. IIRC, there's similar caching logic\n> for temp_tablespaces.\n>\n>\nThis seems slightly different since types depend upon schemas whereas\nsearch_path is top-level and tablespaces are global. But I agree that\nnames should be accepted, maybe in addition to OIDs, the latter, for core\ntypes in particular, being a way to not have to worry about masking in\nuser-space.\n\nDavid J.\n\nOn Sat, Mar 4, 2023 at 5:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> Most of the clients know how to decode the builtin types. I'm not\n>> sure there is a use case for binary encode types that the clients\n>> don't have a priori knowledge of.\n\n> The client could, in theory, have a priori knowledge of a non-builtin\n> type.\n\nI don't see what's \"in theory\" about that. There seems plenty of\nuse for binary I/O of, say, PostGIS types. 
Even for built-in types,\ndo we really want to encourage people to hard-wire their OIDs into\napplications?\n\nI don't see a big problem with driving this off a GUC, but I think\nit should be a list of type names not OIDs. We already have plenty\nof precedent for dealing with that sort of thing; see search_path\nfor the canonical example. IIRC, there's similar caching logic\nfor temp_tablespaces.This seems slightly different since types depend upon schemas whereas search_path is top-level and tablespaces are global. But I agree that names should be accepted, maybe in addition to OIDs, the latter, for core types in particular, being a way to not have to worry about masking in user-space.David J.",
"msg_date": "Sat, 4 Mar 2023 17:13:13 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Jeff Davis <pgsql@j-davis.com> writes:\n> > On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n> >> Most of the clients know how to decode the builtin types. I'm not\n> >> sure there is a use case for binary encode types that the clients\n> >> don't have a priori knowledge of.\n>\n> > The client could, in theory, have a priori knowledge of a non-builtin\n> > type.\n>\n> I don't see what's \"in theory\" about that. There seems plenty of\n> use for binary I/O of, say, PostGIS types. Even for built-in types,\n> do we really want to encourage people to hard-wire their OIDs into\n> applications?\n>\n\nHow does a client read these? I'm pretty narrowly focussed. The JDBC API\ndoesn't really have a way to read a non built-in type. There is a facility\nto read a UDT, but the user would have to provide that transcoder. I guess\nI'm curious how other clients read binary UDT's ?\n\n>\n> I don't see a big problem with driving this off a GUC, but I think\n> it should be a list of type names not OIDs. We already have plenty\n> of precedent for dealing with that sort of thing; see search_path\n> for the canonical example. IIRC, there's similar caching logic\n> for temp_tablespaces.\n>\n\nI have no issue with allowing names, OID's were compact, but we could\neasily support both\n\nDave\n\nOn Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> Most of the clients know how to decode the builtin types. I'm not\n>> sure there is a use case for binary encode types that the clients\n>> don't have a priori knowledge of.\n\n> The client could, in theory, have a priori knowledge of a non-builtin\n> type.\n\nI don't see what's \"in theory\" about that. There seems plenty of\nuse for binary I/O of, say, PostGIS types. 
Even for built-in types,\ndo we really want to encourage people to hard-wire their OIDs into\napplications?How does a client read these? I'm pretty narrowly focussed. The JDBC API doesn't really have a way to read a non built-in type. There is a facility to read a UDT, but the user would have to provide that transcoder. I guess I'm curious how other clients read binary UDT's ?\n\nI don't see a big problem with driving this off a GUC, but I think\nit should be a list of type names not OIDs. We already have plenty\nof precedent for dealing with that sort of thing; see search_path\nfor the canonical example. IIRC, there's similar caching logic\nfor temp_tablespaces.I have no issue with allowing names, OID's were compact, but we could easily support bothDave",
"msg_date": "Sat, 4 Mar 2023 19:39:23 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sat, 4 Mar 2023 at 19:39, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Jeff Davis <pgsql@j-davis.com> writes:\n>> > On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> >> Most of the clients know how to decode the builtin types. I'm not\n>> >> sure there is a use case for binary encode types that the clients\n>> >> don't have a priori knowledge of.\n>>\n>> > The client could, in theory, have a priori knowledge of a non-builtin\n>> > type.\n>>\n>> I don't see what's \"in theory\" about that. There seems plenty of\n>> use for binary I/O of, say, PostGIS types. Even for built-in types,\n>> do we really want to encourage people to hard-wire their OIDs into\n>> applications?\n>>\n>\n> How does a client read these? I'm pretty narrowly focussed. The JDBC API\n> doesn't really have a way to read a non built-in type. There is a facility\n> to read a UDT, but the user would have to provide that transcoder. I guess\n> I'm curious how other clients read binary UDT's ?\n>\n>>\n>> I don't see a big problem with driving this off a GUC, but I think\n>> it should be a list of type names not OIDs. We already have plenty\n>> of precedent for dealing with that sort of thing; see search_path\n>> for the canonical example. IIRC, there's similar caching logic\n>> for temp_tablespaces.\n>>\n>\n> I have no issue with allowing names, OID's were compact, but we could\n> easily support both\n>\n\nAttached is a preliminary patch that takes a list of OID's. I'd like to\nknow if this is going in the right direction.\n\nNext step would be to deal with type names as opposed to OID's.\nThis will be a bit more challenging as type names are schema specific.\n\nDave\n\n>",
"msg_date": "Mon, 13 Mar 2023 16:33:05 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 2023-03-13 at 16:33 -0400, Dave Cramer wrote:\n> Attached is a preliminary patch that takes a list of OID's. I'd like\n> to know if this is going in the right direction.\n\nI found a few issues:\n\n1. Some kind of memory error:\n\n SET format_binary='25,1082,1184';\n WARNING: problem in alloc set PortalContext: detected write past\nchunk end in block 0x55ba7b5f7610, chunk 0x55ba7b5f7a48\n ...\n SET\n\n2. Easy to confuse psql:\n\n CREATE TABLE a(d date, t timestamptz);\n SET format_binary='25,1082,1184';\n SELECT * FROM a;\n d | t \n ---+---\n ! | \n (1 row)\n\n3. Some style issues\n - use of \"//\" comments\n - findOid should return bool, not int\n\nWhen you add support for user-defined types, that introduces a couple\nother issues:\n\n4. The format_binary GUC would depend on the search_path GUC, which\nisn't great.\n\n5. There's a theoretical invalidation problem. It might also be a\npractical problem in some testing setups with long-lived connections\nthat are recreating user-defined types.\n\n\nWe've had this problem with binary for a long time, and it seems\ndesirable to solve it. But I'm not sure GUCs are the right way.\n\nHow hard did you try to solve it in the protocol rather than with a\nGUC? I see that the startup message allows protocol extensions by\nprefixing a parameter name with \"_pq_\". Are protocol extensions\ndocumented somewhere and would that be a reasonable thing to do here?\n\nAlso, if we're going to make the binary format more practical to use,\ncan we document the expectations better? It seems the expecatation is\nthat the binary format just never changes, and that if it does, that's\na new type name.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 10:04:42 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "+Paul Ramsey\n\nOn Mon, 20 Mar 2023 at 13:05, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Mon, 2023-03-13 at 16:33 -0400, Dave Cramer wrote:\n> > Attached is a preliminary patch that takes a list of OID's. I'd like\n> > to know if this is going in the right direction.\n>\n>\nThanks for the review. I'm curious what system you are running on as I\ndon't see any of these errors.\n\n> I found a few issues:\n>\n> 1. Some kind of memory error:\n>\n> SET format_binary='25,1082,1184';\n> WARNING: problem in alloc set PortalContext: detected write past\n> chunk end in block 0x55ba7b5f7610, chunk 0x55ba7b5f7a48\n> ...\n> SET\n>\n2. Easy to confuse psql:\n>\n> CREATE TABLE a(d date, t timestamptz);\n> SET format_binary='25,1082,1184';\n> SELECT * FROM a;\n> d | t\n> ---+---\n> ! |\n> (1 row)\n>\n> Well I'm guessing psql doesn't know how to read date or timestamptz in\nbinary. This is not a failing of the code.\n\n\n> 3. Some style issues\n> - use of \"//\" comments\n> - findOid should return bool, not int\n>\n> Sure will fix see attached patch\n\n> When you add support for user-defined types, that introduces a couple\n> other issues:\n>\n> 4. The format_binary GUC would depend on the search_path GUC, which\n> isn't great.\n>\nThis is an interesting question. If the type isn't visible then it's not\nvisible to the query so\n\n>\n> 5. There's a theoretical invalidation problem. It might also be a\n> practical problem in some testing setups with long-lived connections\n> that are recreating user-defined types.\n>\nUDT's seem to be a problem here which candidly have very little use case\nfor binary output.\n\n>\n>\n> We've had this problem with binary for a long time, and it seems\n> desirable to solve it. But I'm not sure GUCs are the right way.\n>\n> How hard did you try to solve it in the protocol rather than with a\n> GUC? I see that the startup message allows protocol extensions by\n> prefixing a parameter name with \"_pq_\". 
Are protocol extensions\n> documented somewhere and would that be a reasonable thing to do here?\n>\n\nI didn't try to solve it as Tom was OK with using a GUC. Using a startup\nGUC is interesting,\nbut how would that work with pools where we want to reset the connection\nwhen we return it and then\nset the binary format on borrow ? By using a GUC when a client borrows a\nconnection from a pool the client\ncan reconfigure the oids it wants formatted in binary.\n\n>\n> Also, if we're going to make the binary format more practical to use,\n> can we document the expectations better?\n\nYes we can do that.\n\n> It seems the expecatation is\n> that the binary format just never changes, and that if it does, that's\n> a new type name.\n>\n> I really hadn't considered supporting type names. I have asked Paul\nRamsey about PostGIS and he doesn't see PostGIS using this.\n\n\n> Regards,\n> Jeff Davis\n>\n>",
"msg_date": "Mon, 20 Mar 2023 14:36:25 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 2023-03-20 at 10:04 -0700, Jeff Davis wrote:\n> CREATE TABLE a(d date, t timestamptz);\n> SET format_binary='25,1082,1184';\n> SELECT * FROM a;\n> d | t \n> ---+---\n> ! | \n> (1 row)\n\nOops, missing the following statement after the CREATE TABLE:\n\n INSERT INTO a VALUES('1234-01-01', '2023-03-20 09:00:00');\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 11:41:15 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Mon, 20 Mar 2023 at 13:05, Jeff Davis <pgsql@j-davis.com> wrote:\n>> 2. Easy to confuse psql:\n>> \n>> CREATE TABLE a(d date, t timestamptz);\n>> SET format_binary='25,1082,1184';\n>> SELECT * FROM a;\n>> d | t\n>> ---+---\n>> ! |\n>> (1 row)\n>> \n>> Well I'm guessing psql doesn't know how to read date or timestamptz in\n>> binary. This is not a failing of the code.\n\nWhat it is is a strong suggestion that controlling this via a GUC is\nnot a great choice. There are many inappropriate (wrong abstraction\nlevel) ways to change a GUC and thereby break a client that's not\nexpecting binary output. I think Jeff's suggestion that we should\ntreat this as a protocol extension might be a good idea.\n\nIf I recall the protocol-extension design correctly, such a setting\ncould only be set at session start, which could be annoying --- at the\nvery least we'd have to tolerate entries for unrecognized data types,\nsince clients couldn't be expected to have checked the list against\nthe current server in advance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:09:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 2023-03-20 at 14:36 -0400, Dave Cramer wrote:\n> Thanks for the review. I'm curious what system you are running on as\n> I don't see any of these errors. \n\nAre asserts enabled?\n\n> Well I'm guessing psql doesn't know how to read date or timestamptz\n> in binary. This is not a failing of the code.\n\nIt seems strange, and potentially dangerous, to send binary data to a\nclient that's not expecting it. It feels too easy to cause confusion by\nchanging the GUC mid-session.\n\nAlso, it seems like DISCARD ALL is not resetting it, which I think is a\nbug.\n\n> \n> This is an interesting question. If the type isn't visible then it's\n> not visible to the query so \n\nI don't think that's true -- the type could be in a different schema\nfrom the table.\n\n> > \n> > 5. There's a theoretical invalidation problem. It might also be a\n> > practical problem in some testing setups with long-lived\n> > connections\n> > that are recreating user-defined types.\n> > \n> \n> UDT's seem to be a problem here which candidly have very little use\n> case for binary output. \n\nI mostly agree with that, but it also might not be hard to support\nUDTs. Is there a design problem here or is it \"just a matter of code\"?\n\n> \n> I didn't try to solve it as Tom was OK with using a GUC. Using a\n> startup GUC is interesting, \n> but how would that work with pools where we want to reset the\n> connection when we return it and then\n> set the binary format on borrow ? By using a GUC when a client\n> borrows a connection from a pool the client\n> can reconfigure the oids it wants formatted in binary.\n\nThat's a good point. How common is it to share a connection pool\nbetween different clients (some of which might support a binary format,\nand others which don't)? And would the connection pool put connections\nwith and without the property in different pools?\n\n> \n> I really hadn't considered supporting type names. 
I have asked Paul\n> Ramsey about PostGIS and he doesn't see PostGIS using this.\n\nOne of the things I like about Postgres is that the features all work\ntogether, and that user-defined objects are generally as good as built-\nin ones. Sometimes there's a reason to make a special case (e.g. syntax\nsupport or something), but in this case it seems like we could support\nuser-defined types just fine, right? It's also just more friendly and\nreadable to use type names, especially if it's a GUC.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 12:09:50 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> Dave Cramer\n>\n>\n> On Sat, 4 Mar 2023 at 19:39, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>> On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>>> Jeff Davis <pgsql@j-davis.com> writes:\n>>> > On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>>> >> Most of the clients know how to decode the builtin types. I'm not\n>>> >> sure there is a use case for binary encode types that the clients\n>>> >> don't have a priori knowledge of.\n>>>\n>>> > The client could, in theory, have a priori knowledge of a non-builtin\n>>> > type.\n>>>\n>>> I don't see what's \"in theory\" about that. There seems plenty of\n>>> use for binary I/O of, say, PostGIS types. Even for built-in types,\n>>> do we really want to encourage people to hard-wire their OIDs into\n>>> applications?\n>>>\n>>\n>> How does a client read these? I'm pretty narrowly focussed. The JDBC API\n>> doesn't really have a way to read a non built-in type. There is a facility\n>> to read a UDT, but the user would have to provide that transcoder. I guess\n>> I'm curious how other clients read binary UDT's ?\n>>\n>>>\n>>> I don't see a big problem with driving this off a GUC, but I think\n>>> it should be a list of type names not OIDs. We already have plenty\n>>> of precedent for dealing with that sort of thing; see search_path\n>>> for the canonical example. IIRC, there's similar caching logic\n>>> for temp_tablespaces.\n>>>\n>>\n>> I have no issue with allowing names, OID's were compact, but we could\n>> easily support both\n>>\n>\n> Attached is a preliminary patch that takes a list of OID's. I'd like to\n> know if this is going in the right direction.\n>\n> Next step would be to deal with type names as opposed to OID's.\n> This will be a bit more challenging as type names are schema specific.\n>\n\nOIDs are a pain to deal with IMO. 
They will not survive a dump style\nrestore, and are hard to keep synchronized between databases...type names\ndon't have this problem. OIDs are an implementation artifact that ought\nnot need any extra dependency.\n\nThis seems like a protocol or even a driver issue rather than a GUC issue.\nWhy does the server need to care what format the client might want to\nprefer on a query by query basis? I just don't see it. The resultformat\nswitch in libpq works pretty well, except that it's \"all in\" on getting\ndata from the server, with the dead simple workaround of casting to text\nwhich might even be able to be managed from within the driver itself.\n\nmerlin\n\nOn Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:Dave CramerOn Sat, 4 Mar 2023 at 19:39, Dave Cramer <davecramer@gmail.com> wrote:On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> Most of the clients know how to decode the builtin types. I'm not\n>> sure there is a use case for binary encode types that the clients\n>> don't have a priori knowledge of.\n\n> The client could, in theory, have a priori knowledge of a non-builtin\n> type.\n\nI don't see what's \"in theory\" about that. There seems plenty of\nuse for binary I/O of, say, PostGIS types. Even for built-in types,\ndo we really want to encourage people to hard-wire their OIDs into\napplications?How does a client read these? I'm pretty narrowly focussed. The JDBC API doesn't really have a way to read a non built-in type. There is a facility to read a UDT, but the user would have to provide that transcoder. I guess I'm curious how other clients read binary UDT's ?\n\nI don't see a big problem with driving this off a GUC, but I think\nit should be a list of type names not OIDs. We already have plenty\nof precedent for dealing with that sort of thing; see search_path\nfor the canonical example. 
IIRC, there's similar caching logic\nfor temp_tablespaces.I have no issue with allowing names, OID's were compact, but we could easily support bothAttached is a preliminary patch that takes a list of OID's. I'd like to know if this is going in the right direction.Next step would be to deal with type names as opposed to OID's. This will be a bit more challenging as type names are schema specific.OIDs are a pain to deal with IMO. They will not survive a dump style restore, and are hard to keep synchronized between databases...type names don't have this problem. OIDs are an implementation artifact that ought not need any extra dependency. This seems like a protocol or even a driver issue rather than a GUC issue. Why does the server need to care what format the client might want to prefer on a query by query basis? I just don't see it. The resultformat switch in libpq works pretty well, except that it's \"all in\" on getting data from the server, with the dead simple workaround of casting to text which might even be able to be managed from within the driver itself. merlin",
"msg_date": "Mon, 20 Mar 2023 18:10:22 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:\n\n>\n>\n> On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>> Dave Cramer\n>>\n>>\n>> On Sat, 4 Mar 2023 at 19:39, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>>> Jeff Davis <pgsql@j-davis.com> writes:\n>>>> > On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>>>> >> Most of the clients know how to decode the builtin types. I'm not\n>>>> >> sure there is a use case for binary encode types that the clients\n>>>> >> don't have a priori knowledge of.\n>>>>\n>>>> > The client could, in theory, have a priori knowledge of a non-builtin\n>>>> > type.\n>>>>\n>>>> I don't see what's \"in theory\" about that. There seems plenty of\n>>>> use for binary I/O of, say, PostGIS types. Even for built-in types,\n>>>> do we really want to encourage people to hard-wire their OIDs into\n>>>> applications?\n>>>>\n>>>\n>>> How does a client read these? I'm pretty narrowly focussed. The JDBC API\n>>> doesn't really have a way to read a non built-in type. There is a facility\n>>> to read a UDT, but the user would have to provide that transcoder. I guess\n>>> I'm curious how other clients read binary UDT's ?\n>>>\n>>>>\n>>>> I don't see a big problem with driving this off a GUC, but I think\n>>>> it should be a list of type names not OIDs. We already have plenty\n>>>> of precedent for dealing with that sort of thing; see search_path\n>>>> for the canonical example. IIRC, there's similar caching logic\n>>>> for temp_tablespaces.\n>>>>\n>>>\n>>> I have no issue with allowing names, OID's were compact, but we could\n>>> easily support both\n>>>\n>>\n>> Attached is a preliminary patch that takes a list of OID's. 
I'd like to\n>> know if this is going in the right direction.\n>>\n>> Next step would be to deal with type names as opposed to OID's.\n>> This will be a bit more challenging as type names are schema specific.\n>>\n>\n> OIDs are a pain to deal with IMO. They will not survive a dump style\n> restore, and are hard to keep synchronized between databases...type names\n> don't have this problem. OIDs are an implementation artifact that ought\n> not need any extra dependency.\n>\nAFAIK, OID's for built-in types don't change.\nClearly we need more thought on how to deal with UDT's\n\n>\n>\n> This seems like a protocol or even a driver issue rather than a GUC issue.\n> Why does the server need to care what format the client might want to\n> prefer on a query by query basis?\n>\n\nActually this isn't a query by query basis. The point of this is that the\nclient wants all the results for given OID's in binary.\n\n\n> I just don't see it. The resultformat switch in libpq works pretty well,\n> except that it's \"all in\" on getting data from the server, with the dead\n> simple workaround of casting to text which might even be able to be managed\n> from within the driver itself.\n>\n> merlin\n>\n>\n>\n\nOn Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:Dave CramerOn Sat, 4 Mar 2023 at 19:39, Dave Cramer <davecramer@gmail.com> wrote:On Sat, 4 Mar 2023 at 19:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:Jeff Davis <pgsql@j-davis.com> writes:\n> On Sat, 2023-03-04 at 18:04 -0500, Dave Cramer wrote:\n>> Most of the clients know how to decode the builtin types. I'm not\n>> sure there is a use case for binary encode types that the clients\n>> don't have a priori knowledge of.\n\n> The client could, in theory, have a priori knowledge of a non-builtin\n> type.\n\nI don't see what's \"in theory\" about that. There seems plenty of\nuse for binary I/O of, say, PostGIS types. 
Even for built-in types,\ndo we really want to encourage people to hard-wire their OIDs into\napplications?How does a client read these? I'm pretty narrowly focussed. The JDBC API doesn't really have a way to read a non built-in type. There is a facility to read a UDT, but the user would have to provide that transcoder. I guess I'm curious how other clients read binary UDT's ?\n\nI don't see a big problem with driving this off a GUC, but I think\nit should be a list of type names not OIDs. We already have plenty\nof precedent for dealing with that sort of thing; see search_path\nfor the canonical example. IIRC, there's similar caching logic\nfor temp_tablespaces.I have no issue with allowing names, OID's were compact, but we could easily support bothAttached is a preliminary patch that takes a list of OID's. I'd like to know if this is going in the right direction.Next step would be to deal with type names as opposed to OID's. This will be a bit more challenging as type names are schema specific.OIDs are a pain to deal with IMO. They will not survive a dump style restore, and are hard to keep synchronized between databases...type names don't have this problem. OIDs are an implementation artifact that ought not need any extra dependency.AFAIK, OID's for built-in types don't change. Clearly we need more thought on how to deal with UDT's This seems like a protocol or even a driver issue rather than a GUC issue. Why does the server need to care what format the client might want to prefer on a query by query basis? Actually this isn't a query by query basis. The point of this is that the client wants all the results for given OID's in binary. I just don't see it. The resultformat switch in libpq works pretty well, except that it's \"all in\" on getting data from the server, with the dead simple workaround of casting to text which might even be able to be managed from within the driver itself. merlin",
"msg_date": "Mon, 20 Mar 2023 20:11:37 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Mon, 20 Mar 2023 at 15:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > On Mon, 20 Mar 2023 at 13:05, Jeff Davis <pgsql@j-davis.com> wrote:\n> >> 2. Easy to confuse psql:\n> >>\n> >> CREATE TABLE a(d date, t timestamptz);\n> >> SET format_binary='25,1082,1184';\n> >> SELECT * FROM a;\n> >> d | t\n> >> ---+---\n> >> ! |\n> >> (1 row)\n> >>\n> >> Well I'm guessing psql doesn't know how to read date or timestamptz in\n> >> binary. This is not a failing of the code.\n>\n> What it is is a strong suggestion that controlling this via a GUC is\n> not a great choice. There are many inappropriate (wrong abstraction\n> level) ways to change a GUC and thereby break a client that's not\n> expecting binary output. I think Jeff's suggestion that we should\n> treat this as a protocol extension might be a good idea.\n>\n> If I recall the protocol-extension design correctly, such a setting\n> could only be set at session start, which could be annoying --- at the\n> very least we'd have to tolerate entries for unrecognized data types,\n> since clients couldn't be expected to have checked the list against\n> the current server in advance.\n>\n\nAs mentioned for connection pools we need to be able to set these after the\nsession starts.\nI'm not sure how useful the protocol extension mechanism works given that\nit can only be used on startup.\n\n>\n> regards, tom lane\n>\n\nDave CramerOn Mon, 20 Mar 2023 at 15:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> On Mon, 20 Mar 2023 at 13:05, Jeff Davis <pgsql@j-davis.com> wrote:\n>> 2. Easy to confuse psql:\n>> \n>> CREATE TABLE a(d date, t timestamptz);\n>> SET format_binary='25,1082,1184';\n>> SELECT * FROM a;\n>> d | t\n>> ---+---\n>> ! |\n>> (1 row)\n>> \n>> Well I'm guessing psql doesn't know how to read date or timestamptz in\n>> binary. 
This is not a failing of the code.\n\nWhat it is is a strong suggestion that controlling this via a GUC is\nnot a great choice. There are many inappropriate (wrong abstraction\nlevel) ways to change a GUC and thereby break a client that's not\nexpecting binary output. I think Jeff's suggestion that we should\ntreat this as a protocol extension might be a good idea.\n\nIf I recall the protocol-extension design correctly, such a setting\ncould only be set at session start, which could be annoying --- at the\nvery least we'd have to tolerate entries for unrecognized data types,\nsince clients couldn't be expected to have checked the list against\nthe current server in advance.As mentioned for connection pools we need to be able to set these after the session starts.I'm not sure how useful the protocol extension mechanism works given that it can only be used on startup. \n\n regards, tom lane",
"msg_date": "Mon, 20 Mar 2023 20:16:18 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 15:09, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Mon, 2023-03-20 at 14:36 -0400, Dave Cramer wrote:\n> > Thanks for the review. I'm curious what system you are running on as\n> > I don't see any of these errors.\n>\n> Are asserts enabled?\n>\n> > Well I'm guessing psql doesn't know how to read date or timestamptz\n> > in binary. This is not a failing of the code.\n>\n> It seems strange, and potentially dangerous, to send binary data to a\n> client that's not expecting it. It feels too easy to cause confusion by\n> changing the GUC mid-session.\n>\n> Also, it seems like DISCARD ALL is not resetting it, which I think is a\n> bug.\n>\nThanks yes, this is a bug\n\n>\n> >\n> > This is an interesting question. If the type isn't visible then it's\n> > not visible to the query so\n>\n> I don't think that's true -- the type could be in a different schema\n> from the table.\n\n\nGood point. This seems to be the very difficult part.\n\n>\n\n\n> > >\n> > > 5. There's a theoretical invalidation problem. It might also be a\n> > > practical problem in some testing setups with long-lived\n> > > connections\n> > > that are recreating user-defined types.\n> > >\n> >\n> > UDT's seem to be a problem here which candidly have very little use\n> > case for binary output.\n>\n> I mostly agree with that, but it also might not be hard to support\n> UDTs. Is there a design problem here or is it \"just a matter of code\"?\n>\n> >\n> > I didn't try to solve it as Tom was OK with using a GUC. Using a\n> > startup GUC is interesting,\n> > but how would that work with pools where we want to reset the\n> > connection when we return it and then\n> > set the binary format on borrow ? By using a GUC when a client\n> > borrows a connection from a pool the client\n> > can reconfigure the oids it wants formatted in binary.\n>\n> That's a good point. 
How common is it to share a connection pool\n> between different clients (some of which might support a binary format,\n> and others which don't)? And would the connection pool put connections\n> with and without the property in different pools?\n>\n\nFor JAVA pools it's probably OK, but for pools like pgbouncer we have no\ncontrol of who is going to get the connection next.\n\n\n>\n> >\n> > I really hadn't considered supporting type names. I have asked Paul\n> > Ramsey about PostGIS and he doesn't see PostGIS using this.\n>\n> One of the things I like about Postgres is that the features all work\n> together, and that user-defined objects are generally as good as built-\n> in ones. Sometimes there's a reason to make a special case (e.g. syntax\n> support or something), but in this case it seems like we could support\n> user-defined types just fine, right? It's also just more friendly and\n> readable to use type names, especially if it's a GUC.\n>\n> Regards,\n> Jeff Davis\n>\n>\n\nOn Mon, 20 Mar 2023 at 15:09, Jeff Davis <pgsql@j-davis.com> wrote:On Mon, 2023-03-20 at 14:36 -0400, Dave Cramer wrote:\n> Thanks for the review. I'm curious what system you are running on as\n> I don't see any of these errors. \n\nAre asserts enabled?\n\n> Well I'm guessing psql doesn't know how to read date or timestamptz\n> in binary. This is not a failing of the code.\n\nIt seems strange, and potentially dangerous, to send binary data to a\nclient that's not expecting it. It feels too easy to cause confusion by\nchanging the GUC mid-session.\n\nAlso, it seems like DISCARD ALL is not resetting it, which I think is a\nbug.Thanks yes, this is a bug \n\n> \n> This is an interesting question. If the type isn't visible then it's\n> not visible to the query so \n\nI don't think that's true -- the type could be in a different schema\nfrom the table.Good point. This seems to be the very difficult part. \n\n> > \n> > 5. There's a theoretical invalidation problem. 
It might also be a\n> > practical problem in some testing setups with long-lived\n> > connections\n> > that are recreating user-defined types.\n> > \n> \n> UDT's seem to be a problem here which candidly have very little use\n> case for binary output. \n\nI mostly agree with that, but it also might not be hard to support\nUDTs. Is there a design problem here or is it \"just a matter of code\"?\n\n> \n> I didn't try to solve it as Tom was OK with using a GUC. Using a\n> startup GUC is interesting, \n> but how would that work with pools where we want to reset the\n> connection when we return it and then\n> set the binary format on borrow ? By using a GUC when a client\n> borrows a connection from a pool the client\n> can reconfigure the oids it wants formatted in binary.\n\nThat's a good point. How common is it to share a connection pool\nbetween different clients (some of which might support a binary format,\nand others which don't)? And would the connection pool put connections\nwith and without the property in different pools?For JAVA pools it's probably OK, but for pools like pgbouncer we have no control of who is going to get the connection next. \n\n> \n> I really hadn't considered supporting type names. I have asked Paul\n> Ramsey about PostGIS and he doesn't see PostGIS using this.\n\nOne of the things I like about Postgres is that the features all work\ntogether, and that user-defined objects are generally as good as built-\nin ones. Sometimes there's a reason to make a special case (e.g. syntax\nsupport or something), but in this case it seems like we could support\nuser-defined types just fine, right? It's also just more friendly and\nreadable to use type names, especially if it's a GUC.\n\nRegards,\n Jeff Davis",
"msg_date": "Mon, 20 Mar 2023 20:18:58 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n>\n> On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n>>\n>>\n>> On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>> OIDs are a pain to deal with IMO. They will not survive a dump style\n>> restore, and are hard to keep synchronized between databases...type names\n>> don't have this problem. OIDs are an implementation artifact that ought\n>> not need any extra dependency.\n>>\n> AFAIK, OID's for built-in types don't change.\n> Clearly we need more thought on how to deal with UDT's\n>\n\nYeah. Not having a solution that handles arrays and composites though\nwould feel pretty incomplete since they would be the one of the main\nbeneficiaries from a performance standpoint. I guess minimally you'd\nneed to expose some mechanic to look up oids, but being able to\nspecify \"foo\".\"bar\", in the GUC would be pretty nice (albeit a lot more\nwork).\n\n\n> This seems like a protocol or even a driver issue rather than a GUC issue.\n>> Why does the server need to care what format the client might want to\n>> prefer on a query by query basis?\n>>\n>\n> Actually this isn't a query by query basis. The point of this is that the\n> client wants all the results for given OID's in binary.\n>\n\nYep. Your rationale is starting to click. How would this interact with\nexisting code bases? I get that JDBC is the main target, but how does this\ninteract with libpq code that explicitly sets resultformat? 
Perhaps the\nanswer should be as it shouldn't change documented behavior, and a\nhypothetical resultformat=2 could be reserved to default to text but allow\nfor server control, and 3 as the same but default to binary.\n\nmerlin\n\nOn Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:OIDs are a pain to deal with IMO. They will not survive a dump style restore, and are hard to keep synchronized between databases...type names don't have this problem. OIDs are an implementation artifact that ought not need any extra dependency.AFAIK, OID's for built-in types don't change. Clearly we need more thought on how to deal with UDT's Yeah. Not having a solution that handles arrays and composites though would feel pretty incomplete since they would be the one of the main beneficiaries from a performance standpoint. I guess minimally you'd need to expose some mechanic to look up oids, but being able to specify \"foo\".\"bar\", in the GUC would be pretty nice (albeit a lot more work). This seems like a protocol or even a driver issue rather than a GUC issue. Why does the server need to care what format the client might want to prefer on a query by query basis? Actually this isn't a query by query basis. The point of this is that the client wants all the results for given OID's in binary. Yep. Your rationale is starting to click. How would this interact with existing code bases? I get that JDBC is the main target, but how does this interact with libpq code that explicitly sets resultformat? Perhaps the answer should be as it shouldn't change documented behavior, and a hypothetical resultformat=2 could be reserved to default to text but allow for server control, and 3 as the same but default to binary.merlin",
"msg_date": "Tue, 21 Mar 2023 06:35:30 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 07:35, Merlin Moncure <mmoncure@gmail.com> wrote:\n\n>\n>\n> On Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>>\n>>\n>> On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:\n>>\n>>>\n>>>\n>>> On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com>\n>>> wrote:\n>>>\n>>>>\n>>>> OIDs are a pain to deal with IMO. They will not survive a dump style\n>>> restore, and are hard to keep synchronized between databases...type names\n>>> don't have this problem. OIDs are an implementation artifact that ought\n>>> not need any extra dependency.\n>>>\n>> AFAIK, OID's for built-in types don't change.\n>> Clearly we need more thought on how to deal with UDT's\n>>\n>\n> Yeah. Not having a solution that handles arrays and composites though\n> would feel pretty incomplete since they would be the one of the main\n> beneficiaries from a performance standpoint.\n>\nI don't think arrays of built-in types are a problem; drivers already know\nhow to deal with these.\n\n\n> I guess minimally you'd need to expose some mechanic to look up oids, but\n> being able to specify \"foo\".\"bar\", in the GUC would be pretty nice (albeit\n> a lot more work).\n>\n\nAs Jeff mentioned there is a visibility problem if the search path is\nchanged. The simplest solution IMO is to look up the OID at the time the\nformat is requested and use the OID going forward to format the output as\nbinary. If the search path changes and a type with the same name is now\nfirst in the search path then the data would be returned in text.\n\n\n>\n\n>\n>> This seems like a protocol or even a driver issue rather than a GUC\n>>> issue. Why does the server need to care what format the client might want\n>>> to prefer on a query by query basis?\n>>>\n>>\n>> Actually this isn't a query by query basis. The point of this is that the\n>> client wants all the results for given OID's in binary.\n>>\n>\n> Yep. 
Your rationale is starting to click. How would this interact with\n> existing code bases?\n>\nActually JDBC wasn't the first to ask for this. Default result formats\nshould be settable per session · postgresql-interfaces/enhancement-ideas ·\nDiscussion #5 (github.com)\n<https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5> I've\ntested it with JDBC and it requires no code changes on our end. Jack tested\nit and it required no code changes on his end either. He did some\nperformance tests and found \"At 100 rows the text format takes 48% longer\nthan the binary format.\"\nhttps://github.com/postgresql-interfaces/enhancement-ideas/discussions/5#discussioncomment-3188599\n\nI get that JDBC is the main target, but how does this interact with libpq\n> code that explicitly sets resultformat?\n>\nHonestly I have no idea how it would function with libpq. I presume if the\nclient did not request binary format then things would work as they do\ntoday.\n\n\nDave\n\n>\n>\n\nOn Tue, 21 Mar 2023 at 07:35, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:OIDs are a pain to deal with IMO. They will not survive a dump style restore, and are hard to keep synchronized between databases...type names don't have this problem. OIDs are an implementation artifact that ought not need any extra dependency.AFAIK, OID's for built-in types don't change. Clearly we need more thought on how to deal with UDT's Yeah. Not having a solution that handles arrays and composites though would feel pretty incomplete since they would be the one of the main beneficiaries from a performance standpoint. I don't think arrays of built-in types are a problem; drivers already know how to deal with these. 
I guess minimally you'd need to expose some mechanic to look up oids, but being able to specify \"foo\".\"bar\", in the GUC would be pretty nice (albeit a lot more work).As Jeff mentioned there is a visibility problem if the search path is changed. The simplest solution IMO is to look up the OID at the time the format is requested and use the OID going forward to format the output as binary. If the search path changes and a type with the same name is now first in the search path then the data would be returned in text. This seems like a protocol or even a driver issue rather than a GUC issue. Why does the server need to care what format the client might want to prefer on a query by query basis? Actually this isn't a query by query basis. The point of this is that the client wants all the results for given OID's in binary. Yep. Your rationale is starting to click. How would this interact with existing code bases? Actually JDBC wasn't the first to ask for this. Default result formats should be settable per session · postgresql-interfaces/enhancement-ideas · Discussion #5 (github.com) I've tested it with JDBC and it requires no code changes on our end. Jack tested it and it required no code changes on his end either. He did some performance tests and found \"At 100 rows the text format takes 48% longer than the binary format.\" https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5#discussioncomment-3188599 I get that JDBC is the main target, but how does this interact with libpq code that explicitly sets resultformat? Honestly I have no idea how it would function with libpq. I presume if the client did not request binary format then things would work as they do today.Dave",
"msg_date": "Tue, 21 Mar 2023 09:22:01 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 8:22 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> On Tue, 21 Mar 2023 at 07:35, Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n>>\n>>\n>> On Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>>\n>>>\n>>> On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:\n>>>\n>>>>\n>>>>\n>>>> On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com>\n>>>> wrote:\n>>>>\n>>>>>\n>>>>> OIDs are a pain to deal with IMO. They will not survive a dump style\n>>>> restore, and are hard to keep synchronized between databases...type names\n>>>> don't have this problem. OIDs are an implementation artifact that ought\n>>>> not need any extra dependency.\n>>>>\n>>> AFAIK, OID's for built-in types don't change.\n>>> Clearly we need more thought on how to deal with UDT's\n>>>\n>>\n>> Yeah. Not having a solution that handles arrays and composites though\n>> would feel pretty incomplete since they would be the one of the main\n>> beneficiaries from a performance standpoint.\n>>\n> I don't think arrays of built-in types are a problem; drivers already know\n> how to deal with these.\n>\n>\n>> I guess minimally you'd need to expose some mechanic to look up oids, but\n>> being able to specify \"foo\".\"bar\", in the GUC would be pretty nice (albeit\n>> a lot more work).\n>>\n>\n> As Jeff mentioned there is a visibility problem if the search path is\n> changed.\n>\n\nOnly if the name is not fully qualified. By allowing OID to bypass\nvisibility, it stands to reason visibility ought to be bypassed for type\nrequests as well, or at least be able to be. If we are setting things in\nGUC, that suggests we can establish things in postgresql.conf, and oids\nfeel out of place there.\n\nYep. Your rationale is starting to click. How would this interact with\n>> existing code bases?\n>>\n> Actually JDBC wasn't the first to ask for this. 
Default result formats\n> should be settable per session · postgresql-interfaces/enhancement-ideas ·\n> Discussion #5 (github.com)\n> <https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5> I've\n> tested it with JDBC and it requires no code changes on our end. Jack tested\n> it and it required no code changes on his end either. He did some\n> performance tests and found \"At 100 rows the text format takes 48% longer\n> than the binary format.\"\n> https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5#discussioncomment-3188599\n>\n\nYeah, the general need is very clear IMO.\n\n\n> I get that JDBC is the main target, but how does this interact with libpq\n>> code that explicitly sets resultformat?\n>>\n> Honestly I have no idea how it would function with libpq. I presume if the\n> client did not request binary format then things would work as they do\n> today.\n>\n\nI see your argument here, but IMO this is another can of nudge away from\nGUC, unless you're willing to establish that behavior. Thinking here is\nthat the GUC wouldn't do anything for libpq, uses cases, and couldn't,\nsince resultformat would be overriding the behavior in all interesting\ncases...it seems odd to implement server side specified behavior that the\nclient library doesn't implement.\n\nmerlin\n\nOn Tue, Mar 21, 2023 at 8:22 AM Dave Cramer <davecramer@gmail.com> wrote:On Tue, 21 Mar 2023 at 07:35, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 20, 2023 at 7:11 PM Dave Cramer <davecramer@gmail.com> wrote:On Mon, 20 Mar 2023 at 19:10, Merlin Moncure <mmoncure@gmail.com> wrote:On Mon, Mar 13, 2023 at 3:33 PM Dave Cramer <davecramer@gmail.com> wrote:OIDs are a pain to deal with IMO. They will not survive a dump style restore, and are hard to keep synchronized between databases...type names don't have this problem. OIDs are an implementation artifact that ought not need any extra dependency.AFAIK, OID's for built-in types don't change. 
Clearly we need more thought on how to deal with UDT's Yeah. Not having a solution that handles arrays and composites though would feel pretty incomplete since they would be the one of the main beneficiaries from a performance standpoint. I don't think arrays of built-in types are a problem; drivers already know how to deal with these. I guess minimally you'd need to expose some mechanic to look up oids, but being able to specify \"foo\".\"bar\", in the GUC would be pretty nice (albeit a lot more work).As Jeff mentioned there is a visibility problem if the search path is changed. Only if the name is not fully qualified. By allowing OID to bypass visibility, it stands to reason visibility ought to be bypassed for type requests as well, or at least be able to be. If we are setting things in GUC, that suggests we can establish things in postgresql.conf, and oids feel out of place there.Yep. Your rationale is starting to click. How would this interact with existing code bases? Actually JDBC wasn't the first to ask for this. Default result formats should be settable per session · postgresql-interfaces/enhancement-ideas · Discussion #5 (github.com) I've tested it with JDBC and it requires no code changes on our end. Jack tested it and it required no code changes on his end either. He did some performance tests and found \"At 100 rows the text format takes 48% longer than the binary format.\" https://github.com/postgresql-interfaces/enhancement-ideas/discussions/5#discussioncomment-3188599Yeah, the general need is very clear IMO. I get that JDBC is the main target, but how does this interact with libpq code that explicitly sets resultformat? Honestly I have no idea how it would function with libpq. I presume if the client did not request binary format then things would work as they do today.I see your argument here, but IMO this is another can of nudge away from GUC, unless you're willing to establish that behavior. 
Thinking here is that the GUC wouldn't do anything for libpq, uses cases, and couldn't, since resultformat would be overriding the behavior in all interesting cases...it seems odd to implement server side specified behavior that the client library doesn't implement. merlin",
"msg_date": "Tue, 21 Mar 2023 09:57:18 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 2023-03-20 at 20:18 -0400, Dave Cramer wrote:\n> For JAVA pools it's probably OK, but for pools like pgbouncer we have\n> no control of who is going to get the connection next.\n\nCan pgbouncer use different pools for different settings of\nformat_binary?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 21 Mar 2023 08:52:06 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 11:52, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Mon, 2023-03-20 at 20:18 -0400, Dave Cramer wrote:\n> > For JAVA pools it's probably OK, but for pools like pgbouncer we have\n> > no control of who is going to get the connection next.\n>\n> Can pgbouncer use different pools for different settings of\n> format_binary?\n>\n>\nMy concern here is that if I can only change binary format in the startup\nparameter then when I return the connection to the pool I would expect the\npool to reset all session level settings including binary format.\nThe next time I borrow the connection I can no longer set binary format.\n\n\n>\n\nOn Tue, 21 Mar 2023 at 11:52, Jeff Davis <pgsql@j-davis.com> wrote:On Mon, 2023-03-20 at 20:18 -0400, Dave Cramer wrote:\n> For JAVA pools it's probably OK, but for pools like pgbouncer we have\n> no control of who is going to get the connection next.\n\nCan pgbouncer use different pools for different settings of\nformat_binary?\nMy concern here is that if I can only change binary format in the startup parameter then when I return the connection to the pool I would expect the pool to reset all session level settings including binary format. The next time I borrow the connection I can no longer set binary format.",
"msg_date": "Tue, 21 Mar 2023 12:21:03 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 2023-03-21 at 09:22 -0400, Dave Cramer wrote:\n> As Jeff mentioned there is a visibility problem if the search path is\n> changed. The simplest solution IMO is to look up the OID at the time\n> the format is requested and use the OID going forward to format the\n> output as binary. If the search path changes and a type with the same\n> name is now first in the search path then the data would be returned\n> in text. \n\nThe binary format parameter would ordinarily be set by the maintainer\nof the client library, who knows nothing about the schema the client\nmight be accessing, and nothing about the search_path that might be\nset. They would only know which binary parsers they've already written\nand included with their client library.\n\nWith that in mind, using search_path at all seems weird. Why would a\nchange in search_path affect which types the client library knows how\nto parse? If the client library knows how to parse \"foo.mytype\"'s\nbinary representation, and you change the search path such that it\nfinds \"bar.mytype\" instead, did the client library all of a sudden\nforget how to parse \"foo.mytype\" and learn to parse \"bar.mytype\"?\n\nIf there's some extension that offers type \"mytype\", and perhaps allows\nit to be installed in any schema, then it seems that the client library\nwould know how to parse all instances of \"mytype\" regardless of the\nschema or search_path.\n\nOf course, a potential problem is that ordinary users can create types\n(e.g. enum types) and so you'd have to be careful about some tricks\nwhere someone shadows a well-known extension in order to confuse the\nclient with unexpected binary data (not sure if that's a security\nconcern or not, just thinking out loud).\n\nOne solution might be that unqualified type names would work on all\ntypes of that name (in any schema) that are owned by a superuser,\nregardless of search_path. Most extension scripts will be run as\nsuperuser anyway. 
It would feel a little magical, which I don't like,\nbut would work in any practical case I can think of.\n\nAnother solution would be to have some extra catalog field in pg_type\nthat would be a \"binary format identifier\" and use that rather than the\ntype name to match up binary parsers with the proper type.\n\nAm I over-thinking this?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 21 Mar 2023 14:47:52 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 04.03.23 17:35, Jeff Davis wrote:\n> On Thu, 2023-03-02 at 09:13 -0500, Dave Cramer wrote:\n>> I'd like to open up this discussion again so that we can\n>> move forward. I prefer the GUC as it is relatively simple and as\n>> Peter mentioned it works, but I'm not married to the idea.\n> \n> It's not very friendly to extensions, where the types are not\n> guaranteed to have stable OIDs. Did you consider any proposals that\n> work with type names?\n\nSending type names is kind of useless if what comes back with the result \n(RowDescription) are OIDs anyway.\n\nThe client would presumably have some code like\n\nif (typeoid == 555)\n parseThatType();\n\nSo it already needs to know about the OIDs of all the types it is \ninterested in.\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:12:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 20.03.23 18:04, Jeff Davis wrote:\n> 2. Easy to confuse psql:\n> \n> CREATE TABLE a(d date, t timestamptz);\n> SET format_binary='25,1082,1184';\n> SELECT * FROM a;\n> d | t\n> ---+---\n> ! |\n> (1 row)\n\nYou can already send binary data to psql using DECLARE BINARY CURSOR. \nIt might be sensible to have psql check that the data it is getting is \ntext format before trying to print it.\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:14:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 20.03.23 18:04, Jeff Davis wrote:\n> Also, if we're going to make the binary format more practical to use,\n> can we document the expectations better? It seems the expecatation is\n> that the binary format just never changes, and that if it does, that's\n> a new type name.\n\nI've been thinking that we need some new kind of identifier to allow \nclients to process types in more sophisticated ways.\n\nFor example, each type could be (self-)assigned a UUID, which is fixed \nfor that type no matter in which schema or under what extension name or \nwith what OID it is installed. Client libraries could then hardcode \nthat UUID for processing the types. Conversely, the UUID could be \nchanged if the wire format of the type is changed, without having to \nchange the type name.\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:21:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "If there's some extension that offers type \"mytype\", and perhaps allows\nit to be installed in any schema, then it seems that the client library\nwould know how to parse all instances of \"mytype\" regardless of the\nschema or search_path.\n\nI may be overthinking this.\n\nDave Cramer\n\n\nOn Tue, 21 Mar 2023 at 17:47, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Tue, 2023-03-21 at 09:22 -0400, Dave Cramer wrote:\n> > As Jeff mentioned there is a visibility problem if the search path is\n> > changed. The simplest solution IMO is to look up the OID at the time\n> > the format is requested and use the OID going forward to format the\n> > output as binary. If the search path changes and a type with the same\n> > name is now first in the search path then the data would be returned\n> > in text.\n>\n> The binary format parameter would ordinarily be set by the maintainer\n> of the client library, who knows nothing about the schema the client\n> might be accessing, and nothing about the search_path that might be\n> set. They would only know which binary parsers they've already written\n> and included with their client library.\n>\n> With that in mind, using search_path at all seems weird. Why would a\n> change in search_path affect which types the client library knows how\n> to parse? If the client library knows how to parse \"foo.mytype\"'s\n> binary representation, and you change the search path such that it\n> finds \"bar.mytype\" instead, did the client library all of a sudden\n> forget how to parse \"foo.mytype\" and learn to parse \"bar.mytype\"?\n>\n> If there's some extension that offers type \"mytype\", and perhaps allows\n> it to be installed in any schema, then it seems that the client library\n> would know how to parse all instances of \"mytype\" regardless of the\n> schema or search_path.\n>\n> Of course, a potential problem is that ordinary users can create types\n> (e.g. 
enum types) and so you'd have to be careful about some tricks\n> where someone shadows a well-known extension in order to confuse the\n> client with unexpected binary data (not sure if that's a security\n> concern or not, just thinking out loud).\n>\n> One solution might be that unqualified type names would work on all\n> types of that name (in any schema) that are owned by a superuser,\n> regardless of search_path. Most extension scripts will be run as\n> superuser anyway. It would feel a little magical, which I don't like,\n> but would work in any practical case I can think of.\n>\n> Another solution would be to have some extra catalog field in pg_type\n> that would be a \"binary format identifier\" and use that rather than the\n> type name to match up binary parsers with the proper type.\n>\n> Am I over-thinking this?\n>\n> Regards,\n> Jeff Davis\n>\n>\n\nIf there's some extension that offers type \"mytype\", and perhaps allowsit to be installed in any schema, then it seems that the client librarywould know how to parse all instances of \"mytype\" regardless of theschema or search_path.I may be overthinking this.Dave CramerOn Tue, 21 Mar 2023 at 17:47, Jeff Davis <pgsql@j-davis.com> wrote:On Tue, 2023-03-21 at 09:22 -0400, Dave Cramer wrote:\n> As Jeff mentioned there is a visibility problem if the search path is\n> changed. The simplest solution IMO is to look up the OID at the time\n> the format is requested and use the OID going forward to format the\n> output as binary. If the search path changes and a type with the same\n> name is now first in the search path then the data would be returned\n> in text. \n\nThe binary format parameter would ordinarily be set by the maintainer\nof the client library, who knows nothing about the schema the client\nmight be accessing, and nothing about the search_path that might be\nset. 
They would only know which binary parsers they've already written\nand included with their client library.\n\nWith that in mind, using search_path at all seems weird. Why would a\nchange in search_path affect which types the client library knows how\nto parse? If the client library knows how to parse \"foo.mytype\"'s\nbinary representation, and you change the search path such that it\nfinds \"bar.mytype\" instead, did the client library all of a sudden\nforget how to parse \"foo.mytype\" and learn to parse \"bar.mytype\"?\n\nIf there's some extension that offers type \"mytype\", and perhaps allows\nit to be installed in any schema, then it seems that the client library\nwould know how to parse all instances of \"mytype\" regardless of the\nschema or search_path.\n\nOf course, a potential problem is that ordinary users can create types\n(e.g. enum types) and so you'd have to be careful about some tricks\nwhere someone shadows a well-known extension in order to confuse the\nclient with unexpected binary data (not sure if that's a security\nconcern or not, just thinking out loud).\n\nOne solution might be that unqualified type names would work on all\ntypes of that name (in any schema) that are owned by a superuser,\nregardless of search_path. Most extension scripts will be run as\nsuperuser anyway. It would feel a little magical, which I don't like,\nbut would work in any practical case I can think of.\n\nAnother solution would be to have some extra catalog field in pg_type\nthat would be a \"binary format identifier\" and use that rather than the\ntype name to match up binary parsers with the proper type.\n\nAm I over-thinking this?\n\nRegards,\n Jeff Davis",
"msg_date": "Wed, 22 Mar 2023 08:14:57 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "If I recall the protocol-extension design correctly, such a setting\ncould only be set at session start, which could be annoying --- at the\nvery least we'd have to tolerate entries for unrecognized data types,\nsince clients couldn't be expected to have checked the list against\nthe current server in advance.\n\nThe protocol extension design has the drawback that it can only be set at\nstartup.\nWhat if we were to allow changes to the setting after startup if the client\npassed the cancel key as a unique identifier that only the driver would\nknow?\n\nDave Cramer\n\n\n\n>\n>\n\nIf I recall the protocol-extension design correctly, such a settingcould only be set at session start, which could be annoying --- at thevery least we'd have to tolerate entries for unrecognized data types,since clients couldn't be expected to have checked the list againstthe current server in advance.The protocol extension design has the drawback that it can only be set at startup. What if we were to allow changes to the setting after startup if the client passed the cancel key as a unique identifier that only the driver would know? Dave Cramer",
"msg_date": "Wed, 22 Mar 2023 08:18:25 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 2023-03-22 at 10:21 +0100, Peter Eisentraut wrote:\n> I've been thinking that we need some new kind of identifier to allow \n> clients to process types in more sophisticated ways.\n> \n> For example, each type could be (self-)assigned a UUID, which is\n> fixed \n> for that type no matter in which schema or under what extension name\n> or \n> with what OID it is installed. Client libraries could then hardcode \n> that UUID for processing the types. Conversely, the UUID could be \n> changed if the wire format of the type is changed, without having to \n> change the type name.\n\nThat sounds reasonable to me. It could also be useful for other\nextension objects (or the extension itself) to avoid other kinds of\nweirdness from name collisions or major version updates or extensions\nthat depend on other extensions.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 10:56:53 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 2023-03-22 at 10:12 +0100, Peter Eisentraut wrote:\n> Sending type names is kind of useless if what comes back with the\n> result \n> (RowDescription) are OIDs anyway.\n> \n> The client would presumably have some code like\n> \n> if (typeoid == 555)\n> parseThatType();\n> \n> So it already needs to know about the OIDs of all the types it is \n> interested in.\n\nTechnically it's still an improvement because you can avoid an extra\nround-trip. The client library can pipeline a query like:\n\n SELECT typname, oid FROM pg_type\n WHERE typname IN (...list of supported type names...);\n\nwhen the client first connects, and then go ahead and send whatever\nqueries you want without waiting for the response. When you get back\nthe result of the pg_type query, you cache the mapping, and use it to\nprocess any other results you get.\n\nThat avoids introducing an extra round-trip. I'm not sure if that's a\nreasonable thing to expect the client to do, so I agree that we should\noffer a better way.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 11:05:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Wed, 2023-03-22 at 10:21 +0100, Peter Eisentraut wrote:\n>> I've been thinking that we need some new kind of identifier to allow \n>> clients to process types in more sophisticated ways.\n>> For example, each type could be (self-)assigned a UUID, which is fixed \n>> for that type no matter in which schema or under what extension name or \n>> with what OID it is installed. Client libraries could then hardcode \n>> that UUID for processing the types. Conversely, the UUID could be \n>> changed if the wire format of the type is changed, without having to \n>> change the type name.\n\n> That sounds reasonable to me. It could also be useful for other\n> extension objects (or the extension itself) to avoid other kinds of\n> weirdness from name collisions or major version updates or extensions\n> that depend on other extensions.\n\nThis isn't going to help much unless we change the wire protocol\nso that RowDescription messages carry these UUIDs instead of\n(or in addition to?) the OIDs of the column datatypes. While\nthat's not completely out of the question, it's a heavy lift\nthat will affect multiple layers of client code along with the\nserver.\n\nAlso, what about container types? I doubt it's sane for\narray-of-foo to have a UUID that's unrelated to the one for foo.\nComposites and ranges would need some intelligence too if we\ndon't want them to be unduly complicated to process.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Mar 2023 14:42:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 2023-03-22 at 14:42 -0400, Tom Lane wrote:\n> This isn't going to help much unless we change the wire protocol\n> so that RowDescription messages carry these UUIDs instead of\n> (or in addition to?) the OIDs of the column datatypes. While\n> that's not completely out of the question, it's a heavy lift\n> that will affect multiple layers of client code along with the\n> server.\n\nI'm not sure that's a hard requirement. I pointed out a similar\nsolution for type names here:\n\nhttps://www.postgresql.org/message-id/4297b9e310172b9a1e6d737e21ad8796d0ab7b03.camel@j-davis.com\n\nIn other words: if the Bind message depends on knowing the OID\nmappings, that forces an extra round-trip; but if the client doesn't\nneed the mapping until it receives its first result, then it can use\npipelining to avoid the extra round-trip.\n\n(I haven't actually tried it and I don't know if it's very reasonable\nto expect the client to do this.)\n\n> Also, what about container types? I doubt it's sane for\n> array-of-foo to have a UUID that's unrelated to the one for foo.\n> Composites and ranges would need some intelligence too if we\n> don't want them to be unduly complicated to process.\n\nThat's a good point. I don't know if that is a major design issue or\nnot; but it certainly adds complexity to the proposal and/or clients\nimplementing it.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 22 Mar 2023 12:23:18 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 22 Mar 2023 at 15:23, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2023-03-22 at 14:42 -0400, Tom Lane wrote:\n> > This isn't going to help much unless we change the wire protocol\n> > so that RowDescription messages carry these UUIDs instead of\n> > (or in addition to?) the OIDs of the column datatypes. While\n> > that's not completely out of the question, it's a heavy lift\n> > that will affect multiple layers of client code along with the\n> > server.\n>\n> I'm not sure that's a hard requirement. I pointed out a similar\n> solution for type names here:\n>\n>\n> https://www.postgresql.org/message-id/4297b9e310172b9a1e6d737e21ad8796d0ab7b03.camel@j-davis.com\n>\n> In other words: if the Bind message depends on knowing the OID\n> mappings, that forces an extra round-trip; but if the client doesn't\n> need the mapping until it receives its first result, then it can use\n> pipelining to avoid the extra round-trip.\n>\n\nThis overcomplicates things for the JDBC driver. We don't pipeline queries,\nwell we do for batch queries but those are special.\n\n\n> (I haven't actually tried it and I don't know if it's very reasonable\n> to expect the client to do this.)\n>\n> > Also, what about container types? I doubt it's sane for\n> > array-of-foo to have a UUID that's unrelated to the one for foo.\n> > Composites and ranges would need some intelligence too if we\n> > don't want them to be unduly complicated to process.\n>\n> That's a good point. 
I don't know if that is a major design issue or\n> not; but it certainly adds complexity to the proposal and/or clients\n> implementing it.\n>\n\nSo where do we go from here ?\n\nI can implement using type names as well as OID's\n\nDave\n\nOn Wed, 22 Mar 2023 at 15:23, Jeff Davis <pgsql@j-davis.com> wrote:On Wed, 2023-03-22 at 14:42 -0400, Tom Lane wrote:\n> This isn't going to help much unless we change the wire protocol\n> so that RowDescription messages carry these UUIDs instead of\n> (or in addition to?) the OIDs of the column datatypes. While\n> that's not completely out of the question, it's a heavy lift\n> that will affect multiple layers of client code along with the\n> server.\n\nI'm not sure that's a hard requirement. I pointed out a similar\nsolution for type names here:\n\nhttps://www.postgresql.org/message-id/4297b9e310172b9a1e6d737e21ad8796d0ab7b03.camel@j-davis.com\n\nIn other words: if the Bind message depends on knowing the OID\nmappings, that forces an extra round-trip; but if the client doesn't\nneed the mapping until it receives its first result, then it can use\npipelining to avoid the extra round-trip.This overcomplicates things for the JDBC driver. We don't pipeline queries, well we do for batch queries but those are special.\n\n(I haven't actually tried it and I don't know if it's very reasonable\nto expect the client to do this.)\n\n> Also, what about container types? I doubt it's sane for\n> array-of-foo to have a UUID that's unrelated to the one for foo.\n> Composites and ranges would need some intelligence too if we\n> don't want them to be unduly complicated to process.\n\nThat's a good point. I don't know if that is a major design issue or\nnot; but it certainly adds complexity to the proposal and/or clients\nimplementing it.So where do we go from here ?I can implement using type names as well as OID'sDave",
"msg_date": "Thu, 23 Mar 2023 15:37:05 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 4:47 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Tue, 2023-03-21 at 09:22 -0400, Dave Cramer wrote:\n> > As Jeff mentioned there is a visibility problem if the search path is\n> > changed. The simplest solution IMO is to look up the OID at the time\n> > the format is requested and use the OID going forward to format the\n> > output as binary. If the search path changes and a type with the same\n> > name is now first in the search path then the data would be returned\n> > in text.\n>\n> Am I over-thinking this?\n>\n\nI think so. Dave's idea puts a lot of flexibility into the client\nside, and that's good. Search path mechanics are really well understood\nand well integrated with extensions already (create extension ..schema)\nassuming that the precise time UDT are looked up in an unqualified way is\nvery clear to- or invoked via- the client code. I'll say it again though;\nOIDs really ought to be considered a transient cache of type information\nrather than a permanent identifier.\n\nRegarding UDT, lots of common and useful scenarios (containers, enum,\nrange, etc), do not require special knowledge to parse beyond the kind of\ntype it is. Automatic type creation from tables is one of the most\ngenius things about postgres and directly wiring client structures to them\nthrough binary is really nifty. This undermines the case that binary\nparsing requires special knowledge IMO, UDT might in fact be the main long\nterm target...I could see scenarios where java classes might be glued\ndirectly to postgres tables by the driver...this would be a lot more\nefficient than using json which is how everyone does it today. Then again,\nmaybe *I* might be overthinking this.\n\nmerlin\n\nOn Tue, Mar 21, 2023 at 4:47 PM Jeff Davis <pgsql@j-davis.com> wrote:On Tue, 2023-03-21 at 09:22 -0400, Dave Cramer wrote:\n> As Jeff mentioned there is a visibility problem if the search path is\n> changed. 
The simplest solution IMO is to look up the OID at the time\n> the format is requested and use the OID going forward to format the\n> output as binary. If the search path changes and a type with the same\n> name is now first in the search path then the data would be returned\n> in text. \nAm I over-thinking this?I think so. Dave's idea puts a lot of flexibility into the client side, and that's good. Search path mechanics are really well understood and well integrated with extensions already (create extension ..schema) assuming that the precise time UDT are looked up in an unqualified way is very clear to- or invoked via- the client code. I'll say it again though; OIDs really ought to be considered a transient cache of type information rather than a permanent identifier. Regarding UDT, lots of common and useful scenarios (containers, enum, range, etc), do not require special knowledge to parse beyond the kind of type it is. Automatic type creation from tables is one of the most genius things about postgres and directly wiring client structures to them through binary is really nifty. This undermines the case that binary parsing requires special knowledge IMO, UDT might in fact be the main long term target...I could see scenarios where java classes might be glued directly to postgres tables by the driver...this would be a lot more efficient than using json which is how everyone does it today. Then again, maybe *I* might be overthinking this.merlin",
"msg_date": "Fri, 24 Mar 2023 07:52:32 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Fri, 2023-03-24 at 07:52 -0500, Merlin Moncure wrote:\n> I think so. Dave's idea puts a lot of flexibility into the client\n> side, and that's good. Search path mechanics are really well\n> understood and well integrated with extensions already (create\n> extension ..schema) assuming that the precise time UDT are looked up\n> in an unqualified way is very clear to- or invoked via- the client\n> code. I'll say it again though; OIDs really ought to be considered a\n> transient cache of type information rather than a\n> permanent identifier. \n\nI'm not clear on what proposal you are making and/or endorising?\n\n> Regarding UDT, lots of common and useful scenarios (containers, enum,\n> range, etc), do not require special knowledge to parse beyond the\n> kind of type it is. Automatic type creation from tables is one of\n> the most genius things about postgres and directly wiring client\n> structures to them through binary is really nifty.\n\nPerhaps not special knowledge, but you need to know the structure. If\nyou have a query like \"SELECT '...'::sometable\", you still need to know\nthe structure of sometable to parse it.\n\n> This undermines the case that binary parsing requires special\n> knowledge IMO, UDT might in fact be the main long term target...I\n> could see scenarios where java classes might be glued directly to\n> postgres tables by the driver...this would be a lot more efficient\n> than using json which is how everyone does it today. Then again,\n> maybe *I* might be overthinking this.\n\nWouldn't that only work if someone is doing a \"SELECT *\"?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 25 Mar 2023 15:35:50 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Thu, 2023-03-23 at 15:37 -0400, Dave Cramer wrote:\n> So where do we go from here ?\n> \n> I can implement using type names as well as OID's\n\nMy current thought is that you should use the protocol extension and\nmake type names work (in addition to OIDs) at least for fully-qualified\ntype names. I don't really like the GUC -- perhaps I could be convinced\nit's OK, but until we find a problem with protocol extensions, it looks\nlike a protocol extension is the way to go here.\n\nI like Peter's idea for some kind of global identifier, but we can do\nthat independently at a later time.\n\nIf search path works fine and we're all happy with it, we could also\nsupport unqualified type names. It feels slightly off to me to use\nsearch_path for something like that, though.\n\nThere's still the problem about the connection pools. pgbouncer could\nconsider the binary formats to be an important parameter like the\ndatabase name, where the connection pooler would not mingle connections\nwith different settings for binary_formats. That would work, but it\nwould be weird because if a new version of a driver adds new binary\nformat support, it could cause worse connection pooling performance\nuntil all the other drivers also support that binary format. I'm not\nsure if that's a major problem or not. Another idea would be for the\nconnection pooler to also have a binary_formats config, and it would do\nsome checking to make sure all connecting clients understand some\nminimal set of binary formats, so that it could still mingle the\nconnections. Either way, I think this is solvable by the connection\npooler.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sat, 25 Mar 2023 16:06:21 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sat, 25 Mar 2023 at 19:06, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Thu, 2023-03-23 at 15:37 -0400, Dave Cramer wrote:\n> > So where do we go from here ?\n> >\n> > I can implement using type names as well as OID's\n>\n> My current thought is that you should use the protocol extension and\n> make type names work (in addition to OIDs) at least for fully-qualified\n> type names. I don't really like the GUC -- perhaps I could be convinced\n> it's OK, but until we find a problem with protocol extensions, it looks\n> like a protocol extension is the way to go here.\n>\n> Well as I said if I use any external pool that shares connections with\nmultiple clients, some of which may not know how to decode binary data then\nwe have to have a way to set the binary format after the connection is\nestablished. I did float the idea of using the cancel key as a unique\nidentifier that passed with the parameter would allow setting the parameter\nafter connection establishmen.\n\nI like Peter's idea for some kind of global identifier, but we can do\n> that independently at a later time.\n>\n> If search path works fine and we're all happy with it, we could also\n> support unqualified type names. It feels slightly off to me to use\n> search_path for something like that, though.\n>\n> There's still the problem about the connection pools. pgbouncer could\n> consider the binary formats to be an important parameter like the\n> database name, where the connection pooler would not mingle connections\n> with different settings for binary_formats. That would work, but it\n> would be weird because if a new version of a driver adds new binary\n> format support, it could cause worse connection pooling performance\n> until all the other drivers also support that binary format. I'm not\n> sure if that's a major problem or not. 
Another idea would be for the\n> connection pooler to also have a binary_formats config, and it would do\n> some checking to make sure all connecting clients understand some\n> minimal set of binary formats, so that it could still mingle the\n> connections. Either way, I think this is solvable by the connection\n> pooler.\n>\n\nWell that means that connection poolers have to all be fixed. There are\nmore than just pgbouncer.\nSeems rather harsh that a new feature breaks a connection pooler or makes\nthe pooler unusable.\n\nDave\n\nDave CramerOn Sat, 25 Mar 2023 at 19:06, Jeff Davis <pgsql@j-davis.com> wrote:On Thu, 2023-03-23 at 15:37 -0400, Dave Cramer wrote:\n> So where do we go from here ?\n> \n> I can implement using type names as well as OID's\n\nMy current thought is that you should use the protocol extension and\nmake type names work (in addition to OIDs) at least for fully-qualified\ntype names. I don't really like the GUC -- perhaps I could be convinced\nit's OK, but until we find a problem with protocol extensions, it looks\nlike a protocol extension is the way to go here.\nWell as I said if I use any external pool that shares connections with multiple clients, some of which may not know how to decode binary data then we have to have a way to set the binary format after the connection is established. I did float the idea of using the cancel key as a unique identifier that passed with the parameter would allow setting the parameter after connection establishment.\nI like Peter's idea for some kind of global identifier, but we can do\nthat independently at a later time.\n\nIf search path works fine and we're all happy with it, we could also\nsupport unqualified type names. It feels slightly off to me to use\nsearch_path for something like that, though.\n\nThere's still the problem about the connection pools. 
pgbouncer could\nconsider the binary formats to be an important parameter like the\ndatabase name, where the connection pooler would not mingle connections\nwith different settings for binary_formats. That would work, but it\nwould be weird because if a new version of a driver adds new binary\nformat support, it could cause worse connection pooling performance\nuntil all the other drivers also support that binary format. I'm not\nsure if that's a major problem or not. Another idea would be for the\nconnection pooler to also have a binary_formats config, and it would do\nsome checking to make sure all connecting clients understand some\nminimal set of binary formats, so that it could still mingle the\nconnections. Either way, I think this is solvable by the connection\npooler.Well that means that connection poolers have to all be fixed. There are more than just pgbouncer.Seems rather harsh that a new feature breaks a connection pooler or makes the pooler unusable.Dave",
"msg_date": "Sat, 25 Mar 2023 19:58:37 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sat, 2023-03-25 at 19:58 -0400, Dave Cramer wrote:\n> Well that means that connection poolers have to all be fixed. There\n> are more than just pgbouncer.\n> Seems rather harsh that a new feature breaks a connection pooler or\n> makes the pooler unusable.\n\nWould it actually break connection poolers as they are now? Or would,\nfor example, pgbouncer just not set the binary_format parameter on the\noutbound connection, and therefore just return everything as text until\nthey add support to configure it?\n\nI'll admit that GUCs wouldn't have this problem at all, but it would be\nnice to know how much of a problem it is before we decide between a\nprotocol extension and a GUC.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Sun, 26 Mar 2023 11:00:02 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sun, 26 Mar 2023 at 14:00, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Sat, 2023-03-25 at 19:58 -0400, Dave Cramer wrote:\n> > Well that means that connection poolers have to all be fixed. There\n> > are more than just pgbouncer.\n> > Seems rather harsh that a new feature breaks a connection pooler or\n> > makes the pooler unusable.\n>\n> Would it actually break connection poolers as they are now? Or would,\n> for example, pgbouncer just not set the binary_format parameter on the\n> outbound connection, and therefore just return everything as text until\n> they add support to configure it?\n>\n\nWell I was presuming that they would just pass the parameter on. If they\ndidn't then binary_format won't work with them. In the case that they do\npass it on, then DISCARD_ALL will reset it and future borrows of the\nconnection will have no way to set it again; effectively making this a one\ntime setting.\n\nSo while it may not break them it doesn't seem like it is a very useful\nsolution.\n\nDave\n\nOn Sun, 26 Mar 2023 at 14:00, Jeff Davis <pgsql@j-davis.com> wrote:On Sat, 2023-03-25 at 19:58 -0400, Dave Cramer wrote:\n> Well that means that connection poolers have to all be fixed. There\n> are more than just pgbouncer.\n> Seems rather harsh that a new feature breaks a connection pooler or\n> makes the pooler unusable.\n\nWould it actually break connection poolers as they are now? Or would,\nfor example, pgbouncer just not set the binary_format parameter on the\noutbound connection, and therefore just return everything as text until\nthey add support to configure it?Well I was presuming that they would just pass the parameter on. If they didn't then binary_format won't work with them. In the case that they do pass it on, then DISCARD_ALL will reset it and future borrows of the connection will have no way to set it again; effectively making this a one time setting.So while it may not break them it doesn't seem like it is a very useful solution.Dave",
"msg_date": "Sun, 26 Mar 2023 17:54:05 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> Well I was presuming that they would just pass the parameter on. If they\n> didn't then binary_format won't work with them. In the case that they do\n> pass it on, then DISCARD_ALL will reset it and future borrows of the\n> connection will have no way to set it again; effectively making this a one\n> time setting.\n\nI would not expect DISCARD ALL to reset a session-level property.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Mar 2023 18:12:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Sun, 26 Mar 2023 at 18:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > Well I was presuming that they would just pass the parameter on. If they\n> > didn't then binary_format won't work with them. In the case that they do\n> > pass it on, then DISCARD_ALL will reset it and future borrows of the\n> > connection will have no way to set it again; effectively making this a\n> one\n> > time setting.\n>\n> I would not expect DISCARD ALL to reset a session-level property.\n>\n\nWell if we can't reset it with DISCARD ALL how would that work with\npgbouncer, or any pool for that matter since it doesn't know which client\nasked for which (if any) OID's to be binary.\n\nDave\n\nDave CramerOn Sun, 26 Mar 2023 at 18:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> Well I was presuming that they would just pass the parameter on. If they\n> didn't then binary_format won't work with them. In the case that they do\n> pass it on, then DISCARD_ALL will reset it and future borrows of the\n> connection will have no way to set it again; effectively making this a one\n> time setting.\n\nI would not expect DISCARD ALL to reset a session-level property.Well if we can't reset it with DISCARD ALL how would that work with pgbouncer, or any pool for that matter since it doesn't know which client asked for which (if any) OID's to be binary. Dave",
"msg_date": "Sun, 26 Mar 2023 20:39:28 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> On Sun, 26 Mar 2023 at 18:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I would not expect DISCARD ALL to reset a session-level property.\n\n> Well if we can't reset it with DISCARD ALL how would that work with\n> pgbouncer, or any pool for that matter since it doesn't know which client\n> asked for which (if any) OID's to be binary.\n\nWell, it'd need to know that, just like it already needs to know\nwhich clients asked for which database or which login role.\nHaving DISCARD ALL reset those session properties is obviously silly.\n\nThe way I'm imagining this working is that it fits into the framework\nfor protocol options (cf commits ae65f6066 and bbf9c282c), whereby\nthe client and server negotiate whether they can handle this feature.\nA non-updated pooler would act like a server that doesn't know the\nfeature, and the client would have to fall back to not using it,\njust as it would with an older server.\n\nI doubt that this would crimp a pooler's freedom of action very much.\nIn any given environment there will probably be only a few values of\nthe set-of-okay-types in use.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Mar 2023 21:30:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sun, 26 Mar 2023 at 21:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Dave Cramer <davecramer@gmail.com> writes:\n> > On Sun, 26 Mar 2023 at 18:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I would not expect DISCARD ALL to reset a session-level property.\n>\n> > Well if we can't reset it with DISCARD ALL how would that work with\n> > pgbouncer, or any pool for that matter since it doesn't know which client\n> > asked for which (if any) OID's to be binary.\n>\n> Well, it'd need to know that, just like it already needs to know\n> which clients asked for which database or which login role.\n>\n\nOK, IIUC what you are proposing here is that there would be a separate pool\nfor\ndatabase, user, and OIDs. This doesn't seem too flexible. For instance if I\ncreate a UDT and then want it to be returned\nas binary then I have to reconfigure the pool to be able to accept a new\nlist of OID's.\n\nAm I mis-understanding how this would potentially work?\n\nDave\n\n>\n>\n\nOn Sun, 26 Mar 2023 at 21:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:Dave Cramer <davecramer@gmail.com> writes:\n> On Sun, 26 Mar 2023 at 18:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I would not expect DISCARD ALL to reset a session-level property.\n\n> Well if we can't reset it with DISCARD ALL how would that work with\n> pgbouncer, or any pool for that matter since it doesn't know which client\n> asked for which (if any) OID's to be binary.\n\nWell, it'd need to know that, just like it already needs to know\nwhich clients asked for which database or which login role.OK, IIUC what you are proposing here is that there would be a separate pool for database, user, and OIDs. This doesn't seem too flexible. For instance if I create a UDT and then want it to be returned as binary then I have to reconfigure the pool to be able to accept a new list of OID's.Am I mis-understanding how this would potentially work?Dave",
"msg_date": "Tue, 28 Mar 2023 10:22:36 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "FYI I attached this thread to\nhttps://commitfest.postgresql.org/42/3777 which I believe is the same\nissue. I mistakenly had this listed as a CF entry with no discussion\nfor a long time due to that missing link.\n\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:50:46 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 2023-03-28 at 10:22 -0400, Dave Cramer wrote:\n> OK, IIUC what you are proposing here is that there would be a\n> separate pool for \n> database, user, and OIDs. This doesn't seem too flexible. For\n> instance if I create a UDT and then want it to be returned \n> as binary then I have to reconfigure the pool to be able to accept a\n> new list of OID's.\n\nThere are two ways that I could imagine the connection pool working:\n\n1. Accept whatever clients connect, and pass along the binary_formats\nsetting to the outbound (server) connection. The downside here is that\nif you have many different clients (or different versions) that have\ndifferent binary_formats settings, then it creates too many pools and\ndoesn't share well enough.\n\n2. Some kind of configuration setting (or maybe it can be done\nautomatically) that organizes based on a common subset of binary\nformats that many clients can understand.\n\nThese can evolve once the protocol extension is in place.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 16:01:10 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 19:01, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Tue, 2023-03-28 at 10:22 -0400, Dave Cramer wrote:\n> > OK, IIUC what you are proposing here is that there would be a\n> > separate pool for\n> > database, user, and OIDs. This doesn't seem too flexible. For\n> > instance if I create a UDT and then want it to be returned\n> > as binary then I have to reconfigure the pool to be able to accept a\n> > new list of OID's.\n>\n> There are two ways that I could imagine the connection pool working:\n>\n> 1. Accept whatever clients connect, and pass along the binary_formats\n> setting to the outbound (server) connection. The downside here is that\n> if you have many different clients (or different versions) that have\n> different binary_formats settings, then it creates too many pools and\n> doesn't share well enough.\n>\nAs I understand it, pools create connections before the client actually\nrequests the connection.\nThis would necessitate having the binary format information in the\nconfiguration file.\n\nI'm starting to wonder about the utility of the protocol extension\nmechanism?\nIt would seem that you would need to add the new feature into all pools ?\n\n>\n> 2. Some kind of configuration setting (or maybe it can be done\n> automatically) that organizes based on a common subset of binary\n> formats that many clients can understand.\n>\n\nWell that would bring us back to just providing a list of OID's of well\nknown types as I first proposed instead of trying to accomodate UDT's\n\nDave\n\nOn Tue, 28 Mar 2023 at 19:01, Jeff Davis <pgsql@j-davis.com> wrote:On Tue, 2023-03-28 at 10:22 -0400, Dave Cramer wrote:\n> OK, IIUC what you are proposing here is that there would be a\n> separate pool for \n> database, user, and OIDs. This doesn't seem too flexible. 
For\n> instance if I create a UDT and then want it to be returned \n> as binary then I have to reconfigure the pool to be able to accept a\n> new list of OID's.\n\nThere are two ways that I could imagine the connection pool working:\n\n1. Accept whatever clients connect, and pass along the binary_formats\nsetting to the outbound (server) connection. The downside here is that\nif you have many different clients (or different versions) that have\ndifferent binary_formats settings, then it creates too many pools and\ndoesn't share well enough.As I understand it, pools create connections before the client actually requests the connection.This would necessitate having the binary format information in the configuration file.I'm starting to wonder about the utility of the protocol extension mechanism? It would seem that you would need to add the new feature into all pools ? \n\n2. Some kind of configuration setting (or maybe it can be done\nautomatically) that organizes based on a common subset of binary\nformats that many clients can understand. Well that would bring us back to just providing a list of OID's of well known types as I first proposed instead of trying to accomodate UDT's Dave",
"msg_date": "Wed, 29 Mar 2023 08:17:48 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 2023-03-29 at 08:17 -0400, Dave Cramer wrote:\n> I'm starting to wonder about the utility of the protocol extension\n> mechanism? \n\nI'm starting to agree that the awkwardness around connection poolers is\na problem. If that's the case, I'm wondering if the protocol extensions\nwill ever be useful.\n\nWhat I'm worried about with the GUC is that an attacker may be able to\nstart with a SQL injection attack, and then use the GUC to confuse a\nclient in a way that further escalates privileges. Is that a reasonable\nfear?\n\nA couple ideas to mitigate that concern with the GUC:\n\n1. Fix our own clients, like psql, to check for binary data they can't\nprocess.\n\n2. Communicate (after the patch is committed) with client library\nmaintainers to see that they behave sanely when they receive binary\ndata unexpectedly.\n\n3. Require that the binary_formats parameter is set very early, either\nduring connection startup or right after a DISCARD statement. A bit of\na hack, but may help. Not sure it really solves my security concern\nbecause they'd just need to modify their SQL injection to also include\na DISCARD statement.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 09:04:19 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 11:04 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> I'm not clear on what proposal you are making and/or endorsing?\n>\n\nha -- was just backing up dave's GUC idea.\n\n\n> 1. Fix our own clients, like psql, to check for binary data they can't\n> process.\n>\n\nThis ought to be impossible IMO. All libpq routines except PQexec have an\nexplicit expectation on format (via resultformat parameter) that should not\nbe overridden. PQexec ought to be explicitly documented and wired to only\nrequest text format data.\n\nresultfomat can be extended now or later to allow participating clients to\nreceive GUC configured format. I do not think that libpq's result format\nbeing able to be overridden by GUC is a good idea at all, the library has\nto to participate, and I think can be made to so so without adjusting the\ninterface (say, by resultFormat = 3). Similarly, in JDBC world, it ought\nto be up to the driver to determine when it want the server to flex wire\nformats but must be able to override the server's decision.\n\nmerlin\n\nOn Wed, Mar 29, 2023 at 11:04 AM Jeff Davis <pgsql@j-davis.com> wrote: I'm not clear on what proposal you are making and/or endorsing?ha -- was just backing up dave's GUC idea. 1. Fix our own clients, like psql, to check for binary data they can't\nprocess.This ought to be impossible IMO. All libpq routines except PQexec have an explicit expectation on format (via resultformat parameter) that should not be overridden. PQexec ought to be explicitly documented and wired to only request text format data.resultfomat can be extended now or later to allow participating clients to receive GUC configured format. I do not think that libpq's result format being able to be overridden by GUC is a good idea at all, the library has to to participate, and I think can be made to so so without adjusting the interface (say, by resultFormat = 3). 
Similarly, in JDBC world, it ought to be up to the driver to determine when it want the server to flex wire formats but must be able to override the server's decision.merlin",
"msg_date": "Thu, 30 Mar 2023 07:06:48 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Thu, 2023-03-30 at 07:06 -0500, Merlin Moncure wrote:\n> This ought to be impossible IMO. All libpq routines except PQexec\n> have an explicit expectation on format (via resultformat parameter)\n> that should not be overridden. PQexec ought to be explicitly\n> documented and wired to only request text format data.\n\nRight now it's clearly documented[1] which formats will be returned for\na given Bind message. That can be seen as the root of the problem with\npsql -- we are breaking the protocol by returning binary when psql can\nrightfully expect text.\n\nIt is a minor break, because something needed to send the \"SET\nbinary_formats='...'\" command, but the protocol docs would need to be\nupdated for sure.\n\n> participating clients to receive GUC configured format. I do not\n> think that libpq's result format being able to be overridden by GUC\n> is a good idea at all, the library has to to participate, and I\n> think can be made to so so without adjusting the interface (say, by\n> resultFormat = 3).\n\nInteresting idea. I suppose you'd need to specify 3 for all result\ncolumns? That is a protocol change, but wouldn't \"break\" older clients.\nThe newer clients would need to make sure that they're connecting to\nv16+, so using the protocol version alone wouldn't be enough. Hmm.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/docs/current/protocol-message-formats.html#PROTOCOL-MESSAGE-FORMATS-BIND\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 12:40:12 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Thu, 30 Mar 2023 at 15:40, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Thu, 2023-03-30 at 07:06 -0500, Merlin Moncure wrote:\n> > This ought to be impossible IMO. All libpq routines except PQexec\n> > have an explicit expectation on format (via resultformat parameter)\n> > that should not be overridden. PQexec ought to be explicitly\n> > documented and wired to only request text format data.\n>\n> Right now it's clearly documented[1] which formats will be returned for\n> a given Bind message. That can be seen as the root of the problem with\n> psql -- we are breaking the protocol by returning binary when psql can\n> rightfully expect text.\n>\n> It is a minor break, because something needed to send the \"SET\n> binary_formats='...'\" command, but the protocol docs would need to be\n> updated for sure.\n>\n> > participating clients to receive GUC configured format. I do not\n> > think that libpq's result format being able to be overridden by GUC\n> > is a good idea at all, the library has to to participate, and I\n> > think can be made to so so without adjusting the interface (say, by\n> > resultFormat = 3).\n>\n> Interesting idea. I suppose you'd need to specify 3 for all result\n> columns? That is a protocol change, but wouldn't \"break\" older clients.\n> The newer clients would need to make sure that they're connecting to\n> v16+, so using the protocol version alone wouldn't be enough. Hmm.\n>\n\nI'm confused. How does using resultFormat=3 change anything ?\nDave\n\nDave CramerOn Thu, 30 Mar 2023 at 15:40, Jeff Davis <pgsql@j-davis.com> wrote:On Thu, 2023-03-30 at 07:06 -0500, Merlin Moncure wrote:\n> This ought to be impossible IMO. All libpq routines except PQexec\n> have an explicit expectation on format (via resultformat parameter)\n> that should not be overridden. 
PQexec ought to be explicitly\n> documented and wired to only request text format data.\n\nRight now it's clearly documented[1] which formats will be returned for\na given Bind message. That can be seen as the root of the problem with\npsql -- we are breaking the protocol by returning binary when psql can\nrightfully expect text.\n\nIt is a minor break, because something needed to send the \"SET\nbinary_formats='...'\" command, but the protocol docs would need to be\nupdated for sure.\n\n> participating clients to receive GUC configured format. I do not\n> think that libpq's result format being able to be overridden by GUC\n> is a good idea at all, the library has to to participate, and I\n> think can be made to so so without adjusting the interface (say, by\n> resultFormat = 3).\n\nInteresting idea. I suppose you'd need to specify 3 for all result\ncolumns? That is a protocol change, but wouldn't \"break\" older clients.\nThe newer clients would need to make sure that they're connecting to\nv16+, so using the protocol version alone wouldn't be enough. Hmm.I'm confused. How does using resultFormat=3 change anything ?Dave",
"msg_date": "Thu, 30 Mar 2023 20:54:12 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "> participating clients to receive GUC configured format. I do not\n\n> > think that libpq's result format being able to be overridden by GUC\n>> > is a good idea at all, the library has to to participate, and I\n>> > think can be made to so so without adjusting the interface (say, by\n>> > resultFormat = 3).\n>>\n>> Interesting idea. I suppose you'd need to specify 3 for all result\n>> columns? That is a protocol change, but wouldn't \"break\" older clients.\n>> The newer clients would need to make sure that they're connecting to\n>> v16+, so using the protocol version alone wouldn't be enough. Hmm.\n>>\n>\n>\nSo this only works with extended protocol and not simple queries.\nAgain, as Peter mentioned it's already easy enough to confuse psql using\nbinary cursors so\nit makes sense to fix psql either way.\n\nIf you use resultFormat (3) I think you'd still end up doing the Describe\n(which we are trying to avoid) to make sure you could receive all the\ncolumns in binary.\n\nDave\n\n> participating clients to receive GUC configured format. I do not\n> think that libpq's result format being able to be overridden by GUC\n> is a good idea at all, the library has to to participate, and I\n> think can be made to so so without adjusting the interface (say, by\n> resultFormat = 3).\n\nInteresting idea. I suppose you'd need to specify 3 for all result\ncolumns? That is a protocol change, but wouldn't \"break\" older clients.\nThe newer clients would need to make sure that they're connecting to\nv16+, so using the protocol version alone wouldn't be enough. Hmm.So this only works with extended protocol and not simple queries. Again, as Peter mentioned it's already easy enough to confuse psql using binary cursors so it makes sense to fix psql either way.If you use resultFormat (3) I think you'd still end up doing the Describe (which we are trying to avoid) to make sure you could receive all the columns in binary.Dave",
"msg_date": "Mon, 3 Apr 2023 12:28:50 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 11:29 AM Dave Cramer <davecramer@gmail.com> wrote:\n\n> > participating clients to receive GUC configured format. I do not\n>\n>> > think that libpq's result format being able to be overridden by GUC\n>>> > is a good idea at all, the library has to to participate, and I\n>>> > think can be made to so so without adjusting the interface (say, by\n>>> > resultFormat = 3).\n>>>\n>>> Interesting idea. I suppose you'd need to specify 3 for all result\n>>> columns? That is a protocol change, but wouldn't \"break\" older clients.\n>>> The newer clients would need to make sure that they're connecting to\n>>> v16+, so using the protocol version alone wouldn't be enough. Hmm.\n>>>\n>>\n>>\n> So this only works with extended protocol and not simple queries.\n> Again, as Peter mentioned it's already easy enough to confuse psql using\n> binary cursors so\n> it makes sense to fix psql either way.\n>\n> If you use resultFormat (3) I think you'd still end up doing the Describe\n> (which we are trying to avoid) to make sure you could receive all the\n> columns in binary.\n>\n\nCan you elaborate on why Describe would have to be passed? Agreed that\nwould be a dealbreaker if so. If you pass a 3 sending it in, the you'd be\nchecking PQfformat on data coming back as 0/1, or at least that's be smart\nsince you're indicating the server is able to address the format. This\naddresses the concern libpq clients currently passing resultfomat zero\ncould not have that decision overridden by the server which I also think is\na dealbreaker. There might be other reasons why a describe message may be\nforced however.\n\nmerlin\n\nOn Mon, Apr 3, 2023 at 11:29 AM Dave Cramer <davecramer@gmail.com> wrote:> participating clients to receive GUC configured format. 
I do not\n> think that libpq's result format being able to be overridden by GUC\n> is a good idea at all, the library has to to participate, and I\n> think can be made to so so without adjusting the interface (say, by\n> resultFormat = 3).\n\nInteresting idea. I suppose you'd need to specify 3 for all result\ncolumns? That is a protocol change, but wouldn't \"break\" older clients.\nThe newer clients would need to make sure that they're connecting to\nv16+, so using the protocol version alone wouldn't be enough. Hmm.So this only works with extended protocol and not simple queries. Again, as Peter mentioned it's already easy enough to confuse psql using binary cursors so it makes sense to fix psql either way.If you use resultFormat (3) I think you'd still end up doing the Describe (which we are trying to avoid) to make sure you could receive all the columns in binary.Can you elaborate on why Describe would have to be passed? Agreed that would be a dealbreaker if so. If you pass a 3 sending it in, the you'd be checking PQfformat on data coming back as 0/1, or at least that's be smart since you're indicating the server is able to address the format. This addresses the concern libpq clients currently passing resultfomat zero could not have that decision overridden by the server which I also think is a dealbreaker. There might be other reasons why a describe message may be forced however.merlin",
"msg_date": "Fri, 14 Apr 2023 07:43:53 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 12:04 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I'm starting to agree that the awkwardness around connection poolers is\n> a problem. If that's the case, I'm wondering if the protocol extensions\n> will ever be useful.\n\nIn the case at hand, it seems like the problem could easily be solved\nby allowing the property to be changed after connection startup.\nInstead of using the protocol extension mechanism to negotiate a\nspecific value for the property, we can use it to negotiate about\nwhether or not some new protocol message that can be used to change\nthat property is supported. If it is, then a new value can be set\nwhenever, and a connection pooler can switch the active value when it\nassociates the server's session with a different client session.\n\nAlternatively, the protocol extension mechanism can be used to\nnegotiate an initial value for the property, with the understanding\nthat if any initial value is negotiated, that also implies that the\nserver will accept some new protocol message later in the session to\nchange the value. If no initial value is negotiated, the client can't\nassume that the server even knows about that property and can't try to\nchange it.\n\nBacking up a level, the purpose of the protocol extension mechanism is\nto help us agree on the communication protocol -- that is, the set of\nmessages that we can send and receive on a certain connection. The\nquestion for the protocol extension mechanism isn't \"which types\nshould always be sent in binary format?\" but \"would it be ok if I\nwanted you to always send certain types in binary format?\", with the\nidea that if the answer is yes it will still be necessary for the\nclient to let the server know which ones, but that's easy to do if\nwe've agreed on the concept that it's OK for me to ask the server for\nthat. 
And if it's OK for me to ask that once, it should also be OK for\nme to later ask for something different.\n\nThis could, perhaps, be made even more general yet. We could define a\nconcept of \"protocol session parameters\" and make \"which types are\nalways sent in binary format?\" one of those parameters. So then the\nconversation could go like this:\n\nC: Hello! Do you know about protocol session parameters?\nS: Why yes, actually I do.\nC: Cool. I would like to set the protocol session parameter\ntypes_always_in_binary_format=timestamptz. Does that work for you?\nS: Sure thing! (or alternatively: Sadly, I've not heard of that\nparticular protocol session parameter, sorry to disappoint.)\n\nThe reason why I suggest this is that I feel like there could be a\nbunch of things like this. The set of things to be sent in binary\nformat feels like a property of the wire protocol, not something\nSQL-level that should be configured via SET. Clients, drivers, and\nconnection poolers aren't going to want to have to worry about some\nuser screwing up the session by changing that property inside of a\nfunction or procedure or whatever. But there could also be a bunch of\ndifferent things like this that we want to support. For example, one\nthat would be really useful for connection poolers is the session\nuser. The pooler would like to change the session user whenever the\nconnection is changed to talk to a different client, and it would like\nthat to happen in a way that can't be reversed by issuing any SQL\ncommand. I expect in time we may find a bunch of others.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 12:17:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 2023-04-17 at 12:17 -0400, Robert Haas wrote:\n> In the case at hand, it seems like the problem could easily be solved\n> by allowing the property to be changed after connection startup.\n> Instead of using the protocol extension mechanism to negotiate a\n> specific value for the property, we can use it to negotiate about\n> whether or not some new protocol message that can be used to change\n> that property is supported.\n\n...\n\n> Backing up a level, the purpose of the protocol extension mechanism\n> is\n> to help us agree on the communication protocol\n\nThank you, that seems like a better approach to me.\n\nIt involves introducing new message types which I didn't really\nconsider. We might want to be careful about how many kinds of messages\nwe introduce so that the one-letter codes are still managable. I've\nbeen frustrated in the past that we don't have separate symbols in the\nsource code to refer to the message types (we just use literal 'S',\netc.).\n\nMaybe we should have a single new message type 'x' to indicate a\nmessage for a protocol extension, and then have a sub-message-type? It\nmight make error handling better for unexpected messages.\n\nAlso, is there any reason we'd want this concept to integrate with\nconnection strings/URIs? Probably not a good idea to turn on features\nthat way, but perhaps we'd want to support disabling protocol\nextensions from a URI? This could be used to restrict authentication\nmethods or sources of authentication information.\n\n> The reason why I suggest this is that I feel like there could be a\n> bunch of things like this.\n\nWhat's the trade-off between having one protocol extension (e.g.\n_pq_protocol_session_parameters) that tries to work for multiple cases\n(e.g. binary_formats and session_user) vs just having two protocol\nextensions (_pq_set_session_user and _pq_set_binary_formats)?\n\n\n> For example, one\n> that would be really useful for connection poolers is the session\n> user. 
The pooler would like to change the session user whenever the\n> connection is changed to talk to a different client, and it would\n> like\n> that to happen in a way that can't be reversed by issuing any SQL\n> command.\n\nThat sounds valuable to me whether we generalize with \"protocol session\nparameters\" or not.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 17 Apr 2023 10:55:35 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 1:55 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> It involves introducing new message types which I didn't really\n> consider. We might want to be careful about how many kinds of messages\n> we introduce so that the one-letter codes are still managable. I've\n> been frustrated in the past that we don't have separate symbols in the\n> source code to refer to the message types (we just use literal 'S',\n> etc.).\n\nRight. That was part of the thinking behind the protocol session\nparameter thing I was throwing out there.\n\n> Maybe we should have a single new message type 'x' to indicate a\n> message for a protocol extension, and then have a sub-message-type? It\n> might make error handling better for unexpected messages.\n\nI'm somewhat skeptical that we want every protocol extension in the\nuniverse to use a single message type. I think that could lead to\nmunging together all sorts of messages that are actually really\ndifferent from each other. On the other hand, in a certain sense, we\ndon't really have a choice. The type byte for a protocol message can\nonly take on one of 256 possible values, and some of those are already\nused, so if we add a bunch of stuff to the protocol, we're eventually\ngoing to run short of byte values. In fact, even if we said, well, 'x'\nmeans that it's an extended message and then there's a type byte as\nthe first byte of the payload, that only doubles the number of\npossible message types before we run out of room, and maybe there's a\nworld where we eventually have thousands upon thousands of message\ntypes. We'd need a longer type code than 1 byte to really get out from\nunder the problem, so if we add a message like what you're talking\nabout, we should probably do that.\n\nBut I don't know if we need to be too paranoid about this. For\nexample, suppose we were to agree on adding protocol session\nparameters and make this the first one. 
To do that, suppose we add two\nnew messages to the protocol, ProtocolSessionParameterSet and\nProtocolSessionParameterResponse. And suppose we just pick single\nletter codes for those, like we have right now. How much use would\nsuch a mechanism get? It seems possible that we'd add as many as 5 or\n10 such parameters in the next half-decade, but they'd all only need\nthose two new message types. We'd only need a different message type\nif somebody wanted to customize something about the protocol that\ndidn't fit into that model, and that might happen, but I bet it\nwouldn't happen that often. I feel like if we're careful to make sure\nthat the new protocol messages that we add are carefully designed to\nbe reasonably general, we'd add them very slowly. It seems very\npossible that we could go a century or more without running out of\npossible values. We could then decide to leave it to future hackers to\ndecide what to do about it when the remaining bit space starts to get\ntight.\n\nThe point of this thought experiment is to help us estimate how\ncareful we need to be. I think that if we added messages with 1-byte\ntype codes for things as specific as SetTypesWithBinaryOutputAlways,\nthere would be a significant chance that we would run out of 1-byte\ntype codes while some of us are still around to be sad about it. Maybe\nit wouldn't happen, but it seems risky. Furthermore, such messages are\nFAR more specific than existing protocol messages like Query or\nExecute or ErrorResponse which cover HUGE amounts of territory. I\nthink we need to be a level of abstraction removed. Something like\nProtocolSessionParameterSet seems good enough to me - I don't think\nwe'll run out of codes like that soon enough to matter. I don't think it\nwould be wrong to take that as far as you propose here, and just add\none new message type to cover all future developments, but it feels\nlike it might not really help anyone. 
A lot of code would probably\nhave to drill down and look at what type of extended message it was\nbefore deciding what to do with it, which seems a bit annoying.\n\nOne thing to keep in mind is that it's possible that in the future we\nmight want protocol extensions for things that are very\nperformance-sensitive. For instance, I think it might be advantageous\nto have something that is intermediate between the simple and extended\nquery protocol. The simple query protocol doesn't let you set\nparameters, but the extended query protocol requires you to send a\nwhole series of messages (Parse-Bind-Describe-Execute-Sync) which\ndoesn't seem to be particularly efficient for either the client or the\nserver. I think it would be nice to have a way to send a single\nmessage that says \"run this query with these parameters.\" But, if we\nhad that, some clients might use it Really A Lot. They would therefore\nwant the message to be as short as possible, which means that using up\na single byte code for it would probably be desirable. On the other\nhand, the kinds of things we're talking about here really shouldn't be\nsubjected to that level of use, and so if for this purpose we pick a\nmessage format that is longer and wordier and more extensible, that\nshould be fine. If your connection pooler is switching your connection\nback and forth between a bunch of end clients that all have different\nideas about binary format parameters, it should be running at least\none query after each such change, and probably more than that. And\nthat query probably has some results, so a few extra bytes of overhead\nin the message format shouldn't cost much even in fairly extreme\ncases.\n\n> Also, is there any reason we'd want this concept to integrate with\n> connection strings/URIs? Probably not a good idea to turn on features\n> that way, but perhaps we'd want to support disabling protocol\n> extensions from a URI? 
This could be used to restrict authentication\n> methods or sources of authentication information.\n\nI don't really see why the connection string/URI has any business\ndisabling anything. It might require something to be enabled, though.\nFor instance, if we added a protocol extension to encrypt all result\nsets returned to the client using rot13, we might also add a\nconnection parameter to control that behavior. If the user requested\nthat behavior using a connection parameter, libpq might then try to\nenable it via a protocol extension -- it would have to, else it would\notherwise be unable to deliver the requested behavior. But the user\nshouldn't get to say \"please enable the protocol extension that would\nenable you to turn on rot13 even though I don't actually want to use\nrot13\" nor should they be able to say \"please give me rot13 without\nusing the protocol extension that would let you ask for that\". Those\nrequests aren't sensible. The connection parameter interface is a way\nfor the user to request certain behaviors that they might want, and\nthen it's up to libpq, or some other connector, to decide what needs\nto happen at a protocol level to implement those requests.\n\nAnd that might change over time. We could introduce a new major\nprotocol version (v4!) or somebody could eventually say \"hey, these\nsix protocol extensions are now universally supported by literally\nevery bit of code that we can find that speaks the PG wire protocol,\nlet's just start sending all these messages unconditionally and the\ncounterparty can error out if they're a fossil from the Jurassic era\".\nIt's kind of hard to imagine that happening from where we are now, but\ntimes change.\n\n> > The reason why I suggest this is that I feel like there could be a\n> > bunch of things like this.\n>\n> What's the trade-off between having one protocol extension (e.g.\n> _pq_protocol_session_parameters) that tries to work for multiple cases\n> (e.g. 
binary_formats and session_user) vs just having two protocol\n> extensions (_pq_set_session_user and _pq_set_binary_formats)?\n\nWell, it seems related to the message types issue mentioned above.\nPresumably if we were going to have one set of message types for both\nfeatures, we'd want one protocol extension to enable that set of\nmessage types. And if we were going to have separate message types for\neach feature, we'd want separate protocol extensions to enable them.\nThere are probably other ways it could work, but that seems like the\nmost natural idea.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 15:53:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Apr 17, 2023 at 1:55 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Maybe we should have a single new message type 'x' to indicate a\n>> message for a protocol extension, and then have a sub-message-type? It\n>> might make error handling better for unexpected messages.\n\n> ...\n> The point of this thought experiment is to help us estimate how\n> careful we need to be.\n\nI tend to agree with the proposition that we aren't going to add new\nmessage types very often, as long as we're careful to make them general\npurpose. Don't forget that adding a new message type isn't just a matter\nof writing some spec text --- there has to be code backing it up. We\nwill never introduce thousands of new message types, or if we do,\nsomebody factored it wrong and put data into the type code.\n\nThe fact that we've gotten away without adding *any* new message types\nfor about twenty years suggests to me that the growth rate isn't such\nthat we need sub-message-types yet. I'd keep the structure the same\nuntil such time as we can't choose a plausible code value for a new\nmessage, and then maybe add the \"x-and-subtype\" convention Jeff suggests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 16:22:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 4:22 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The fact that we've gotten away without adding *any* new message types\n> for about twenty years suggests to me that the growth rate isn't such\n> that we need sub-message-types yet. I'd keep the structure the same\n> until such time as we can't choose a plausible code value for a new\n> message, and then maybe add the \"x-and-subtype\" convention Jeff suggests.\n\nOne thing I think we should do in this area is introduce #defines for\nall the message type codes and use those instead of having hard-coded\nconstants everywhere.\n\nI'm not brave enough to tackle that today, but the only reason the\ncurrent situation isn't a disaster is because every place we use e.g.\n'Z' we generally also have a comment that mentions ReadyForQuery. If\nit weren't for that, this would be pretty un-greppable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:40:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> One thing I think we should do in this area is introduce #defines for\n> all the message type codes and use those instead of having hard-coded\n> constants everywhere.\n\n+1, but I wonder where we should put those exactly. My first thought\nwas postgres_ext.h, but the charter for that is\n\n * This file contains declarations of things that are visible everywhere\n * in PostgreSQL *and* are visible to clients of frontend interface libraries.\n * For example, the Oid type is part of the API of libpq and other libraries.\n\nso picayune details of the wire protocol probably don't belong there.\nMaybe we need a new header concerned with the wire protocol?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:51:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > One thing I think we should do in this area is introduce #defines for\n> > all the message type codes and use those instead of having hard-coded\n> > constants everywhere.\n>\n> +1, but I wonder where we should put those exactly. My first thought\n> was postgres_ext.h, but the charter for that is\n>\n> * This file contains declarations of things that are visible everywhere\n> * in PostgreSQL *and* are visible to clients of frontend interface libraries.\n> * For example, the Oid type is part of the API of libpq and other libraries.\n>\n> so picayune details of the wire protocol probably don't belong there.\n> Maybe we need a new header concerned with the wire protocol?\n\nYeah. I sort of thought maybe one of the files in src/include/libpq\nwould be the right place, but it doesn't look like it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Apr 2023 12:23:59 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 12:24, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 18, 2023 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > One thing I think we should do in this area is introduce #defines for\n> > > all the message type codes and use those instead of having hard-coded\n> > > constants everywhere.\n> >\n> > +1, but I wonder where we should put those exactly. My first thought\n> > was postgres_ext.h, but the charter for that is\n> >\n> > * This file contains declarations of things that are visible\n> everywhere\n> > * in PostgreSQL *and* are visible to clients of frontend interface\n> libraries.\n> > * For example, the Oid type is part of the API of libpq and other\n> libraries.\n> >\n> > so picayune details of the wire protocol probably don't belong there.\n> > Maybe we need a new header concerned with the wire protocol?\n>\n> Yeah. I sort of thought maybe one of the files in src/include/libpq\n> would be the right place, but it doesn't look like it.\n>\n> If we at least created the defines and replaced occurrences with the same,\nthen we can litigate where to put them later.\n\nI think I'd prefer this in a different patch, but I'd be willing to take a\nrun at it.\n\nDave\n",
"msg_date": "Tue, 18 Apr 2023 12:31:14 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 17 Apr 2023 at 16:22, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I tend to agree with the proposition that we aren't going to add new\n> message types very often, as long as we're careful to make them general\n> purpose. Don't forget that adding a new message type isn't just a matter\n> of writing some spec text --- there has to be code backing it up. We\n> will never introduce thousands of new message types, or if we do,\n> somebody factored it wrong and put data into the type code.\n\nWell the way I understood Robert's proposal would be that you would\nset a protocol option which could be some name like\nSuperDuperExtension and then later send an extended message like X\nSuperDuper Extension ...\n\nThe point being not so much that it saves on message types but that it\nbecomes possible for the wire protocol code to recognize the message\ntype and know which extension's code to call back to. Presumably a\ncallback was registered when the option was negotiated.\n\n> The fact that we've gotten away without adding *any* new message types\n> for about twenty years suggests to me that the growth rate isn't such\n> that we need sub-message-types yet. I'd keep the structure the same\n> until such time as we can't choose a plausible code value for a new\n> message, and then maybe add the \"x-and-subtype\" convention Jeff suggests.\n\nFwiw I've had at least two miniprojects that would eventually have led\nto protocol extensions. Like most of my projects they're not finished\nbut one day...\n\nProgress reporting on queries in progress -- I had things hacked to\nsend the progress report in an elog but eventually it would have made\nsense to put it in a dedicated message type that the client would know\nthe structure of the content of.\n\nDistributed tracing -- to pass the trace span id for each query and\nany other baggage. Currently people either stuff it in application_name\nor in SQL comments but they're both pretty awful.\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 18 Apr 2023 15:53:46 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 3:54 PM Greg Stark <stark@mit.edu> wrote:\n> Well the way I understood Robert's proposal would be that you would\n> set a protocol option which could be some name like\n> SuperDuperExtension and then later send an extended message like X\n> SuperDuper Extension ...\n>\n> The point being not so much that it saves on message types but that it\n> becomes possible for the wire protocol code to recognize the message\n> type and know which extension's code to call back to. Presumably a\n> callback was registered when the option was negotiated.\n\nThat's not what I was talking about. I meant extending the protocol in\ncore, and dealing with version differences between the client and the\nserver, not loading extensions that extend the protocol. Such a thing\ncould possibly be done, but it seems fairly tricky to make useful.\nDefining the message format is just a small part of the problem. If\nfor example the message is one to be sent from server to client, you\nneed a server side hook that's called at the right point to allow you\nto inject those messages, and then you need something on the libpq\nside to, I guess, intercept those messages and call a user-defined\nhandler when they show up. It might make sense for things like\nprogress reporting and tracing to piggyback on e.g. NoticeResponse,\nwhich already has existing libpq-side handling, rather than inventing\nsomething altogether new. Or if we are going to invent something new,\nsay because we want to send structured data rather than a string, then\nwe invent one new message type for that which can be used by multiple\nfacilities e.g. StructuredNoticeResponse with a content-type (e.g.\njson) and a payload.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Apr 2023 09:24:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 18 Apr 2023 at 12:31, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n> On Tue, 18 Apr 2023 at 12:24, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>> On Tue, Apr 18, 2023 at 11:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > Robert Haas <robertmhaas@gmail.com> writes:\n>> > > One thing I think we should do in this area is introduce #defines for\n>> > > all the message type codes and use those instead of having hard-coded\n>> > > constants everywhere.\n>> >\n>> > +1, but I wonder where we should put those exactly. My first thought\n>> > was postgres_ext.h, but the charter for that is\n>> >\n>> > * This file contains declarations of things that are visible\n>> everywhere\n>> > * in PostgreSQL *and* are visible to clients of frontend interface\n>> libraries.\n>> > * For example, the Oid type is part of the API of libpq and other\n>> libraries.\n>> >\n>> > so picayune details of the wire protocol probably don't belong there.\n>> > Maybe we need a new header concerned with the wire protocol?\n>>\n>> Yeah. I sort of thought maybe one of the files in src/include/libpq\n>> would be the right place, but it doesn't look like it.\n>>\n>> If we at least created the defines and replaced occurrences with the\n> same, then we can litigate where to put them later.\n>\n> I think I'd prefer this in a different patch, but I'd be willing to take a\n> run at it.\n>\n\nAs promised here is a patch with defines for all of the protocol messages.\nI created a protocol.h file and put it in src/includes\nI'm fairly sure that some of the names I used may need to be changed but\nthe grunt work of finding and replacing everything is done.\n\nDave Cramer\n\n>\n> Dave\n>",
"msg_date": "Thu, 20 Apr 2023 15:51:34 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": ">\n>\n>\n>\n> Backing up a level, the purpose of the protocol extension mechanism is\n> to help us agree on the communication protocol -- that is, the set of\n> messages that we can send and receive on a certain connection. The\n> question for the protocol extension mechanism isn't \"which types\n> should always be sent in binary format?\" but \"would it be ok if I\n> wanted you to always send certain types in binary format?\", with the\n> idea that if the answer is yes it will still be necessary for the\n> client to let the server know which ones, but that's easy to do if\n> we've agreed on the concept that it's OK for me to ask the server for\n> that. And if it's OK for me to ask that once, it should also be OK for\n> me to later ask for something different.\n>\n> This could, perhaps, be made even more general yet. We could define a\n> concept of \"protocol session parameters\" and make \"which types are\n> always sent in binary format?\" one of those parameters. So then the\n> conversation could go like this:\n>\n> C: Hello! Do you know about protocol session parameters?\n> S: Why yes, actually I do.\n> C: Cool. I would like to set the protocol session parameter\n> types_always_in_binary_format=timestamptz. Does that work for you?\n> S: Sure thing! (or alternatively: Sadly, I've not heard of that\n> particular protocol session parameter, sorry to disappoint.)\n>\n> The reason why I suggest this is that I feel like there could be a\n> bunch of things like this. The set of things to be sent in binary\n> format feels like a property of the wire protocol, not something\n> SQL-level that should be configured via SET. Clients, drivers, and\n> connection poolers aren't going to want to have to worry about some\n> user screwing up the session by changing that property inside of a\n> function or procedure or whatever. But there could also be a bunch of\n> different things like this that we want to support. For example, one\n> that would be really useful for connection poolers is the session\n> user. The pooler would like to change the session user whenever the\n> connection is changed to talk to a different client, and it would like\n> that to happen in a way that can't be reversed by issuing any SQL\n> command. I expect in time we may find a bunch of others.\n>\n>\n\nOk, this looks like the way to go. I have some questions about\nimplementation.\n\nClient sends _pq_.format_binary\nserver doesn't object so now the client implicitly knows that they can send\na new protocol message.\nAt this point the client sends some new message 'F\" for example, with OID's\nthe client wants in binary for the remainder of the session.\n\nIdeally, I'd like to avoid this second message. Is the above correct ?\n\nDave\n",
"msg_date": "Mon, 24 Apr 2023 09:49:47 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Thu, Apr 20, 2023 at 2:52 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n> As promised here is a patch with defines for all of the protocol messages.\n>\nI created a protocol.h file and put it in src/includes\n> I'm fairly sure that some of the names I used may need to be changed but\n> the grunt work of finding and replacing everything is done.\n>\n\nIn many cases, converting inline character to macro eliminates the need for\ninline comment, e.g.:\n+ case SIMPLE_QUERY: /* simple query */\n\n...that's more work obviously, do you agree and if so would you like some\nhelp going through that?\n\nmerlin\n\n>\n",
"msg_date": "Mon, 24 Apr 2023 18:18:34 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 24 Apr 2023 at 19:18, Merlin Moncure <mmoncure@gmail.com> wrote:\n\n>\n>\n> On Thu, Apr 20, 2023 at 2:52 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>>\n>> As promised here is a patch with defines for all of the protocol messages.\n>>\n> I created a protocol.h file and put it in src/includes\n>> I'm fairly sure that some of the names I used may need to be changed but\n>> the grunt work of finding and replacing everything is done.\n>>\n>\n> In many cases, converting inline character to macro eliminates the need\n> for inline comment, e.g.:\n> + case SIMPLE_QUERY: /* simple query */\n>\n> ...that's more work obviously, do you agree and if so would you like some\n> help going through that?\n>\n\nI certainly agree. I left them there mostly for reviewers. I expected some\nminor adjustments to names of the macro's\n\nSo if you have suggestions I'll make changes.\n\nI'll remove the comments if they are no longer necessary.\n\nDave\n\n>\n",
"msg_date": "Tue, 25 Apr 2023 07:26:18 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 25 Apr 2023 at 07:26, Dave Cramer <davecramer@gmail.com> wrote:\n\n>\n>\n>\n> On Mon, 24 Apr 2023 at 19:18, Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n>>\n>>\n>> On Thu, Apr 20, 2023 at 2:52 PM Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>>>\n>>> As promised here is a patch with defines for all of the protocol\n>>> messages.\n>>>\n>> I created a protocol.h file and put it in src/includes\n>>> I'm fairly sure that some of the names I used may need to be changed but\n>>> the grunt work of finding and replacing everything is done.\n>>>\n>>\n>> In many cases, converting inline character to macro eliminates the need\n>> for inline comment, e.g.:\n>> + case SIMPLE_QUERY: /* simple query */\n>>\n>> ...that's more work obviously, do you agree and if so would you like some\n>> help going through that?\n>>\n>\n> I certainly agree. I left them there mostly for reviewers. I expected some\n> minor adjustments to names of the macro's\n>\n> So if you have suggestions I'll make changes.\n>\n> I'll remove the comments if they are no longer necessary.\n>\n\nPatch attached with comments removed\n\n\n>\n> Dave\n>\n>>",
"msg_date": "Tue, 25 Apr 2023 10:47:19 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "> On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com> wrote:\n\n> Patch attached with comments removed\n\nThis patch no longer applies, please submit a rebased version on top of HEAD.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 11:56:43 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "Dave Cramer\n\n\nOn Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com> wrote:\n>\n> > Patch attached with comments removed\n>\n> This patch no longer applies, please submit a rebased version on top of\n> HEAD.\n>\n\nRebased see attached\n\n\n\n>\n> --\n> Daniel Gustafsson\n>\n>",
"msg_date": "Mon, 31 Jul 2023 10:27:45 -0600",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 31.07.23 18:27, Dave Cramer wrote:\n> On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson <daniel@yesql.se \n> <mailto:daniel@yesql.se>> wrote:\n> \n> > On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com\n> <mailto:davecramer@gmail.com>> wrote:\n> \n> > Patch attached with comments removed\n> \n> This patch no longer applies, please submit a rebased version on top\n> of HEAD.\n> \n> \n> Rebased see attached\n\nI have studied this thread now. It seems it has gone through the same \nprogression with the same (non-)result as my original patch on the subject.\n\nI have a few intermediate conclusions:\n\n- Doing it with a GUC is challenging. It's questionable layering to \nhave the GUC system control protocol behavior. It would allow weird \nbehavior where a GUC set, maybe for a user or a database, would confuse, \nsay, psql or pg_dump. We probably should make some of those more robust \nin any case. Also, handling of GUCs through connection poolers is a \nchallenge. It does work, but it's more like opt-in, and so can't be \nfully relied on for protocol correctness.\n\n- Doing it with a session-level protocol-level setting is challenging. \nWe currently don't have that kind of thing. It's not clear how \nconnection poolers would/should handle it. Someone would have to work \nall this out before this could be used.\n\n- In either case, there are issues like what if there is a connection \npooler and types have different OIDs in different databases. (Or, \nsimilarly, an extension is upgraded during the lifetime of a session and \na type's OID changes.) Also, maybe, what if types are in different \nschemas on different databases.\n\n- We could avoid some of the session-state issues by doing this per \nrequest, like extending the Bind message somehow by appending the list \nof types to be sent in binary. 
But the JDBC driver currently lists 24 \ntypes for which it supports binary, so that would require adding 24*4=96 \nbytes per request, which seems untenable.\n\nI think intuitively, this facility ought to work like client_encoding. \nThere, the client declares its capabilities, and the server has to \nformat the output according to the client's capabilities. That works, \nand it also works through connection poolers. (It is a GUC.) If we can \nmodel it like that as closely as possible, then we have a chance of \ngetting it working reliably. Notably, the value space for \nclient_encoding is a globally known fixed list of strings. We need to \nfigure out what is the right way to globally identify types, like either \nby fully-qualified name, by base name, some combination, how does it \nwork with extensions, or do we need a new mechanism like UUIDs. I think \nthat is something we need to work out, no matter which protocol \nmechanism we end up using.\n\n\n\n",
"msg_date": "Wed, 4 Oct 2023 16:17:28 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
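Peter's back-of-the-envelope figure above (24 binary-capable types at 4 bytes per OID = 96 extra bytes on every Bind) is easy to sanity-check. The sketch below assumes a hypothetical Bind-message extension that appends the OID list as 4-byte network-order integers; the placeholder OIDs are invented for illustration.

```python
import struct

def encode_binary_oid_list(oids):
    # Pack a hypothetical per-request type-OID list, each OID as a
    # 4-byte big-endian integer (the wire protocol sends all integers
    # in network byte order).
    return b"".join(struct.pack("!I", oid) for oid in oids)

# The JDBC driver lists roughly 24 types it supports in binary, so
# appending their OIDs to every Bind message costs 24 * 4 = 96 bytes.
placeholder_oids = list(range(1, 25))  # 24 invented OIDs
assert len(encode_binary_oid_list(placeholder_oids)) == 96
```

That fixed per-request overhead is what makes the per-Bind approach unattractive compared to declaring the list once per session.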
{
"msg_contents": "On Wed, Oct 4, 2023 at 9:17 AM Peter Eisentraut <peter@eisentraut.org>\nwrote:\n\n> I think intuitively, this facility ought to work like client_encoding.\n> There, the client declares its capabilities, and the server has to\n> format the output according to the client's capabilities. That works,\n> and it also works through connection poolers. (It is a GUC.) If we can\n> model it like that as closely as possible, then we have a chance of\n> getting it working reliably. Notably, the value space for\n> client_encoding is a globally known fixed list of strings. We need to\n> figure out what is the right way to globally identify types, like either\n> by fully-qualified name, by base name, some combination, how does it\n> work with extensions, or do we need a new mechanism like UUIDs. I think\n> that is something we need to work out, no matter which protocol\n> mechanism we end up using.\n>\n\n Fantastic write up.\n\n> globally known fixed list of strings\nAre you suggesting that we would have a client/server negotiation such as,\n'jdbc<version>', 'all', etc where that would identify which types are done\nwhich way? If you did that, why would we need to promote names/uuid to\npermanent global space?\n\nmerlin\n\nOn Wed, Oct 4, 2023 at 9:17 AM Peter Eisentraut <peter@eisentraut.org> wrote: \nI think intuitively, this facility ought to work like client_encoding. \nThere, the client declares its capabilities, and the server has to \nformat the output according to the client's capabilities. That works, \nand it also works through connection poolers. (It is a GUC.) If we can \nmodel it like that as closely as possible, then we have a chance of \ngetting it working reliably. Notably, the value space for \nclient_encoding is a globally known fixed list of strings. 
We need to \nfigure out what is the right way to globally identify types, like either \nby fully-qualified name, by base name, some combination, how does it \nwork with extensions, or do we need a new mechanism like UUIDs. I think \nthat is something we need to work out, no matter which protocol \nmechanism we end up using. Fantastic write up. > globally known fixed list of stringsAre you suggesting that we would have a client/server negotiation such as, 'jdbc<version>', 'all', etc where that would identify which types are done which way? If you did that, why would we need to promote names/uuid to permanent global space?merlin",
"msg_date": "Wed, 4 Oct 2023 11:26:31 -0500",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 4 Oct 2023 at 10:17, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n> On 31.07.23 18:27, Dave Cramer wrote:\n> > On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson <daniel@yesql.se\n> > <mailto:daniel@yesql.se>> wrote:\n> >\n> > > On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com\n> > <mailto:davecramer@gmail.com>> wrote:\n> >\n> > > Patch attached with comments removed\n> >\n> > This patch no longer applies, please submit a rebased version on top\n> > of HEAD.\n> >\n> >\n> > Rebased see attached\n>\n> I have studied this thread now. It seems it has gone through the same\n> progression with the same (non-)result as my original patch on the subject.\n>\n> I have a few intermediate conclusions:\n>\n> - Doing it with a GUC is challenging. It's questionable layering to\n> have the GUC system control protocol behavior. It would allow weird\n> behavior where a GUC set, maybe for a user or a database, would confuse,\n> say, psql or pg_dump. We probably should make some of those more robust\n> in any case. Also, handling of GUCs through connection poolers is a\n> challenge. It does work, but it's more like opt-in, and so can't be\n> fully relied on for protocol correctness.\n>\n> - Doing it with a session-level protocol-level setting is challenging.\n> We currently don't have that kind of thing. It's not clear how\n> connection poolers would/should handle it. Someone would have to work\n> all this out before this could be used.\n>\n> - In either case, there are issues like what if there is a connection\n> pooler and types have different OIDs in different databases. (Or,\n> similarly, an extension is upgraded during the lifetime of a session and\n> a type's OID changes.) Also, maybe, what if types are in different\n> schemas on different databases.\n>\n> - We could avoid some of the session-state issues by doing this per\n> request, like extending the Bind message somehow by appending the list\n> of types to be sent in binary. 
But the JDBC driver currently lists 24\n> types for which it supports binary, so that would require adding 24*4=96\n> bytes per request, which seems untenable.\n>\n> I think intuitively, this facility ought to work like client_encoding.\n> There, the client declares its capabilities, and the server has to\n> format the output according to the client's capabilities. That works,\n> and it also works through connection poolers. (It is a GUC.) If we can\n> model it like that as closely as possible, then we have a chance of\n> getting it working reliably. Notably, the value space for\n> client_encoding is a globally known fixed list of strings. We need to\n> figure out what is the right way to globally identify types, like either\n> by fully-qualified name, by base name, some combination, how does it\n> work with extensions, or do we need a new mechanism like UUIDs. I think\n> that is something we need to work out, no matter which protocol\n> mechanism we end up using.\n>\n\nSo how is this different than the GUC that I proposed ?\n\nDave",
"msg_date": "Wed, 4 Oct 2023 14:30:45 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, Oct 4, 2023 at 10:17 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I think intuitively, this facility ought to work like client_encoding.\n\nI hadn't really considered client_encoding as a precedent for this\nsetting. A lot of my discomfort with the proposed mechanism also\napplies to client_encoding, namely, suppose you call some function or\nprocedure or whatever and it changes client_encoding on your behalf\nand now your communication with the server is all screwed up. That\nseems very unpleasant. Yet it's also existing behavior. I think one\ncould conclude on these facts either that (a) client_encoding is fine\nand the problems with controlling behavior using that kind of\nmechanism are mostly theoretical or (b) that we messed up with\nclient_encoding and shouldn't add any more mistakes of the same ilk or\n(c) that we should really be looking at redesigning the way\nclient_encoding works, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 15:10:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 04.10.23 18:26, Merlin Moncure wrote:\n> On Wed, Oct 4, 2023 at 9:17 AM Peter Eisentraut <peter@eisentraut.org \n> <mailto:peter@eisentraut.org>> wrote:\n> \n> I think intuitively, this facility ought to work like client_encoding.\n> There, the client declares its capabilities, and the server has to\n> format the output according to the client's capabilities. That works,\n> and it also works through connection poolers. (It is a GUC.) If we\n> can\n> model it like that as closely as possible, then we have a chance of\n> getting it working reliably. Notably, the value space for\n> client_encoding is a globally known fixed list of strings. We need to\n> figure out what is the right way to globally identify types, like\n> either\n> by fully-qualified name, by base name, some combination, how does it\n> work with extensions, or do we need a new mechanism like UUIDs. I\n> think\n> that is something we need to work out, no matter which protocol\n> mechanism we end up using.\n> \n> \n> Fantastic write up.\n> \n> > globally known fixed list of strings\n> Are you suggesting that we would have a client/server negotiation such \n> as, 'jdbc<version>', 'all', etc where that would identify which types \n> are done which way? If you did that, why would we need to promote \n> names/uuid to permanent global space?\n\nNo, I don't think I meant anything like that.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:09:01 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 04.10.23 20:30, Dave Cramer wrote:\n> We need to\n> figure out what is the right way to globally identify types, like\n> either\n> by fully-qualified name, by base name, some combination, how does it\n> work with extensions, or do we need a new mechanism like UUIDs. I\n> think\n> that is something we need to work out, no matter which protocol\n> mechanism we end up using.\n> \n> \n> So how is this different than the GUC that I proposed ?\n\nThe last patch I see from you in this thread uses OIDs, which I have \nargued is not the right solution.\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:11:24 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On 04.10.23 21:10, Robert Haas wrote:\n> On Wed, Oct 4, 2023 at 10:17 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>> I think intuitively, this facility ought to work like client_encoding.\n> \n> I hadn't really considered client_encoding as a precedent for this\n> setting. A lot of my discomfort with the proposed mechanism also\n> applies to client_encoding, namely, suppose you call some function or\n> procedure or whatever and it changes client_encoding on your behalf\n> and now your communication with the server is all screwed up. That\n> seems very unpleasant. Yet it's also existing behavior. I think one\n> could conclude on these facts either that (a) client_encoding is fine\n> and the problems with controlling behavior using that kind of\n> mechanism are mostly theoretical or (b) that we messed up with\n> client_encoding and shouldn't add any more mistakes of the same ilk or\n> (c) that we should really be looking at redesigning the way\n> client_encoding works, too.\n\nYeah I agree with all three of these points, but I don't have a strong \nopinion which is the best one.\n\n\n\n",
"msg_date": "Fri, 6 Oct 2023 13:12:24 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Wed, 4 Oct 2023 at 21:10, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Oct 4, 2023 at 10:17 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > I think intuitively, this facility ought to work like client_encoding.\n>\n> I hadn't really considered client_encoding as a precedent for this\n> setting. A lot of my discomfort with the proposed mechanism also\n> applies to client_encoding, namely, suppose you call some function or\n> procedure or whatever and it changes client_encoding on your behalf\n> and now your communication with the server is all screwed up. That\n> seems very unpleasant. Yet it's also existing behavior. I think one\n> could conclude on these facts either that (a) client_encoding is fine\n> and the problems with controlling behavior using that kind of\n> mechanism are mostly theoretical or (b) that we messed up with\n> client_encoding and shouldn't add any more mistakes of the same ilk or\n> (c) that we should really be looking at redesigning the way\n> client_encoding works, too.\n\nWith my PgBouncer maintainer hat on: I think the GUC approach would be\nquite alright, i.e. option (a). The nice thing is that it would be\nvery simple to make it work with connection poolers, because the same\napproach could be reused that is currently used for client_encoding.\nNOTE: This does require that the new GUC has the GUC_REPORT flag set\n(just like client_encoding). By adding the GUC_REPORT flag clients\ncould also take into account any changes to the setting even when they\ndid not change it themselves (simplest way to handle a change would be\nby throwing an error and closing the connection).\n\nTo clarify how PgBouncer currently handles client_encoding: For each\nclient PgBouncer keeps track of the current value for a list of GUCs,\none of which is client_encoding. This is done by listening for the\nParameterStatus responses it gets from the server while the client is\nconnected. 
Then if later a client is assigned another server\nconnection, and that server has different values for (some of) these\nGUCs, before actually forwarding the client its query some SET\ncommands are sent to correctly set the GUCs.\n\nThe resultFormat = 3 trick might be nice for backwards compatibility\nof clients. That way old clients would continue to get text or binary\noutput even when the new GUC is set. To be clear resultFormat=3 would\nmean: Use binary format when the new GUC indicates that it should.\nUpthread I see that Dave mentioned that this would require an extra\nDescribe, but I don't understand why. If you set 3 for all columns and\nyou know the value of the GUC, then you know which columns will be\nencoded in binary.\n\n\n",
"msg_date": "Mon, 9 Oct 2023 17:08:53 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
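The handoff mechanism Jelte describes can be sketched roughly as follows (a simplified illustration, not PgBouncer's actual code; the function and variable names are invented): the pooler remembers each client's tracked GUC values from ParameterStatus messages, and before reusing a server connection it issues SET commands for any values that differ.

```python
TRACKED_GUCS = {"client_encoding", "DateStyle", "TimeZone"}  # example subset

def handoff_sets(client_gucs, server_gucs):
    # Return the SET statements a pooler would send before forwarding a
    # client's query on a server connection whose GUC state differs.
    stmts = []
    for name, value in client_gucs.items():
        if name in TRACKED_GUCS and server_gucs.get(name) != value:
            stmts.append("SET %s = '%s'" % (name, value.replace("'", "''")))
            server_gucs[name] = value  # server state now matches the client
    return stmts

client = {"client_encoding": "UTF8"}
server = {"client_encoding": "LATIN1"}
assert handoff_sets(client, server) == ["SET client_encoding = 'UTF8'"]
assert handoff_sets(client, server) == []  # in sync after the first handoff
```

A binary-format GUC with GUC_REPORT set could be folded into the same tracked set, which is why the client_encoding precedent matters so much for poolers.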
{
"msg_contents": "On Fri, 6 Oct 2023 at 13:11, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 04.10.23 20:30, Dave Cramer wrote:\n> > We need to\n> > figure out what is the right way to globally identify types, like\n> > either\n> > by fully-qualified name, by base name, some combination, how does it\n> > work with extensions, or do we need a new mechanism like UUIDs. I\n> > think\n> > that is something we need to work out, no matter which protocol\n> > mechanism we end up using.\n> >\n> >\n> > So how is this different than the GUC that I proposed ?\n>\n> The last patch I see from you in this thread uses OIDs, which I have\n> argued is not the right solution.\n\nSince the protocol already returns OIDs in the ParameterDescription\nand RowDescription messages I don't see why using OIDs for this GUC\nwould cause any additional problems. Clients already need to know OIDs\nand how to encode/decode them. So I don't see a big reason why we\nshould allow passing in \"schema\".\"type\" as well. Clients already need\na mapping from typename to OID for user defined types to be able to\nparse ParameterDescription and RowDescription messages.\n\nWith my Citus hat on: I would very much like something like the UUID\nor typename approach. With Citus the same user defined type can have\ndifferent OIDs on each of the servers in the cluster. So it sounds\nlike currently using a transaction pooler that does load balancing\nacross the workers in the cluster would cause issues for user defined\ntypes. Having a cluster global unique identifier for a type within a\ndatabase would be able to solve those issues. But that would require\nthat the protocol actually sent those cluster global unique\nidentifiers instead of OIDs. As far as I can tell similar issues would\nbe present with zero-downtime upgrades using pg_upgrade + logical\nreplication, and probably also in solutions like BDR. i.e. 
this is an\nissue when clients get transparently re-connected to a new host where\nan OIDs of user defined types might be different.\n\nSo I think OIDs are a good choice for the newly proposed GUC, because\nthat's what the protocol uses currently. But I do think it would be\ngood to keep in mind what it would look like if we'd change the\nprotocol to report and accept UUIDs/typenames instead of OIDs.\nUUIDs/typenames and OIDs have a clearly different string\nrepresentation though. So, I think we could easily expand the new GUC\nto support both OIDs and UUIDs/typenames when we change the protocol\nto do so too, even when supporting just OIDs now.\n\n\n",
"msg_date": "Mon, 9 Oct 2023 17:08:55 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
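Since the argument above leans on the fact that ParameterDescription and RowDescription already expose type OIDs, here is a minimal sketch of how a client pulls the per-column type OID and format code out of a RowDescription body (field layout per the documented v3 wire protocol; error handling omitted).

```python
import struct

def parse_row_description(body):
    # Parse a RowDescription ('T') message body (type byte and length
    # word already stripped) into (name, type OID, format code) tuples.
    nfields, = struct.unpack_from("!H", body, 0)
    pos, fields = 2, []
    for _ in range(nfields):
        end = body.index(b"\x00", pos)      # field name is NUL-terminated
        name = body[pos:end].decode("utf-8")
        pos = end + 1
        # table OID, attnum, type OID, typlen, typmod, format code
        _table, _attnum, type_oid, _typlen, _typmod, fmt = \
            struct.unpack_from("!IhIhih", body, pos)
        pos += 18
        fields.append((name, type_oid, fmt))
    return fields

# One int4 column named "id" (OID 23), sent in text format (0):
body = struct.pack("!H", 1) + b"id\x00" + struct.pack("!IhIhih", 0, 0, 23, 4, -1, 0)
assert parse_row_description(body) == [("id", 23, 0)]
```

The OID arrives as a bare number with no schema or database context, which is exactly what makes it ambiguous across the Citus-style multi-server setups discussed here.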
{
"msg_contents": "On Mon, Oct 9, 2023 at 11:09 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Since the protocol already returns OIDs in the ParameterDescription\n> and RowDescription messages I don't see why using OIDs for this GUC\n> would cause any additional problems.\n\n...but then...\n\n> With Citus the same user defined type can have\n> different OIDs on each of the servers in the cluster.\n\nI realize that your intention here may be to say that this is not an\n*additional* problem but one we have already. But it seems like one\nthat we ought to be trying to solve, rather than propagating a\nproblematic solution into more places.\n\nDecisions we make about the wire protocol are some of the most\nlong-lasting and painful decisions we make, right up there with the\non-disk format. Maybe worse, in some ways.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 14:59:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 15:00, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Oct 9, 2023 at 11:09 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> > Since the protocol already returns OIDs in the ParameterDescription\n> > and RowDescription messages I don't see why using OIDs for this GUC\n> > would cause any additional problems.\n>\n> ...but then...\n>\n> > With Citus the same user defined type can have\n> > different OIDs on each of the servers in the cluster.\n>\n> I realize that your intention here may be to say that this is not an\n> *additional* problem but one we have already. But it seems like one\n> that we ought to be trying to solve, rather than propagating a\n> problematic solution into more places.\n>\n> Decisions we make about the wire protocol are some of the most\n> long-lasting and painful decisions we make, right up there with the\n> on-disk format. Maybe worse, in some ways.\n>\n\nSo if we use <schema>.<type> would it be possible to have something like\n<builtin> which represents a set of well known types?\nMy goal here is to reduce the overhead of naming all the types the client\nwants in binary. The list of well known types is pretty long.\nAdditionally we could have a shorthand for removing a well known type.\n\nDave\n\nOn Mon, 9 Oct 2023 at 15:00, Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Oct 9, 2023 at 11:09 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Since the protocol already returns OIDs in the ParameterDescription\n> and RowDescription messages I don't see why using OIDs for this GUC\n> would cause any additional problems.\n\n...but then...\n\n> With Citus the same user defined type can have\n> different OIDs on each of the servers in the cluster.\n\nI realize that your intention here may be to say that this is not an\n*additional* problem but one we have already. 
But it seems like one\nthat we ought to be trying to solve, rather than propagating a\nproblematic solution into more places.\n\nDecisions we make about the wire protocol are some of the most\nlong-lasting and painful decisions we make, right up there with the\non-disk format. Maybe worse, in some ways.So if we use <schema>.<type> would it be possible to have something like <builtin> which represents a set of well known types? My goal here is to reduce the overhead of naming all the types the client wants in binary. The list of well known types is pretty long.Additionally we could have a shorthand for removing a well known type. Dave",
"msg_date": "Mon, 9 Oct 2023 15:08:28 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
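To make Dave's shorthand idea concrete, a client- or server-side expansion might look like the sketch below. The `<builtin>` token, the contents of the well-known set, and the `-` removal prefix are all hypothetical, invented here for illustration, not anything PostgreSQL defines.

```python
# Hypothetical: the token, the prefix, and this three-type "well-known"
# set are invented for illustration only.
WELL_KNOWN_BINARY_TYPES = {
    "pg_catalog.int4", "pg_catalog.int8", "pg_catalog.timestamptz"}

def expand_format_setting(spec):
    # Expand a comma-separated type list, where '<builtin>' stands for
    # the well-known set and a '-' prefix removes a type from the result.
    result = set()
    for item in (s.strip() for s in spec.split(",")):
        if item == "<builtin>":
            result |= WELL_KNOWN_BINARY_TYPES
        elif item.startswith("-"):
            result.discard(item[1:])
        else:
            result.add(item)
    return result

assert expand_format_setting("<builtin>,-pg_catalog.int8") == {
    "pg_catalog.int4", "pg_catalog.timestamptz"}
```

This keeps the SET command short for the common case while still letting a client opt out of individual types.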
{
"msg_contents": "On Wed, 2023-10-04 at 15:10 -0400, Robert Haas wrote:\n> I hadn't really considered client_encoding as a precedent for this\n> setting. A lot of my discomfort with the proposed mechanism also\n> applies to client_encoding, namely, suppose you call some function or\n> procedure or whatever and it changes client_encoding on your behalf\n> and now your communication with the server is all screwed up.\n\nThis may have some security implications, but we've had lots of\ndiscussion about the general topic of executing malicious code, and the\nability to mess with the on-the-wire formats might not be any worse\nthan what can already happen. (Though expanding it to binary formats\nmight slightly increase the attack surface area.)\n\n> That\n> seems very unpleasant. Yet it's also existing behavior.\n\nThe binary format setting is better in some ways and worse in other\nways.\n\nFor text encoding, usually it's expecting a single encoding and so a\nsingle setting at the start of the session makes sense. For binary\nformats, the client is likely to support some values in binary and\nothers not; and user-defined types make it even messier.\n\nOn the other hand, at least the results are marked as being binary\nformat, so if something unexpected happens, a well-written client is\nmore likely to see that something went wrong. For text encoding, the\nclient would have to be a bit more defensive.\n\nAnother thing to consider is that using a GUC for binary formats is a\nprotocol change in a way that client_encoding is not. The existing\ndocumentation for the protocol already specifies when binary formats\nwill be used, and a GUC would change that behavior. 
We absolutely would\nneed to update the documentation, and clients (like psql) really should\nbe updated.\n\n> I think one\n> could conclude on these facts either that (a) client_encoding is fine\n> and the problems with controlling behavior using that kind of\n> mechanism are mostly theoretical or \n\nI'm not clear on the exact rules for a protocol version bump and why a\nGUC helps us avoid one. If we have a binary_formats GUC, the client\nwould need to know the server version and check that it's >=17 before\nsending the \"SET binary_formats='...'\" command, right? What's the\ndifference between that and making it an explicit protocol message that\nonly >=17 understand?\n\nIn any case, I think clients and connection poolers can work around the\nproblems, and they are mostly minor in practice, but I wouldn't call\nthem \"theoretical\". If there's enough utility in the binary_formats\nparameter, we can decide to put up with the problems; which is\ndifferent than saying there aren't any.\n\n> (b) that we messed up with\n> client_encoding and shouldn't add any more mistakes of the same ilk\n> or\n> (c) that we should really be looking at redesigning the way\n> client_encoding works, too.\n\n(b) doesn't seem like a very helpful perspective without some ideas\ntoward (c). I think (c) is worth discussing but we don't have to block\non it.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Mon, 09 Oct 2023 13:25:32 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 21:00, Robert Haas <robertmhaas@gmail.com> wrote:\n> ...but then...\n>\n> > With Citus the same user defined type can have\n> > different OIDs on each of the servers in the cluster.\n>\n> I realize that your intention here may be to say that this is not an\n> *additional* problem but one we have already. But it seems like one\n> that we ought to be trying to solve, rather than propagating a\n> problematic solution into more places.\n\nYes, I probably should have emphasized the word *additional*. i.e.\nstarting from scratch I wouldn't use OIDs in this GUC nor in\nParameterDescription or RowDescription, but blocking the addition of\nthis GUC on addressing that seems unnecessary. When we fix it we can\nfix this too. I'd rather use OIDs (with all their problems)\nconsistently now for communication of types with regards to protocol\nrelated things. Then we can at some point change all places in bulk to\nsome better identifier than OIDs.\n\n> Decisions we make about the wire protocol are some of the most\n> long-lasting and painful decisions we make, right up there with the\n> on-disk format. Maybe worse, in some ways.\n\nYes, I agree. I just don't think using OIDs makes changing the\nprotocol in this regard any less painful than it already is currently.\n\n\n",
"msg_date": "Mon, 9 Oct 2023 22:27:15 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 22:25, Jeff Davis <pgsql@j-davis.com> wrote:\n> We absolutely would\n> need to update the documentation, and clients (like psql) really should\n> be updated.\n\n+1\n\n> > I think one\n> > could conclude on these facts either that (a) client_encoding is fine\n> > and the problems with controlling behavior using that kind of\n> > mechanism are mostly theoretical or\n>\n> I'm not clear on the exact rules for a protocol version bump and why a\n> GUC helps us avoid one. If we have a binary_formats GUC, the client\n> would need to know the server version and check that it's >=17 before\n> sending the \"SET binary_formats='...'\" commmand, right?\n\nI agree that we'd probably still want to do a protocol minor version\nbump. FYI there is another thread trying to introduce protocol change\nwhich needs a minor version bump. Patch number 0003 in that patchset\nis meant to actually make libpq handle minor version increases\ncorrectly. If we need a version bump than that would be useful[1].\n\n\n> What's the\n> difference between that and making it an explicit protocol message that\n> only >=17 understand?\n\nHonestly I think the main difference is the need to introduce this\nexplicit protocol message. If we do, I think it might be best to have\nthis be a way of setting a GUC at the Protocol level, and expand the\nGucContext enum to have a way to disallow setting it from SQL (e.g.\nPGC_PROTOCOL), while still allowing PgBouncer (or other poolers) to\nchange the GUC as part of the connection handoff, in a way that's\nsimilar to what's being done for client_encoding now. We might even\nwant to make client_encoding PGC_PROTOCOL too (eventually).\n\nActually, for connection poolers there's other reasons to want to set\nGUC values at the protocol level instead of SQL. Because the value of\nthe ParameterStatus response is sadly not a valid SQL string... 
That's\nwhy in PgBouncer we have to re-quote the value [2], which is a problem\nfor any GUC_LIST_QUOTE type, which search_path is. This GUC_LIST_QUOTE\nlogic is actually not completely correct in PgBouncer and only handles\n\"\" (empty search_path), because for search_path that's the only\nreasonable problematic case that people might hit (e.g. truncating to\nNAMELEN is another problem, but elements in search_path should already\nbe at most NAMELEN). But still it would be good not to have to worry\nabout that. And being able to send the value in ParameterStatus back\nverbatim to the server would be quite helpful for PgBouncer.\n\n[1]: https://www.postgresql.org/message-id/flat/CAFj8pRAX48WH5Y6BbqnZbUSzmtEaQZ22rY_6cYw%3DE9QkoVvL0A%40mail.gmail.com#643c91f84ae33b316c0fed64e19c8e49\n[2]: https://github.com/pgbouncer/pgbouncer/blob/60708022d5b934fa53c51849b9f02d87a7881b11/src/varcache.c#L172-L183\n\n\n",
"msg_date": "Mon, 9 Oct 2023 23:02:09 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 21:08, Dave Cramer <davecramer@gmail.com> wrote:\n> So if we use <schema>.<type> would it be possible to have something like <builtin> which represents a set of well known types?\n> My goal here is to reduce the overhead of naming all the types the client wants in binary. The list of well known types is pretty long.\n> Additionally we could have a shorthand for removing a well known type.\n\nYou're only setting this once in the lifetime of the connection right,\ni.e. right at the start (although pgbouncer could set it once per\ntransaction in the worst case). It seems like it shouldn't really\nmatter much to optimize the size of the \"SET format_binary=...\"\ncommand, I'd expect it to be at most 1 kilobyte. I'm not super opposed\nto having a shorthand for some of the most commonly wanted built-in\ntypes, but then we'd need to decide on what those are, which would add\neven more discussion/bikeshedding to this thread. I'm not sure the win\nin size is worth that effort.\n\n\n",
"msg_date": "Mon, 9 Oct 2023 23:11:25 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 9 Oct 2023 at 17:11, Jelte Fennema <postgres@jeltef.nl> wrote:\n\n> On Mon, 9 Oct 2023 at 21:08, Dave Cramer <davecramer@gmail.com> wrote:\n> > So if we use <schema>.<type> would it be possible to have something like\n> <builtin> which represents a set of well known types?\n> > My goal here is to reduce the overhead of naming all the types the\n> client wants in binary. The list of well known types is pretty long.\n> > Additionally we could have a shorthand for removing a well known type.\n>\n> You're only setting this once in the lifetime of the connection right,\n>\n\nCorrect\n\n> i.e. right at the start (although pgbouncer could set it once per\n> transaction in the worst case). It seems like it shouldn't really\n> matter much to optimize the size of the \"SET format_binary=...\"\n> command, I'd expect it to be at most 1 kilobyte. I'm not super opposed\n> to having a shorthand for some of the most commonly wanted built-in\n> types, but then we'd need to decide on what those are, which would add\n> even more discussion/bikeshedding to this thread. I'm not sure the win\n> in size is worth that effort.\n>\nIt's worth the effort if we use schema.typename, if we use oids then I'm\nnot that invested in this approach.\n\nDave\n\nOn Mon, 9 Oct 2023 at 17:11, Jelte Fennema <postgres@jeltef.nl> wrote:On Mon, 9 Oct 2023 at 21:08, Dave Cramer <davecramer@gmail.com> wrote:\n> So if we use <schema>.<type> would it be possible to have something like <builtin> which represents a set of well known types?\n> My goal here is to reduce the overhead of naming all the types the client wants in binary. The list of well known types is pretty long.\n> Additionally we could have a shorthand for removing a well known type.\n\nYou're only setting this once in the lifetime of the connection right,Correct \ni.e. right at the start (although pgbouncer could set it once per\ntransaction in the worst case). 
It seems like it shouldn't really\nmatter much to optimize the size of the \"SET format_binary=...\"\ncommand, I'd expect it to be at most 1 kilobyte. I'm not super opposed\nto having a shorthand for some of the most commonly wanted built-in\ntypes, but then we'd need to decide on what those are, which would add\neven more discussion/bikeshedding to this thread. I'm not sure the win\nin size is worth that effort.It's worth the effort if we use schema.typename, if we use oids then I'm not that invested in this approach.Dave",
"msg_date": "Tue, 10 Oct 2023 08:24:28 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 4:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Another thing to consider is that using a GUC for binary formats is a\n> protocol change in a way that client_encoding is not. The existing\n> documentation for the protocol already specifies when binary formats\n> will be used, and a GUC would change that behavior. We absolutely would\n> need to update the documentation, and clients (like psql) really should\n> be updated.\n\nI think the idea of using a new parameterFormat value is a good one.\nLet 0 and 1 continue to mean what they mean, and let clients opt in to\nthe new mechanism if they're aware of it.\n\nI think it's a pretty bad idea to dump new protocol behavior on\nclients who have in no way indicated that they know about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:25:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, 10 Oct 2023 at 10:25, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Oct 9, 2023 at 4:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Another thing to consider is that using a GUC for binary formats is a\n> > protocol change in a way that client_encoding is not. The existing\n> > documentation for the protocol already specifies when binary formats\n> > will be used, and a GUC would change that behavior. We absolutely would\n> > need to update the documentation, and clients (like psql) really should\n> > be updated.\n>\n> I think the idea of using a new parameterFormat value is a good one.\n> Let 0 and 1 continue to mean what they mean, and let clients opt in to\n> the new mechanism if they're aware of it.\n>\n\nCorrect me if I am wrong, but the client has to request this. So I'm not\nsure how we would be surprised ?\n\nDave\n\nOn Tue, 10 Oct 2023 at 10:25, Robert Haas <robertmhaas@gmail.com> wrote:On Mon, Oct 9, 2023 at 4:25 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Another thing to consider is that using a GUC for binary formats is a\n> protocol change in a way that client_encoding is not. The existing\n> documentation for the protocol already specifies when binary formats\n> will be used, and a GUC would change that behavior. We absolutely would\n> need to update the documentation, and clients (like psql) really should\n> be updated.\n\nI think the idea of using a new parameterFormat value is a good one.\nLet 0 and 1 continue to mean what they mean, and let clients opt in to\nthe new mechanism if they're aware of it.Correct me if I am wrong, but the client has to request this. So I'm not sure how we would be surprised ?Dave",
"msg_date": "Tue, 10 Oct 2023 10:30:04 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, Oct 9, 2023 at 5:02 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Honestly I think the main difference is the need to introduce this\n> explicit protocol message. If we do, I think it might be best to have\n> this be a way of setting a GUC at the Protocol level, and expand the\n> GucContext enum to have a way to disallow setting it from SQL (e.g.\n> PGC_PROTOCOL), while still allowing PgBouncer (or other poolers) to\n> change the GUC as part of the connection handoff, in a way that's\n> similar to what's being done for client_encoding now. We might even\n> want to make client_encoding PGC_PROTOCOL too (eventually).\n\nThat's an idea worth considering, IMHO. I'm not saying it's the best\nor only idea, but it seems to have some real advantages.\n\nThe pooler case is actually a really important one here. If the client\nis connected directly to the server, the difference between whether\nsomething is controlled via the protocol or via SQL is just whether it\ncould be set inside some function. I think that's a thing to be\nconcerned about, but when you add the pooler to the equation then you\nhave the additional question of whether a certain value should be\ncontrolled by the end-client or by the pooler. A really obvious\nexample of where you might want the latter behavior is\nsession_authorization. You'd like the pooler to be able to set that in\nsuch a way that the end-client can't tinker with it by any means.\nRight now we don't have a way to do that, but maybe someday we will.\nThis issue is perhaps a bit less critical, but it still feels bad if\nthe end-client can effectively pull the rug out from under the\npooler's wire protocol expectations. I'm not exactly sure what the\nright policy is here concretely, so I'm not ready to argue for exactly\nwhat we should do just yet, but I do want to argue that we should be\nthinking carefully about these issues.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:33:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Tue, Oct 10, 2023 at 10:30 AM Dave Cramer <davecramer@gmail.com> wrote:\n> Correct me if I am wrong, but the client has to request this. So I'm not sure how we would be surprised ?\n\nConsider an application, a connection pooler, and a stored procedure\nor function on the server. If this is controlled by a GUC, any of them\ncould set it at any point in the session. That could lead to the\napplication and/or the connection pooler being out of step with the\nserver behavior.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Oct 2023 10:35:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Mon, 31 Jul 2023 at 21:58, Dave Cramer <davecramer@gmail.com> wrote:\n>\n>\n> Dave Cramer\n>\n>\n> On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>> > On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com> wrote:\n>>\n>> > Patch attached with comments removed\n>>\n>> This patch no longer applies, please submit a rebased version on top of HEAD.\n>\n>\n> Rebased see attached\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\nfba2112b1569fd001a9e54dfdd73fd3cb8f16140 ===\n=== applying patch ./0001-Created-protocol.h.patch\npatching file src/backend/access/common/printsimple.c\nHunk #1 succeeded at 22 with fuzz 2 (offset 1 line).\nHunk #2 FAILED at 33.\nHunk #3 FAILED at 66.\n2 out of 3 hunks FAILED -- saving rejects to file\nsrc/backend/access/common/printsimple.c.rej\npatching file src/backend/access/transam/parallel.c\nHunk #1 succeeded at 34 (offset 1 line).\nHunk #2 FAILED at 1128.\nHunk #3 FAILED at 1138.\nHunk #4 FAILED at 1184.\nHunk #5 succeeded at 1205 (offset 4 lines).\nHunk #6 FAILED at 1218.\nHunk #7 FAILED at 1373.\nHunk #8 FAILED at 1551.\n6 out of 8 hunks FAILED -- saving rejects to file\nsrc/backend/access/transam/parallel.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_3777.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 27 Jan 2024 07:45:03 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
},
{
"msg_contents": "On Sat, 27 Jan 2024 at 07:45, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 31 Jul 2023 at 21:58, Dave Cramer <davecramer@gmail.com> wrote:\n> >\n> >\n> > Dave Cramer\n> >\n> >\n> > On Mon, 10 Jul 2023 at 03:56, Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >> > On 25 Apr 2023, at 16:47, Dave Cramer <davecramer@gmail.com> wrote:\n> >>\n> >> > Patch attached with comments removed\n> >>\n> >> This patch no longer applies, please submit a rebased version on top of HEAD.\n> >\n> >\n> > Rebased see attached\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n> === Applying patches on top of PostgreSQL commit ID\n> fba2112b1569fd001a9e54dfdd73fd3cb8f16140 ===\n> === applying patch ./0001-Created-protocol.h.patch\n> patching file src/backend/access/common/printsimple.c\n> Hunk #1 succeeded at 22 with fuzz 2 (offset 1 line).\n> Hunk #2 FAILED at 33.\n> Hunk #3 FAILED at 66.\n> 2 out of 3 hunks FAILED -- saving rejects to file\n> src/backend/access/common/printsimple.c.rej\n> patching file src/backend/access/transam/parallel.c\n> Hunk #1 succeeded at 34 (offset 1 line).\n> Hunk #2 FAILED at 1128.\n> Hunk #3 FAILED at 1138.\n> Hunk #4 FAILED at 1184.\n> Hunk #5 succeeded at 1205 (offset 4 lines).\n> Hunk #6 FAILED at 1218.\n> Hunk #7 FAILED at 1373.\n> Hunk #8 FAILED at 1551.\n> 6 out of 8 hunks FAILED -- saving rejects to file\n> src/backend/access/transam/parallel.c.rej\n>\n> Please post an updated version for the same.\n>\n> [1] - http://cfbot.cputube.org/patch_46_3777.log\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 21:05:15 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for comment on setting binary format output per session"
}
] |
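The ParameterStatus re-quoting problem Jelte raises in this thread — a pooler receives GUC values unquoted and must turn them back into valid SQL literals before replaying them in a SET command — can be sketched as follows. This is an illustrative Python model, not PgBouncer's actual C implementation in varcache.c; the function names here are hypothetical:

```python
def quote_guc_value(value: str) -> str:
    """Turn a raw ParameterStatus value into a valid SQL string
    literal: double embedded single quotes, and use the E'' form
    when backslashes are present so they survive verbatim."""
    escaped = value.replace("\\", "\\\\").replace("'", "''")
    prefix = "E" if "\\" in value else ""
    return f"{prefix}'{escaped}'"


def make_set_command(name: str, value: str) -> str:
    return f"SET {name} = {quote_guc_value(value)};"


# An empty search_path is reported by ParameterStatus as "" -- two
# double-quote characters -- which is not a valid literal as-is,
# which is the GUC_LIST_QUOTE case PgBouncer special-cases.
assert make_set_command("search_path", '""') == 'SET search_path = \'""\';'
assert quote_guc_value("it's") == "'it''s'"
```

Being able to replay the value verbatim at the protocol level (as proposed above) would make this quoting step unnecessary.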
[
{
"msg_contents": "While preparing 3dfae91f7 I couldn't help noticing that what\npsql-ref.sgml has to say about \\df's \"function type\" column:\n\n ... and function types, which are classified as <quote>agg</quote>\n (aggregate), <quote>normal</quote>, <quote>procedure</quote>, <quote>trigger</quote>, or <quote>window</quote>.\n\nno longer corresponds very well to what the code actually does,\nwhen dealing with a v11-or-later server:\n\n \" CASE p.prokind\\n\"\n \" WHEN 'a' THEN '%s'\\n\"\n \" WHEN 'w' THEN '%s'\\n\"\n \" WHEN 'p' THEN '%s'\\n\"\n \" ELSE '%s'\\n\"\n \" END as \\\"%s\\\"\",\n ...\n /* translator: \"agg\" is short for \"aggregate\" */\n gettext_noop(\"agg\"),\n gettext_noop(\"window\"),\n gettext_noop(\"proc\"),\n gettext_noop(\"func\"),\n gettext_noop(\"Type\"));\n\nI was going to just fix the docs to match the code, but removing\n\"trigger\" from the list seems very confusing, because the docs\ngo on to say\n\n To display only functions\n of specific type(s), add the corresponding letters <literal>a</literal>,\n <literal>n</literal>, <literal>p</literal>, <literal>t</literal>, or <literal>w</literal> to the command.\n\nand indeed filtering triggers in or out still seems to work.\nMoreover, if you are inspecting a pre-v11 server then you do\nstill get the old classification, which is bizarrely inconsistent.\n\nIt seems like we should either restore \"trigger\" as its own\ntype classification, or remove it from the list of properties\nyou can filter on, or adjust the docs to describe \"t\" as a\nspecial filter condition. I'm kind of inclined to the second\noption, because treating trigger as a different prokind sure\nseems like a wart. But back in 2009 people thought that was\na good idea; what is our opinion now?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Mar 2023 17:34:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Documentation of psql's \\df no longer matches reality"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 3:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> It seems like we should either restore \"trigger\" as its own\n> type classification, or remove it from the list of properties\n> you can filter on, or adjust the docs to describe \"t\" as a\n> special filter condition. I'm kind of inclined to the second\n> option, because treating trigger as a different prokind sure\n> seems like a wart. But back in 2009 people thought that was\n> a good idea; what is our opinion now?\n>\n>\nPersonally, I'd go for option 1, bring back the formal concept of a trigger\nfunction to this view. Admit the mistake and back-patch so we are\nconsistent again.\n\nOr, to improve things, \" \\df func_name - trigger \" should be made to\nprovide a pattern filter on the output type, in which case people could\nthen filter on any type they want, not just trigger. Incorporating\nset-returning functions into such a filtering mechanism would be a bonus\nworth striving for.\n\nBetween choices 2 and 3 above I'd go with 3 before 2. I can imagine the\nchange to label the output of \\dft as \"func\" would easily go unnoticed but\nremoving the existing filtering feature seems likely to draw valid\ncomplaints. If we had the more powerful alternative described above to\nreplace it with maybe I'd go for 2. Absent that it is a special case wart\nnecessitated by the lack of being able to readily specify the return type\nfilter in a manner similar to the existing input type filtering.\n\nDavid J.\n\nOn Thu, Mar 2, 2023 at 3:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:It seems like we should either restore \"trigger\" as its own\ntype classification, or remove it from the list of properties\nyou can filter on, or adjust the docs to describe \"t\" as a\nspecial filter condition. I'm kind of inclined to the second\noption, because treating trigger as a different prokind sure\nseems like a wart. 
But back in 2009 people thought that was\na good idea; what is our opinion now?Personally, I'd go for option 1, bring back the formal concept of a trigger function to this view. Admit the mistake and back-patch so we are consistent again.Or, to improve things, \" \\df func_name - trigger \" should be made to provide a pattern filter on the output type, in which case people could then filter on any type they want, not just trigger. Incorporating set-returning functions into such a filtering mechanism would be a bonus worth striving for.Between choices 2 and 3 above I'd go with 3 before 2. I can imagine the change to label the output of \\dft as \"func\" would easily go unnoticed but removing the existing filtering feature seems likely to draw valid complaints. If we had the more powerful alternative described above to replace it with maybe I'd go for 2. Absent that it is a special case wart necessitated by the lack of being able to readily specify the return type filter in a manner similar to the existing input type filtering.David J.",
"msg_date": "Tue, 1 Aug 2023 19:36:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation of psql's \\df no longer matches reality"
}
] |
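The version-dependent classification Tom describes can be modeled directly; the mismatch is that a trigger function gets its own label only against pre-v11 servers. A small sketch (the pre-v11 branch approximates the old query's logic rather than quoting it):

```python
def classify_prokind(prokind: str) -> str:
    """Mirrors the CASE expression psql sends to v11+ servers:
    trigger functions fall through to plain 'func'."""
    return {"a": "agg", "w": "window", "p": "proc"}.get(prokind, "func")


def classify_pre_v11(proisagg: bool, proiswindow: bool,
                     rettype: str) -> str:
    """Approximation of the pre-v11 classification, which labelled
    trigger functions separately based on their return type."""
    if proisagg:
        return "agg"
    if proiswindow:
        return "window"
    if rettype == "trigger":
        return "trigger"
    return "normal"


# The inconsistency: the same trigger function is shown as 'func'
# against a v11+ server but as 'trigger' against an older one.
assert classify_prokind("f") == "func"
assert classify_pre_v11(False, False, "trigger") == "trigger"
```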
[
{
"msg_contents": "Harden new test case against force_parallel_mode = regress.\n\nPer buildfarm: worker processes can't see a role created in\nthe current transaction.\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/98a88bc2bcd60e41ca70e2f1e13eee827e23eefb\n\nModified Files\n--------------\nsrc/test/regress/expected/psql.out | 3 ++-\nsrc/test/regress/sql/psql.sql | 3 ++-\n2 files changed, 4 insertions(+), 2 deletions(-)",
"msg_date": "Thu, 02 Mar 2023 22:47:34 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Harden new test case against force_parallel_mode = regress."
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Harden new test case against force_parallel_mode = regress.\n>\n> Per buildfarm: worker processes can't see a role created in\n> the current transaction.\n\nNow why would that happen? Surely the snapshot for each command is\npassed down from leader to worker, and the worker is not free to\ninvent a snapshot from nothing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Mar 2023 11:16:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Mar 2, 2023 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Per buildfarm: worker processes can't see a role created in\n>> the current transaction.\n\n> Now why would that happen? Surely the snapshot for each command is\n> passed down from leader to worker, and the worker is not free to\n> invent a snapshot from nothing.\n\nThe workers were failing at startup, eg (from [1]):\n\n+ERROR: role \"regress_psql_user\" does not exist\n+CONTEXT: while setting parameter \"session_authorization\" to \"regress_psql_user\"\n\nMaybe this says that worker startup needs to install the snapshot before\ndoing any catalog accesses? Anyway, I'd be happy to revert this test\nhack if you care to make the case work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 11:37:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 17:16, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 5:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Harden new test case against force_parallel_mode = regress.\n> >\n> > Per buildfarm: worker processes can't see a role created in\n> > the current transaction.\n>\n> Now why would that happen? Surely the snapshot for each command is\n> passed down from leader to worker, and the worker is not free to\n> invent a snapshot from nothing.\n\nProbably because we nitialize which user and database to use in the\nbackend before we load the parent process' snapshot:\n\nin ParallelWorkerMain (parallel.c, as of HEAD @ b6a0d469):\n\n /* Restore database connection. */\n BackgroundWorkerInitializeConnectionByOid(fps->database_id,\n fps->authenticated_user_id,\n 0);\n[...]\n\n /* Crank up a transaction state appropriate to a parallel worker. */\n tstatespace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_STATE, false);\n StartParallelWorkerTransaction(tstatespace);\n\n /* Restore combo CID state. */\n combocidspace = shm_toc_lookup(toc, PARALLEL_KEY_COMBO_CID, false);\n RestoreComboCIDState(combocidspace);\n\n-Matthias\n\n\n",
"msg_date": "Fri, 3 Mar 2023 17:38:08 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "I wrote:\n> The workers were failing at startup, eg (from [1]):\n\nargh, forgot to add the link:\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hippopotamus&dt=2023-03-02%2022%3A31%3A17\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 11:47:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The workers were failing at startup, eg (from [1]):\n>\n> +ERROR: role \"regress_psql_user\" does not exist\n> +CONTEXT: while setting parameter \"session_authorization\" to \"regress_psql_user\"\n>\n> Maybe this says that worker startup needs to install the snapshot before\n> doing any catalog accesses? Anyway, I'd be happy to revert this test\n> hack if you care to make the case work.\n\nOh, that's interesting (and sad). A parallel worker has a \"startup\ntransaction\" that is used to restore library and GUC state, and then\nafter that transaction commits, it starts up a new transaction that\nuses the same snapshot and settings as the transaction in the parallel\nleader. So the problem here is that the startup transaction can't see\nthe uncommitted work of some unrelated (as far as it knows)\ntransaction, and that prevents restoring the session_authorization\nGUC.\n\nThat startup transaction has broken stuff before, and it would be nice\nto get rid of it. Unfortunately, I don't remember right now why we\nneed it in the first place. I'm fairly sure that if you load the\nlibrary and GUC state without any transaction, that doesn't work,\nbecause a bunch of important processing gets skipped. And I think if\nyou try to do those things in the \"real\" transaction that fails for\nsome reason too, maybe that there's no guarantee that all the relevant\nGUCs can be changed at that point, but I'm fuzzy on the details at the\nmoment.\n\nSo I don't know how to fix this right now, but thanks for the details.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Mar 2023 12:40:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 3, 2023 at 11:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +ERROR: role \"regress_psql_user\" does not exist\n>> +CONTEXT: while setting parameter \"session_authorization\" to \"regress_psql_user\"\n\n> Oh, that's interesting (and sad). A parallel worker has a \"startup\n> transaction\" that is used to restore library and GUC state, and then\n> after that transaction commits, it starts up a new transaction that\n> uses the same snapshot and settings as the transaction in the parallel\n> leader. So the problem here is that the startup transaction can't see\n> the uncommitted work of some unrelated (as far as it knows)\n> transaction, and that prevents restoring the session_authorization\n> GUC.\n\nGot it.\n\n> That startup transaction has broken stuff before, and it would be nice\n> to get rid of it. Unfortunately, I don't remember right now why we\n> need it in the first place. I'm fairly sure that if you load the\n> library and GUC state without any transaction, that doesn't work,\n> because a bunch of important processing gets skipped. And I think if\n> you try to do those things in the \"real\" transaction that fails for\n> some reason too, maybe that there's no guarantee that all the relevant\n> GUCs can be changed at that point, but I'm fuzzy on the details at the\n> moment.\n\nCouldn't we install the leader's snapshot into both transactions?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 12:46:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 12:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Couldn't we install the leader's snapshot into both transactions?\n\nYeah, maybe that would Just Work. Not sure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Mar 2023 12:56:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 3, 2023 at 12:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Couldn't we install the leader's snapshot into both transactions?\n\n> Yeah, maybe that would Just Work. Not sure.\n\nWell, IIUC the worker is currently getting a brand new snapshot\nfor its startup transaction, which is exactly what you said upthread\nit should never do. Seems like that could have more failure modes\nthan just this one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Mar 2023 13:03:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Harden new test case against force_parallel_mode =\n regress."
}
] |
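The failure mode discussed in this thread — the worker's startup transaction takes a brand-new snapshot and therefore cannot see the leader's uncommitted role while restoring session_authorization — can be illustrated with a toy visibility check. This is a deliberately simplified model of snapshot visibility (real PostgreSQL visibility involves xmin/xmax horizons, subtransactions, and more):

```python
from dataclasses import dataclass
from typing import FrozenSet, Optional


@dataclass(frozen=True)
class Snapshot:
    # xids still in progress when the snapshot was taken; their
    # effects are invisible to readers using this snapshot.
    in_progress: FrozenSet[int]


def row_visible(creating_xid: int, snap: Snapshot,
                own_xid: Optional[int] = None) -> bool:
    if creating_xid == own_xid:
        return True  # a transaction always sees its own uncommitted work
    return creating_xid not in snap.in_progress


# Leader transaction (xid 100) created the role and has not committed.
ROLE_XID = 100

# The worker's startup transaction takes a *fresh* snapshot, in which
# xid 100 is in progress, and the worker does not run as xid 100 --
# so the role is invisible: "role ... does not exist".
startup_snap = Snapshot(in_progress=frozenset({ROLE_XID}))
assert not row_visible(ROLE_XID, startup_snap)

# The later parallel transaction restores the leader's transaction
# state, effectively running as xid 100, so the role is visible --
# which is why installing the leader's state earlier could help.
assert row_visible(ROLE_XID, startup_snap, own_xid=ROLE_XID)
```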
[
{
"msg_contents": "Hi,\n\nI wanted to use min/max aggregation functions for jsonb type and noticed\nthere is no functions for this type, meanwhile string/array types are\nsupported.\nIs there a concern about implementing support for jsonb in min/max?\n\njsonb is a byte array.\njson faces same limitations.\n\n\n-- \n\nBest regards,\nDaniil Iaitskov\n\nHi,I wanted to use min/max aggregation functions for jsonb type and noticed there is no functions for this type, meanwhile string/array types are supported.Is there a concern about implementing support for jsonb in min/max?jsonb is a byte array.json faces same limitations.-- Best regards,Daniil Iaitskov",
"msg_date": "Thu, 2 Mar 2023 23:24:50 -0500",
"msg_from": "Daneel Yaitskov <dyaitskov@gmail.com>",
"msg_from_op": true,
"msg_subject": "min/max aggregation for jsonb"
},
{
"msg_contents": "On Fri, 3 Mar 2023 at 23:17, Daneel Yaitskov <dyaitskov@gmail.com> wrote:\n> I wanted to use min/max aggregation functions for jsonb type and noticed\n> there is no functions for this type, meanwhile string/array types are supported.\n\nIt's not really clear to me how you'd want these to sort. If you just\nwant to sort by what the output that you see from the type's output\nfunction then you might get what you need by casting to text.\n\n> Is there a concern about implementing support for jsonb in min/max?\n\nI imagine a lack of any meaningful way of comparing two jsonb values\nto find out which is greater than the other is of some concern.\n\nDavid\n\n\n",
"msg_date": "Fri, 3 Mar 2023 23:41:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: min/max aggregation for jsonb"
},
{
"msg_contents": "Nonetheless PostgreSQL min/max functions don't work with JSON - array_agg\ndistinct does!\n\nI was working on an experimental napkin audit feature.\nIt rewrites a chain of SQL queries to thread through meta data about all\ncomputations contributed to every column.\nEvery data column gets a meta column with JSON.\nCalculating meta column for non aggregated column is trivial, because new\ncolumn relation with columns used for computation its is 1:1, but\nhistory of aggregated column is composed of a set values (each value has\npotentially different history, but usually it is the same).\nSo in case of aggregated column I had to collapse somehow a set of JSON\nvalues into a few.\n\nOriginal aggregating query:\nSELECT max(a) AS max_a FROM t\n\nThe query with audit meta data embedded:\nSELECT\n max(a) AS max_a,\n jsonb_build_object(\n 'q', 'SELECT max(a) AS max_a FROM t',\n 'o', jsonb_build_object(\n 'a', cast(array_to_json(array_agg( DISTINCT _meta_a)) AS\n\"jsonb\")))\n AS _meta_max_a\nFROM t\n\n\n\nOn Fri, Mar 3, 2023 at 5:41 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 3 Mar 2023 at 23:17, Daneel Yaitskov <dyaitskov@gmail.com> wrote:\n> > I wanted to use min/max aggregation functions for jsonb type and noticed\n> > there is no functions for this type, meanwhile string/array types are\n> supported.\n>\n> It's not really clear to me how you'd want these to sort. If you just\n> want to sort by what the output that you see from the type's output\n> function then you might get what you need by casting to text.\n>\n> > Is there a concern about implementing support for jsonb in min/max?\n>\n> I imagine a lack of any meaningful way of comparing two jsonb values\n> to find out which is greater than the other is of some concern.\n>\n> David\n>\n\n\n-- \n\nBest regards,\nDaniil Iaitskov\n\nNonetheless PostgreSQL min/max functions don't work with JSON - array_agg distinct does!I was working on an experimental napkin audit feature. 
It rewrites a chain of SQL queries to thread through meta data about all computations contributed to every column. Every data column gets a meta column with JSON.Calculating meta column for non aggregated column is trivial, because new column relation with columns used for computation its is 1:1, but history of aggregated column is composed of a set values (each value has potentially different history, but usually it is the same).So in case of aggregated column I had to collapse somehow a set of JSON values into a few.Original aggregating query:SELECT max(a) AS max_a FROM t The query with audit meta data embedded: SELECT max(a) AS max_a, jsonb_build_object( 'q', 'SELECT max(a) AS max_a FROM t', 'o', jsonb_build_object( 'a', cast(array_to_json(array_agg( DISTINCT _meta_a)) AS \"jsonb\"))) AS _meta_max_aFROM tOn Fri, Mar 3, 2023 at 5:41 AM David Rowley <dgrowleyml@gmail.com> wrote:On Fri, 3 Mar 2023 at 23:17, Daneel Yaitskov <dyaitskov@gmail.com> wrote:\n> I wanted to use min/max aggregation functions for jsonb type and noticed\n> there is no functions for this type, meanwhile string/array types are supported.\n\nIt's not really clear to me how you'd want these to sort. If you just\nwant to sort by what the output that you see from the type's output\nfunction then you might get what you need by casting to text.\n\n> Is there a concern about implementing support for jsonb in min/max?\n\nI imagine a lack of any meaningful way of comparing two jsonb values\nto find out which is greater than the other is of some concern.\n\nDavid\n-- Best regards,Daniil Iaitskov",
"msg_date": "Mon, 10 Apr 2023 10:17:48 -0400",
"msg_from": "Daneel Yaitskov <dyaitskov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: min/max aggregation for jsonb"
},
{
"msg_contents": "On 03.03.23 11:41, David Rowley wrote:\n> On Fri, 3 Mar 2023 at 23:17, Daneel Yaitskov <dyaitskov@gmail.com> wrote:\n>> I wanted to use min/max aggregation functions for jsonb type and noticed\n>> there is no functions for this type, meanwhile string/array types are supported.\n> \n> It's not really clear to me how you'd want these to sort. If you just\n> want to sort by what the output that you see from the type's output\n> function then you might get what you need by casting to text.\n> \n>> Is there a concern about implementing support for jsonb in min/max?\n> \n> I imagine a lack of any meaningful way of comparing two jsonb values\n> to find out which is greater than the other is of some concern.\n\nWe already have ordering operators and operator classes for jsonb, so \nsticking min/max aggregates around that should be pretty straightforward.\n\n\n\n",
"msg_date": "Tue, 2 May 2023 10:47:21 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: min/max aggregation for jsonb"
}
] |
[
{
"msg_contents": "in src/include/access/htup_details.h, I find out this:\r\n#define HEAP_XMAX_IS_MULTI 0x1000 /* t_xmax is a MultiXactId */\r\nwhat's MultiXactId? Can you give me a scenario to make this bit as 1?\r\n\r\n\r\njacktby@gmail.com\r\n\n\nin src/include/access/htup_details.h, I find out this:\n#define HEAP_XMAX_IS_MULTI 0x1000 /* t_xmax is a MultiXactId */what's MultiXactId? Can you give me a scenario to make this bit as 1?\njacktby@gmail.com",
"msg_date": "Fri, 3 Mar 2023 20:37:48 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "What's MultiXactId?"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a patch against master for $SUBJECT. It lacks documentation\nchanges and might have bugs, so please review if you're concerned\nabout this issue.\n\nTo recap, under CVE-2020-14349, Noah documented that untrusted users\nshouldn't own tables into which the system is performing logical\nreplication. Otherwise, the users could hook up triggers or default\nexpressions or whatever to those tables and they would execute with\nthe subscription owner's privileges, which would allow the table\nowners to escalate to superuser. However, I'm unsatisfied with just\ndocumenting the hazard, because I feel like almost everyone who uses\nlogical replication wants to do the exact thing that this\ndocumentation says you shouldn't do. If you don't use logical\nreplication or run everything as the superuser or just don't care\nabout security, well then you don't have any problem here, but\notherwise you probably do.\n\nThe proposed fix is to perform logical replication actions (SELECT,\nINSERT, UPDATE, DELETE, and TRUNCATE) as the user who owns the table\nrather than as the owner of the subscription. The session still runs\nas the subscription owner, but the active user ID is switched to the\ntable owner for the duration of each operation. To prevent table\nowners from doing tricky things to attack the subscription owner, we\nimpose SECURITY_RESTRICTED_OPERATION while running as the table owner.\nTo avoid inconveniencing users when this restriction adds no\nmeaningful security, we refrain from imposing\nSECURITY_RESTRICTED_OPERATION when the table owner can SET ROLE to the\nsubscription owner anyway. Such a user need not use logical\nreplication to break into the subscription owner's account: they have\naccess to it anyway.\n\nThere is also a possibility of an attack in the other direction. 
Maybe\nthe subscription owner would like to obtain the table owner's\npermissions, or at the very least, use logical replication as a\nvehicle to perform operations they can't perform directly. A malicious\nsubscription owner could hook up logical replication to a table into\nwhich the table owner doesn't want replication to occur. To block such\nattacks, the patch requires that the subscription owner have the\nability to SET ROLE to each table owner. If the subscription owner is\na superuser, which is usual, this will be automatic. Otherwise, the\nsuperuser will need to grant to the subscription owner the roles that\nown relevant tables. This can usefully serve as a kind of access\ncontrol to make sure that the subscription doesn't touch any tables\nother than the ones it's supposed to be touching: just make those\ntables owned by a different user and don't grant them to the\nsubscription owner. Previously, we provided no way at all of\ncontrolling the tables that replication can target.\n\nThis fix interacts in an interesting way with Mark Dilger's work,\ncommitted by Jeff Davis, to make logical replication respect table\npermissions. I initially thought that with this change, that commit\nwould end up just being reverted, with the permissions scheme\ndescribed above replacing the existing one. However, I then realized\nthat it's still good to perform those checks. Normally, a table owner\ncan do any DML operation on a table they own, so those checks will\nnever fail. However, if a table owner has revoked their ability to,\nsay, INSERT into one of their own tables, then logical replication\nshouldn't bypass that and perform the INSERT anyway. So I now think\nthat the checks added by that commit complement the ones added by this\nproposed patch, rather than competing with them.\n\nIt is unclear to me whether we should try to back-port this. It's\ndefinitely a behavior change, and changing the behavior in the back\nbranches is not a nice thing to do. 
On the other hand, at least in my\nopinion, the security consequences of the status quo are pretty dire.\nI tend to suspect that a really high percentage of people who are\nusing logical replication at all are vulnerable to this, and lots of\npeople having a way to escalate to superuser isn't good.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
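To make the access-control pattern described above concrete, here is a sketch of the configuration a superuser would perform under this proposal (role and table names are illustrative, and this is existing syntax, not anything new):

```sql
-- The subscription owner must be able to SET ROLE to each table owner.
-- For a non-superuser subscription owner, grant the owning roles:
GRANT data_owner TO subscription_owner;

-- To keep a table out of the subscription's reach, have it owned by a
-- role that has not been granted to subscription_owner:
ALTER TABLE private_tab OWNER TO other_owner;
```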
"msg_date": "Fri, 3 Mar 2023 11:02:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "running logical replication as the subscription owner"
},
{
"msg_contents": "I'm definitely in favor of making it easier to use logical replication\nin a safe manner. In Citus we need to logically replicate and we're\ncurrently using quite some nasty and undocumented hacks to do so:\nWe're creating a subscription per table owner, where each subscription\nis owned by a temporary user that has the same permissions as the\ntable owner. These temporary users were originally superusers, because\notherwise we cannot make them subscription owners, but once assigning\na subscription to them we take away the superuser permissions from\nthem[1]. And we also need to hook into ALTER/DELETE subscription\ncommands to make sure that these temporary owners cannot edit their\nown subscription[2].\n\nGetting this right was not easy. And even it has the serious downside\nthat we need multiple subscriptions/replication slots which causes\nextra complexity in various ways and it eats much more aggressively\ninto the replication slot limits than we'd like. Having one\nsubscription that could apply into tables that were owned by multiple\nusers in a safe way would make this sooo much easier.\n\n[1]: https://github.com/citusdata/citus/blob/main/src/backend/distributed/replication/multi_logical_replication.c#L1487-L1573\n[2]: https://github.com/citusdata/citus/blob/main/src/backend/distributed/commands/utility_hook.c#L455-L487\n\n\n",
"msg_date": "Sat, 4 Mar 2023 00:57:15 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 3, 2023 at 6:57 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> I'm definitely in favor of making it easier to use logical replication\n> in a safe manner.\n\nCool.\n\n> In Citus we need to logically replicate and we're\n> currently using quite some nasty and undocumented hacks to do so:\n> We're creating a subscription per table owner, where each subscription\n> is owned by a temporary user that has the same permissions as the\n> table owner. These temporary users were originally superusers, because\n> otherwise we cannot make them subscription owners, but once assigning\n> a subscription to them we take away the superuser permissions from\n> them[1]. And we also need to hook into ALTER/DELETE subscription\n> commands to make sure that these temporary owners cannot edit their\n> own subscription[2].\n>\n> Getting this right was not easy. And even it has the serious downside\n> that we need multiple subscriptions/replication slots which causes\n> extra complexity in various ways and it eats much more aggressively\n> into the replication slot limits than we'd like. Having one\n> subscription that could apply into tables that were owned by multiple\n> users in a safe way would make this sooo much easier.\n\nYeah. As Andres pointed out somewhere or other, that also means you're\ndecoding the WAL once per user instead of just once. I'm surprised\nthat hasn't been cost-prohibitive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:10:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "> Yeah. As Andres pointed out somewhere or other, that also means you're\n> decoding the WAL once per user instead of just once. I'm surprised\n> that hasn't been cost-prohibitive.\n\nWe'd definitely prefer to have one subscription and do the decoding\nonly once. But we haven't run into big perf issues with the current\nsetup so far. We use it for non-blocking copying of shards (regular PG\ntables under the hood). Most of the time is usually spent in the\ninitial copy phase, not the catchup. And also in practice our users\noften only have one table owning user (and more than 5 table owning\nusers is extremely rare).\n\n\n",
"msg_date": "Mon, 6 Mar 2023 18:28:28 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, 2023-03-03 at 11:02 -0500, Robert Haas wrote:\n> Hi,\n> \n> Here's a patch against master for $SUBJECT. It lacks documentation\n> changes and might have bugs, so please review if you're concerned\n> about this issue.\n\nI think the subject has a typo, you mean \"as the table owner\", right?\n\n> However, I'm unsatisfied with just\n> documenting the hazard, because I feel like almost everyone who uses\n> logical replication wants to do the exact thing that this\n> documentation says you shouldn't do.\n\n+1\n\n> The proposed fix is to perform logical replication actions (SELECT,\n> INSERT, UPDATE, DELETE, and TRUNCATE) as the user who owns the table\n> rather than as the owner of the subscription. The session still runs\n> as the subscription owner, but the active user ID is switched to the\n> table owner for the duration of each operation. To prevent table\n> owners from doing tricky things to attack the subscription owner, we\n> impose SECURITY_RESTRICTED_OPERATION while running as the table\n> owner.\n\n+1\n\n> To avoid inconveniencing users when this restriction adds no\n> meaningful security, we refrain from imposing\n> SECURITY_RESTRICTED_OPERATION when the table owner can SET ROLE to\n> the\n> subscription owner anyway.\n\nThat's a little confusing, why not just always use the\nSECURITY_RESTRICTED_OPERATION? Is there a use case I'm missing?\n\n> There is also a possibility of an attack in the other direction.\n> Maybe\n> the subscription owner would like to obtain the table owner's\n> permissions, or at the very least, use logical replication as a\n> vehicle to perform operations they can't perform directly. A\n> malicious\n> subscription owner could hook up logical replication to a table into\n> which the table owner doesn't want replication to occur. 
To block\n> such\n> attacks, the patch requires that the subscription owner have the\n> ability to SET ROLE to each table owner.\n\nIt would be really nice if this could be done with some kind of\nordinary privilege rather than requiring SET ROLE. A user might expect\nthat INSERT/UPDATE/DELETE/TRUNCATE privileges are enough. Or\npg_write_all_data.\n\nI can see theoretically that a table owner might write some dangerous\ncode and attach it to their table. But I don't quite understand why\nthey would do that. If the code was vulnerable to attack, would that\nmean that they couldn't even insert into their own table safely?\n\nRequiring SET ROLE seems like it makes the subscription role into\nsomething very close to a superuser. And that takes away some of the\nbenefits of delegating to non-superusers.\n\n> If the subscription owner is\n> a superuser, which is usual, this will be automatic. Otherwise, the\n> superuser will need to grant to the subscription owner the roles that\n> own relevant tables. This can usefully serve as a kind of access\n> control to make sure that the subscription doesn't touch any tables\n> other than the ones it's supposed to be touching: just make those\n> tables owned by a different user and don't grant them to the\n> subscription owner. Previously, we provided no way at all of\n> controlling the tables that replication can target.\n\nWe check for ordinary access privileges, which I think is your next\npoint, so the above paragraph confuses me a bit.\n\n> This fix interacts in an interesting way with Mark Dilger's work,\n> committed by Jeff Davis, to make logical replication respect table\n> permissions. I initially thought that with this change, that commit\n> would end up just being reverted, with the permissions scheme\n> described above replacing the existing one. However, I then realized\n> that it's still good to perform those checks. 
Normally, a table owner\n> can do any DML operation on a table they own, so those checks will\n> never fail. However, if a table owner has revoked their ability to,\n> say, INSERT into one of their own tables, then logical replication\n> shouldn't bypass that and perform the INSERT anyway. So I now think\n> that the checks added by that commit complement the ones added by\n> this\n> proposed patch, rather than competing with them.\n\nThat's an interesting case.\n\n> It is unclear to me whether we should try to back-port this. It's\n> definitely a behavior change, and changing the behavior in the back\n> branches is not a nice thing to do. On the other hand, at least in my\n> opinion, the security consequences of the status quo are pretty dire.\n> I tend to suspect that a really high percentage of people who are\n> using logical replication at all are vulnerable to this, and lots of\n> people having a way to escalate to superuser isn't good.\n\nIt's worth considering given that most subscription owners are\nsuperusers anyway. What plausible cases might it break?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 00:59:55 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 3:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> That's a little confusing, why not just always use the\n> SECURITY_RESTRICTED_OPERATION? Is there a use case I'm missing?\n\nSome concern was expressed -- not sure exactly where the email is\nexactly, and it might've been on pgsql-security -- that doing that\ncategorically might break things that are currently working. The\npeople who were concerned included Andres and I forget who else. My\ngut reaction was the same as yours, just do it always and don't worry\nabout it. But if people think that users are likely to run afoul of\nthe SECURITY_RESTRICTED_OPERATION restrictions in practice, then this\nis better, and the implementation complexity isn't high. We could even\nthink of extending this kind of logic to other places where\nSECURITY_RESTRICTED_OPERATION is enforced, if desired.\n\n> It would be really nice if this could be done with some kind of\n> ordinary privilege rather than requiring SET ROLE. A user might expect\n> that INSERT/UPDATE/DELETE/TRUNCATE privileges are enough. Or\n> pg_write_all_data.\n>\n> I can see theoretically that a table owner might write some dangerous\n> code and attach it to their table. But I don't quite understand why\n> they would do that. If the code was vulnerable to attack, would that\n> mean that they couldn't even insert into their own table safely?\n>\n> Requiring SET ROLE seems like it makes the subscription role into\n> something very close to a superuser. And that takes away some of the\n> benefits of delegating to non-superusers.\n\nI am not thrilled with this either, but if you can arrange to run code\nas a certain user without the ability to SET ROLE to that user then\nthere is, by definition, a potential privilege escalation. I don't\nthink we can just dismiss that as a non-issue. If the ability of one\nuser to potentially become some other user weren't a problem, we\nwouldn't need any patch at all. 
Imagine for example that the table\nowner has a trigger which doesn't sanitize search_path. The\nsubscription owner can potentially leverage that to get the table\nowner's privileges.\n\nMore generally, Stephen Frost has elsewhere argued that we should want\nthe subscription owner to be a very low-privilege user, so that if\ntheir privileges get stolen, it's no big deal. I disagree with that. I\nthink it's always a problem if one user can get unauthorized access to\nanother user's account, regardless of exactly what those accounts can\ndo. I think our goal should be to make it safe for the subscription\nowner to be a very high-privilege user, because you're going to need\nto be a very high-privilege user to set up replication. And if you do\nhave that level of privilege, it's more convenient and simpler if you\ncan just own the subscription yourself, rather than having to make a\ndummy account to own it. To put that another way, I think that what\npeople are going to want to do in a lot of cases is have the superuser\nown the subscription, so I think we need to make that case safe,\nwhatever it takes. In cases where the subscription owner isn't the\nsuperuser, I think the next most likely possibility is that the\nsubscription owner is some kind of almost-super-user, like a\nCREATEROLE user or someone running with rds_superuser or similar on\nsome PG fork. So that needs to be safe too, and I think this does\nthat. Having the subscription owner be some random user that doesn't\nhave a lot of privileges doesn't seem particularly useful to me. If it\nwere unproblematic to allow that, sure, but considering how easy it\nwould be for that low-privilege user to steal table owner privileges,\nI don't think it makes sense.\n\n> > It is unclear to me whether we should try to back-port this. It's\n> > definitely a behavior change, and changing the behavior in the back\n> > branches is not a nice thing to do. 
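The unsanitized-search_path trigger mentioned above looks roughly like this in practice; the function, schema, and helper names are illustrative:

```sql
-- Casually-written trigger function: log_change() is resolved through
-- the search_path active at call time, so anyone who can put a
-- writable schema ahead of it in that path can supply a substitute
-- log_change() that runs with the invoker's privileges.
CREATE FUNCTION audit_row() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
  PERFORM log_change(NEW.id);
  RETURN NEW;
END $$;

-- Common hardening: pin the function's search_path to trusted schemas,
-- listing pg_temp last, per the usual advice for SECURITY DEFINER
-- functions.
ALTER FUNCTION audit_row() SET search_path = audit_schema, pg_temp;
```
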
On the other hand, at least in my\n> > opinion, the security consequences of the status quo are pretty dire.\n> > I tend to suspect that a really high percentage of people who are\n> > using logical replication at all are vulnerable to this, and lots of\n> > people having a way to escalate to superuser isn't good.\n>\n> It's worth considering given that most subscription owners are\n> superusers anyway. What plausible cases might it break?\n\nAFAIU, the main concern is about the SECURITY_RESTRICTED_OPERATION\nflag interacting badly with things people are already doing. Other\nproblems seem possible, e.g. if you're doing something that gets the\ncurrent user name and does something with it, the answer's going to\nchange, and you might like the new answer more or less than the old\none. It's a little hard to predict who will be inconvenienced in what\nways when you change behavior, but problems are certainly possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 10:00:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "> Some concern was expressed -- not sure exactly where the email is\n> exactly, and it might've been on pgsql-security -- that doing that\n> categorically might break things that are currently working. The\n> people who were concerned included Andres and I forget who else. My\n> gut reaction was the same as yours, just do it always and don't worry\n> about it. But if people think that users are likely to run afoul of\n> the SECURITY_RESTRICTED_OPERATION restrictions in practice, then this\n> is better, and the implementation complexity isn't high. We could even\n> think of extending this kind of logic to other places where\n> SECURITY_RESTRICTED_OPERATION is enforced, if desired.\n\nI personally cannot think of a reasonable example that would be\nbroken, but I agree the code is simple enough. I do think that if\nthere is an actual reason to do this, we'd probably want it in other\nplaces where SECURITY_RESTRICTED_OPERATION is enforced too.\n\nI think there's some important tests missing related to this:\n1. Ensuring that SECURITY_RESTRICTED_OPERATION things are enforced\nwhen the user **does not** have SET ROLE permissions to the\nsubscription owner, e.g. don't allow SET ROLE from a trigger.\n2. Ensuring that SECURITY_RESTRICTED_OPERATION things are not enforced\nwhen the user **does** have SET ROLE permissions to the subscription\nowner, e.g. allows SET ROLE from trigger.\n\n\nFinally a small typo in the one of the comments:\n\n> + * If we created a new GUC nest level, also role back any changes that were\ns/role/roll/\n\n\n",
"msg_date": "Fri, 24 Mar 2023 17:16:56 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 12:17 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> I personally cannot think of a reasonable example that would be\n> broken, but I agree the code is simple enough. I do think that if\n> there is an actual reason to do this, we'd probably want it in other\n> places where SECURITY_RESTRICTED_OPERATION is enforced too.\n\nI don't think it makes sense for this patch to run around and try to\nadjust all of those other pages. We have to pick between doing it this\nway (and thus being inconsistent with what we do elsewhere) or taking\nthat logic out (and taking our chances that something will break for\nsome users). I'm OK with either of those, but I'm not OK with going\nand changing the way this works in all of the other cases first and\nonly then coming back to this problem. This problem is WAY more\nimportant than fiddling with the details of how\nSECURITY_RESTRICTED_OPERATION is applied.\n\n> I think there's some important tests missing related to this:\n> 1. Ensuring that SECURITY_RESTRICTED_OPERATION things are enforced\n> when the user **does not** have SET ROLE permissions to the\n> subscription owner, e.g. don't allow SET ROLE from a trigger.\n> 2. Ensuring that SECURITY_RESTRICTED_OPERATION things are not enforced\n> when the user **does** have SET ROLE permissions to the subscription\n> owner, e.g. allows SET ROLE from trigger.\n\nYeah, if we stick with the current approach we should probably add\ntests for that stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 12:51:26 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "\n\n> On Mar 24, 2023, at 7:00 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> More generally, Stephen Frost has elsewhere argued that we should want\n> the subscription owner to be a very low-privilege user, so that if\n> their privileges get stolen, it's no big deal. I disagree with that. I\n> think it's always a problem if one user can get unauthorized access to\n> another user's account, regardless of exactly what those accounts can\n> do. I think our goal should be to make it safe for the subscription\n> owner to be a very high-privilege user, because you're going to need\n> to be a very high-privilege user to set up replication. And if you do\n> have that level of privilege, it's more convenient and simpler if you\n> can just own the subscription yourself, rather than having to make a\n> dummy account to own it. To put that another way, I think that what\n> people are going to want to do in a lot of cases is have the superuser\n> own the subscription, so I think we need to make that case safe,\n> whatever it takes.\n\nI also think the subscription owner should be a low-privileged user, owing to the risk of the publisher injecting malicious content into the publication. I think you are focused on all the bad actors on the subscription-side database and what they can do to each other. That's also valid, but I get the impression that you're losing sight of the risk posed by malicious publishers. Or maybe you aren't, and can explain?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 09:58:57 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "> I don't think it makes sense for this patch to run around and try to\n> adjust all of those other pages. We have to pick between doing it this\n> way (and thus being inconsistent with what we do elsewhere) or taking\n> that logic out (and taking our chances that something will break for\n> some users). I'm OK with either of those, but I'm not OK with going\n> and changing the way this works in all of the other cases first and\n> only then coming back to this problem. This problem is WAY more\n> important than fiddling with the details of how\n> SECURITY_RESTRICTED_OPERATION is applied.\n\nYes, I totally agree. I now realise that wasn't clear at all from the\nwording in my previous email. I'm fine with both behaviours. I mainly meant\nthat if we actually think the new behaviour is better (which honestly I'm\nnot convinced of yet), then some follow up patch would probably be good. I\ndefinitely don't want to block this patch on any of that though. Both\nbehaviors would be vastly better than the current one in my opinion. So if\nothers wanted the behaviour in your patch, I'm completely fine with that.\n\n> Yeah, if we stick with the current approach we should probably add\n> tests for that stuff.\n\nEven if we don't, we should still have tests showing that the security\nrestrictions that we intend to put in place actually do their job.\n\n> I don't think it makes sense for this patch to run around and try to\n> adjust all of those other pages. We have to pick between doing it this\n> way (and thus being inconsistent with what we do elsewhere) or taking\n> that logic out (and taking our chances that something will break for\n> some users). I'm OK with either of those, but I'm not OK with going\n> and changing the way this works in all of the other cases first and\n> only then coming back to this problem. This problem is WAY more\n> important than fiddling with the details of how\n> SECURITY_RESTRICTED_OPERATION is applied.\n\nYes, I totally agree. 
I now realise that wasn't clear at all from the wording in my previous email. I'm fine with both behaviours. I mainly meant that if we actually think the new behaviour is better (which honestly I'm not convinced of yet), then some follow up patch would probably be good. I definitely don't want to block this patch on any of that though. Both behaviors would be vastly better than the current one in my opinion. So if others wanted the behaviour in your patch, I'm completely fine with that. \n\n> Yeah, if we stick with the current approach we should probably add\n> tests for that stuff.\n\nEven if we don't, we should still have tests showing that the security restrictions that we intend to put in place actually do their job.",
"msg_date": "Fri, 24 Mar 2023 19:14:39 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 12:58 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I also think the subscription owner should be a low-privileged user, owing to the risk of the publisher injecting malicious content into the publication. I think you are focused on all the bad actors on the subscription-side database and what they can do to each other. That's also valid, but I get the impression that you're losing sight of the risk posed by malicious publishers. Or maybe you aren't, and can explain?\n\nYou have a point.\n\nAs things stand today, if you think somebody's going to send you\nchanges to tables other than the ones you intend to replicate, you\ncould handle that by making sure that the user that owns the\nsubscription only has permission to write to the tables that are\nexpected to receive replicated data. It's a bit awkward to set up\nbecause you have to initially make the subscription owner a superuser\nand then later remove the superuser bit, so I think this is another\nargument for the pg_create_subscription patch posted elsewhere, but if\nyou're a superuser yourself, you can do it. However, with this patch,\nthat wouldn't work any more, because the permissions checks don't\nhappen until after we've switched to the target role. You could\nalternatively set up a user to own the subscription who has the\nability to SET ROLE to some users and not others, but that only lets\nyou restrict replication based on which user owns the tables, rather\nthan which specific tables are getting data replicated into them. That\nactually wouldn't work today, and with the patch it would start\nworking, so basically the effect of the patch on the problem that you\nmention would be to remove the ability to filter by specific table and\nadd the ability to filter by owning role.\n\nI don't know how bad that sounds to you, and if it does sound bad, I\ndon't immediately see how to mitigate it. 
As I said to Jeff, if you\ncan replicate into a table that has a casually-written SECURITY\nINVOKER trigger on it, you can probably hack into the table owner's\naccount. So I think that if we allow user A to replicate into user B's\ntable with fewer privileges than A-can-set-role-to-B, we're building a\nprivilege-escalation attack into the system. But if we do require\nA-can-set-role-to-B, then things change as described above.\n\nI suppose in theory we could check both A-can-set-role-to-B and\nA-can-modify-this-table-as-A, but that feels pretty unprincipled. I\ncan't particularly see why A should need permission to perform an\naction that is actually going to be performed as B; what makes sense\nis to check that A can become B, and that B has permission to perform\nthe action, which is the state that this patch would create. I guess\nthere are other things we could do, too, like add replication\nrestriction or filtering capabilities specifically to address this\nproblem, or devise some other kind of new kind of permission system\naround this, but I don't have a specific idea what that would look\nlike.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:35:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 2:14 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Yes, I totally agree. I now realise that wasn't clear at all from the wording in my previous email. I'm fine with both behaviours. I mainly meant that if we actually think the new behaviour is better (which honestly I'm not convinced of yet), then some follow up patch would probably be good. I definitely don't want to block this patch on any of that though. Both behaviors would be vastly better than the current one in my opinion. So if others wanted the behaviour in your patch, I'm completely fine with that.\n\nMakes sense. I hope a few more people will comment on what they think\nwe should do here, especially Andres and Noah.\n\n> > Yeah, if we stick with the current approach we should probably add\n> > tests for that stuff.\n>\n> Even if we don't, we should still have tests showing that the security restrictions that we intend to put in place actually do their job.\n\nYeah, I just don't want to write the tests and then decide to change\nthe behavior and then have to write them over again. It's not so much\nfun that I'm yearning to do it twice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 24 Mar 2023 14:36:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "\n\n> On Mar 24, 2023, at 11:35 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> I don't know how bad that sounds to you, and if it does sound bad, I\n> don't immediately see how to mitigate it. As I said to Jeff, if you\n> can replicate into a table that has a casually-written SECURITY\n> INVOKER trigger on it, you can probably hack into the table owner's\n> account.\n\nI assume you mean this bit:\n\n> > Imagine for example that the table\n> > owner has a trigger which doesn't sanitize search_path. The\n> > subscription owner can potentially leverage that to get the table\n> > owner's privileges.\n\n\nI don't find that terribly convincing. First, there's no reason a subscription owner should be an ordinary user able to volitionally do anything. The subscription owner should just be a role that the subscription runs under, as a means of superuser dropping privileges before applying changes. So the only real problem would be that the changes coming from the publisher might, upon application, hack the table owner. But if that's the case, the table owner's vulnerability on the subscription-database side is equal to their vulnerability on the publication-database side (assuming equal schemas on both). Flagging this vulnerability as being logical replication related seems a category error. Instead, it's a schema vulnerability.\n\n> So I think that if we allow user A to replicate into user B's\n> table with fewer privileges than A-can-set-role-to-B, we're building a\n> privilege-escalation attack into the system. But if we do require\n> A-can-set-role-to-B, then things change as described above.\n\nI don't understand the direction this patch is going. I'm emphatically not objecting to it, merely expressing my confusion about it.\n\nI had imagined the solution to the replication security problem was to stop running the replication as superuser, and instead as a trivial user. 
Imagine that superuser creates roles \"deadhead_bob\" and \"deadhead_alice\" which cannot log in, are not members of any groups nor have any other roles as members of themselves, and have no privileges beyond begin able to replicate into bob's and alice's tables, respectively. The superuser sets up the subscriptions disabled, transfers ownership to deadhead_bob and deadhead_alice, and only then enables the subscriptions.\n\nSince deadhead_bob and deadhead_alice cannot log in, and nobody can set role to them, I don't see what the vulnerability is. Sure, maybe alice can attack deadhead_alice, or bob can attack deadhead_bob, but that's more of a privilege deescalation than a privilege escalation, so where's the risk? That's not a rhetorical question. Is there a risk here? Or are we just concerned that most users will set up replication with superuser or some other high-privilege user, and we're trying to protect them from the consequences of that choice?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 13:11:10 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, 2023-03-24 at 10:00 -0400, Robert Haas wrote:\n> On Fri, Mar 24, 2023 at 3:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > That's a little confusing, why not just always use the\n> > SECURITY_RESTRICTED_OPERATION? Is there a use case I'm missing?\n> \n> Some concern was expressed -- not sure exactly where the email is\n> exactly, and it might've been on pgsql-security -- that doing that\n> categorically might break things that are currently working. The\n> people who were concerned included Andres and I forget who else. My\n> gut reaction was the same as yours, just do it always and don't worry\n> about it. But if people think that users are likely to run afoul of\n> the SECURITY_RESTRICTED_OPERATION restrictions in practice, then this\n> is better, and the implementation complexity isn't high. We could\n> even\n> think of extending this kind of logic to other places where\n> SECURITY_RESTRICTED_OPERATION is enforced, if desired.\n\nWithout a reasonable example, we should probably be on some kind of\npath to disallowing crazy stuff in triggers that poses only risks and\nno benefits. Not the job of this patch, but perhaps it can be seen as a\nstep in that direction?\n\n> \n> I am not thrilled with this either, but if you can arrange to run\n> code\n> as a certain user without the ability to SET ROLE to that user then\n> there is, by definition, a potential privilege escalation.\n\nI don't think that's \"by definition\".\n\nThe code is being executed with the privileges of the person who wrote\nit. I don't see anything inherently insecure about that. There could be\nincidental or practical risks, but it's a pretty reasonable thing to do\nat a high level.\n\n> I don't\n> think we can just dismiss that as a non-issue. If the ability of one\n> user to potentially become some other user weren't a problem, we\n> wouldn't need any patch at all. Imagine for example that the table\n> owner has a trigger which doesn't sanitize search_path. 
The\n> subscription owner can potentially leverage that to get the table\n> owner's privileges.\n\nCan you explain? Couldn't we control the subscription process's search\npath?\n\n> In cases where the subscription owner isn't the\n> superuser, I think the next most likely possibility is that the\n> subscription owner is some kind of almost-super-user\n\nThe benefit of delegating to a non-superuser is to contain the risk if\nthat account is compromised. Allowing SET ROLE on tons of accounts\ndiminishes that benefit, a lot.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 17:02:18 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, 2023-03-24 at 14:35 -0400, Robert Haas wrote:\n> That\n> actually wouldn't work today, and with the patch it would start\n> working, so basically the effect of the patch on the problem that you\n> mention would be to remove the ability to filter by specific table\n> and\n> add the ability to filter by owning role.\n\nThat's certainly a loss. It makes a lot of sense to want to limit a\nsubscription to only write to certain tables.\n\nIf you want to filter by role, you can do that today by granting the\nrole, or some role that has the necessary privileges.\n\nIt takes me a while to re-learn the problems of poorly-written trigger\nfunctions, malicious trigger functions, search paths, etc., each time I\nstart working in this area again. Can you include an example of such a\ntrigger function that we're worried about? Can the subscription owner\nchange the search path in the subscription process, and if so, why?\n\nThe doc here:\n\nhttps://www.postgresql.org/docs/devel/sql-createfunction.html#SQL-CREATEFUNCTION-SECURITY\n\nmentions search_path, but other hazards don't really seem applicable.\n(Is the trigger creating new roles?)\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 24 Mar 2023 17:26:11 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "> As things stand today, if you think somebody's going to send you\n> changes to tables other than the ones you intend to replicate, you\n> could handle that by making sure that the user that owns the\n> subscription only has permission to write to the tables that are\n> expected to receive replicated data. It's a bit awkward to set up\n> because you have to initially make the subscription owner a superuser\n> and then later remove the superuser bit, so I think this is another\n> argument for the pg_create_subscription patch posted elsewhere, but if\n> you're a superuser yourself, you can do it. However, with this patch,\n> that wouldn't work any more, because the permissions checks don't\n> happen until after we've switched to the target role.\n\nFor my purposes I always trust the publisher, what I don't trust is\nthe table owners. But I can indeed imagine scenarios where that's the\nother way around, and indeed you can protect against that currently,\nbut not with your new patch. That seems fairly easily solvable though.\n\n+ if (!member_can_set_role(context->save_userid, userid))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"role \\\"%s\\\" cannot SET ROLE to \\\"%s\\\"\",\n+ GetUserNameFromId(context->save_userid, false),\n+ GetUserNameFromId(userid, false))));\n\nIf we don't throw an error here, but instead simply return, then the\ncurrent behaviour is preserved and people can manually configure\npermissions to protect against an untrusted publisher. This would\nstill mean that the table owners can escalate privileges to the\nsubscription owner, but if that subscription owner actually has fewer\nprivileges than the table owner then you don't have that issue.\n\n\n",
"msg_date": "Sat, 25 Mar 2023 10:24:32 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 10:00:36AM -0400, Robert Haas wrote:\n> On Fri, Mar 24, 2023 at 3:59 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > That's a little confusing, why not just always use the\n> > SECURITY_RESTRICTED_OPERATION? Is there a use case I'm missing?\n> \n> Some concern was expressed -- not sure exactly where the email is\n> exactly, and it might've been on pgsql-security -- that doing that\n> categorically might break things that are currently working. The\n> people who were concerned included Andres and I forget who else. My\n\nSECURITY_RESTRICTED_OPERATION blocks deferred triggers and CREATE TEMP TABLE.\nIf you create a DEFERRABLE INITIALLY DEFERRED fk constraint and replicate to\nthe constraint's table in a SECURITY_RESTRICTED_OPERATION, I expect an error.\n\n> gut reaction was the same as yours, just do it always and don't worry\n> about it. But if people think that users are likely to run afoul of\n\nHard to know. It's the sort of thing that I model as creating months of work\nfor epsilon users to adapt their applications. Epsilon could be zero. Most\nusers don't notice.\n\n> the SECURITY_RESTRICTED_OPERATION restrictions in practice, then this\n> is better, and the implementation complexity isn't high. We could even\n> think of extending this kind of logic to other places where\n> SECURITY_RESTRICTED_OPERATION is enforced, if desired.\n\nFiring a trigger in an index expression or materialized view query is not\nreasonable, so today's uses of SECURITY_RESTRICTED_OPERATION would not benefit\nfrom this approach. Being able to create temp tables in those places has some\nvalue, but I would allow temp tables in SECURITY_RESTRICTED_OPERATION instead\nof proliferating the check on the ability to SET ROLE. (One might allow temp\ntables by introducing NewTempSchemaNestLevel(), called whenever we call\nNewGUCNestLevel(). 
The transaction would then proceed as though it has no\ntemp schema, allocating an additional schema if creating a temp object.)\n\n\n",
"msg_date": "Sat, 25 Mar 2023 09:01:19 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 4:11 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > > Imagine for example that the table\n> > > owner has a trigger which doesn't sanitize search_path. The\n> > > subscription owner can potentially leverage that to get the table\n> > > owner's privileges.\n>\n> I don't find that terribly convincing. First, there's no reason a subscription owner should be an ordinary user able to volitionally do anything. The subscription owner should just be a role that the subscription runs under, as a means of superuser dropping privileges before applying changes. So the only real problem would be that the changes coming from the publisher might, upon application, hack the table owner. But if that's the case, the table owner's vulnerability on the subscription-database side is equal to their vulnerability on the publication-database side (assuming equal schemas on both). Flagging this vulnerability as being logical replication related seems a category error. Instead, it's a schema vulnerability.\n\nI think there are actually a number of reasons why the subscription\nowner should be a regular user account rather than a special\nlow-privilege account. First, it's only barely possible to do anything\nelse. As of today, you can't create a subscription owned by a\nnon-superuser except by making the subscription first and then\nremoving superuser from the account. 
You can't even do this:\n\nrhaas=# alter subscription s1 owner to alice;\nERROR: permission denied to change owner of subscription \"s1\"\nHINT: The owner of a subscription must be a superuser.\n\nThat hint is actually a lie because of the loophole mentioned above,\nbut even if making the subscription owner a low-privilege account were\nthe right model (which I don't believe) we've got error messages in\nthe current source code saying that you're not allowed to even do it.\n\nSecond, even if this kind of setup were fully supported and all the\nstuff worked as you expect, it's not very convenient. It requires you\nto create this extra dummy account that doesn't otherwise need to\nexist. I don't see a good reason to impose that requirement on\neverybody. Given that subscriptions initially could only be owned by\nsuperusers, and that's still mostly true, it seems to me that the\nfeature was intended to be used with the superuser as the subscription\nowner, and I think that's what most people must be doing now and will\nprobably want to continue to do, and we should try to make it safe\ninstead of back-pedaling and saying, hey, do it this totally other way\ninstead.\n\nMore discussion of problems below.\n\n> > So I think that if we allow user A to replicate into user B's\n> > table with fewer privileges than A-can-set-role-to-B, we're building a\n> > privilege-escalation attack into the system. But if we do require\n> > A-can-set-role-to-B, then things change as described above.\n>\n> I don't understand the direction this patch is going. I'm emphatically not objecting to it, merely expressing my confusion about it.\n>\n> I had imagined the solution to the replication security problem was to stop running the replication as superuser, and instead as a trivial user. 
Imagine that superuser creates roles \"deadhead_bob\" and \"deadhead_alice\" which cannot log in, are not members of any groups nor have any other roles as members of themselves, and have no privileges beyond begin able to replicate into bob's and alice's tables, respectively. The superuser sets up the subscriptions disabled, transfers ownership to deadhead_bob and deadhead_alice, and only then enables the subscriptions.\n>\n> Since deadhead_bob and deadhead_alice cannot log in, and nobody can set role to them, I don't see what the vulnerability is. Sure, maybe alice can attack deadhead_alice, or bob can attack deadhead_bob, but that's more of a privilege deescalation than a privilege escalation, so where's the risk? That's not a rhetorical question. Is there a risk here? Or are we just concerned that most users will set up replication with superuser or some other high-privilege user, and we're trying to protect them from the consequences of that choice?\n\nHaving a separate subscription for each different table owner is\nextremely undesirable from a performance perspective, because it means\nthat the WAL on the origin server has to be decoded once per table\nowner. That's not very much fun even if the number of table owners is\nonly two, as in your example, and there could be many more than two\nusers who own tables. In addition, it breaks the transactional\nconsistency of replication. If there's any single transaction that\nmodifies both a table owned by alice and a table owned by bob, we lose\ntransactional consistency on the subscriber side unless both of those\ntransactions are replicated in a single transaction, which means they\nalso need to be part of a single subscription.\n\nI admit that hacking into deadhead_alice or deadhead_bob is probably\nonly minimally interesting. 
There are, perhaps, things you could do\nwith that, like creating objects owned by that user, and maybe your\nown account is subject to some kind of restrictions separate from\nwhat's in place on the deadhead account, but most likely in most\ncircumstances you can't get very far by breaking into a deadhead\naccount whose only purpose is to replicate changes on tables you\nalready own. However, because of the problems with having a\nsubscription per table owner, you're probably really going to need a\nshared deadhead account that is used to replicate across all users who\nown tables, and now breaking into that account gets a lot more\ninteresting. Two users that aren't supposed to be able to talk to each\nother could use the shared deadhead account to pass data back and\nforth, e.g. to exfiltrate data from a top secret account to one with a\nlower security classification. Or maybe a malicious user can try to\nsteal data as it's being replicated, or just mess with the system.\nThey can create objects owned by the deadhead account in any\npublicly-writable schemas, and they can mess with the GUC settings\nthat apply to the deadhead account, at a minimum.\n\nI basically don't think there's such a thing as an account that is so\nlow-privilege that we don't care about someone hacking into it. Now,\nyou can go way down the rabbit hole here and say \"well, what if we\ninvented a SUPER low privilege account that can't ever create or own objects\nand can't use even those privileges that are granted to public and\n<insert other restrictions here>.\" And I admit that could be done, but\nthat sounds like a lot of work and I see no point. If we just make it\nsafe for superusers to own subscriptions, then the job's done. And I\nsee nothing preventing us from doing that. That's the point of this\npatch, in fact.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 11:31:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 24, 2023 at 8:02 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Without a reasonable example, we should probably be on some kind of\n> path to disallowing crazy stuff in triggers that poses only risks and\n> no benefits. Not the job of this patch, but perhaps it can be seen as a\n> step in that direction?\n\nPossibly, but it's a little harder to say what's crazy in a trigger\nthan in some other contexts. I feel like it would be fine to say that\nyour index expression should probably not be doing \"ALTER USER\nsomebody SUPERUSER\" really ever, but it's not quite so clear in case\nof a trigger. And the stuff that's prevented by\nSECURITY_RESTRICTED_OPERATION isn't that sort of thing, but has rather\nto do with stuff that messes with the session state, maybe leaving\nbooby-traps behind for the next user. For instance, if user A runs\nsome code as user B and user B closes a cursor opened by A and opens a\nnew one with the same name, that has a rather good chance of making A\ndo something they didn't intend to do. SECURITY_RESTRICTED_OPERATION\nis aimed at preventing that sort of attack.\n\n> > I am not thrilled with this either, but if you can arrange to run\n> > code\n> > as a certain user without the ability to SET ROLE to that user then\n> > there is, by definition, a potential privilege escalation.\n>\n> I don't think that's \"by definition\".\n>\n> The code is being executed with the privileges of the person who wrote\n> it. I don't see anything inherently insecure about that. There could be\n> incidental or practical risks, but it's a pretty reasonable thing to do\n> at a high level.\n\nNot really. My home directory on my laptop is full of Perl and sh\nscripts that I wouldn't want someone else to execute as me. They don't\nhave any defenses against malicious use because I don't expect anyone\nelse has access to run them, especially as me. If someone got access\nto run them as me, they'd compromise my laptop account in no time at\nall. 
I don't see any reason to believe the situation is any different\ninside of a database. People have no reason to harden code unless\nsomeone else is going to have access to run it.\n\n> > I don't\n> > think we can just dismiss that as a non-issue. If the ability of one\n> > user to potentially become some other user weren't a problem, we\n> > wouldn't need any patch at all. Imagine for example that the table\n> > owner has a trigger which doesn't sanitize search_path. The\n> > subscription owner can potentially leverage that to get the table\n> > owner's privileges.\n>\n> Can you explain? Couldn't we control the subscription process's search\n> path?\n\nThere's no place in the code right now where when we switch user\nidentities we also change the search_path. There is nothing to prevent\nus from writing such code, but we have no place from which to obtain a\nsearch_path that will cause the called code to behave properly. We\ndon't have access to the search_path that would prevail at the time\nthe target user logged in, and even if we did, we don't know that that\nsearch_path is secure. We do know that an empty search_path is secure,\nbut it's probably also going to cause any code we run to fail, unless\nthat code schema-qualifies all references outside of pg_catalog, or\nunless it sets search_path itself. search_path also isn't necessarily\nthe only problem. As a hypothetical example, suppose I create a table\nwith one text column, revoke all public access to that table, and then\ncreate an on-insert trigger that executes as an SQL command any text\nvalue inserted into the table. This is perfectly secure as long as I'm\nthe only one who can access the table, but if someone else gets access\nto insert things into that table using my credentials then they can\neasily break into my account. 
Real examples aren't necessarily that\ndramatically bad, but that doesn't mean they don't exist.\n\n> The benefit of delegating to a non-superuser is to contain the risk if\n> that account is compromised. Allowing SET ROLE on tons of accounts\n> diminishes that benefit, a lot.\n\nWell, I continue to feel that if you can compromise the subscription\nowner's account, we haven't really fixed anything yet. I mean, it used\nto be that autovacuum could compromise the superuser's account, and we\nfixed that. If we find more ways for that same thing to happen, we\nwould presumably fix those too. We would never accept a situation\nwhere autovacuum can compromise the superuser's account. And we\nshouldn't accept a situation where either the table owner can\ncompromise the subscription owner's account, either. And similarly\nnobody ever proposed that that issue should be fixed by running the\nautovacuum worker process as some kind of low-privileged user that we\ncreated specially for that purpose. We just ... fixed it so that no\ncompromise was possible. And I think that's also the right approach\nhere.\n\nI do agree with you that allowing SET ROLE on tons of accounts is not\ngreat, though. I don't really think it matters very much today,\nbecause basically all subscriptions today are owned by superusers and\ncan do everything anyway. But if you imagine that a lot of\nsubscriptions are going to be owned by less-privileged users, then\nit's a lot less nice. I think it has the strength of at least being\nhonest about what the problem is. It makes it clear right on the tin\nthat the subscription owner is going to be able to get into the table\nowner's accounts, in a way that people shouldn't really miss noticing.\nAnd if they notice it, then they're at least aware of it, which is\nsomething. It would be better still if we could prevent the\nsubscription owner from hacking into the table owner's account. 
Then,\nwe could very sensibly remove the SET ROLE requirement and check\nsomething weaker instead, and that would be fantastic.\n\nSince I didn't see how to engineer that, I did this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 12:53:40 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 12:01 PM Noah Misch <noah@leadboat.com> wrote:\n> (One might allow temp\n> tables by introducing NewTempSchemaNestLevel(), called whenever we call\n> NewGUCNestLevel(). The transaction would then proceed as though it has no\n> temp schema, allocating an additional schema if creating a temp object.)\n\nNeat idea. That would require some adjustments, I believe, because of\nthe way that temp schemas are named. But it sounds doable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 12:55:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Sat, Mar 25, 2023 at 5:24 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> For my purposes I always trust the publisher, what I don't trust is\n> the table owners. But I can indeed imagine scenarios where that's the\n> other way around, and indeed you can protect against that currently,\n> but not with your new patch. That seems fairly easily solvable though.\n>\n> + if (!member_can_set_role(context->save_userid, userid))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"role \\\"%s\\\" cannot SET ROLE to \\\"%s\\\"\",\n> + GetUserNameFromId(context->save_userid, false),\n> + GetUserNameFromId(userid, false))));\n>\n> If we don't throw an error here, but instead simply return, then the\n> current behaviour is preserved and people can manually configure\n> permissions to protect against an untrusted publisher. This would\n> still mean that the table owners can escalate privileges to the\n> subscription owner, but if that subscription owner actually has fewer\n> privileges than the table owner then you don't have that issue.\n\nI don't get it. If we just return, that would result in skipping\nchanges rather than erroring out on changes, but it wouldn't preserve\nthe current behavior, because we'd still care about the table owner's\npermissions rather than, as now, the subscription owner's permissions.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:14:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, 2023-03-27 at 12:53 -0400, Robert Haas wrote:\n> We do know that an empty search_path is secure,\n> but it's probably also going to cause any code we run to fail, unless\n> that code schema-qualifies all references outside of pg_catalog, or\n> unless it sets search_path itself.\n\nI am confused.\n\nIf the trigger function requires schema \"xyz\" to be in the search_path,\nand the function itself doesn't set it, how will it ever get set in the\nsubscription process? Won't such a function be simply broken for all\nlogical subscriptions (in current code or with any of the proposals\nactive right now)?\n\nAnd if the trigger function requires object \"abc\" (regardless of\nschema) to be somehow accessible without qualification, and if it\ndoesn't set the search path itself, and it's not in pg_catalog; then\nagain I think that function would be broken today.\n\nIt feels like we are reaching to say that a trigger function might be\nbroken based on some contrived cases, but we can already contrive cases\nthat are broken for logical replication today. It might not be exactly\nthe same set, but unless I'm missing something it would be a very\nsimilar set.\n\n> search_path also isn't necessarily\n> the only problem. As a hypothetical example, suppose I create a table\n> with one text column, revoke all public access to that table, and\n> then\n> create an on-insert trigger that executes as an SQL command any text\n> value inserted into the table. This is perfectly secure as long as\n> I'm\n> the only one who can access the table, but if someone else gets\n> access\n> to insert things into that table using my credentials then they can\n> easily break into my account. 
Real examples aren't necessarily that\n> dramatically bad, but that doesn't mean they don't exist.\n\nAre you suggesting that we require the subscription owner to have SET\nROLE on everybody so that everyone is equally insecure and there's no\nway to ever fix it even if you don't have any triggers on your tables\nat all?\n\nAnd all of this is to protect a user that writes a trigger function\nthat executes arbitrary SQL based on the input data?\n\n> \n> Well, I continue to feel that if you can compromise the subscription\n> owner's account, we haven't really fixed anything yet.\n\nI'm trying to reconcile the following two points:\n\n A. That you are proposing a patch to allow non-superuser\nsubscriptions; and\n B. That you are arguing that the subscription owner basically needs\nto be superuser and we should just accept that.\n\nGranted, there's some nuance here and I won't say that those two are\nmutually exclusive. But I think it would be helpful to explain how\nthose two ideas fit together.\n\n> We just ... fixed it so that no\n> compromise was possible. And I think that's also the right approach\n> here.\n\nIf you are saying that we fix the subscription process so that it can't\neasily be compromised by the table owner, then that appears to be the\nentire purpose of this patch and I agree.\n\nBut I think you're saying something slightly different about the\nsubscription process compromising the table owner, and assuming that\nproblem exists, your patch doesn't do anything to stop it. In fact,\nyour patch requires the subscription owner can SET ROLE to each of the\ntable owners, guaranteeing that the table owners can never do anything\nat all to protect themselves from the subscription owner.\n\n> I do agree with you that allowing SET ROLE on tons of accounts is not\n> great, though.\n\nRegardless of security concerns, the UI is bad.\n\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 27 Mar 2023 12:04:20 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 3:04 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2023-03-27 at 12:53 -0400, Robert Haas wrote:\n> > We do know that an empty search_path is secure,\n> > but it's probably also going to cause any code we run to fail, unless\n> > that code schema-qualifies all references outside of pg_catalog, or\n> > unless it sets search_path itself.\n>\n> I am confused.\n>\n> If the trigger function requires schema \"xyz\" to be in the search_path,\n> and the function itself doesn't set it, how will it ever get set in the\n> subscription process? Won't such a function be simply broken for all\n> logical subscriptions (in current code or with any of the proposals\n> active right now)?\n>\n> And if the trigger function requires object \"abc\" (regardless of\n> schema) to be somehow accessible without qualification, and if it\n> doesn't set the search path itself, and it's not in pg_catalog; then\n> again I think that function would be broken today.\n\nNo, not really. It's pretty common for a lot of things to be in the\npublic schema, and the public schema is likely to be in the search\npath of every user involved.\n\n> It feels like we are reaching to say that a trigger function might be\n> broken based on some contrived cases, but we can already contrive cases\n> that are broken for logical replication today. It might not be exactly\n> the same set, but unless I'm missing something it would be a very\n> similar set.\n\nNo, it's not a contrived case, and it's not the same set of cases, not\neven close. Running functions with a different search path than the\nauthor intended is a really common cause of all kinds of things\nbreaking. 
See for example commit\n582edc369cdbd348d68441fc50fa26a84afd0c1a, which certainly did break\nthings for some users.\n\n> Are you suggesting that we require the subscription owner to have SET\n> ROLE on everybody so that everyone is equally insecure and there's no\n> way to ever fix it even if you don't have any triggers on your tables\n> at all?\n\nI certainly am not. I don't even know how to respond to this. I want\nto make it secure, not insecure. But we don't gain any security by\npretending that certain exploits don't exist or aren't problems when\nthey do and are. Quite the contrary.\n\n> And all of this is to protect a user that writes a trigger function\n> that executes arbitrary SQL based on the input data?\n\nOr is insecure in any other way, and there are quite a few ways. If\nyou don't think that this is a problem in reality, I really don't know\nhow to carry this conversation forward. The idea that the average\ntrigger function is safe if it can be unexpectedly called by someone\nother than the table owner with arguments and GUC settings chosen by\nthe caller doesn't seem remotely correct to me. It matches no part of\nmy experience with user-defined functions, either written by me or by\nEDB customers. Every database I've ever seen that used triggers at all\nwould be vulnerable to the subscription owner compromising the table\nowner's account.\n\n> > Well, I continue to feel that if you can compromise the subscription\n> > owner's account, we haven't really fixed anything yet.\n>\n> I'm trying to reconcile the following two points:\n>\n> A. That you are proposing a patch to allow non-superuser\n> subscriptions; and\n> B. That you are arguing that the subscription owner basically needs\n> to be superuser and we should just accept that.\n>\n> Granted, there's some nuance here and I won't say that those two are\n> mutually exclusive. 
But I think it would be helpful to explain how\n> those two ideas fit together.\n\nI do think that what this patch does is tantamount to B, because you\ncan have SET ROLE to some users without having SET ROLE to all users.\nThat's a big part of the point of the CREATEROLE and\ncreaterole_self_grant work.\n\n> But I think you're saying something slightly different about the\n> subscription process compromising the table owner, and assuming that\n> problem exists, your patch doesn't do anything to stop it. In fact,\n> your patch requires the subscription owner can SET ROLE to each of the\n> table owners, guaranteeing that the table owners can never do anything\n> at all to protect themselves from the subscription owner.\n\nYeah, that's true, and like I said, if there's a way to avoid that,\ngreat. But wishing it were so is not that way.\n\nLet's back up a minute here. Suppose someone were to request a new\ncommand, ALTER TABLE <name> DO NOT LET THE SUPERUSER ACCESS THIS. We\nwould reject that proposal. The reason we would reject it is because\nit wouldn't actually work as documented. We know that the superuser\nhas the power to access that account and reverse that command, either\nby SET ROLE or by changing the account password or by changing\npg_hba.conf or by shelling out to the filesystem and doing whatever.\nThe feature purports to do something that it actually cannot do. No\none can defend themselves against the superuser. Not even another\nsuperuser can defend themselves against a superuser. Pretending\notherwise would be confusing and have no security benefits.\n\nNow let's think about this case. Can a table owner defend themselves\nagainst a subscription owner who wants to usurp their privileges? If\nthey cannot, then I think that what the patch does is correct for the\nsame reason that I think we would correctly reject the hypothetical\ncommand proposed above. If they can, then the patch has got the wrong\nidea, perhaps. 
But what is the actual answer to the question? My\nanswer is that it's *often* impossible but not *always* impossible. If\nthe table owner has no executable code at all attached to the table --\nnot just triggers, but also expression indexes and default expressions\nand so forth -- then they can. If they do have those things then in\ntheory they might be able to protect themselves, but in practice they\nare unlikely to be careful enough. I judge that practically every\ninstallation where table owners use triggers would be easily\ncompromised. Only the most security-conscious of table owners are\ngoing to get this right.\n\nSo that's a grey area, at least IMHO. The patch could be revised in\nsome way, and the permissions requirements downgraded. However, if we\ndo that, I think we're going to have to document that although in\ntheory table owners can make themselves safe against subscription owners,\nin practice they probably won't be. That approach has some advantages,\nand I don't think it's insane. However, I am not convinced that it is\nthe best idea, either, and I had the impression based on\npgsql-security discussion that Andres and Noah thought this way was\nbetter. I might have misinterpreted their position, and they might\nhave changed their minds, and they might have been wrong. But that's\nhow we got here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:47:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "> I don't get it. If we just return, that would result in skipping\n> changes rather than erroring out on changes, but it wouldn't preserve\n> the current behavior, because we'd still care about the table owner's\n> permissions rather than, as now, the subscription owner's permissions.\n\nAttached is an updated version of your patch with what I had in mind\n(admittedly it needed one more line than \"just\" the return to make it\nwork). But as you can see all previous tests for a lowly privileged\nsubscription owner that **cannot** SET ROLE to the table owner\ncontinue to work as they did before. While still downgrading to the\ntable owners role when the subscription owner **can** SET ROLE to the\ntable owner.\n\nObviously this needs some comments explaining what's going on and\nprobably some code refactoring and/or variable renaming, but I hope\nit's clear what I meant now: For high privileged subscription owners,\nwe downgrade to the permissions of the table owner, but for low\nprivileged ones we care about permissions of the subscription owner\nitself.",
"msg_date": "Tue, 28 Mar 2023 00:08:40 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, 2023-03-27 at 16:47 -0400, Robert Haas wrote:\n> No, not really. It's pretty common for a lot of things to be in the\n> public schema, and the public schema is likely to be in the search\n> path of every user involved.\n\nConsider this trigger function which uses an unqualified reference to\npfunc(), therefore implicitly depending on the public schema:\n\n CREATE FUNCTION public.pfunc() RETURNS INT4 LANGUAGE plpgsql AS\n $$ BEGIN RETURN 42; END; $$;\n CREATE FUNCTION foo.tfunc() RETURNS TRIGGER LANGUAGE plpgsql AS\n $$ BEGIN\n NEW.i = pfunc();\n RETURN NEW;\n END; $$;\n CREATE TABLE foo.a(i INT);\n CREATE TRIGGER a_trigger BEFORE INSERT ON foo.a\n FOR EACH ROW EXECUTE PROCEDURE foo.tfunc();\n ALTER TABLE foo.a ENABLE ALWAYS TRIGGER a_trigger;\n\nBut that trigger function is broken today for logical replication --\npfunc() isn't found by the subscription process, which has an empty\nsearch_path.\n\n> Let's back up a minute here. Suppose someone were to request a new\n> command, ALTER TABLE <name> DO NOT LET THE SUPERUSER ACCESS THIS. We\n> would reject that proposal. The reason we would reject it is because\n> it wouldn't actually work as documented. We know that the superuser\n> has the power to access that account and reverse that command, either\n> by SET ROLE or by changing the account password or by changing\n> pg_hba.conf or by shelling out to the filesystem and doing whatever.\n> The feature purports to do something that it actually cannot do. No\n> one can defend themselves against the superuser. Not even another\n> superuser can defend themselves against a superuser. Pretending\n> otherwise would be confusing and have no security benefits.\n\nGood framing. With you so far...\n\n> Now let's think about this case. Can a table owner defend themselves\n> against a subscription owner who wants to usurp their privileges? 
If\n> they cannot, then I think that what the patch does is correct for the\n> same reason that I think we would correctly reject the hypothetical\n> command proposed above.\n\nAgreed...\n\n[ Aside: A user foo who accesses a table owned by bar has no means to\ndefend themselves against SECURITY INVOKER code that usurps their\nprivileges. Applying your logic above, foo should have to grant SET\nROLE privileges to bar before accessing the table. ]\n\n> If\n> the table owner has no executable code at all attached to the table -\n> -\n> not just triggers, but also expression indexes and default\n> expressions\n> and so forth -- then they can.\n\nRight...\n\n> If they do have those things then in\n> theory they might be able to protect themselves, but in practice they\n> are unlikely to be careful enough. I judge that practically every\n> installation where table owners use triggers would be easily\n> compromised. Only the most security-conscious of table owners are\n> going to get this right.\n\nThis is the interesting part.\n\nCommit 11da97024ab set the subscription process's search path as empty.\nIt seems it was done to protect the subscription owner against users\nwriting into the public schema. But after your apply-as-table-owner\npatch, it seems to also offer some protection for table owners against\na malicious subscription owner, too.\n\nBut I must be missing something obvious here -- if the subscription\nprocess has an empty search_path, what can an attacker do to trick it?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 27 Mar 2023 22:38:03 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, 27 Mar 2023 at 22:47, Robert Haas <robertmhaas@gmail.com> wrote:\n> Can a table owner defend themselves\n> against a subscription owner who wants to usurp their privileges?\n\nI have a hard time following the discussion. And I think it's because\nthere's lots of different possible privilege escalations to think\nabout. Below is the list of escalations I've gathered and how I think\nthey interact with the different patches:\n1. Table owner escalating to a high-privileged subscription owner.\ni.e. the subscription owner is superuser, or has SET ROLE privileges\nfor all owners of tables in the subscription.\n2. Table owner escalating to a low-privileged subscription owner. e.g.\nthe subscription owner can only insert into the tables in its\nsubscription\n a. The subscription owner only has insert permissions for a tables\nowned by a single other user\n b. The subscription owner has insert permissions for tables owned\nby multiple other users (but still does not have SET ROLE, or possibly\neven select/update/delete)\n3. Publisher applying into tables that the subscriber side doesn't want it to\n4. Subscription-owner escalating to table owner\n a. The subscription owner is highly privileged (allows SET ROLE to\ntable owner)\n b. The subscription owner is lowly privileged\n\nWhich can currently only be addressed by having 1\nsubscription/publication pair for every table owner. This has the big\nissue that WAL needs to be decoded multiple times on the publisher.\nThis patch would make escalation 1 impossible without having to do\nanything special when setting up the subscription. With Citus we only\nrun into this escalation issue with logical replication at the moment.\nWe want to replicate lots of different tables, possibly owned by\nmultiple users from one node to another. We trust the publisher and we\ntrust the subscription owner, but we don't trust the table owners at\nall. 
This is why I'm very much in favor of a version of this patch.\n\n2.a seems a non-issue, because in this case the table owner can easily\ngive itself the same permissions as the subscription owner (if it\ndidn't have them yet). So by \"escalating\" to the subscription owner\nthe table owner only gets fewer permissions. 2.b is actually\ninteresting from a security perspective, because by escalating to the\nsubscription owner, the table owner might be able to insert into\ntables that it normally can't. Patch v1 would \"solve\" both these\nissues by simply not supporting these scenarios. My patch v2 keeps the\nexisting behaviour, where both \"escalations\" are possible and who-ever\nsets up the replication should create a dedicated subscriber for each\ntable owner to make sure that only 2.a ever occurs and 2.b does not.\n\n3 is something I have not run into. But I can easily imagine a case\nwhere a subscriber connects to a (somewhat) untrusted publisher for\nthe purpose of replicating changes from a single table, e.g. some\nevents table. But you don't want to allow replication into any other\ntables, even if the publisher tells you to. Patch v1 would force you\nto have SET ROLE privileges on the target events table its owner. So\nif that owner owns other tables too, then effectively you'd allow the\npublisher to write into those tables too. The current behaviour\n(without any patch) supports protecting against this escalation, by\ngiving only INSERT permissions on a single table without the need for\nSET ROLE. My v2 patch preserves that ability.\n\n4.a again seems like an obvious non-issue to me because the\nsubscription owner can already \"escalate\" to table owner using SET\nROLE. 4.b seems like it's pretty much the same as 3, afaict all the\nsame arguments apply. And I honestly can't think of a real situation\nwhere you would not trust the subscription owner (as opposed to the\npublisher). 
If any of you have an example of such a situation I'd love\nto understand this one better.\n\nAll in all, I think patch v2 is the right direction here, because it\ncovers all these escalations to some extent.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 11:15:28 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 11:15, Jelte Fennema <postgres@jeltef.nl> wrote:\n> Which can currently only be addressed by having 1\n> subscription/publication pair for every table owner.\n\nOops, moving sentences around in my email made me not explicitly\nreference escalation 1 anymore. The above should have been:\n1 can currently only be addressed by having 1 subscription/publication\npair for every table owner.\n\n\n",
"msg_date": "Tue, 28 Mar 2023 11:21:28 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 6:08 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Attached is an updated version of your patch with what I had in mind\n> (admittedly it needed one more line than \"just\" the return to make it\n> work). But as you can see all previous tests for a lowly privileged\n> subscription owner that **cannot** SET ROLE to the table owner\n> continue to work as they did before. While still downgrading to the\n> table owners role when the subscription owner **can** SET ROLE to the\n> table owner.\n>\n> Obviously this needs some comments explaining what's going on and\n> probably some code refactoring and/or variable renaming, but I hope\n> it's clear what I meant now: For high privileged subscription owners,\n> we downgrade to the permissions of the table owner, but for low\n> privileged ones we care about permissions of the subscription owner\n> itself.\n\nHmm. This is an interesting idea. A variant on this theme could be:\nwhat if we made this an explicit configuration option?\n\nI'm worried that if we just try to switch users and silently fall back\nto not doing so when we don't have enough permissions, the resulting\nbehavior is going to be difficult to understand and troubleshoot. I'm\nthinking that maybe if you let people pick the behavior they want that\nbecomes more comprehensible. It's also a nice insurance policy: say\nfor the sake of argument we make switch-to-table-owner the new\ndefault. If that new behavior causes something to happen to somebody\nthat they don't like, they can always turn it off, even if they are a\nhighly privileged user who doesn't \"need\" to turn it off.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 12:13:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 1:38 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > If they do have those things then in\n> > theory they might be able to protect themselves, but in practice they\n> > are unlikely to be careful enough. I judge that practically every\n> > installation where table owners use triggers would be easily\n> > compromised. Only the most security-conscious of table owners are\n> > going to get this right.\n>\n> This is the interesting part.\n>\n> Commit 11da97024ab set the subscription process's search path as empty.\n> It seems it was done to protect the subscription owner against users\n> writing into the public schema. But after your apply-as-table-owner\n> patch, it seems to also offer some protection for table owners against\n> a malicious subscription owner, too.\n>\n> But I must be missing something obvious here -- if the subscription\n> process has an empty search_path, what can an attacker do to trick it?\n\nOh, interesting. I hadn't realized we were doing that, and I do think\nthat narrows the vulnerability quite a bit.\n\nBut I still feel pretty uncomfortable with the idea of requiring only\nINSERT/UPDATE/DELETE permissions on the target table. Let's suppose\nthat you create a table and attach a trigger to it which is SECURITY\nINVOKER. Absent logical replication, you don't really need to worry\nabout whether that function is written securely -- it will run with\nprivileges of the person performing the DML, and not impact your\naccount at all. They have reason to be scared of you, but not the\nreverse. However, if the people doing DML on the table can arrange to\nperform that DML as you, then any security vulnerabilities in your\nfunction potentially allow them to compromise your account. Now, there\nmight not be any, but there also might be some, depending on exactly\nwhat your function does. 
And if logical replication into a table\nrequires only I/U/D permission then it's basically a back-door way to\nrun functions that someone could otherwise execute only as themselves\nas the table owner, and that's scary.\n\nNow, I don't know how much to worry about that. As you just pointed\nout to me in some out-of-band discussion, this is only going to affect\na table owner who writes a trigger, makes it insecure in some way\nother than failure to sanitize the search_path, and sets it ENABLE\nALWAYS TRIGGER or ENABLE REPLICA TRIGGER. And maybe we could say that\nif you do that last part, you kind of need to think about the\nconsequences for logical replication. If so, we could document that\nproblem away. It would also affect someone who uses a default\nexpression or other means of associating executable code with the\ntable, and something like a default expression doesn't require any\nexplicit setting to make it apply in case of replication, so maybe the\nrisk of someone messing up is a bit higher.\n\nBut this definitely makes it more of a judgment call than I thought\ninitially. Functions that are vulnerable to search_path exploits are a\ndime a dozen; other kinds of exploits are going to be less common, and\nmore dependent on exactly what the function is doing.\n\nI'm still inclined to leave the patch checking for SET ROLE, because\nafter all, we're thinking of switching user identities to the table\nowner, and checking whether we can SET ROLE is the most natural way of\ndoing that, and definitely doesn't leave room to escalate to any\nprivilege you don't already have. However, there might be an argument\nthat we ought to do something else, like have a REPLICATE privilege on\nthe table that the owner can grant to users that they trust to\nreplicate into that table. Because time is short, I'd like to leave\nthat idea for a future release. 
What I would like to change in the\npatch in this release is to add a new subscription property along the\nlines of what I proposed to Jelte in my earlier email: let's provide\nan escape hatch that turns off the user-switching behavior. If we do\nthat, then anyone who feels themselves worse off after this patch can\nswitch back to the old behavior. Most people will be better off, I\nbelieve, and the opportunity to make things still better in the future\nis not foreclosed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 13:55:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, 28 Mar 2023 at 18:13, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 27, 2023 at 6:08 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> > For high privileged subscription owners,\n> > we downgrade to the permissions of the table owner, but for low\n> > privileged ones we care about permissions of the subscription owner\n> > itself.\n>\n> Hmm. This is an interesting idea. A variant on this theme could be:\n> what if we made this an explicit configuration option?\n\nSounds perfect to me. Let's do that. As long as the old no-superuser\ntests continue to pass when disabling the new switch-to-table-owner\nbehaviour, that sounds totally fine to me. I think it's probably\neasiest if you use the tests from my v2 patch when you add that\noption, since that was by far the thing I spent most time on to get\nright in the v2 patch.\n\nOn Tue, 28 Mar 2023 at 18:13, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Mar 27, 2023 at 6:08 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n> > Attached is an updated version of your patch with what I had in mind\n> > (admittedly it needed one more line than \"just\" the return to make it\n> > work). But as you can see all previous tests for a lowly privileged\n> > subscription owner that **cannot** SET ROLE to the table owner\n> > continue to work as they did before. While still downgrading to the\n> > table owners role when the subscription owner **can** SET ROLE to the\n> > table owner.\n> >\n> > Obviously this needs some comments explaining what's going on and\n> > probably some code refactoring and/or variable renaming, but I hope\n> > it's clear what I meant now: For high privileged subscription owners,\n> > we downgrade to the permissions of the table owner, but for low\n> > privileged ones we care about permissions of the subscription owner\n> > itself.\n>\n> Hmm. This is an interesting idea. 
A variant on this theme could be:\n> what if we made this an explicit configuration option?\n>\n> I'm worried that if we just try to switch users and silently fall back\n> to not doing so when we don't have enough permissions, the resulting\n> behavior is going to be difficult to understand and troubleshoot. I'm\n> thinking that maybe if you let people pick the behavior they want that\n> becomes more comprehensible. It's also a nice insurance policy: say\n> for the sake of argument we make switch-to-table-owner the new\n> default. If that new behavior causes something to happen to somebody\n> that they don't like, they can always turn it off, even if they are a\n> highly privileged user who doesn't \"need\" to turn it off.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 28 Mar 2023 20:53:36 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, 2023-03-28 at 13:55 -0400, Robert Haas wrote:\n> Oh, interesting. I hadn't realized we were doing that, and I do think\n> that narrows the vulnerability quite a bit.\n\nIt's good to know I wasn't missing something obvious.\n\n> bsent logical replication, you don't really need to worry\n> about whether that function is written securely -- it will run with\n> privileges of the person performing the DML, and not impact your\n> account at all.\n\nThat's not strictly true. See the example at the bottom of this email.\n\nThe second trigger is SECURITY INVOKER, and it captures the leaked\nsecret number that was stored in the tuple by an earlier SECURITY\nDEFINER trigger.\n\nPerhaps I'm being pedantic, but my point is that SECURITY INVOKER is\nnot a magical shield that protects users from themselves without bound.\nIt protects users against some kinds of attacks, sure, but there's a\nlimit to what it has to offer.\n\n> They have reason to be scared of you, but not the\n> reverse. However, if the people doing DML on the table can arrange to\n> perform that DML as you, then any security vulnerabilities in your\n> function potentially allow them to compromise your account.\n\nOK, but I'd like to be clear that you've moved from your prior\nstatement:\n\n \"in theory they might be able to protect themselves, but in practice\nthey are unlikely to be careful enough\"\n\nTo something very different, where it seems that we can't think of a\nconcrete problem even for not-so-careful users.\n\nThe dangerous cases seem to be something along the lines of a security-\ninvoker trigger function that builds and executes arbirary SQL based on\nthe row contents. 
And then the table owner would then still need to set\nENABLE ALWAYS TRIGGER.\n\nDo we really want to take that case on as our security responsibility?\n\n> It would also affect someone who uses a default\n> expression or other means of associating executable code with the\n> table, and something like a default expression doesn't require any\n> explicit setting to make it apply in case of replication, so maybe\n> the\n> risk of someone messing up is a bit higher.\n\nPerhaps, but I still don't understand that case. Unless I'm missing\nsomething, the table owner would have to write a pretty weird default\nexpression or check constraint or whatever to end up with something\ndangerous.\n\n> But this definitely makes it more of a judgment call than I thought\n> initially.\n\nAnd I'm fine if the judgement is that we just require SET ROLE to be\nconservative and make sure we don't over-promise in 16. That's a\ntotally reasonable thing: it's easier to loosen restrictions later than\nto tighten them. Furthermore, just because you and I can't think of\nexploitable problems doesn't mean they don't exist.\n\nI have found this discussion enlightening, so thank you for going\nthrough these details with me.\n\n> I'm still inclined to leave the patch checking for SET ROLE\n\n+0.5. I want the patch to go in either way, and can carry on the\ndiscussion later and improve things more for 17.\n\nBut I want to say generally that I believe we spend too much effort\ntrying to protect unreasonable users from themselves, which interferes\nwith our ability to protect reasonable users and solve real use cases.\n\n> However, there might be an argument\n> that we ought to do something else, like have a REPLICATE privilege\n\nSounds cool. 
Not sure I have an opinion yet, but probably 17 material.\n\n> What I would like to change in the\n> patch in this release is to add a new subscription property along the\n> lines of what I proposed to Jelte in my earlier email: let's provide\n> an escape hatch that turns off the user-switching behavior.\n\nI believe switching to the table owner (as done in your patch) is the\nright best practice, and should be the default.\n\nI'm fine with an escape hatch here to ease migrations or whatever. But\nplease do it in an unobtrusive way such that we (as hackers) can mostly\nforget that the non-switching behavior exists. At least for me, our\nsystem is already pretty complex without two kinds of subscription\nsecurity models.\n\n\nRegards,\n\tJeff Davis\n\n\n----- example follows ------\n\n -- user foo\n CREATE TABLE secret(I INT);\n INSERT INTO secret VALUES(42);\n CREATE TABLE r(i INT, secret INT);\n CREATE OR REPLACE FUNCTION a_func() RETURNS TRIGGER\n SECURITY DEFINER LANGUAGE plpgsql AS\n $$\n BEGIN\n SELECT i INTO NEW.secret FROM secret;\n RETURN NEW;\n END;\n $$;\n CREATE OR REPLACE FUNCTION z_func() RETURNS TRIGGER\n SECURITY INVOKER LANGUAGE plpgsql AS\n $$\n BEGIN\n IF NEW.secret <> abs(NEW.secret) THEN\n RAISE EXCEPTION 'no negative secret numbers allowed';\n END IF;\n RETURN NEW;\n END;\n $$;\n CREATE TRIGGER a_trigger BEFORE INSERT ON r\n FOR EACH ROW EXECUTE PROCEDURE a_func();\n CREATE TRIGGER z_trigger BEFORE INSERT ON r\n FOR EACH ROW EXECUTE PROCEDURE z_func();\n GRANT INSERT ON r TO bar;\n GRANT USAGE ON SCHEMA foo TO bar;\n\n -- user bar\n CREATE OR REPLACE FUNCTION bar.abs(i INT4) RETURNS INT4\n LANGUAGE plpgsql AS\n $$\n BEGIN\n INSERT INTO secret_copy VALUES(i);\n RETURN pg_catalog.abs(i);\n END;\n $$;\n CREATE TABLE secret_copy(secret int);\n SET search_path = \"$user\", pg_catalog;\n INSERT INTO foo.r VALUES(1);\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:48:18 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 6:48 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> That's not strictly true. See the example at the bottom of this email.\n\nYeah, we're very casual about repeated user switching in all kinds of\ncontexts, and that's really pretty scary. I don't know how to fix that\nwithout removing capabilities that people rely on.\n\n> The dangerous cases seem to be something along the lines of a security-\n> invoker trigger function that builds and executes arbirary SQL based on\n> the row contents. And then the table owner would then still need to set\n> ENABLE ALWAYS TRIGGER.\n>\n> Do we really want to take that case on as our security responsibility?\n\nThat's something about which I would like to get more opinions.\n\n> I believe switching to the table owner (as done in your patch) is the\n> right best practice, and should be the default.\n>\n> I'm fine with an escape hatch here to ease migrations or whatever. But\n> please do it in an unobtrusive way such that we (as hackers) can mostly\n> forget that the non-switching behavior exists. At least for me, our\n> system is already pretty complex without two kinds of subscription\n> security models.\n\nHere's a new patch set to show how this would work. 0001 is as before.\n0002 adds a run_as_owner option to subscriptions. This doesn't make\nthe updated regression tests fail, because they don't use it. If you\nrevert the changes to 027_nosuperuser.pl then you get failures (as one\nwould hope) and if you then add WITH (run_as_owner = true) to the\nCREATE SUBSCRIPTION command in 027_nosuperuser.pl then it passes\nagain. I need to spend some more time thinking about what the tests\nactually ought to look like if we go this route -- I haven't looked\nthrough what Jelte proposed yet -- and also the documentation would\nneed a bunch of updating.\n\nNow, backing up a minute to address your other comments here, I'm not\nparticularly dedicated to this new option. 
But the problem is that you\ncan't please all the people all the time. When I proposed to\nunconditionally change the behavior, people said \"what if\nSECURITY_RESTRICTED_OPERATION hoses people, or they just don't like\nthe new behavior?\" and \"what if I don't trust the publisher and want\nto use table permissions on the subscriber side to minimize my\nexposure?\". Well, this option provides a way out of those problems by\nallowing you to get the current behavior, but that gives rise to the\ncomplaint you raise here: it's nicer to have one behavior than two. To\nbe honest, I'm pretty sympathetic to that complaint: I think the\nproblems if we don't add this switch are probably not going to affect\nmany people at all, and the new switch will therefore be of very\nlittle value but will be something we continue to maintain forever.\nHowever, I could be wrong and there's certainly something to be said\nfor having escape hatches that let people un-break stuff that core\npatches break.\n\nBut you know we can't have it both ways. We either add an escape hatch\nto make sure that people have a way out if they run into trouble and\nincur the risk and complexity of carrying it probably forever, OR we\nsuck it up and make it a hard behavior change and tell people who are\nupset \"too bad, we changed it.\" And either way, at least one PostgreSQL\nhacker who has taken the time to write about this topic is going to\nthink we've made a poor call, or at best a dubious one, and either\nway, there will probably be at least one user who cusses the\nPostgreSQL development process out under their breath. I don't know\nwhat to do about that. There's not a fix for this gaping security\nproblem that doesn't involve the potential of some people being\ninconvenienced.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 29 Mar 2023 16:00:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
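As a concrete sketch of the escape hatch described above, using the option name run_as_owner proposed in the 0002 patch (the name was still under debate at this point, and the connection string and publication name here are placeholders):

```sql
-- Patched default: changes are applied as the owner of each target table.
CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=src'
    PUBLICATION mypub;

-- 0002's escape hatch: keep the old behavior and apply every change
-- as the subscription owner instead.
CREATE SUBSCRIPTION legacysub
    CONNECTION 'host=publisher dbname=src'
    PUBLICATION mypub
    WITH (run_as_owner = true);

-- The setting can also be toggled on an existing subscription.
ALTER SUBSCRIPTION legacysub SET (run_as_owner = false);
```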
{
"msg_contents": "On Wed, 2023-03-29 at 16:00 -0400, Robert Haas wrote:\n> Here's a new patch set to show how this would work. 0001 is as\n> before.\n\nThe following special case (\"unless they can SET ROLE...\"):\n\n /*\n * Try to prevent the user to which we're switching from assuming the\n * privileges of the current user, unless they can SET ROLE to that\n * user anyway.\n */\n\nwhich turns SwitchToUntrustedUser() into a no-op, is conceptually\nredundant with patch 0002. The only difference is that the former (in\n0001) is implicit and the latter (in 0002) is declared.\n\nI say just take the special case out of 0001. If the trigger doesn't\nwork as a SECURITY_RESTRICTED_OPERATION, and is also ENABLED ALWAYS,\nthen the user can just use the new option in 0002 to get the old\nbehavior. I don't see a reason to implicitly give them the old\nbehavior, as 0001 does.\n\n> 0002 adds a run_as_owner option to subscriptions. This doesn't make\n> the updated regression tests fail, because they don't use it. If you\n> revert the changes to 027_nosuperuser.pl then you get failures (as\n> one\n> would hope) and if you then add WITH (run_as_owner = true) to the\n> CREATE SUBSCRIPTION command in 027_nosuperuser.pl then it passes\n> again. I need to spend some more time thinking about what the tests\n> actually ought to look like if we go this route -- I haven't looked\n> through what Jelte proposed yet -- and also the documentation would\n> need a bunch of updating.\n\n\"run_as_owner\" is ambiguous -- subscription owner or table owner?\n\nI would prefer something like \"trust_table_owners\". That would\ncommunicate that the user shouldn't choose it unless they know what\nthey're doing.\n\n\n\n> And either way, at least PostgreSQL\n> hacker who has taken the time to write about this topic is going to\n> think we've made a poor call, or at best a dubious one, and either\n> way,\n\nIf you are worried that *I* think 0002 would be a poor call, then no, I\ndon't. 
Initially I didn't like the idea of supporting two behaviors\n(and who would?), but we probably can't avoid it at least for right\nnow.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 22:19:02 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 1:19 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> I say just take the special case out of 0001. If the trigger doesn't\n> work as a SECURITY_RESTRICTED_OPERATION, and is also ENABLED ALWAYS,\n> then the user can just use the new option in 0002 to get the old\n> behavior. I don't see a reason to implicitly give them the old\n> behavior, as 0001 does.\n\nMmm, I don't agree. Suppose A can SET ROLE to B or C, and B can SET\nROLE to A. With the patch as written, actions on B's tables are not\nconfined by the SECURITY_RESTRICTED_OPERATION flag, but actions on C's\ntables are.\n\nI think we want to do everything possible to avoid people feeling like\nthey need to turn on this new option. I'm not sure we'll ever be able\nto get rid of it, but we certainly should avoid doing things that make\nit more likely that it will be needed.\n\n> > 0002 adds a run_as_owner option to subscriptions. This doesn't make\n> > the updated regression tests fail, because they don't use it. If you\n> > revert the changes to 027_nosuperuser.pl then you get failures (as\n> > one\n> > would hope) and if you then add WITH (run_as_owner = true) to the\n> > CREATE SUBSCRIPTION command in 027_nosuperuser.pl then it passes\n> > again. I need to spend some more time thinking about what the tests\n> > actually ought to look like if we go this route -- I haven't looked\n> > through what Jelte proposed yet -- and also the documentation would\n> > need a bunch of updating.\n>\n> \"run_as_owner\" is ambiguous -- subscription owner or table owner?\n>\n> I would prefer something like \"trust_table_owners\". That would\n> communicate that the user shouldn't choose it unless they know what\n> they're doing.\n\nI agree that the naming is somewhat problematic, but I don't like\ntrust_table_owners. 
It's not clear enough about what actually happens.\nI want the name to describe behavior, not sentiment.\n\nrun_as_subscription_owner removes the ambiguity, but is long.\n\nrun_as_table_owner is a bit shorter, and we could do that with the\nsense reversed. Is that equally clear? I'm not sure.\n\nI can think of other alternatives, like user_switching or\nswitch_to_table_owner or no_user_switching or various other things,\nbut none of them seem very good to me.\n\nAnother idea could be to make the option non-Boolean. This is\ncomically long and I can't seriously recommend it, but just to\nillustrate the point, if you type CREATE SUBSCRIPTION ... WITH\n(execute_code_as_owner_of_which_object = subscription) then you\ncertainly should know what you've signed up for! If there were a\nshorter version that were still clear, I could go for that, but I'm\nhaving trouble coming up with exact wording.\n\nI don't think run_as_owner is terrible, despite the ambiguity. It's\ntalking about the owner of the object on which the property is being\nset. Isn't that the most natural interpretation? I'd be pretty\nsurprised if I set a property called run_as_owner on object A and it\nran as the owner of some other object B. I think our notion of how\nambiguous this is may be somewhat inflated by the fact that we've just\nhad a huge conversation about whether it should be the table owner or\nthe subscription owner, so those possibilities are etched in our mind\nin a way that maybe they won't be for people coming at this fresh. But\nit's hard to be sure what other people will think about something, and\nI don't want to be too optimistic about the name I picked, either.\n\n> If you are worried that *I* think 0002 would be a poor call, then no, I\n> don't. Initially I didn't like the idea of supporting two behaviors\n> (and who would?), but we probably can't avoid it at least for right\n> now.\n\nOK, good. 
Then we have a way forward that nobody's too upset about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 09:41:39 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
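The A/B/C scenario above can be spelled out with the SET option on role grants (the role names are the hypothetical ones from the example):

```sql
-- A owns the subscription; B and C own replicated tables.
CREATE ROLE a;
CREATE ROLE b;
CREATE ROLE c;

GRANT b TO a WITH SET TRUE;  -- A can SET ROLE to B
GRANT c TO a WITH SET TRUE;  -- A can SET ROLE to C
GRANT a TO b WITH SET TRUE;  -- B can SET ROLE back to A

-- Under 0001, applying changes to B's tables skips the
-- SECURITY_RESTRICTED_OPERATION sandbox (B could become A anyway),
-- while changes to C's tables keep it, so C cannot attack A.
```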
{
"msg_contents": "On Thu, 30 Mar 2023 at 15:42, Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Mar 30, 2023 at 1:19 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I say just take the special case out of 0001. If the trigger doesn't\n> > work as a SECURITY_RESTRICTED_OPERATION, and is also ENABLED ALWAYS,\n> > then the user can just use the new option in 0002 to get the old\n> > behavior. I don't see a reason to implicitly give them the old\n> > behavior, as 0001 does.\n>\n> Mmm, I don't agree. Suppose A can SET ROLE to B or C, and B can SET\n> ROLE to A. With the patch as written, actions on B's tables are not\n> confined by the SECURITY_RESTRICTED_OPERATION flag, but actions on C's\n> tables are.\n\nI think that's fair. There's no need to set\nSECURITY_RESTRICTED_OPERATION if it doesn't protect you anyway, and\nindeed it might break things. To be clear I do think it's important to\nstill switch to the table owner, simply for consistency. But that's\ndone in the patch so that's fine.\n\n> I agree that the naming is somewhat problematic, but I don't like\n> trust_table_owners. It's not clear enough about what actually happens.\n> I want the name to describe behavior, not sentiment.\n\nFor security related things, I think sentiment is often just as\nimportant as the actual behaviour. It's not without reason that newer\njavascript frameworks have things like dangerouslySetInnerHTML, to\nscare people away from using it unless they know what they are doing.\n\n> I don't think run_as_owner is terrible, despite the ambiguity. It's\n> talking about the owner of the object on which the property is being\n> set. Isn't that the most natural interpretation? I'd be pretty\n> surprised if I set a property called run_as_owner on object A and it\n> ran as the owner of some other object B.\n\nI think that's fair and I'd be happy with run_as_owner. 
If someone\ndoesn't understand which owner, they should probably check the\ndocumentation anyways to understand the implications.\n\nRegarding the actual patch. I think the code looks good. Mainly the\ntests and docs are lacking for the new option. Like I said for the\ntests you can borrow the tests I updated for my v2 patch, I think\nthose should work fine for the new option.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 17:11:18 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 11:11 AM Jelte Fennema <postgres@jeltef.nl> wrote:\n> Regarding the actual patch. I think the code looks good. Mainly the\n> tests and docs are lacking for the new option. Like I said for the\n> tests you can borrow the tests I updated for my v2 patch, I think\n> those should work fine for the new option.\n\nI took a look at that but I didn't really feel like that was quite the\ndirection I wanted to go. I'd actually like to separate the tests of\nthe new option out into their own file, so that if for some reason we\ndecide we want to remove it in the future, it's easier to nuke all the\nassociated tests. Also, quite frankly, I think we've gone way\noverboard in terms of loading too many tests into a single file, with\nthe result that it's very hard to understand exactly what and all the\nfile is actually testing and what it's intended to be testing. So the\nattached 0002 does it that way. I've also amended 0001 and 0002 with\ndocumentation changes that I hope are appropriate.\n\nI noticed along the way that my earlier commits had missed one place\nthat needed to be updated by the pg_create_subscription patch I\ncreated earlier. A fix for that is included in 0001, but it can be\nbroken out and committed separately if somebody feels strongly about\nit. I personally don't think it's worth it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 30 Mar 2023 14:17:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, 2023-03-30 at 09:41 -0400, Robert Haas wrote:\n> On Thu, Mar 30, 2023 at 1:19 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I say just take the special case out of 0001. If the trigger\n> > doesn't\n> > work as a SECURITY_RESTRICTED_OPERATION, and is also ENABLED\n> > ALWAYS,\n> > then the user can just use the new option in 0002 to get the old\n> > behavior. I don't see a reason to implicitly give them the old\n> > behavior, as 0001 does.\n> \n> Mmm, I don't agree. Suppose A can SET ROLE to B or C, and B can SET\n> ROLE to A. With the patch as written, actions on B's tables are not\n> confined by the SECURITY_RESTRICTED_OPERATION flag, but actions on\n> C's\n> tables are.\n\nIt's interesting that it's not transitive, but I'm not sure whether\nthat's an argument for or against the current approach, or where it\nfits (or doesn't fit) with my suggestion. Why do you consider it\nimportant that C's actions are SECURITY_RESTRICTED_OPERATIONs?\n\n> I think we want to do everything possible to avoid people feeling\n> like\n> they need to turn on this new option. I'm not sure we'll ever be able\n> to get rid of it, but we certainly should avoid doing things that\n> make\n> it more likely that it will be needed.\n\nI don't think it helps much, though. While I previously said that the\nspecial-case behavior is implicit (which is true), it still almost\ncertainly requires a manual step:\n\n GRANT subscription_owner TO table_owner WITH SET;\n\nIn version 16, the subscription owner is almost certainly a superuser,\nand the table owner almost certainly is not, so there's little chance\nthat it just happens that the table owner has that privilege already.\n\nI don't think we want to encourage such grants to proliferate any more\nthan we want the option you introduce in 0002 to proliferate. Arguably,\nit's worse.\n\nAnd let's say a user says \"I upgraded and my trigger broke logical\nreplication with message about a security-restricted operation... 
how\ndo I get up and running again?\". With the patches as-written, we will\nhave two answers to that question:\n\n * GRANT subscription_owner TO table_owner WITH SET TRUE\n * ALTER SUBSCRIPTION ... ($awesome_option_name=false)\n\nUnder what circumstances would we recommend the former vs. the latter?\n\n> > \n> I agree that the naming is somewhat problematic, but I don't like\n> trust_table_owners. It's not clear enough about what actually\n> happens.\n> I want the name to describe behavior, not sentiment.\n\nOn reflection, I agree here. We want it to communicate something about\nthe behavior or mechanism.\n\n> run_as_subscription_owner removes the ambiguity, but is long.\n\nThen fewer people will use it, which might be a good thing.\n\n> I can think of other alternatives, like user_switching or\n> switch_to_table_owner or no_user_switching or various other things,\n> but none of them seem very good to me.\n\nI like the idea of using \"switch\" (or some synonym) because it's\ntechnically more correct. The subscription always runs as the\nsubscription owner; we are just switching temporarily while applying a\nchange.\n\n> Another idea could be to make the option non-Boolean. This is\n> comically long and I can't seriously recommend it, but just to\n> illustrate the point, if you type CREATE SUBSCRIPTION ... WITH\n> (execute_code_as_owner_of_which_object = subscription) then you\n> certainly should know what you've signed up for! If there were a\n> shorter version that were still clear, I could go for that, but I'm\n> having trouble coming up with exact wording.\n\nI don't care for that -- it communicates the options as co-equal and\nmaybe something that would live forever (or even have more options in\nthe future). 
I'd prefer that nobody uses the non-switching behavior\nexcept for migration purposes or weird use cases we don't really\nunderstand.\n\n> I don't think run_as_owner is terrible, despite the ambiguity.\n\nI won't object but I'm not thrilled.\n\n> \nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 30 Mar 2023 11:52:10 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
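The two answers Jeff lists would look roughly like this (subscription and role names are placeholders; per Robert's description of 0002, setting run_as_owner = true restores the old run-as-subscription-owner behavior):

```sql
-- Answer 1: let the trigger run unsandboxed by making the table owner
-- able to SET ROLE to the subscription owner.
GRANT subscription_owner TO table_owner WITH SET TRUE;

-- Answer 2: revert the whole subscription to the old behavior.
ALTER SUBSCRIPTION mysub SET (run_as_owner = true);
```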
{
"msg_contents": "On Thu, Mar 30, 2023 at 2:52 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Mmm, I don't agree. Suppose A can SET ROLE to B or C, and B can SET\n> > ROLE to A. With the patch as written, actions on B's tables are not\n> > confined by the SECURITY_RESTRICTED_OPERATION flag, but actions on\n> > C's\n> > tables are.\n>\n> It's interesting that it's not transitive, but I'm not sure whether\n> that's an argument for or against the current approach, or where it\n> fits (or doesn't fit) with my suggestion. Why do you consider it\n> important that C's actions are SECURITY_RESTRICTED_OPERATIONs?\n\nSo that C can't try to hack into A's account.\n\nI mean the point here is that B already has permissions to get into\nA's account whenever they like, without any hacking. So we don't need\nto impose SECURITY_RESTRICTED_OPERATION when running as B, because the\nonly purpose of SECURITY_RESTRICTED_OPERATION is to prevent the role\nto which we're switching from attacking the role from which we're\nswitching. And that's how the patch is currently coded. You proposed\nremoving that behavior, on the theory that if the\nSECURITY_RESTRICTED_OPERATION restrictions were a problem, someone\ncould activate the run_as_owner option (or whatever we end up calling\nit). But the run_as_owner option applies to the entire subscription.\nIf A activates that option, then B's hypothetical triggers that run\nafoul of the SECURITY_RESTRICTED_OPERATION restrictions start working\nagain (woohoo!) but they're now vulnerable to attacks from C. 
With the\npatch as coded, A doesn't need to use run_as_owner, everything still\njust works for B, and A is still protected against C.\n\n> In version 16, the subscription owner is almost certainly a superuser,\n> and the table owner almost certainly is not, so there's little chance\n> that it just happens that the table owner has that privilege already.\n>\n> I don't think we want to encourage such grants to proliferate any more\n> than we want the option you introduce in 0002 to proliferate. Arguably,\n> it's worse.\n\nI don't necessarily find those role grants to be a problem. Obviously\nit depends on the use case. If you're hoping to be able to set up an\naccount whose only purpose in life is to own subscriptions and which\nshould have as few permissions as possible, then those role grants\nsuck, and a hypothetical future feature where you can GRANT\nREPLICATION ON TABLE t1 TO subscription_owning_user will be far\nbetter. But I imagine CREATE SUBSCRIPTION being used either by\nsuperusers or by people who already have those role grants anyway,\nbecause I imagine replication as something that a highly privileged\nuser configures on behalf of everyone who uses the system. And in that\ncase those role grants aren't something new that you do specifically\nfor logical replication - they're already there because you need them\nto administer stuff. Or you're the superuser and don't need them\nanyway.\n\n> And let's say a user says \"I upgraded and my trigger broke logical\n> replication with message about a security-restricted operation... how\n> do I get up and running again?\". With the patches as-written, we will\n> have two answers to that question:\n>\n> * GRANT subscription_owner TO table_owner WITH SET TRUE\n> * ALTER SUBSCRIPTION ... ($awesome_option_name=false)\n>\n> Under what circumstances would we recommend the former vs. 
the latter?\n\nWell, the latter is clearly better because it has such an awesome\noption name, right?\n\nMore seriously, my theory is that there's very little use case for\nhaving a replication trigger, default expression, etc. that is\nperforming a security restricted operation. And if someone does have\na use case, and it's between users that can't already SET ROLE back\nand forth, then the setup is pretty dubious from a security\nperspective and maybe the user ought to rethink it. And if they don't\nwant to rethink it, then they need to throw security out the window,\nand I don't really care which of those commands they use to do it, but\nthe second one would probably break less other stuff for them, so I'd\nlikely recommend that one.\n\n> > I don't think run_as_owner is terrible, despite the ambiguity.\n>\n> I won't object but I'm not thrilled.\n\nLet's see if anyone else weighs in.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 16:08:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, 2023-03-30 at 16:08 -0400, Robert Haas wrote:\n> But the run_as_owner option applies to the entire subscription.\n> If A activates that option, then B's hypothetical triggers that run\n> afoul of the SECURITY_RESTRICTED_OPERATION restrictions start working\n> again (woohoo!) but they're now vulnerable to attacks from C. With\n> the\n> patch as coded, A doesn't need to use run_as_owner, everything still\n> just works for B, and A is still protected against C.\n\nThat's moving the goalposts a little, though:\n\n\"Some concern was expressed ... might break things that are currently\nworking...\"\n\nhttps://www.postgresql.org/message-id/CA+TgmoaE35kKS3-zSvGiZszXP9Tb9rNfYzT=+fO8Ehk5EDKrag@mail.gmail.com\n\nIf the original use case was \"don't break stuff\", I think patch 0002\nsolves that, and we don't need this special case in 0001. Would you\nagree with that statement?\n\nHypothetically, if 0001 (without the special case) along with 0002 were\nalready in 16, and then there was some hypothetical 0003 that\nintroduced the special case to solve the problem described above with\nthe bidirectional trust relationship, I'm not sure I'd be sold on 0003.\nFirst, the problem seems fairly minor to me, at least in comparison to\nthe main problem you are solving in this thread. Second, it seems like\nyou could work around it by having two subscriptions. Third, it's a bit\nunintuitive at least to me: if you introduce a new user Z that can SET\nROLE to any of A, B, or C, and then Z reassigns the subscription to\nthemselves, then B's trigger will break because B can't SET ROLE to Z.\n\nOthers seem to like it, so don't take that as a hard objection.\n\n> \n> But I imagine CREATE SUBSCRIPTION being used either by\n> superusers or by people who already have those role grants anyway,\n> because I imagine replication as something that a highly privileged\n> user configures on behalf of everyone who uses the system. 
And in\n> that\n> case those role grants aren't something new that you do specifically\n> for logical replication - they're already there because you need them\n> to administer stuff. Or you're the superuser and don't need them\n> anyway.\n\nDid the discussion drift back towards the SET ROLE in the other\ndirection? I thought we had settled that in v16 we would require that\nthe subscription owner can SET ROLE to the table owner (as in your\ncurrent 0001), but that we could revisit it later.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 31 Mar 2023 00:36:14 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 3:36 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> That's moving the goalposts a little, though:\n>\n> \"Some concern was expressed ... might break things that are currently\n> working...\"\n>\n> https://www.postgresql.org/message-id/CA+TgmoaE35kKS3-zSvGiZszXP9Tb9rNfYzT=+fO8Ehk5EDKrag@mail.gmail.com\n>\n> If the original use case was \"don't break stuff\", I think patch 0002\n> solves that, and we don't need this special case in 0001. Would you\n> agree with that statement?\n\nI think that's too Boolean. The special case in 0001 is a better\nsolution for the cases where it works. It's both more granular and\nmore convenient. The fact that you might be able to get by with 0002\ndoesn't negate that.\n\n> > But I imagine CREATE SUBSCRIPTION being used either by\n> > superusers or by people who already have those role grants anyway,\n> > because I imagine replication as something that a highly privileged\n> > user configures on behalf of everyone who uses the system. And in\n> > that\n> > case those role grants aren't something new that you do specifically\n> > for logical replication - they're already there because you need them\n> > to administer stuff. Or you're the superuser and don't need them\n> > anyway.\n>\n> Did the discussion drift back towards the SET ROLE in the other\n> direction? I thought we had settled that in v16 we would require that\n> the subscription owner can SET ROLE to the table owner (as in your\n> current 0001), but that we could revisit it later.\n\nYeah, I think that's what we agreed. I'm just saying that I'm not as\nconcerned about that design as you are, and encouraging you to maybe\nnot be quite so dismayed by it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 31 Mar 2023 15:17:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, 2023-03-31 at 15:17 -0400, Robert Haas wrote:\n> I think that's too Boolean. The special case in 0001 is a better\n> solution for the cases where it works. It's both more granular and\n> more convenient.\n\nI guess the \"more convenient\" is where I'm confused, because the \"grant\nsubscription_owner to table owner with set role true\" is not likely to\nbe conveniently already present; it would need to be issued manually to\ntake advantage of this special case.\n\nDo you have any concern about the weirdness where assigning the\nsubscription to a higher-privilege user Z would cause B's trigger to\nfail?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 31 Mar 2023 15:46:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 04:47:22PM -0400, Robert Haas wrote:\n> So that's a grey area, at least IMHO. The patch could be revised in\n> some way, and the permissions requirements downgraded. However, if we\n> do that, I think we're going to have to document that although in\n> theory table owners can make themselves against subscription owners,\n> in practice they probably won't be. That approach has some advantages,\n> and I don't think it's insane. However, I am not convinced that it is\n> the best idea, either, and I had the impression based on\n> pgsql-security discussion that Andres and Noah thought this way was\n> better. I might have misinterpreted their position, and they might\n> have changed their minds, and they might have been wrong. But that's\n> how we got here.\n\n[\"this way\" = requirement for SET ROLE]\n\nOn Wed, Mar 29, 2023 at 04:00:45PM -0400, Robert Haas wrote:\n> > The dangerous cases seem to be something along the lines of a security-\n> > invoker trigger function that builds and executes arbirary SQL based on\n> > the row contents. And then the table owner would then still need to set\n> > ENABLE ALWAYS TRIGGER.\n> >\n> > Do we really want to take that case on as our security responsibility?\n> \n> That's something about which I would like to get more opinions.\n\nThe most-plausible-to-me attack involves an ENABLE ALWAYS trigger that logs\nCURRENT_USER to an audit table. The \"SQL based on the row contents\" scenario\nfeels remote. Another remotely-possible attack involves a trigger that\ninternally queries some other table having RLS. (Switching to the table owner\ncan change the rows visible to that query.)\n\nIf having INSERT/UPDATE privileges on the table were enough to make a\nsubscription that impersonates the table owner, then relatively-unprivileged\nroles could make a subscription to bypass the aforementioned auditing. 
Commit\nc3afe8c has imposed weighty requirements beyond I/U privileges, namely holding\nthe pg_create_subscription role and database-level CREATE privilege. Since\ndatabase-level CREATE is already powerful, it would be plausible to drop the\nSET ROLE requirement and add this audit bypass to its powers. The SET ROLE\nrequirement is nice for keeping the powers disentangled. One drawback is\nmaking people do GRANTs regardless of whether a relevant audit trigger exists.\nAnother drawback is the subscription role having more privileges than ideally\nneeded. I do like keeping strong privileges orthogonal, so I lean toward\nkeeping the SET ROLE requirement.\n\nOn Thu, Mar 30, 2023 at 02:17:31PM -0400, Robert Haas wrote:\n> --- a/doc/src/sgml/logical-replication.sgml\n> +++ b/doc/src/sgml/logical-replication.sgml\n> @@ -1774,6 +1774,23 @@ CONTEXT: processing remote data for replication origin \"pg_16395\" during \"INSER\n> <literal>SET ROLE</literal> to each role that owns a replicated table.\n> </para>\n> \n> + <para>\n> + If the subscription has been configued with\n\nTypo.\n\n> Subject: [PATCH v3 1/2] Perform logical replication actions as the table\n> owner.\n\n> Since this involves switching the active user frequently within\n> a session that is authenticated as the subscription user, also\n> impose SECURITY_RESTRICTED_OPEATION restrictions on logical\n\ns/OPEATION/OPERATION/\n\n> replication code. As an exception, if the table owner can SET\n> ROLE to the subscription owner, these restrictions have no\n> security value, so don't impose them in that case.\n> \n> Subscription owners are now required to have the ability to\n> SET ROLE to every role that owns a table that the subscription\n> is replicating. If they don't, replication will fail. 
Superusers,\n> who normally own subscriptions, satisfy this property by default.\n> Non-superusers users who own subscriptions will needed to be\n> granted the roles that own relevant tables.\n\ns/will needed/will need/\n\n(I did not read the patches in their entirety.)\n\n\n",
"msg_date": "Sun, 2 Apr 2023 20:21:06 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 7:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> I don't think run_as_owner is terrible, despite the ambiguity. It's\n> talking about the owner of the object on which the property is being\n> set.\n>\n\nI find this justification quite reasonable to keep the option name as\nrun_as_owner. So, +1 to use the new option name as run_as_owner.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 3 Apr 2023 17:24:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, Mar 31, 2023 at 6:46 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I guess the \"more convenient\" is where I'm confused, because the \"grant\n> subscription_owner to table owner with set role true\" is not likely to\n> be conveniently already present; it would need to be issued manually to\n> take advantage of this special case.\n\nYou and I disagree about the likelihood of that, but I could well be wrong.\n\n> Do you have any concern about the weirdness where assigning the\n> subscription to a higher-privilege user Z would cause B's trigger to\n> fail?\n\nNot very much. I think the biggest risk is user confusion, but I don't\nthink that's a huge risk because I don't think this scenario will come\nup very often. Also, it's kind of hard to imagine that there's a\nsecurity model here which never does anything potentially surprising.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 10:26:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "Thank you for this email. It's very helpful to get your opinion on this.\n\nOn Sun, Apr 2, 2023 at 11:21 PM Noah Misch <noah@leadboat.com> wrote:\n> On Wed, Mar 29, 2023 at 04:00:45PM -0400, Robert Haas wrote:\n> > > The dangerous cases seem to be something along the lines of a security-\n> > > invoker trigger function that builds and executes arbirary SQL based on\n> > > the row contents. And then the table owner would then still need to set\n> > > ENABLE ALWAYS TRIGGER.\n> > >\n> > > Do we really want to take that case on as our security responsibility?\n> >\n> > That's something about which I would like to get more opinions.\n>\n> The most-plausible-to-me attack involves an ENABLE ALWAYS trigger that logs\n> CURRENT_USER to an audit table. The \"SQL based on the row contents\" scenario\n> feels remote. Another remotely-possible attack involves a trigger that\n> internally queries some other table having RLS. (Switching to the table owner\n> can change the rows visible to that query.)\n\nI had thought of the first of these cases, but not the second one.\n\n> If having INSERT/UPDATE privileges on the table were enough to make a\n> subscription that impersonates the table owner, then relatively-unprivileged\n> roles could make a subscription to bypass the aforementioned auditing. Commit\n> c3afe8c has imposed weighty requirements beyond I/U privileges, namely holding\n> the pg_create_subscription role and database-level CREATE privilege. Since\n> database-level CREATE is already powerful, it would be plausible to drop the\n> SET ROLE requirement and add this audit bypass to its powers. The SET ROLE\n> requirement is nice for keeping the powers disentangled. One drawback is\n> making people do GRANTs regardless of whether a relevant audit trigger exists.\n> Another drawback is the subscription role having more privileges than ideally\n> needed. 
I do like keeping strong privileges orthogonal, so I lean toward\n> keeping the SET ROLE requirement.\n\nThe orthogonality argument weighs extremely heavily with me in this\ncase. As I said to Jeff, I would not mind having a more granular way\nto control which tables a user can replicate into; e.g. a grantable\nREPLICAT{E,ION} privilege, or we want something global we could have a\npredefined role for it, e.g. pg_replicate_into_any_table. But I think\nany such thing should definitely be separate from\npg_create_subscription.\n\nI'll fix the typos. Thanks for reporting them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 3 Apr 2023 10:34:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Sun, 2023-04-02 at 20:21 -0700, Noah Misch wrote:\n> The most-plausible-to-me attack involves an ENABLE ALWAYS trigger\n> that logs\n> CURRENT_USER to an audit table.\n\nHow does requiring that the subscription owner have SET ROLE privileges\non the table owner help that case? As Robert pointed out, users coming\nfrom v15 will have superuser subscription owners anyway, so the change\nwill be silent for them.\n\nWe need support to apply changes as the table owner, and we need that\nto be the default; and this patch provides those things, and almost all\nusers of logical replication will be better off after this is\ncommitted.\n\nThe small number of users for whom this new model is not good still\nneed the right documentation in front of them to understand the\nconsequences, so that they can opt out one way or another (as 0002\noffers). Release notes are probably the most powerful tool we have for\nnotifying users, unfortunately. Requiring SET ROLE for users that are\nalmost certainly superusers doesn't give an opportunity to educate\npeople about the change in behavior.\n\nAs I said before, I'm fine with requiring that the subscription owner\ncan SET ROLE to the table owner for v16. It's the most conservative\nchoice and the most \"correct\" (in that no lesser privilege we have\ntoday is a perfect match).\n\nBut I feel like we can do better in version 17 when we have time to\nactually work through common use cases and the exceptional cases and\nweight them appropriately. Like, how common is it to want to get the\nuser from a trigger on the subscriber side? Should that trigger be\nusing SESSION_USER instead of CURRENT_USER? Security is best when it\ntakes into account what people actually want to do and makes it easy to\ndo that securely.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 03 Apr 2023 12:05:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, 2023-04-03 at 10:26 -0400, Robert Haas wrote:\n> Not very much. I think the biggest risk is user confusion, but I\n> don't\n> think that's a huge risk because I don't think this scenario will\n> come\n> up very often. Also, it's kind of hard to imagine that there's a\n> security model here which never does anything potentially surprising.\n\nAlright, let's just proceed as-is then. I believe these patches are a\nmajor improvement to the usability of logical replication and will put\nup with the weirdness. I wanted to understand better why it's there,\nand I'm not sure I fully do, but we'll have more time to discuss later.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 03 Apr 2023 12:14:33 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Apr 03, 2023 at 12:05:29PM -0700, Jeff Davis wrote:\n> On Sun, 2023-04-02 at 20:21 -0700, Noah Misch wrote:\n> > The most-plausible-to-me attack involves an ENABLE ALWAYS trigger\n> > that logs\n> > CURRENT_USER to an audit table.\n> \n> How does requiring that the subscription owner have SET ROLE privileges\n> on the table owner help that case? As Robert pointed out, users coming\n> from v15 will have superuser subscription owners anyway, so the change\n> will be silent for them.\n\nFor subscriptions upgraded from v15, it doesn't matter. Requiring SET ROLE\nprevents this sequence:\n\n- Make a table with such an audit trigger. Grant INSERT and UPDATE to Alice.\n- Upgrade to v15.\n- Grant pg_create_subscription and database-level CREATE to Alice.\n- Alice creates a subscription as a tool to impersonate the table owner,\n bypassing audit.\n\nTo put it another way, the benefit of the SET ROLE requirement is not really\nmaking subscriptions more secure. The benefit of the requirement is\npg_create_subscription not becoming a tool for bypassing audit.\n\nI gather we agree on what to do for v16, which is good.\n\n> But I feel like we can do better in version 17 when we have time to\n> actually work through common use cases and the exceptional cases and\n> weight them appropriately. Like, how common is it to want to get the\n> user from a trigger on the subscriber side?\n\nFair. I don't think the community has arrived at a usable approach for\nanswering questions like that. It would be valuable to have an approach.\n\n> Should that trigger be\n> using SESSION_USER instead of CURRENT_USER?\n\nApart from evaluating the argument of SET ROLE, I've not heard of a valid use\ncase for SESSION_USER.\n\n\n",
"msg_date": "Mon, 3 Apr 2023 19:09:51 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Apr 3, 2023 at 10:09 PM Noah Misch <noah@leadboat.com> wrote:\n> I gather we agree on what to do for v16, which is good.\n\nI have committed the patches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 4 Apr 2023 12:09:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "Hi hackers,\r\nThank you for developing a great feature. \r\nThe following commit added a column to the pg_subscription catalog.\r\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=482675987bcdffb390ae735cfd5f34b485ae97c6\r\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c3afe8cf5a1e465bd71e48e4bc717f5bfdc7a7d6\r\n\r\nI found that the documentation of the pg_subscription catalog was missing an explanation of the columns subrunasowner and subpasswordrequired, so I attached a patch. Please fix the patch if you have a better explanation.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Robert Haas <robertmhaas@gmail.com> \r\nSent: Wednesday, April 5, 2023 1:10 AM\r\nTo: Noah Misch <noah@leadboat.com>\r\nCc: Jeff Davis <pgsql@j-davis.com>; Jelte Fennema <postgres@jeltef.nl>; pgsql-hackers@postgresql.org; Andres Freund <andres@anarazel.de>\r\nSubject: Re: running logical replication as the subscription owner\r\n\r\nOn Mon, Apr 3, 2023 at 10:09 PM Noah Misch <noah@leadboat.com> wrote:\r\n> I gather we agree on what to do for v16, which is good.\r\n\r\nI have committed the patches.\r\n\r\n-- \r\nRobert Haas\r\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 11 Apr 2023 02:09:24 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Apr 10, 2023 at 10:09 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n> Hi hackers,\n> Thank you for developing a great feature.\n> The following commit added a column to the pg_subscription catalog.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=482675987bcdffb390ae735cfd5f34b485ae97c6\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=c3afe8cf5a1e465bd71e48e4bc717f5bfdc7a7d6\n>\n> I found that the documentation of the pg_subscription catalog was missing an explanation of the columns subrunasowner and subpasswordrequired, so I attached a patch. Please fix the patch if you have a better explanation.\n\nThank you. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 11 Apr 2023 11:09:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, Apr 4, 2023 at 9:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Mon, Apr 3, 2023 at 10:09 PM Noah Misch <noah@leadboat.com> wrote:\n> > I gather we agree on what to do for v16, which is good.\n>\n> I have committed the patches.\n>\n\nDo we want the initial sync to also respect 'run_as_owner' option? I\nmight be missing something but I don't see anything in the docs about\ninitial sync interaction with this option. In the commit a2ab9c06ea,\nwe did the permission checking during the initial sync so I thought we\nshould do it here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 May 2023 17:08:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, May 11, 2023 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Do we want the initial sync to also respect 'run_as_owner' option? I\n> might be missing something but I don't see anything in the docs about\n> initial sync interaction with this option. In the commit a2ab9c06ea,\n> we did the permission checking during the initial sync so I thought we\n> should do it here as well.\n\nIt definitely should work that way. lf it doesn't, that's a bug.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 May 2023 12:12:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 12, 2023 at 1:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, May 11, 2023 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Do we want the initial sync to also respect 'run_as_owner' option? I\n> > might be missing something but I don't see anything in the docs about\n> > initial sync interaction with this option. In the commit a2ab9c06ea,\n> > we did the permission checking during the initial sync so I thought we\n> > should do it here as well.\n>\n> It definitely should work that way. lf it doesn't, that's a bug.\n\nAfter some tests, it seems that the initial sync worker respects\n'run_as_owner' during catching up but not during COPYing.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 May 2023 12:39:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 12, 2023 at 9:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 1:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Thu, May 11, 2023 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Do we want the initial sync to also respect 'run_as_owner' option? I\n> > > might be missing something but I don't see anything in the docs about\n> > > initial sync interaction with this option. In the commit a2ab9c06ea,\n> > > we did the permission checking during the initial sync so I thought we\n> > > should do it here as well.\n> >\n> > It definitely should work that way. lf it doesn't, that's a bug.\n>\n> After some tests, it seems that the initial sync worker respects\n> 'run_as_owner' during catching up but not during COPYing.\n>\n\nYeah, I was worried during copy phase only. During catchup, the code\nis common with apply worker code, so it will work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 May 2023 09:19:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 12, 2023 at 1:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 9:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, May 12, 2023 at 1:12 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > On Thu, May 11, 2023 at 7:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Do we want the initial sync to also respect 'run_as_owner' option? I\n> > > > might be missing something but I don't see anything in the docs about\n> > > > initial sync interaction with this option. In the commit a2ab9c06ea,\n> > > > we did the permission checking during the initial sync so I thought we\n> > > > should do it here as well.\n> > >\n> > > It definitely should work that way. lf it doesn't, that's a bug.\n> >\n> > After some tests, it seems that the initial sync worker respects\n> > 'run_as_owner' during catching up but not during COPYing.\n> >\n>\n> Yeah, I was worried during copy phase only. During catchup, the code\n> is common with apply worker code, so it will work.\n>\n\nI tried the following test:\n\n====================\nRepeat On the publisher and subscriber:\n /* Create role regress_alice with NOSUPERUSER on\n publisher and subscriber and a table for replication */\n\nCREATE ROLE regress_alice NOSUPERUSER LOGIN;\nCREATE ROLE regress_admin SUPERUSER LOGIN;\nGRANT CREATE ON DATABASE postgres TO regress_alice;\nSET SESSION AUTHORIZATION regress_alice;\nCREATE SCHEMA alice;\nGRANT USAGE ON SCHEMA alice TO regress_admin;\nCREATE TABLE alice.test (i INTEGER);\nALTER TABLE alice.test REPLICA IDENTITY FULL;\n\nOn the publisher:\npostgres=> insert into alice.test values(1);\npostgres=> insert into alice.test values(2);\npostgres=> insert into alice.test values(3);\npostgres=> CREATE PUBLICATION alice FOR TABLE alice.test\nWITH (publish_via_partition_root = true);\n\nOn the subscriber: /* create table admin_audit which regress_alice\ndoes not have access to */\nSET SESSION AUTHORIZATION 
regress_admin;\ncreate table admin_audit (i integer);\n\nOn the subscriber: /* Create a trigger for table alice.test which\ninserts on table admin_audit which the table owner of alice.test does\nnot have access to */\nSET SESSION AUTHORIZATION regress_alice;\nCREATE OR REPLACE FUNCTION alice.alice_audit()\nRETURNS trigger AS\n$$\nBEGIN\ninsert into public.admin_audit values(2);\nRETURN NEW;\nEND;\n$$\nLANGUAGE 'plpgsql';\ncreate trigger test_alice after insert on alice.test for each row\nexecute procedure alice.alice_audit();\nalter table alice.test enable always trigger test_alice;\n\nOn the subscriber: /* Create a subscription with run_as_owner = false */\nCREATE SUBSCRIPTION admin_sub CONNECTION 'dbname=postgres\nhost=localhost port=6972' PUBLICATION alice WITH (run_as_owner =\nfalse);\n===============\n\nWhat I see is that as part of tablesync, the trigger invokes an\nupdates admin_audit which it shouldn't, as the table owner\nof alice.test should not have access to the\ntable admin_audit. This means the table copy is being invoked as the\nsubscription owner and not the table owner.\n\nHowever, I see subsequent inserts fail on replication with\npermission denied error, so the apply worker correctly\napplies the inserts as the table owner.\n\nIf nobody else is working on this, I can come up with a patch to fix this\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 12 May 2023 21:55:46 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 12, 2023 at 9:55 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> If nobody else is working on this, I can come up with a patch to fix this\n>\n\nAttaching a patch which attempts to fix this.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Mon, 15 May 2023 18:43:48 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, May 15, 2023 at 5:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 9:55 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > If nobody else is working on this, I can come up with a patch to fix this\n> >\n>\n> Attaching a patch which attempts to fix this.\n>\n\nThank you for the patch! I think we might want to have tests for it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 15 May 2023 21:46:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, 15 May 2023 at 14:47, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Thank you for the patch! I think we might want to have tests for it.\n\nYes, code looks good. But indeed some tests would be great. It seems\nwe forgot to actually do:\n\nOn Fri, 12 May 2023 at 13:55, Ajin Cherian <itsajin@gmail.com> wrote:\n> CREATE ROLE regress_alice NOSUPERUSER LOGIN;\n> CREATE ROLE regress_admin SUPERUSER LOGIN;\n> ...\n>\n> What I see is that as part of tablesync, the trigger invokes an\n> updates admin_audit which it shouldn't, as the table owner\n> of alice.test should not have access to the\n> table admin_audit. This means the table copy is being invoked as the\n> subscription owner and not the table owner.\n\nI think having this as a tap/regress test would be very useful.\n\n\n",
"msg_date": "Mon, 15 May 2023 15:54:14 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, 24 Mar 2023 at 19:37, Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > I think there's some important tests missing related to this:\n> > > > 1. Ensuring that SECURITY_RESTRICTED_OPERATION things are enforced\n> > > > when the user **does not** have SET ROLE permissions to the\n> > > > subscription owner, e.g. don't allow SET ROLE from a trigger.\n> > > > 2. Ensuring that SECURITY_RESTRICTED_OPERATION things are not enforced\n> > > > when the user **does** have SET ROLE permissions to the subscription\n> > > > owner, e.g. allows SET ROLE from trigger.\n> > > Yeah, if we stick with the current approach we should probably add\n> > > tests for that stuff.\n> >\n> > Even if we don't, we should still have tests showing that the security restrictions that we intend to put in place actually do their job.\n>\n> Yeah, I just don't want to write the tests and then decide to change\n> the behavior and then have to write them over again. It's not so much\n> fun that I'm yearning to do it twice.\n\nI forgot to follow up on this before, but based on the bug found by\nAmit. I think it would be good to still add these tests.\n\n\n",
"msg_date": "Mon, 15 May 2023 15:57:15 +0200",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 12, 2023 at 5:25 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 1:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I tried the following test:\n>\n> ====================\n> Repeat On the publisher and subscriber:\n> /* Create role regress_alice with NOSUPERUSER on\n> publisher and subscriber and a table for replication */\n>\n> CREATE ROLE regress_alice NOSUPERUSER LOGIN;\n> CREATE ROLE regress_admin SUPERUSER LOGIN;\n> GRANT CREATE ON DATABASE postgres TO regress_alice;\n> SET SESSION AUTHORIZATION regress_alice;\n> CREATE SCHEMA alice;\n> GRANT USAGE ON SCHEMA alice TO regress_admin;\n> CREATE TABLE alice.test (i INTEGER);\n> ALTER TABLE alice.test REPLICA IDENTITY FULL;\n>\n\nWhy do we need a schema and following grant statement for this test?\n\n> On the publisher:\n> postgres=> insert into alice.test values(1);\n> postgres=> insert into alice.test values(2);\n> postgres=> insert into alice.test values(3);\n> postgres=> CREATE PUBLICATION alice FOR TABLE alice.test\n> WITH (publish_via_partition_root = true);\n>\n\nAgain, 'publish_via_partition_root' doesn't seem to be required. Let's\ntry to write a minimal test for the initial sync behaviour.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 May 2023 12:06:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, May 15, 2023 at 7:24 PM Jelte Fennema <postgres@jeltef.nl> wrote:\n>\n> On Mon, 15 May 2023 at 14:47, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Thank you for the patch! I think we might want to have tests for it.\n>\n> Yes, code looks good. But indeed some tests would be great. It seems\n> we forgot to actually do:\n>\n\nAgreed with you and Sawada-San about having a test. BTW, shall we\nslightly tweak the documentation [1]: \"The subscription apply process\nwill, at a session level, run with the privileges of the subscription\nowner. However, when performing an insert, update, delete, or truncate\noperation on a particular table, it will switch roles to the table\nowner and perform the operation with the table owner's privileges.\" to\nbe bit more specific about initial sync process as well?\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-security.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 May 2023 12:08:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, May 16, 2023 at 2:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Agreed with you and Sawada-San about having a test. BTW, shall we\n> slightly tweak the documentation [1]: \"The subscription apply process\n> will, at a session level, run with the privileges of the subscription\n> owner. However, when performing an insert, update, delete, or truncate\n> operation on a particular table, it will switch roles to the table\n> owner and perform the operation with the table owner's privileges.\" to\n> be bit more specific about initial sync process as well?\n\nIt doesn't seem entirely necessary to me because the initial sync is\nin effect a bunch of inserts.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 May 2023 08:19:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, May 15, 2023 at 10:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 15, 2023 at 5:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Fri, May 12, 2023 at 9:55 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > If nobody else is working on this, I can come up with a patch to fix this\n> > >\n> >\n> > Attaching a patch which attempts to fix this.\n> >\n>\n> Thank you for the patch! I think we might want to have tests for it.\n>\nI have updated the patch with a test case as well.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Wed, 17 May 2023 11:09:59 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Wed, May 17, 2023 at 10:10 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Mon, May 15, 2023 at 10:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 15, 2023 at 5:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > >\n> > > On Fri, May 12, 2023 at 9:55 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > > >\n> > > > If nobody else is working on this, I can come up with a patch to fix this\n> > > >\n> > >\n> > > Attaching a patch which attempts to fix this.\n> > >\n> >\n> > Thank you for the patch! I think we might want to have tests for it.\n> >\n> I have updated the patch with a test case as well.\n\nThank you for updating the patch! Here are review comments:\n\n+ /*\n+ * Make sure that the copy command runs as the table owner, unless\n+ * the user has opted out of that behaviour.\n+ */\n+ run_as_owner = MySubscription->runasowner;\n+ if (!run_as_owner)\n+ SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n+\n /* Now do the initial data copy */\n PushActiveSnapshot(GetTransactionSnapshot());\n\nI think we should switch users before the acl check in\nLogicalRepSyncTableStart().\n\n---\n+# Create a trigger on table alice.unpartitioned that writes\n+# to a table that regress_alice does not have permission.\n+$node_subscriber->safe_psql(\n+ 'postgres', qq(\n+SET SESSION AUTHORIZATION regress_alice;\n+CREATE OR REPLACE FUNCTION alice.alice_audit()\n+RETURNS trigger AS\n+\\$\\$\n+ BEGIN\n+ insert into public.admin_audit values(2);\n+ RETURN NEW;\n+ END;\n+\\$\\$\n+LANGUAGE 'plpgsql';\n+CREATE TRIGGER ALICE_TRIGGER AFTER INSERT ON alice.unpartitioned FOR EACH ROW\n+EXECUTE PROCEDURE alice.alice_audit();\n+ALTER TABLE alice.unpartitioned ENABLE ALWAYS TRIGGER ALICE_TRIGGER;\n+));\n\nWhile this approach works, I'm not sure we really need a trigger for\nthis test. I've attached a patch for discussion that doesn't use\ntriggers for the regression tests. 
We create a new subscription owned\nby a user who doesn't have the permission to the target table. The\ntest passes only if run_as_owner = true works.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 22 May 2023 21:35:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, May 22, 2023 at 6:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Thank you for updating the patch! Here are review comments:\n>\n> + /*\n> + * Make sure that the copy command runs as the table owner, unless\n> + * the user has opted out of that behaviour.\n> + */\n> + run_as_owner = MySubscription->runasowner;\n> + if (!run_as_owner)\n> + SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n> +\n> /* Now do the initial data copy */\n> PushActiveSnapshot(GetTransactionSnapshot());\n>\n> I think we should switch users before the acl check in\n> LogicalRepSyncTableStart().\n>\n\nAgreed, we should check acl with the user that is going to perform\noperations on the target table. BTW, is it okay to perform an\noperation on the system table with the changed user as that would be\npossible with your suggestion (see replorigin_create())?\n\n>\n> While this approach works, I'm not sure we really need a trigger for\n> this test. I've attached a patch for discussion that doesn't use\n> triggers for the regression tests. We create a new subscription owned\n> by a user who doesn't have the permission to the target table. The\n> test passes only if run_as_owner = true works.\n>\n\nWhy in the test do you need to give additional permissions to\nregress_admin2 when the actual operation has to be performed by the\ntable owner?\n\n+# Because the initial data sync is working as the table owner, all\n+# dat should be copied.\n\nTypo. /dat/data\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 May 2023 16:51:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, May 22, 2023 at 10:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 17, 2023 at 10:10 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n> > On Mon, May 15, 2023 at 10:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, May 15, 2023 at 5:44 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > > >\n> > > > On Fri, May 12, 2023 at 9:55 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> > > > >\n> > > > > If nobody else is working on this, I can come up with a patch to fix this\n> > > > >\n> > > >\n> > > > Attaching a patch which attempts to fix this.\n> > > >\n> > >\n> > > Thank you for the patch! I think we might want to have tests for it.\n> > >\n> > I have updated the patch with a test case as well.\n>\n> Thank you for updating the patch! Here are review comments:\n>\n> + /*\n> + * Make sure that the copy command runs as the table owner, unless\n> + * the user has opted out of that behaviour.\n> + */\n> + run_as_owner = MySubscription->runasowner;\n> + if (!run_as_owner)\n> + SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n> +\n> /* Now do the initial data copy */\n> PushActiveSnapshot(GetTransactionSnapshot());\n>\n> I think we should switch users before the acl check in\n> LogicalRepSyncTableStart().\n>\n> ---\n> +# Create a trigger on table alice.unpartitioned that writes\n> +# to a table that regress_alice does not have permission.\n> +$node_subscriber->safe_psql(\n> + 'postgres', qq(\n> +SET SESSION AUTHORIZATION regress_alice;\n> +CREATE OR REPLACE FUNCTION alice.alice_audit()\n> +RETURNS trigger AS\n> +\\$\\$\n> + BEGIN\n> + insert into public.admin_audit values(2);\n> + RETURN NEW;\n> + END;\n> +\\$\\$\n> +LANGUAGE 'plpgsql';\n> +CREATE TRIGGER ALICE_TRIGGER AFTER INSERT ON alice.unpartitioned FOR EACH ROW\n> +EXECUTE PROCEDURE alice.alice_audit();\n> +ALTER TABLE alice.unpartitioned ENABLE ALWAYS TRIGGER ALICE_TRIGGER;\n> +));\n>\n> While this approach works, I'm not sure we really need a trigger for\n> this test. I've attached a patch for discussion that doesn't use\n> triggers for the regression tests. We create a new subscription owned\n> by a user who doesn't have the permission to the target table. The\n> test passes only if run_as_owner = true works.\n>\nthis is better, thanks. Since you are testing run_as_owner = false behaviour\nduring table copy phase, you might as well add a test case that it\ncorrectly behaves during insert replication as well.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 23 May 2023 21:55:48 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Tue, May 23, 2023 at 8:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 22, 2023 at 6:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for updating the patch! Here are review comments:\n> >\n> > + /*\n> > + * Make sure that the copy command runs as the table owner, unless\n> > + * the user has opted out of that behaviour.\n> > + */\n> > + run_as_owner = MySubscription->runasowner;\n> > + if (!run_as_owner)\n> > + SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n> > +\n> > /* Now do the initial data copy */\n> > PushActiveSnapshot(GetTransactionSnapshot());\n> >\n> > I think we should switch users before the acl check in\n> > LogicalRepSyncTableStart().\n> >\n>\n> Agreed, we should check acl with the user that is going to perform\n> operations on the target table. BTW, is it okay to perform an\n> operation on the system table with the changed user as that would be\n> possible with your suggestion (see replorigin_create())?\n\nDo you see any problem in particular?\n\nAs per the documentation, pg_replication_origin_create() is only\nallowed to the superuser by default, but in CreateSubscription() a\nnon-superuser (who has pg_create_subscription privilege) can call\nreplorigin_create(). OTOH, we don't necessarily need to switch to the\ntable owner user for checking ACL and RLS. We can just pass either\ntable owner OID or subscription owner OID to pg_class_aclcheck() and\ncheck_enable_rls() without actually switching the user.\n\n>\n> >\n> > While this approach works, I'm not sure we really need a trigger for\n> > this test. I've attached a patch for discussion that doesn't use\n> > triggers for the regression tests. We create a new subscription owned\n> > by a user who doesn't have the permission to the target table. The\n> > test passes only if run_as_owner = true works.\n> >\n>\n> Why in the test do you need to give additional permissions to\n> regress_admin2 when the actual operation has to be performed by the\n> table owner?\n\nGood point. We need to give the ability to SET ROLE to regress_admin2\nbut other permissions are unnecessary.\n\n>\n> +# Because the initial data sync is working as the table owner, all\n> +# dat should be copied.\n>\n> Typo. /dat/data\n\nWill fix.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 May 2023 16:02:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, May 25, 2023 at 12:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 23, 2023 at 8:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 22, 2023 at 6:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Thank you for updating the patch! Here are review comments:\n> > >\n> > > + /*\n> > > + * Make sure that the copy command runs as the table owner, unless\n> > > + * the user has opted out of that behaviour.\n> > > + */\n> > > + run_as_owner = MySubscription->runasowner;\n> > > + if (!run_as_owner)\n> > > + SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n> > > +\n> > > /* Now do the initial data copy */\n> > > PushActiveSnapshot(GetTransactionSnapshot());\n> > >\n> > > I think we should switch users before the acl check in\n> > > LogicalRepSyncTableStart().\n> > >\n> >\n> > Agreed, we should check acl with the user that is going to perform\n> > operations on the target table. BTW, is it okay to perform an\n> > operation on the system table with the changed user as that would be\n> > possible with your suggestion (see replorigin_create())?\n>\n> Do you see any problem in particular?\n>\n> As per the documentation, pg_replication_origin_create() is only\n> allowed to the superuser by default, but in CreateSubscription() a\n> non-superuser (who has pg_create_subscription privilege) can call\n> replorigin_create().\n\nNothing in particular but it seems a bit odd to perform operations on\ncatalog tables with some other user table owners when that was not the\nactual intent of this option.\n\n> OTOH, we don't necessarily need to switch to the\n> table owner user for checking ACL and RLS. We can just pass either\n> table owner OID or subscription owner OID to pg_class_aclcheck() and\n> check_enable_rls() without actually switching the user.\n>\n\nI think that would be better.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 May 2023 14:11:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, May 25, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 25, 2023 at 12:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, May 23, 2023 at 8:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 22, 2023 at 6:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Thank you for updating the patch! Here are review comments:\n> > > >\n> > > > + /*\n> > > > + * Make sure that the copy command runs as the table owner, unless\n> > > > + * the user has opted out of that behaviour.\n> > > > + */\n> > > > + run_as_owner = MySubscription->runasowner;\n> > > > + if (!run_as_owner)\n> > > > + SwitchToUntrustedUser(rel->rd_rel->relowner, &ucxt);\n> > > > +\n> > > > /* Now do the initial data copy */\n> > > > PushActiveSnapshot(GetTransactionSnapshot());\n> > > >\n> > > > I think we should switch users before the acl check in\n> > > > LogicalRepSyncTableStart().\n> > > >\n> > >\n> > > Agreed, we should check acl with the user that is going to perform\n> > > operations on the target table. BTW, is it okay to perform an\n> > > operation on the system table with the changed user as that would be\n> > > possible with your suggestion (see replorigin_create())?\n> >\n> > Do you see any problem in particular?\n> >\n> > As per the documentation, pg_replication_origin_create() is only\n> > allowed to the superuser by default, but in CreateSubscription() a\n> > non-superuser (who has pg_create_subscription privilege) can call\n> > replorigin_create().\n>\n> Nothing in particular but it seems a bit odd to perform operations on\n> catalog tables with some other user table owners when that was not the\n> actual intent of this option.\n>\n> > OTOH, we don't necessarily need to switch to the\n> > table owner user for checking ACL and RLS. We can just pass either\n> > table owner OID or subscription owner OID to pg_class_aclcheck() and\n> > check_enable_rls() without actually switching the user.\n> >\n>\n> I think that would be better.\n\nAgreed.\n\nI've attached the updated patch. Please review it.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 26 May 2023 21:48:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Fri, May 26, 2023 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 25, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I've attached the updated patch. Please review it.\n>\n\nFew comments:\n1.\n+ /* get the owner for ACL and RLS checks */\n+ run_as_owner = MySubscription->runasowner;\n+ checkowner = run_as_owner ? MySubscription->owner : rel->rd_rel->relowner;\n+\n /*\n * Check that our table sync worker has permission to insert into the\n * target table.\n */\n- aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),\n+ aclresult = pg_class_aclcheck(RelationGetRelid(rel), checkowner,\n\nOne thing that slightly worries me about this change is that we\nstarted to check the permission for relowner before even ensuring that\nwe can switch to relowner. See checks in SwitchToUntrustedUser(). If\nwe want to first ensure that we can switch to relowner then I think we\nshould move this permission-checking code before we try to copy the\ntable.\n\n2. In the commit message, the link for discussion\n\"https://postgr.es/m/CAA4eK1KfZcRq7hUqQ7WknP+u=08+6MevVm+2W5RrAb+DTxrdww@mail.gmail.com\"\nis slightly misleading. Can we instead use\n\"https://www.postgresql.org/message-id/CAA4eK1L%3DqzRHPEn%2BqeMoKQGFBzqGoLBzt_ov0A89iFFiut%2BppA%40mail.gmail.com\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 4 Jun 2023 23:44:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Mon, Jun 5, 2023 at 3:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 26, 2023 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 25, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I've attached the updated patch. Please review it.\n> >\n>\n> Few comments:\n> 1.\n> + /* get the owner for ACL and RLS checks */\n> + run_as_owner = MySubscription->runasowner;\n> + checkowner = run_as_owner ? MySubscription->owner : rel->rd_rel->relowner;\n> +\n> /*\n> * Check that our table sync worker has permission to insert into the\n> * target table.\n> */\n> - aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),\n> + aclresult = pg_class_aclcheck(RelationGetRelid(rel), checkowner,\n>\n> One thing that slightly worries me about this change is that we\n> started to check the permission for relowner before even ensuring that\n> we can switch to relowner. See checks in SwitchToUntrustedUser(). If\n> we want to first ensure that we can switch to relowner then I think we\n> should move this permission-checking code before we try to copy the\n> table.\n\nAgreed. I thought it's better to do ACL and RLS checks before creating\nthe replication slot but it's not important. Rather checking them\nafter switching user would make sense since we do the same in\nworker.c.\n\n>\n> 2. In the commit message, the link for discussion\n> \"https://postgr.es/m/CAA4eK1KfZcRq7hUqQ7WknP+u=08+6MevVm+2W5RrAb+DTxrdww@mail.gmail.com\"\n> is slightly misleading. Can we instead use\n> \"https://www.postgresql.org/message-id/CAA4eK1L%3DqzRHPEn%2BqeMoKQGFBzqGoLBzt_ov0A89iFFiut%2BppA%40mail.gmail.com\"?\n\nAgreed.\n\nI've attached the updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 8 Jun 2023 10:01:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 6:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jun 5, 2023 at 3:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, May 26, 2023 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, May 25, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I've attached the updated patch. Please review it.\n> > >\n> >\n> > Few comments:\n> > 1.\n> > + /* get the owner for ACL and RLS checks */\n> > + run_as_owner = MySubscription->runasowner;\n> > + checkowner = run_as_owner ? MySubscription->owner : rel->rd_rel->relowner;\n> > +\n> > /*\n> > * Check that our table sync worker has permission to insert into the\n> > * target table.\n> > */\n> > - aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),\n> > + aclresult = pg_class_aclcheck(RelationGetRelid(rel), checkowner,\n> >\n> > One thing that slightly worries me about this change is that we\n> > started to check the permission for relowner before even ensuring that\n> > we can switch to relowner. See checks in SwitchToUntrustedUser(). If\n> > we want to first ensure that we can switch to relowner then I think we\n> > should move this permission-checking code before we try to copy the\n> > table.\n>\n> Agreed. I thought it's better to do ACL and RLS checks before creating\n> the replication slot but it's not important. Rather checking them\n> after switching user would make sense since we do the same in\n> worker.c.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Jun 2023 15:59:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
},
{
"msg_contents": "On Thu, Jun 8, 2023 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 8, 2023 at 6:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jun 5, 2023 at 3:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, May 26, 2023 at 6:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, May 25, 2023 at 5:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I've attached the updated patch. Please review it.\n> > > >\n> > >\n> > > Few comments:\n> > > 1.\n> > > + /* get the owner for ACL and RLS checks */\n> > > + run_as_owner = MySubscription->runasowner;\n> > > + checkowner = run_as_owner ? MySubscription->owner : rel->rd_rel->relowner;\n> > > +\n> > > /*\n> > > * Check that our table sync worker has permission to insert into the\n> > > * target table.\n> > > */\n> > > - aclresult = pg_class_aclcheck(RelationGetRelid(rel), GetUserId(),\n> > > + aclresult = pg_class_aclcheck(RelationGetRelid(rel), checkowner,\n> > >\n> > > One thing that slightly worries me about this change is that we\n> > > started to check the permission for relowner before even ensuring that\n> > > we can switch to relowner. See checks in SwitchToUntrustedUser(). If\n> > > we want to first ensure that we can switch to relowner then I think we\n> > > should move this permission-checking code before we try to copy the\n> > > table.\n> >\n> > Agreed. I thought it's better to do ACL and RLS checks before creating\n> > the replication slot but it's not important. Rather checking them\n> > after switching user would make sense since we do the same in\n> > worker.c.\n> >\n>\n> LGTM.\n\nThanks, pushed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Jun 2023 10:45:06 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: running logical replication as the subscription owner"
}
]
[
{
"msg_contents": "I realized that headerscheck is failing to enforce $SUBJECT.\nThis is bad, since we aren't really using libpq-fe.h ourselves\nin a way that would ensure that c.h symbols don't creep into it.\n\nWe can easily do better, as attached, but I wonder which other\nheaders should get the same treatment.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 03 Mar 2023 12:07:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "libpq-fe.h should compile *entirely* standalone"
},
{
"msg_contents": "I wrote:\n> We can easily do better, as attached, but I wonder which other\n> headers should get the same treatment.\n\nAfter a bit of further research I propose the attached. I'm not\nsure exactly what subset of ECPG headers is meant to be exposed\nto clients, but we can adjust these patterns if new info emerges.\n\nThis is actually moving the inclusion-check goalposts quite far,\nbut HEAD seems to pass cleanly, and again we can always adjust later.\nAny objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 03 Mar 2023 13:46:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: libpq-fe.h should compile *entirely* standalone"
},
{
"msg_contents": "On 2023-03-03 Fr 13:46, Tom Lane wrote:\n> I wrote:\n>> We can easily do better, as attached, but I wonder which other\n>> headers should get the same treatment.\n> After a bit of further research I propose the attached. I'm not\n> sure exactly what subset of ECPG headers is meant to be exposed\n> to clients, but we can adjust these patterns if new info emerges.\n>\n> This is actually moving the inclusion-check goalposts quite far,\n> but HEAD seems to pass cleanly, and again we can always adjust later.\n> Any objections?\n> \t\t\t\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2023-03-03 Fr 13:46, Tom Lane wrote:\n\n\nI wrote:\n\n\nWe can easily do better, as attached, but I wonder which other\nheaders should get the same treatment.\n\n\n\nAfter a bit of further research I propose the attached. I'm not\nsure exactly what subset of ECPG headers is meant to be exposed\nto clients, but we can adjust these patterns if new info emerges.\n\nThis is actually moving the inclusion-check goalposts quite far,\nbut HEAD seems to pass cleanly, and again we can always adjust later.\nAny objections?\n\t\t\t\n\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 4 Mar 2023 07:08:27 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: libpq-fe.h should compile *entirely* standalone"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-03-03 Fr 13:46, Tom Lane wrote:\n>> This is actually moving the inclusion-check goalposts quite far,\n>> but HEAD seems to pass cleanly, and again we can always adjust later.\n>> Any objections?\n\n> LGTM\n\nPushed, thanks for looking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Mar 2023 12:12:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: libpq-fe.h should compile *entirely* standalone"
}
]
[
{
"msg_contents": "Hi,\n\nI was debugging a planner problem on Postgres 14.4 the other day - and the\ninvolved \"bad\" plan was including Memoize - though I don't necessarily\nthink that Memoize is to blame (and this isn't any of the problems recently\nfixed in Memoize costing).\n\nHowever, what I noticed whilst trying different ways to fix the plan, is\nthat the Memoize output was a bit hard to reason about - especially since\nthe plan involving Memoize was expensive to run, and so I was mostly\nrunning EXPLAIN without ANALYZE to look at the costing.\n\nHere is an example of the output I was looking at:\n\n  ->  Nested Loop  (cost=1.00..971672.56 rows=119623 width=0)\n        ->  Index Only Scan using table1_idx on table1\n(cost=0.43..372676.50 rows=23553966 width=8)\n        ->  Memoize  (cost=0.57..0.61 rows=1 width=8)\n              Cache Key: table1.table2_id\n              Cache Mode: logical\n              ->  Index Scan using table2_idx on table2\n(cost=0.56..0.60 rows=1 width=8)\n                    Index Cond: (id = table1.table2_id)\n\nThe other plan I was comparing with (that I wanted the planner to choose\ninstead), had a total cost of 1,451,807.35 -- and so I was trying to figure\nout why the Nested Loop was costed as 971,672.56.\n\nSimple math makes me expect the Nested Loop should roughly have a total\ncost of14,740,595.76 here (372,676.50 + 23,553,966 * 0.61), ignoring a lot\nof the smaller costs. Thus, in this example, it appears Memoize made the\nplan cost significantly cheaper (roughly 6% of the regular cost).\n\nEssentially this comes down to the \"cost reduction\" performed by Memoize\nonly being implicitly visible in the Nested Loop's total cost - and with\nnothing useful on the Memoize node itself - since the rescan costs are not\nshown.\n\nI think explicitly adding the estimated cache hit ratio for Memoize nodes\nmight make this easier to reason about, like this:\n\n-> Memoize (cost=0.57..0.61 rows=1 width=8)\n     Cache Key: table1.table2_id\n     Cache Mode: logical\n     Cache Hit Ratio Estimated: 0.94\n\nAlternatively (or in addition) we could consider showing the \"ndistinct\"\nvalue that is calculated in cost_memoize_rescan - since that's the most\nsignificant contributor to the cache hit ratio (and you can influence that\ndirectly by improving the ndistinct statistics).\n\nSee attached a patch that implements showing the cache hit ratio as a\ndiscussion starter.\n\nI'll park this in the July commitfest for now.\n\nThanks,\nLukas\n\n-- \nLukas Fittl",
"msg_date": "Sat, 4 Mar 2023 16:20:59 -0800",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "Add estimated hit ratio to Memoize in EXPLAIN to explain cost\n adjustment"
},
{
"msg_contents": "On Sun, 5 Mar 2023 at 13:21, Lukas Fittl <lukas@fittl.com> wrote:\n> Alternatively (or in addition) we could consider showing the \"ndistinct\" value that is calculated in cost_memoize_rescan - since that's the most significant contributor to the cache hit ratio (and you can influence that directly by improving the ndistinct statistics).\n\nI think the ndistinct estimate plus the est_entries together would be\nuseful. I think showing just the hit ratio number might often just\nraise too many questions about how that's calculated. To calculate the\nhit ratio we need to estimate the number of entries that can be kept\nin the cache at once and also the number of input rows and the number\nof distinct values. We can see the input rows by looking at the outer\nside of the join in EXPLAIN, but we've no idea about the ndistinct or\nhow many items the planner thought could be kept in the cache at once.\n\nThe plan node already has est_entries, so it should just be a matter\nof storing the ndistinct estimate in the Path and putting it into the\nPlan node so the executor has access to it during EXPLAIN.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Mar 2023 22:51:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost\n adjustment"
},
{
"msg_contents": "> On 7 Mar 2023, at 10:51, David Rowley <dgrowleyml@gmail.com> wrote:\n> \n> On Sun, 5 Mar 2023 at 13:21, Lukas Fittl <lukas@fittl.com> wrote:\n>> Alternatively (or in addition) we could consider showing the \"ndistinct\" value that is calculated in cost_memoize_rescan - since that's the most significant contributor to the cache hit ratio (and you can influence that directly by improving the ndistinct statistics).\n> \n> I think the ndistinct estimate plus the est_entries together would be\n> useful. I think showing just the hit ratio number might often just\n> raise too many questions about how that's calculated. To calculate the\n> hit ratio we need to estimate the number of entries that can be kept\n> in the cache at once and also the number of input rows and the number\n> of distinct values. We can see the input rows by looking at the outer\n> side of the join in EXPLAIN, but we've no idea about the ndistinct or\n> how many items the planner thought could be kept in the cache at once.\n> \n> The plan node already has est_entries, so it should just be a matter\n> of storing the ndistinct estimate in the Path and putting it into the\n> Plan node so the executor has access to it during EXPLAIN.\n\nLukas: do you have an updated patch for this commitfest to address David's\ncomments?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 09:56:18 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost\n adjustment"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 12:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> Lukas: do you have an updated patch for this commitfest to address David's\n> comments?\n>\n\nI have a draft - I should be able to post an updated patch in the next\ndays. Thanks for checking!\n\nThanks,\nLukas\n\n-- \nLukas Fittl\n\nOn Thu, Jul 6, 2023 at 12:56 AM Daniel Gustafsson <daniel@yesql.se> wrote:\nLukas: do you have an updated patch for this commitfest to address David's\ncomments?I have a draft - I should be able to post an updated patch in the next days. Thanks for checking!Thanks,Lukas-- Lukas Fittl",
"msg_date": "Thu, 6 Jul 2023 01:27:26 -0700",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "Re: Add estimated hit ratio to Memoize in EXPLAIN to explain cost\n adjustment"
}
]
[
{
"msg_contents": "Hi! I was running some benchmarks for PG driver built on top of libpq async functionality,and noticed that recv syscalls issued by the application are limited by 16Kb, which seems tobe inBufSize coming from makeEmptyPGconn in interfaces/libpq/fe-connect.c. Hacking that to higher values allowed my benchmarks to issue drastically less syscallswhen running some heavy selects, both in local and cloud environments, which made themsignificantly faster. I believe there is a reason for that value to be 16Kb, but i was wondering if it's safe to changethis default to user-provided value, and if it is - could this functionality be added into API?\n",
"msg_date": "Sun, 05 Mar 2023 05:42:06 +0300",
    "msg_from": "Трофимов Иван <i.trofimow@yandex.ru>",
"msg_from_op": true,
"msg_subject": "About default inBufSize (connection read buffer size) in libpq"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-05 05:42:06 +0300, Трофимов Иван wrote:\n> I was running some benchmarks for PG driver built on top of libpq async\n> functionality,\n> and noticed that recv syscalls issued by the application are limited by 16Kb,\n> which seems to\n> be inBufSize coming from makeEmptyPGconn in interfaces/libpq/fe-connect.c.\n> \n> Hacking that to higher values allowed my benchmarks to issue drastically less\n> syscalls\n> when running some heavy selects, both in local and cloud environments, which\n> made them\n> significantly faster.\n> \n> I believe there is a reason for that value to be 16Kb, but i was wondering if\n> it's safe to change\n> this default to user-provided value, and if it is - could this functionality be\n> added into API?\n\nI've observed the small buffer size hurting as well - not just client side,\nalso on the serve.\n\nBut I don't think we necessarily need to make it configurable. From what I can\ntell the pain mainly comes using the read/send buffers when they don't even\nhelp, because the message data we're processing is bigger than the buffer\nsize. When we need to receive / send data that we know is bigger than the\nthe buffer, we should copy the portion that is still in the buffer, and then\nsend/receive directly from the data to be sent/received.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Mar 2023 10:15:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: About default inBufSize (connection read buffer size) in libpq"
}
]
[
{
"msg_contents": "Hi,\n\nThis was noticed in \nhttps://www.postgresql.org/message-id/CAApHDvo2y9S2AO-BPYo7gMPYD0XE2Lo-KFLnqX80fcftqBCcyw@mail.gmail.com\n\nI am bringing it up again.\n\n\nConsider the following example:\n\nSetup (tuple should be in memory to avoid overshadowing of disk I/O in \nthe experimentation):\n\nwork_mem = 2048MB\n\ncreate table abcd(a int, b int, c int, d int);\ninsert into abcd select x*random(), x*random(), x*random(), x*random() \nfrom generate_series(1, 100000)x;\n\nselect pg_prewarm(abcd);\n\n\n1. explain analyze select * from abcd order by a;\n\n                              QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n  Sort  (cost=9845.82..10095.82 rows=100000 width=16) (actual \ntime=134.113..155.990 rows=100000 loops=1)\n    Sort Key: a\n    Sort Method: quicksort  Memory: 8541kB\n    ->  Seq Scan on abcd  (cost=0.00..1541.00 rows=100000 width=16) \n(actual time=0.013..28.418 rows=100000 loops=1)\n  Planning Time: 0.392 ms\n  Execution Time: 173.702 ms\n(6 rows)\n\n2. explain analyze select * from abcde order by a,b;\n\nexplain analyze select * from abcd order by a,b;\n                              QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n  Sort  (cost=9845.82..10095.82 rows=100000 width=16) (actual \ntime=174.676..204.065 rows=100000 loops=1)\n    Sort Key: a, b\n    Sort Method: quicksort  Memory: 8541kB\n    ->  Seq Scan on abcd  (cost=0.00..1541.00 rows=100000 width=16) \n(actual time=0.018..29.213 rows=100000 loops=1)\n  Planning Time: 0.055 ms\n  Execution Time: 229.119 ms\n(6 rows)\n\n\n3. explain analyze select * from abcd order by a,b,c;\n\n                              QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n  Sort  (cost=9845.82..10095.82 rows=100000 width=16) (actual \ntime=159.829..179.675 rows=100000 loops=1)\n    Sort Key: a, b, c\n    Sort Method: quicksort  Memory: 8541kB\n    ->  Seq Scan on abcd  (cost=0.00..1541.00 rows=100000 width=16) \n(actual time=0.018..31.207 rows=100000 loops=1)\n  Planning Time: 0.055 ms\n  Execution Time: 195.393 ms\n(6 rows)\n\nIn above queries, startup and total costs are same, yet execution time \nvaries wildly.\n\nQuestion: If cost is same for similar query, shouldn't execution time be \nsimilar as well?\n\n From my observation, we only account for data in cost computation but \nnot number of\n\ncolumns sorted.\n\nShould we not account for number of columns in sort as well?\n\n\nRelevant discussion: \nhttps://www.postgresql.org/message-id/CAApHDvoc1m_vo1+XVpMUj+Mfy6rMiPQObM9Y-jZ=Xrwc1gkPFA@mail.gmail.com\n\n\nRegards,\n\nAnkit\n\n\n\n\n",
"msg_date": "Sun, 5 Mar 2023 16:30:30 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Question] Similar Cost but variable execution time in sort"
},
{
"msg_contents": "Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n> From my observation, we only account for data in cost computation but \n> not number of columns sorted.\n> Should we not account for number of columns in sort as well?\n\nI'm not sure whether simply charging more for 2 sort columns than 1\nwould help much. The traditional reasoning for not caring was that\ndata and I/O costs would swamp comparison costs anyway, but maybe with\never-increasing memory sizes we're getting to the point where it is\nworth refining the model for in-memory sorts. But see the header\ncomment for cost_sort().\n\nAlso ... not too long ago we tried and failed to install more-complex\nsort cost estimates for GROUP BY. The commit log message for f4c7c410e\ngives some of the reasons why that failed, but what it boils down to\nis that useful estimates would require information we don't have, such\nas a pretty concrete idea of the relative costs of different datatypes'\ncomparison functions.\n\nIn short, maybe there's something to be done here, but I'm afraid\nthere is a lot of infrastructure slogging needed first, if you want\nestimates that are better than garbage-in-garbage-out.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Mar 2023 11:51:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Question] Similar Cost but variable execution time in sort"
},
{
"msg_contents": "\n> On 05/03/23 22:21, Tom Lane wrote:\n\n> Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n> > From my observation, we only account for data in cost computation but \n> > not number of columns sorted.\n> > Should we not account for number of columns in sort as well?\n>\n> I'm not sure whether simply charging more for 2 sort columns than 1\n> would help much. The traditional reasoning for not caring was that\n> data and I/O costs would swamp comparison costs anyway, but maybe with\n> ever-increasing memory sizes we're getting to the point where it is\n> worth refining the model for in-memory sorts. But see the header\n> comment for cost_sort().\n> \n> Also ... not too long ago we tried and failed to install more-complex\n> sort cost estimates for GROUP BY. The commit log message for f4c7c410e\n> gives some of the reasons why that failed, but what it boils down to\n> is that useful estimates would require information we don't have, such\n> as a pretty concrete idea of the relative costs of different datatypes'\n> comparison functions.\n>\n> In short, maybe there's something to be done here, but I'm afraid\n> there is a lot of infrastructure slogging needed first, if you want\n> estimates that are better than garbage-in-garbage-out.\n>\n>\t\t\tregards, tom lane\n\nThanks, I can see the challenges in this.\n\nRegards,\nAnkit\n\n\n\n\n",
"msg_date": "Sun, 5 Mar 2023 23:47:57 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Question] Similar Cost but variable execution time in sort"
}
] |
[
{
"msg_contents": "Suppose there is a transaction running, how it knows the tuples that are\r\nvisible for it?\r\n\r\n\r\njacktby@gmail.com\r\n\n\n\nSuppose there is a transaction running, how it knows the tuples that arevisible for it?\njacktby@gmail.com",
"msg_date": "Sun, 5 Mar 2023 21:19:02 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "How does pg implement the visiblity of one tuple for specified\n transaction?"
},
{
"msg_contents": "Hi Jacktby,\n\nDid you try looking at HeapTupleSatisfiesVisibility function (in \nsrc/backend/access/heap/heapam_visibility.c) ? I think it might give you \nsome idea.\n\nThanks,\n\nAnkit\n\n\n",
"msg_date": "Sun, 5 Mar 2023 20:57:45 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How does pg implement the visiblity of one tuple for specified\n transaction?"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI think we should extend the \"log\" directory the same courtesy as was\ndone for pg_wal (pg_xlog) in 0e42397f42b.\n\nToday, even if BOTH source and target servers have symlinked \"log\"\ndirectories, pg_rewind fails with:\n\nfile \"log\" is of different type in source and target.\n\nAttached is a repro patch using the 004_pg_xlog_symlink.pl test to\ndemonstrate the failure.\nRunning make check PROVE_TESTS='t/004_pg_xlog_symlink.pl'\nin src/bin/pg_rewind should suffice after applying.\n\nThis is because when we use the libpq query to fetch the filemap from\nthe source server, we consider the log directory as a directory, even if\nit is a symlink. This is because pg_stat_file() is used in that query in\nlibpq_traverse_files() and pg_stat_file() returns isdir=t for symlinks\nto directories.\n\nThis shortcoming is somewhat called out:\n\n * XXX: There is no backend function to get a symbolic link's target in\n * general, so if the admin has put any custom symbolic links in the data\n * directory, they won't be copied correctly.\n\nWe could fix the query and/or pg_stat_file(). However, we would also\nlike to support deployments where only one of the primaries and/or\nstandbys have the symlink. That is not hard to conceive, given primaries\nand standbys can have drastically disparate log volume and/or log\ncollection requirements.\n\nAttached is a patch that treats \"log\" like we treat \"pg_wal\".\n\nRegards,\nSoumyadeep (VMware)",
"msg_date": "Sun, 5 Mar 2023 18:10:27 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "Hello Soumyadeep,\n\nThe problem indeed exists, but IMO the \"log\" directory case must be handled\ndifferently:\n1. We don't need or I would even say we don't want to sync log files from\nthe new primary, because it destroys the actual logs, which could be very\nimportant to figure out what has happened with the old primary\n2. Unlike \"pg_wal\", the \"log\" directory is not necessarily located inside\nPGDATA. The actual value is configured using \"log_directory\" GUC, which\njust happened to be \"log\" by default. And in fact actual values on source\nand target could be different.\n\nRegards,\n--\nAlexander Kukushkin\n\nHello Soumyadeep,The problem indeed exists, but IMO the \"log\" directory case must be handled differently:1. We don't need or I would even say we don't want to sync log files from the new primary, because it destroys the actual logs, which could be very important to figure out what has happened with the old primary2. Unlike \"pg_wal\", the \"log\" directory is not necessarily located inside PGDATA. The actual value is configured using \"log_directory\" GUC, which just happened to be \"log\" by default. And in fact actual values on source and target could be different.Regards,--Alexander Kukushkin",
"msg_date": "Mon, 6 Mar 2023 09:28:10 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 12:28 AM Alexander Kukushkin <cyberdemn@gmail.com> wrote:\n>\n> Hello Soumyadeep,\n>\n> The problem indeed exists, but IMO the \"log\" directory case must be handled differently:\n> 1. We don't need or I would even say we don't want to sync log files from the new primary, because it destroys the actual logs, which could be very important to figure out what has happened with the old primary\n\nYes, this can be solved by adding \"log\" to excludeDirContents. We did\nthis for GPDB.\n\n> 2. Unlike \"pg_wal\", the \"log\" directory is not necessarily located inside PGDATA. The actual value is configured using \"log_directory\" GUC, which just happened to be \"log\" by default. And in fact actual values on source and target could be different.\n\nI think we only care about files/dirs inside the datadir. Anything\noutside is out of scope for\npg_rewind AFAIU. We can only address the common case here. As mentioned in this\ncomment:\n\n * XXX: There is no backend function to get a symbolic link's target in\n * general, so if the admin has put any custom symbolic links in the data\n * directory, they won't be copied correctly.\n\nThere is not much we can do about custom configurations.\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Mon, 6 Mar 2023 10:36:42 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 19:37, Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n>\n> > 2. Unlike \"pg_wal\", the \"log\" directory is not necessarily located\n> inside PGDATA. The actual value is configured using \"log_directory\" GUC,\n> which just happened to be \"log\" by default. And in fact actual values on\n> source and target could be different.\n>\n> I think we only care about files/dirs inside the datadir. Anything\n> outside is out of scope for\n> pg_rewind AFAIU. We can only address the common case here. As mentioned in\n> this\n> comment:\n>\n> * XXX: There is no backend function to get a symbolic link's target in\n> * general, so if the admin has put any custom symbolic links in the data\n> * directory, they won't be copied correctly.\n>\n\nThat's exactly my point. Users are very creative.\nOn one node they could set log_directory to for example \"pg_log\" and on\nanother one \"my_log\".\nAnd they would be writing logs to $PGDATA/pg_log and $PGDATA/my_log\nrespectively and they are both located inside datadir.\n\nLets assume that on the source we have \"pg_log\" and on the target we have\n\"my_log\" (they are configured using \"log_directory\" GUC).\nWhen doing rewind in this case we want neither to remove the content of\n\"my_log\" on the target nor to copy content of \"pg_log\" from the source.\nIt couldn't be achieved just by introducing a static string \"log\". The\n\"log_directory\" GUC must be examined on both, source and target.\n\nRegards,\n--\nAlexander Kukushkin\n\nOn Mon, 6 Mar 2023 at 19:37, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote:\n> 2. Unlike \"pg_wal\", the \"log\" directory is not necessarily located inside PGDATA. The actual value is configured using \"log_directory\" GUC, which just happened to be \"log\" by default. And in fact actual values on source and target could be different.\n\nI think we only care about files/dirs inside the datadir. 
Anything\noutside is out of scope for\npg_rewind AFAIU. We can only address the common case here. As mentioned in this\ncomment:\n\n * XXX: There is no backend function to get a symbolic link's target in\n * general, so if the admin has put any custom symbolic links in the data\n * directory, they won't be copied correctly.That's exactly my point. Users are very creative.On one node they could set log_directory to for example \"pg_log\" and on another one \"my_log\".And they would be writing logs to $PGDATA/pg_log and $PGDATA/my_log respectively and they are both located inside datadir.Lets assume that on the source we have \"pg_log\" and on the target we have \"my_log\" (they are configured using \"log_directory\" GUC).When doing rewind in this case we want neither to remove the content of \"my_log\" on the target nor to copy content of \"pg_log\" from the source.It couldn't be achieved just by introducing a static string \"log\". The \"log_directory\" GUC must be examined on both, source and target.Regards,--Alexander Kukushkin",
"msg_date": "Tue, 7 Mar 2023 08:33:24 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 11:33 PM Alexander Kukushkin <cyberdemn@gmail.com> wrote:\n>\n>\n> Lets assume that on the source we have \"pg_log\" and on the target we have \"my_log\" (they are configured using \"log_directory\" GUC).\n> When doing rewind in this case we want neither to remove the content of \"my_log\" on the target nor to copy content of \"pg_log\" from the source.\n> It couldn't be achieved just by introducing a static string \"log\". The \"log_directory\" GUC must be examined on both, source and target.\n\nTrouble with doing that is if pg_rewind is run in non-libpq (offline)\nmode. Then we would have to parse it out of the conf file(s)?\nIs there a standard way of doing that?\n\nRegards,\nSoumyadeep (VMware)\n\n\n",
"msg_date": "Mon, 6 Mar 2023 23:51:52 -0800",
"msg_from": "Soumyadeep Chakraborty <soumyadeep2007@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "On Tue, 7 Mar 2023 at 08:52, Soumyadeep Chakraborty <\nsoumyadeep2007@gmail.com> wrote:\n\n>\n> > It couldn't be achieved just by introducing a static string \"log\". The\n> \"log_directory\" GUC must be examined on both, source and target.\n>\n> Trouble with doing that is if pg_rewind is run in non-libpq (offline)\n> mode. Then we would have to parse it out of the conf file(s)?\n> Is there a standard way of doing that?\n>\n\npg_rewind is already doing something similar for \"restore_command\":\n/*\n * Get value of GUC parameter restore_command from the target cluster.\n *\n * This uses a logic based on \"postgres -C\" to get the value from the\n * cluster.\n */\nstatic void\ngetRestoreCommand(const char *argv0)\n\nFor the running source cluster one could just use \"SHOW log_directory\"\n\nRegards,\n--\nAlexander Kukushkin\n\nOn Tue, 7 Mar 2023 at 08:52, Soumyadeep Chakraborty <soumyadeep2007@gmail.com> wrote:\n> It couldn't be achieved just by introducing a static string \"log\". The \"log_directory\" GUC must be examined on both, source and target.\n\nTrouble with doing that is if pg_rewind is run in non-libpq (offline)\nmode. Then we would have to parse it out of the conf file(s)?\nIs there a standard way of doing that?pg_rewind is already doing something similar for \"restore_command\":/* * Get value of GUC parameter restore_command from the target cluster. * * This uses a logic based on \"postgres -C\" to get the value from the * cluster. */static voidgetRestoreCommand(const char *argv0)For the running source cluster one could just use \"SHOW log_directory\"Regards,--Alexander Kukushkin",
"msg_date": "Tue, 7 Mar 2023 09:03:21 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
},
{
"msg_contents": "> On 7 Mar 2023, at 08:33, Alexander Kukushkin <cyberdemn@gmail.com> wrote:\n\n> The \"log_directory\" GUC must be examined on both, source and target.\n\nAgreed, log_directory must be resolved to the configured values. Teaching\npg_rewind about those in case they are stored in $PGDATA sounds like a good\nidea though.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:28:37 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind: Skip log directory for file type check like pg_wal"
}
] |
[
{
"msg_contents": "PSA patch to fix a comment inaccurate.\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.",
"msg_date": "Mon, 06 Mar 2023 13:54:27 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Inaccurate comment for pg_get_partkeydef"
},
{
"msg_contents": "Looks good to me. Fixed according to the actual output.\n\nselect pg_get_partkeydef('prt1'::regclass);\n pg_get_partkeydef\n-------------------\n RANGE (a)\n(1 row)\n\nbdrdemo@342511=#\\d+ prt1\n Partitioned table \"public.prt1\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | not null | | plain |\n | |\n b | integer | | | | plain |\n | |\nPartition key: RANGE (a)\nIndexes:\n \"prt1_pkey\" PRIMARY KEY, btree (a)\n \"prt1_b\" btree (b)\nPartitions: prt1_p1 FOR VALUES FROM (0) TO (10),\n prt1_p2 FOR VALUES FROM (10) TO (20),\n prt1_default DEFAULT\n\nOn Mon, Mar 6, 2023 at 11:24 AM Japin Li <japinli@hotmail.com> wrote:\n>\n>\n> PSA patch to fix a comment inaccurate.\n>\n> --\n> Regrads,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 6 Mar 2023 20:00:03 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate comment for pg_get_partkeydef"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 18:54, Japin Li <japinli@hotmail.com> wrote:\n> PSA patch to fix a comment inaccurate.\n\nThanks. Pushed.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Mar 2023 14:35:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inaccurate comment for pg_get_partkeydef"
}
] |
[
{
"msg_contents": "Hi\n\nIn one query I can see very big overhead of memoize node - unfortunately\nwith hits = 0\n\nThe Estimate is almost very good. See details in attachment\n\nRegards\n\nPavel",
"msg_date": "Mon, 6 Mar 2023 08:33:42 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "using memoize in in paralel query decreases performance"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 20:34, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> In one query I can see very big overhead of memoize node - unfortunately with hits = 0\n>\n> The Estimate is almost very good. See details in attachment\n\nAre you able to share the version number for this?\n\nAlso, it would be good to see EXPLAIN ANALYZE *VERBOSE* for the\nmemorize plan so we can see the timings for the parallel workers.\n\nThe results of:\n\nEXPLAIN ANALYZE\nSELECT DISTINCT ictc.sub_category_id\nFROM ixfk_ictc_subcategoryid ictc\nINNER JOIN item i ON i.item_category_id = ictc.sub_category_id\nWHERE ictc.super_category_id = ANY\n('{47124,49426,49488,47040,47128}'::bigint[]);\n\nwould also be useful. That should give an idea of the ndistinct\nestimate. I guess memorize thinks there are fewer unique values than\nthe 112 that were found. There's probably not much to be done about\nthat. The slowness of the parallel workers seems like a more\ninteresting thing to understand.\n\nDavid\n\n\n",
"msg_date": "Mon, 6 Mar 2023 21:16:41 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "po 6. 3. 2023 v 9:16 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Mon, 6 Mar 2023 at 20:34, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > In one query I can see very big overhead of memoize node - unfortunately\n> with hits = 0\n> >\n> > The Estimate is almost very good. See details in attachment\n>\n> Are you able to share the version number for this?\n>\n\n15.1 - upgrade on 15.2 is planned this month\n\n\n>\n> Also, it would be good to see EXPLAIN ANALYZE *VERBOSE* for the\n> memorize plan so we can see the timings for the parallel workers.\n>\n\ndefault https://explain.depesz.com/s/fnBe\ndisabled memoize https://explain.depesz.com/s/P2rP\n\n\n> The results of:\n>\n> EXPLAIN ANALYZE\n> SELECT DISTINCT ictc.sub_category_id\n> FROM ixfk_ictc_subcategoryid ictc\n> INNER JOIN item i ON i.item_category_id = ictc.sub_category_id\n> WHERE ictc.super_category_id = ANY\n> ('{47124,49426,49488,47040,47128}'::bigint[]);\n>\n>\nhttps://explain.depesz.com/s/OtCl\n\nwould also be useful. That should give an idea of the ndistinct\n> estimate. I guess memorize thinks there are fewer unique values than\n> the 112 that were found. There's probably not much to be done about\n> that. The slowness of the parallel workers seems like a more\n> interesting thing to understand.\n>\n> David\n>\n\npo 6. 3. 2023 v 9:16 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:On Mon, 6 Mar 2023 at 20:34, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> In one query I can see very big overhead of memoize node - unfortunately with hits = 0\n>\n> The Estimate is almost very good. 
See details in attachment\n\nAre you able to share the version number for this?15.1 - upgrade on 15.2 is planned this month \n\nAlso, it would be good to see EXPLAIN ANALYZE *VERBOSE* for the\nmemorize plan so we can see the timings for the parallel workers.default https://explain.depesz.com/s/fnBedisabled memoize https://explain.depesz.com/s/P2rP \n\nThe results of:\n\nEXPLAIN ANALYZE\nSELECT DISTINCT ictc.sub_category_id\nFROM ixfk_ictc_subcategoryid ictc\nINNER JOIN item i ON i.item_category_id = ictc.sub_category_id\nWHERE ictc.super_category_id = ANY\n('{47124,49426,49488,47040,47128}'::bigint[]);\nhttps://explain.depesz.com/s/OtCl \nwould also be useful. That should give an idea of the ndistinct\nestimate. I guess memorize thinks there are fewer unique values than\nthe 112 that were found. There's probably not much to be done about\nthat. The slowness of the parallel workers seems like a more\ninteresting thing to understand.\n\nDavid",
"msg_date": "Mon, 6 Mar 2023 09:54:59 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "On Mon, 6 Mar 2023 at 21:55, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> default https://explain.depesz.com/s/fnBe\n\nIt looks like the slowness is coming from the Bitmap Index scan and\nBitmap heap scan rather than Memoize.\n\n -> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\nrows=29021 width=16) (actual time=20.395..591.606 rows=20471\nloops=784)\n Output: i.id, i.item_category_id\n Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n Heap Blocks: exact=1590348\n Worker 0: actual time=20.128..591.426 rows=20471 loops=112\n Worker 1: actual time=20.243..591.627 rows=20471 loops=112\n Worker 2: actual time=20.318..591.660 rows=20471 loops=112\n Worker 3: actual time=21.180..591.644 rows=20471 loops=112\n Worker 4: actual time=20.226..591.357 rows=20471 loops=112\n Worker 5: actual time=20.597..591.418 rows=20471 loops=112\n -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n(cost=0.00..278.43 rows=29021 width=0) (actual time=14.851..14.851\nrows=25362 loops=784)\n Index Cond: (i.item_category_id = ictc.sub_category_id)\n Worker 0: actual time=14.863..14.863 rows=25362 loops=112\n Worker 1: actual time=14.854..14.854 rows=25362 loops=112\n Worker 2: actual time=14.611..14.611 rows=25362 loops=112\n Worker 3: actual time=15.245..15.245 rows=25362 loops=112\n Worker 4: actual time=14.909..14.909 rows=25362 loops=112\n Worker 5: actual time=14.841..14.841 rows=25362 loops=112\n\n> disabled memoize https://explain.depesz.com/s/P2rP\n\n-> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\nrows=29021 width=16) (actual time=9.256..57.503 rows=20471 loops=784)\n Output: i.id, i.item_category_id\n Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n Heap Blocks: exact=1590349\n Worker 0: actual time=9.422..58.420 rows=20471 loops=112\n Worker 1: actual time=9.449..57.539 rows=20471 loops=112\n Worker 2: actual time=9.751..58.129 rows=20471 loops=112\n Worker 3: actual time=9.620..57.484 rows=20471 loops=112\n Worker 4: 
actual time=8.940..57.911 rows=20471 loops=112\n Worker 5: actual time=9.454..57.488 rows=20471 loops=112\n -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n(cost=0.00..278.43 rows=29021 width=0) (actual time=4.581..4.581\nrows=25363 loops=784)\n Index Cond: (i.item_category_id = ictc.sub_category_id)\n Worker 0: actual time=4.846..4.846 rows=25363 loops=112\n Worker 1: actual time=4.734..4.734 rows=25363 loops=112\n Worker 2: actual time=4.803..4.803 rows=25363 loops=112\n Worker 3: actual time=4.959..4.959 rows=25363 loops=112\n Worker 4: actual time=4.402..4.402 rows=25363 loops=112\n Worker 5: actual time=4.778..4.778 rows=25363 loops=112\n\nI wonder if the additional work_mem required for Memoize is just doing\nsomething like causing kernel page cache evictions and leading to\nfewer buffers for ixfk_ite_itemcategoryid and the item table being\ncached in the kernel page cache.\n\nMaybe you could get an idea of that if you SET track_io_timing = on;\nand EXPLAIN (ANALYZE, BUFFERS) both queries.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:52:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "po 6. 3. 2023 v 22:52 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Mon, 6 Mar 2023 at 21:55, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > default https://explain.depesz.com/s/fnBe\n>\n> It looks like the slowness is coming from the Bitmap Index scan and\n> Bitmap heap scan rather than Memoize.\n>\n> -> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\n> rows=29021 width=16) (actual time=20.395..591.606 rows=20471\n> loops=784)\n> Output: i.id, i.item_category_id\n> Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n> Heap Blocks: exact=1590348\n> Worker 0: actual time=20.128..591.426 rows=20471 loops=112\n> Worker 1: actual time=20.243..591.627 rows=20471 loops=112\n> Worker 2: actual time=20.318..591.660 rows=20471 loops=112\n> Worker 3: actual time=21.180..591.644 rows=20471 loops=112\n> Worker 4: actual time=20.226..591.357 rows=20471 loops=112\n> Worker 5: actual time=20.597..591.418 rows=20471 loops=112\n> -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n> (cost=0.00..278.43 rows=29021 width=0) (actual time=14.851..14.851\n> rows=25362 loops=784)\n> Index Cond: (i.item_category_id = ictc.sub_category_id)\n> Worker 0: actual time=14.863..14.863 rows=25362 loops=112\n> Worker 1: actual time=14.854..14.854 rows=25362 loops=112\n> Worker 2: actual time=14.611..14.611 rows=25362 loops=112\n> Worker 3: actual time=15.245..15.245 rows=25362 loops=112\n> Worker 4: actual time=14.909..14.909 rows=25362 loops=112\n> Worker 5: actual time=14.841..14.841 rows=25362 loops=112\n>\n> > disabled memoize https://explain.depesz.com/s/P2rP\n>\n> -> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\n> rows=29021 width=16) (actual time=9.256..57.503 rows=20471 loops=784)\n> Output: i.id, i.item_category_id\n> Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n> Heap Blocks: exact=1590349\n> Worker 0: actual time=9.422..58.420 rows=20471 loops=112\n> Worker 1: actual time=9.449..57.539 rows=20471 
loops=112\n> Worker 2: actual time=9.751..58.129 rows=20471 loops=112\n> Worker 3: actual time=9.620..57.484 rows=20471 loops=112\n> Worker 4: actual time=8.940..57.911 rows=20471 loops=112\n> Worker 5: actual time=9.454..57.488 rows=20471 loops=112\n> -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n> (cost=0.00..278.43 rows=29021 width=0) (actual time=4.581..4.581\n> rows=25363 loops=784)\n> Index Cond: (i.item_category_id = ictc.sub_category_id)\n> Worker 0: actual time=4.846..4.846 rows=25363 loops=112\n> Worker 1: actual time=4.734..4.734 rows=25363 loops=112\n> Worker 2: actual time=4.803..4.803 rows=25363 loops=112\n> Worker 3: actual time=4.959..4.959 rows=25363 loops=112\n> Worker 4: actual time=4.402..4.402 rows=25363 loops=112\n> Worker 5: actual time=4.778..4.778 rows=25363 loops=112\n>\n> I wonder if the additional work_mem required for Memoize is just doing\n> something like causing kernel page cache evictions and leading to\n> fewer buffers for ixfk_ite_itemcategoryid and the item table being\n> cached in the kernel page cache.\n>\n> Maybe you could get an idea of that if you SET track_io_timing = on;\n> and EXPLAIN (ANALYZE, BUFFERS) both queries.\n>\n\nhttps://explain.depesz.com/s/vhk0\nhttps://explain.depesz.com/s/R5ju\n\nRegards\n\nPavel\n\n\n> David\n>\n\npo 6. 3. 
2023 v 22:52 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:On Mon, 6 Mar 2023 at 21:55, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> default https://explain.depesz.com/s/fnBe\n\nIt looks like the slowness is coming from the Bitmap Index scan and\nBitmap heap scan rather than Memoize.\n\n -> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\nrows=29021 width=16) (actual time=20.395..591.606 rows=20471\nloops=784)\n Output: i.id, i.item_category_id\n Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n Heap Blocks: exact=1590348\n Worker 0: actual time=20.128..591.426 rows=20471 loops=112\n Worker 1: actual time=20.243..591.627 rows=20471 loops=112\n Worker 2: actual time=20.318..591.660 rows=20471 loops=112\n Worker 3: actual time=21.180..591.644 rows=20471 loops=112\n Worker 4: actual time=20.226..591.357 rows=20471 loops=112\n Worker 5: actual time=20.597..591.418 rows=20471 loops=112\n -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n(cost=0.00..278.43 rows=29021 width=0) (actual time=14.851..14.851\nrows=25362 loops=784)\n Index Cond: (i.item_category_id = ictc.sub_category_id)\n Worker 0: actual time=14.863..14.863 rows=25362 loops=112\n Worker 1: actual time=14.854..14.854 rows=25362 loops=112\n Worker 2: actual time=14.611..14.611 rows=25362 loops=112\n Worker 3: actual time=15.245..15.245 rows=25362 loops=112\n Worker 4: actual time=14.909..14.909 rows=25362 loops=112\n Worker 5: actual time=14.841..14.841 rows=25362 loops=112\n\n> disabled memoize https://explain.depesz.com/s/P2rP\n\n-> Bitmap Heap Scan on public.item i (cost=285.69..41952.12\nrows=29021 width=16) (actual time=9.256..57.503 rows=20471 loops=784)\n Output: i.id, i.item_category_id\n Recheck Cond: (i.item_category_id = ictc.sub_category_id)\n Heap Blocks: exact=1590349\n Worker 0: actual time=9.422..58.420 rows=20471 loops=112\n Worker 1: actual time=9.449..57.539 rows=20471 loops=112\n Worker 2: actual time=9.751..58.129 rows=20471 loops=112\n Worker 3: actual 
time=9.620..57.484 rows=20471 loops=112\n Worker 4: actual time=8.940..57.911 rows=20471 loops=112\n Worker 5: actual time=9.454..57.488 rows=20471 loops=112\n -> Bitmap Index Scan on ixfk_ite_itemcategoryid\n(cost=0.00..278.43 rows=29021 width=0) (actual time=4.581..4.581\nrows=25363 loops=784)\n Index Cond: (i.item_category_id = ictc.sub_category_id)\n Worker 0: actual time=4.846..4.846 rows=25363 loops=112\n Worker 1: actual time=4.734..4.734 rows=25363 loops=112\n Worker 2: actual time=4.803..4.803 rows=25363 loops=112\n Worker 3: actual time=4.959..4.959 rows=25363 loops=112\n Worker 4: actual time=4.402..4.402 rows=25363 loops=112\n Worker 5: actual time=4.778..4.778 rows=25363 loops=112\n\nI wonder if the additional work_mem required for Memoize is just doing\nsomething like causing kernel page cache evictions and leading to\nfewer buffers for ixfk_ite_itemcategoryid and the item table being\ncached in the kernel page cache.\n\nMaybe you could get an idea of that if you SET track_io_timing = on;\nand EXPLAIN (ANALYZE, BUFFERS) both queries.https://explain.depesz.com/s/vhk0https://explain.depesz.com/s/R5juRegardsPavel\n\nDavid",
"msg_date": "Tue, 7 Mar 2023 09:08:43 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": " /On Tue, 7 Mar 2023 at 21:09, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> po 6. 3. 2023 v 22:52 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n>> I wonder if the additional work_mem required for Memoize is just doing\n>> something like causing kernel page cache evictions and leading to\n>> fewer buffers for ixfk_ite_itemcategoryid and the item table being\n>> cached in the kernel page cache.\n>>\n>> Maybe you could get an idea of that if you SET track_io_timing = on;\n>> and EXPLAIN (ANALYZE, BUFFERS) both queries.\n>\n>\n> https://explain.depesz.com/s/vhk0\n\nThis is the enable_memoize=on one. The I/O looks like:\n\nBuffers: shared hit=105661309 read=15274264 dirtied=15707 written=34863\nI/O Timings: shared/local read=2671836.341 write=1286.869\n\n2671836.341 / 15274264 = ~0.175 ms per read.\n\n> https://explain.depesz.com/s/R5ju\n\nThis is the faster enable_memoize = off one. The I/O looks like:\n\nBuffers: shared hit=44542473 read=18541899 dirtied=11988 written=18625\nI/O Timings: shared/local read=1554838.583 write=821.477\n\n1554838.583 / 18541899 = ~0.084 ms per read.\n\nThat indicates that the enable_memoize=off version is just finding\nmore pages in the kernel's page cache than the slower query. The\nslower query just appears to be under more memory pressure causing the\nkernel to have less free memory to cache useful pages. I don't see\nanything here that indicates any problems with Memoize. Sure the\nstatistics could be better as, ideally, the Memoize wouldn't have\nhappened for the i_2 relation. I don't see anything that indicates any\nbugs with this, however. It's pretty well known that Memoize puts\nquite a bit of faith into ndistinct estimates. If it causes too many\nissues the enable_memoize switch can be turned to off.\n\nYou might want to consider experimenting with smaller values of\nwork_mem and/or hash_mem_multiplier for this query or just disabling\nmemoize altogether.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Mar 2023 21:58:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "út 7. 3. 2023 v 9:58 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> /On Tue, 7 Mar 2023 at 21:09, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > po 6. 3. 2023 v 22:52 odesílatel David Rowley <dgrowleyml@gmail.com>\n> napsal:\n> >> I wonder if the additional work_mem required for Memoize is just doing\n> >> something like causing kernel page cache evictions and leading to\n> >> fewer buffers for ixfk_ite_itemcategoryid and the item table being\n> >> cached in the kernel page cache.\n> >>\n> >> Maybe you could get an idea of that if you SET track_io_timing = on;\n> >> and EXPLAIN (ANALYZE, BUFFERS) both queries.\n> >\n> >\n> > https://explain.depesz.com/s/vhk0\n>\n> This is the enable_memoize=on one. The I/O looks like:\n>\n> Buffers: shared hit=105661309 read=15274264 dirtied=15707 written=34863\n> I/O Timings: shared/local read=2671836.341 write=1286.869\n>\n> 2671836.341 / 15274264 = ~0.175 ms per read.\n>\n> > https://explain.depesz.com/s/R5ju\n>\n> This is the faster enable_memoize = off one. The I/O looks like:\n>\n> Buffers: shared hit=44542473 read=18541899 dirtied=11988 written=18625\n> I/O Timings: shared/local read=1554838.583 write=821.477\n>\n> 1554838.583 / 18541899 = ~0.084 ms per read.\n>\n> That indicates that the enable_memoize=off version is just finding\n> more pages in the kernel's page cache than the slower query. The\n> slower query just appears to be under more memory pressure causing the\n> kernel to have less free memory to cache useful pages. I don't see\n> anything here that indicates any problems with Memoize. Sure the\n> statistics could be better as, ideally, the Memoize wouldn't have\n> happened for the i_2 relation. I don't see anything that indicates any\n> bugs with this, however. It's pretty well known that Memoize puts\n> quite a bit of faith into ndistinct estimates. 
If it causes too many\n> issues the enable_memoize switch can be turned to off.\n>\n> You might want to consider experimenting with smaller values of\n> work_mem and/or hash_mem_multiplier for this query or just disabling\n> memoize altogether.\n>\n\nI can live with it. This is an analytical query and the performance is not\ntoo important for us. I was surprised that the performance was about 25%\nworse, and so the hit ratio was almost zero. I am thinking, but I am not\nsure if the estimation of the effectiveness of memoization can depend (or\nshould depend) on the number of workers? In this case the number of workers\nis high.\n\n\n\n> David\n>\n",
"msg_date": "Tue, 7 Mar 2023 10:08:55 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "On Tue, 7 Mar 2023 at 22:09, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I can live with it. This is an analytical query and the performance is not too important for us. I was surprised that the performance was about 25% worse, and so the hit ratio was almost zero. I am thinking, but I am not sure if the estimation of the effectiveness of memoization can depend (or should depend) on the number of workers? In this case the number of workers is high.\n\nThe costing for Memoize takes the number of workers into account by\nway of the change in expected input rows. The number of estimated\ninput rows is effectively just divided by the number of parallel\nworkers, so if we expect 1 million rows from the outer side of the\njoin and 4 workers, then we'll assume the memorize will deal with\n250,000 rows per worker. If the n_distinct estimate for the cache key\nis 500,000, then it's not going to look very attractive to Memoize\nthat. In reality, estimate_num_groups() won't say the number of\ngroups is higher than the input rows, but Memoize, with all the other\noverheads factored into the costs, it would never look favourable if\nthe planner thought there was never going to be any repeated values.\nThe expected cache hit ratio there would be zero.\n\nDavid\n\n\n",
"msg_date": "Tue, 7 Mar 2023 22:46:35 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
},
{
"msg_contents": "út 7. 3. 2023 v 10:46 odesílatel David Rowley <dgrowleyml@gmail.com> napsal:\n\n> On Tue, 7 Mar 2023 at 22:09, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I can live with it. This is an analytical query and the performance is\n> not too important for us. I was surprised that the performance was about\n> 25% worse, and so the hit ratio was almost zero. I am thinking, but I am\n> not sure if the estimation of the effectiveness of memoization can depend\n> (or should depend) on the number of workers? In this case the number of\n> workers is high.\n>\n> The costing for Memoize takes the number of workers into account by\n> way of the change in expected input rows. The number of estimated\n> input rows is effectively just divided by the number of parallel\n> workers, so if we expect 1 million rows from the outer side of the\n> join and 4 workers, then we'll assume the memorize will deal with\n> 250,000 rows per worker. If the n_distinct estimate for the cache key\n> is 500,000, then it's not going to look very attractive to Memoize\n> that. In reality, estimate_num_groups() won't say the number of\n> groups is higher than the input rows, but Memoize, with all the other\n> overheads factored into the costs, it would never look favourable if\n> the planner thought there was never going to be any repeated values.\n> The expected cache hit ratio there would be zero.\n>\n\nThanks for the explanation.\n\nPavel\n\n\n> David\n>\n",
"msg_date": "Tue, 7 Mar 2023 10:50:20 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: using memoize in in paralel query decreases performance"
}
] |
[
{
"msg_contents": "tender wang <tndrwang@gmail.com>\n[image: 附件]14:51 (2小时前)\n发送至 pgsql-hackers\nHi hackers.\n This query has different result on 16devel and 15.2.\nselect\n sample_3.n_regionkey as c0,\n ref_7.l_linenumber as c3,\n sample_4.l_quantity as c6,\n sample_5.n_nationkey as c7,\n sample_3.n_name as c8\n from\n public.nation as sample_3\n left join public.lineitem as ref_5\n on ((cast(null as text) ~>=~ cast(null as text))\n or (ref_5.l_discount is NULL))\n left join public.time_statistics as ref_6\n inner join public.lineitem as ref_7\n on (ref_7.l_returnflag = ref_7.l_linestatus)\n right join public.lineitem as sample_4\n left join public.nation as sample_5\n on (cast(null as tsquery) = cast(null as tsquery))\n on (cast(null as \"time\") <= cast(null as \"time\"))\n right join public.customer as ref_8\n on (sample_4.l_comment = ref_8.c_name )\n on (ref_5.l_quantity = ref_7.l_quantity )\n where (ref_7.l_suppkey is not NULL)\n or ((case when cast(null as lseg) >= cast(null as lseg) then cast(null\nas inet) else cast(null as inet) end\n && cast(null as inet))\n or (pg_catalog.getdatabaseencoding() !~~ case when (cast(null as\nint2) <= cast(null as int8))\n or (EXISTS (\n select\n ref_9.ps_comment as c0,\n 5 as c1,\n ref_8.c_address as c2,\n 58 as c3,\n ref_8.c_acctbal as c4,\n ref_7.l_orderkey as c5,\n ref_7.l_shipmode as c6,\n ref_5.l_commitdate as c7,\n ref_8.c_custkey as c8,\n sample_3.n_nationkey as c9\n from\n public.partsupp as ref_9\n where cast(null as tsquery) @> cast(null as tsquery)\n order by c0, c1, c2, c3, c4, c5, c6, c7, c8, c9 limit 38))\nthen cast(null as text) else cast(null as text) end\n ))\n order by c0, c3, c6, c7, c8 limit 137;\n\nplan on 16devel:\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit\n InitPlan 1 (returns $0)\n -> Result\n One-Time Filter: false\n -> Sort\n Sort Key: sample_3.n_regionkey, 
l_linenumber, l_quantity,\nn_nationkey, sample_3.n_name\n -> Nested Loop Left Join\n -> Seq Scan on nation sample_3\n -> Materialize\n -> Nested Loop Left Join\n Join Filter: (ref_5.l_quantity = l_quantity)\n Filter: ((l_suppkey IS NOT NULL) OR\n(getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\nELSE NULL::text END))\n -> Seq Scan on lineitem ref_5\n Filter: (l_discount IS NULL)\n -> Result\n One-Time Filter: false\n(16 rows)\n\nplan on 15.2:\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit\n InitPlan 1 (returns $0)\n -> Result\n One-Time Filter: false\n -> Sort\n Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity,\nn_nationkey, sample_3.n_name\n -> Nested Loop Left Join\n Filter: ((l_suppkey IS NOT NULL) OR (getdatabaseencoding()\n!~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text ELSE NULL::text END))\n -> Seq Scan on nation sample_3\n -> Materialize\n -> Nested Loop Left Join\n Join Filter: (ref_5.l_quantity = l_quantity)\n -> Seq Scan on lineitem ref_5\n Filter: (l_discount IS NULL)\n -> Result\n One-Time Filter: false\n(16 rows)\n\n\nIt looks wrong that the qual (e.g ((l_suppkey IS NOT NULL) OR\n(getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\nELSE NULL::text END))) is pushdown.\n\n regards, tender wang\n",
"msg_date": "Mon, 6 Mar 2023 17:30:31 +0800",
"msg_from": "tender wang <tndrwang@gmail.com>",
"msg_from_op": true,
"msg_subject": "wrong results due to qual pushdown"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 3:00 PM tender wang <tndrwang@gmail.com> wrote:\n\n> tender wang <tndrwang@gmail.com>\n> [image: 附件]14:51 (2小时前)\n> 发送至 pgsql-hackers\n> Hi hackers.\n> This query has different result on 16devel and 15.2.\n> select\n> sample_3.n_regionkey as c0,\n> ref_7.l_linenumber as c3,\n> sample_4.l_quantity as c6,\n> sample_5.n_nationkey as c7,\n> sample_3.n_name as c8\n> from\n> public.nation as sample_3\n> left join public.lineitem as ref_5\n> on ((cast(null as text) ~>=~ cast(null as text))\n> or (ref_5.l_discount is NULL))\n> left join public.time_statistics as ref_6\n> inner join public.lineitem as ref_7\n> on (ref_7.l_returnflag = ref_7.l_linestatus)\n> right join public.lineitem as sample_4\n> left join public.nation as sample_5\n> on (cast(null as tsquery) = cast(null as tsquery))\n> on (cast(null as \"time\") <= cast(null as \"time\"))\n> right join public.customer as ref_8\n> on (sample_4.l_comment = ref_8.c_name )\n> on (ref_5.l_quantity = ref_7.l_quantity )\n> where (ref_7.l_suppkey is not NULL)\n> or ((case when cast(null as lseg) >= cast(null as lseg) then cast(null\n> as inet) else cast(null as inet) end\n> && cast(null as inet))\n> or (pg_catalog.getdatabaseencoding() !~~ case when (cast(null as\n> int2) <= cast(null as int8))\n> or (EXISTS (\n> select\n> ref_9.ps_comment as c0,\n> 5 as c1,\n> ref_8.c_address as c2,\n> 58 as c3,\n> ref_8.c_acctbal as c4,\n> ref_7.l_orderkey as c5,\n> ref_7.l_shipmode as c6,\n> ref_5.l_commitdate as c7,\n> ref_8.c_custkey as c8,\n> sample_3.n_nationkey as c9\n> from\n> public.partsupp as ref_9\n> where cast(null as tsquery) @> cast(null as tsquery)\n> order by c0, c1, c2, c3, c4, c5, c6, c7, c8, c9 limit 38))\n> then cast(null as text) else cast(null as text) end\n> ))\n> order by c0, c3, c6, c7, c8 limit 137;\n>\n> plan on 16devel:\n>\n> QUERY PLAN\n>\n>\n> 
----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit\n> InitPlan 1 (returns $0)\n> -> Result\n> One-Time Filter: false\n> -> Sort\n> Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity,\n> n_nationkey, sample_3.n_name\n> -> Nested Loop Left Join\n> -> Seq Scan on nation sample_3\n> -> Materialize\n> -> Nested Loop Left Join\n> Join Filter: (ref_5.l_quantity = l_quantity)\n> Filter: ((l_suppkey IS NOT NULL) OR\n> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n> ELSE NULL::text END))\n> -> Seq Scan on lineitem ref_5\n> Filter: (l_discount IS NULL)\n> -> Result\n> One-Time Filter: false\n> (16 rows)\n>\n> plan on 15.2:\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit\n> InitPlan 1 (returns $0)\n> -> Result\n> One-Time Filter: false\n> -> Sort\n> Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity,\n> n_nationkey, sample_3.n_name\n> -> Nested Loop Left Join\n> Filter: ((l_suppkey IS NOT NULL) OR\n> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n> ELSE NULL::text END))\n> -> Seq Scan on nation sample_3\n> -> Materialize\n> -> Nested Loop Left Join\n> Join Filter: (ref_5.l_quantity = l_quantity)\n> -> Seq Scan on lineitem ref_5\n> Filter: (l_discount IS NULL)\n> -> Result\n> One-Time Filter: false\n> (16 rows)\n>\n>\n> It looks wrong that the qual (e.g ((l_suppkey IS NOT NULL) OR\n> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n> ELSE NULL::text END))) is pushdown.\n>\n\nIs that because $0 comes from a peer plan?\n\nAn example of the difference in the results would help.\n\n-- \nBest Wishes,\nAshutosh Bapat\n",
"msg_date": "Mon, 6 Mar 2023 19:44:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: wrong results due to qual pushdown"
},
{
"msg_contents": "Results on 16devel:\nc0 | c3 | c6 | c7 | c8\n----+----+----+----+---------------------------\n 0 | | | | ALGERIA\n 0 | | | | ETHIOPIA\n 0 | | | | KENYA\n 0 | | | | MOROCCO\n 0 | | | | MOZAMBIQUE\n 1 | | | | ARGENTINA\n 1 | | | | BRAZIL\n 1 | | | | CANADA\n 1 | | | | PERU\n 1 | | | | UNITED STATES\n 2 | | | | CHINA\n 2 | | | | INDIA\n 2 | | | | INDONESIA\n 2 | | | | JAPAN\n 2 | | | | VIETNAM\n 3 | | | | FRANCE\n 3 | | | | GERMANY\n 3 | | | | ROMANIA\n 3 | | | | RUSSIA\n 3 | | | | UNITED KINGDOM\n 4 | | | | EGYPT\n 4 | | | | IRAN\n 4 | | | | IRAQ\n 4 | | | | JORDAN\n 4 | | | | SAUDI ARABIA\n(25 rows)\n\nResults on 15.2:\n c0 | c3 | c6 | c7 | c8\n----+----+----+----+----\n(0 rows)\n\ntender wang <tndrwang@gmail.com> 于2023年3月6日周一 22:48写道:\n\n> Results on 16devel:\n> c0 | c3 | c6 | c7 | c8\n> ----+----+----+----+---------------------------\n> 0 | | | | ALGERIA\n> 0 | | | | ETHIOPIA\n> 0 | | | | KENYA\n> 0 | | | | MOROCCO\n> 0 | | | | MOZAMBIQUE\n> 1 | | | | ARGENTINA\n> 1 | | | | BRAZIL\n> 1 | | | | CANADA\n> 1 | | | | PERU\n> 1 | | | | UNITED STATES\n> 2 | | | | CHINA\n> 2 | | | | INDIA\n> 2 | | | | INDONESIA\n> 2 | | | | JAPAN\n> 2 | | | | VIETNAM\n> 3 | | | | FRANCE\n> 3 | | | | GERMANY\n> 3 | | | | ROMANIA\n> 3 | | | | RUSSIA\n> 3 | | | | UNITED KINGDOM\n> 4 | | | | EGYPT\n> 4 | | | | IRAN\n> 4 | | | | IRAQ\n> 4 | | | | JORDAN\n> 4 | | | | SAUDI ARABIA\n> (25 rows)\n>\n> Results on 15.2:\n> c0 | c3 | c6 | c7 | c8\n> ----+----+----+----+----\n> (0 rows)\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> 于2023年3月6日周一 22:14写道:\n>\n>>\n>>\n>> On Mon, Mar 6, 2023 at 3:00 PM tender wang <tndrwang@gmail.com> wrote:\n>>\n>>> tender wang <tndrwang@gmail.com>\n>>> [image: 附件]14:51 (2小时前)\n>>> 发送至 pgsql-hackers\n>>> Hi hackers.\n>>> This query has different result on 16devel and 15.2.\n>>> select\n>>> sample_3.n_regionkey as c0,\n>>> ref_7.l_linenumber as c3,\n>>> sample_4.l_quantity as c6,\n>>> sample_5.n_nationkey as c7,\n>>> sample_3.n_name as c8\n>>> 
from\n>>> public.nation as sample_3\n>>> left join public.lineitem as ref_5\n>>> on ((cast(null as text) ~>=~ cast(null as text))\n>>> or (ref_5.l_discount is NULL))\n>>> left join public.time_statistics as ref_6\n>>> inner join public.lineitem as ref_7\n>>> on (ref_7.l_returnflag = ref_7.l_linestatus)\n>>> right join public.lineitem as sample_4\n>>> left join public.nation as sample_5\n>>> on (cast(null as tsquery) = cast(null as tsquery))\n>>> on (cast(null as \"time\") <= cast(null as \"time\"))\n>>> right join public.customer as ref_8\n>>> on (sample_4.l_comment = ref_8.c_name )\n>>> on (ref_5.l_quantity = ref_7.l_quantity )\n>>> where (ref_7.l_suppkey is not NULL)\n>>> or ((case when cast(null as lseg) >= cast(null as lseg) then\n>>> cast(null as inet) else cast(null as inet) end\n>>> && cast(null as inet))\n>>> or (pg_catalog.getdatabaseencoding() !~~ case when (cast(null as\n>>> int2) <= cast(null as int8))\n>>> or (EXISTS (\n>>> select\n>>> ref_9.ps_comment as c0,\n>>> 5 as c1,\n>>> ref_8.c_address as c2,\n>>> 58 as c3,\n>>> ref_8.c_acctbal as c4,\n>>> ref_7.l_orderkey as c5,\n>>> ref_7.l_shipmode as c6,\n>>> ref_5.l_commitdate as c7,\n>>> ref_8.c_custkey as c8,\n>>> sample_3.n_nationkey as c9\n>>> from\n>>> public.partsupp as ref_9\n>>> where cast(null as tsquery) @> cast(null as tsquery)\n>>> order by c0, c1, c2, c3, c4, c5, c6, c7, c8, c9 limit\n>>> 38)) then cast(null as text) else cast(null as text) end\n>>> ))\n>>> order by c0, c3, c6, c7, c8 limit 137;\n>>>\n>>> plan on 16devel:\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit\n>>> InitPlan 1 (returns $0)\n>>> -> Result\n>>> One-Time Filter: false\n>>> -> Sort\n>>> Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity,\n>>> n_nationkey, sample_3.n_name\n>>> -> Nested Loop Left Join\n>>> -> Seq Scan on nation sample_3\n>>> -> Materialize\n>>> 
-> Nested Loop Left Join\n>>> Join Filter: (ref_5.l_quantity = l_quantity)\n>>> Filter: ((l_suppkey IS NOT NULL) OR\n>>> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n>>> ELSE NULL::text END))\n>>> -> Seq Scan on lineitem ref_5\n>>> Filter: (l_discount IS NULL)\n>>> -> Result\n>>> One-Time Filter: false\n>>> (16 rows)\n>>>\n>>> plan on 15.2:\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ----------------------------------------------------------------------------------------------------------------------------------------------------\n>>> Limit\n>>> InitPlan 1 (returns $0)\n>>> -> Result\n>>> One-Time Filter: false\n>>> -> Sort\n>>> Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity,\n>>> n_nationkey, sample_3.n_name\n>>> -> Nested Loop Left Join\n>>> Filter: ((l_suppkey IS NOT NULL) OR\n>>> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n>>> ELSE NULL::text END))\n>>> -> Seq Scan on nation sample_3\n>>> -> Materialize\n>>> -> Nested Loop Left Join\n>>> Join Filter: (ref_5.l_quantity = l_quantity)\n>>> -> Seq Scan on lineitem ref_5\n>>> Filter: (l_discount IS NULL)\n>>> -> Result\n>>> One-Time Filter: false\n>>> (16 rows)\n>>>\n>>>\n>>> It looks wrong that the qual (e.g ((l_suppkey IS NOT NULL) OR\n>>> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n>>> ELSE NULL::text END))) is pushdown.\n>>>\n>>\n>> Is that because $0 comes from a peer plan?\n>>\n>> An example of the difference in the results would help.\n>>\n>> --\n>> Best Wishes,\n>> Ashutosh Bapat\n>>\n>\n\nResults on 16devel:c0 | c3 | c6 | c7 | c8----+----+----+----+--------------------------- 0 | | | | ALGERIA 0 | | | | ETHIOPIA 0 | | | | KENYA 0 | | | | MOROCCO 0 | | | | MOZAMBIQUE 1 | | | | ARGENTINA 1 | | | | BRAZIL 1 | | | | CANADA 1 | | | | PERU 1 | | | | UNITED STATES 2 | | | | CHINA 2 | | | | INDIA 2 | | | | INDONESIA 2 | | | | JAPAN 2 | | | | VIETNAM 3 | | | | FRANCE 3 | | | | GERMANY 3 | | | | ROMANIA 3 | | | | RUSSIA 3 
| | | | UNITED KINGDOM 4 | | | | EGYPT 4 | | | | IRAN 4 | | | | IRAQ 4 | | | | JORDAN 4 | | | | SAUDI ARABIA(25 rows)Results on 15.2: c0 | c3 | c6 | c7 | c8----+----+----+----+----(0 rows)Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote on Mon, Mar 6, 2023 at 22:14:On Mon, Mar 6, 2023 at 3:00 PM tender wang <tndrwang@gmail.com> wrote:tender wang <tndrwang@gmail.com>14:51 (2 hours ago)sent to pgsql-hackersHi hackers. 
This query has different result on 16devel and 15.2.select sample_3.n_regionkey as c0, ref_7.l_linenumber as c3, sample_4.l_quantity as c6, sample_5.n_nationkey as c7, sample_3.n_name as c8 from public.nation as sample_3 left join public.lineitem as ref_5 on ((cast(null as text) ~>=~ cast(null as text)) or (ref_5.l_discount is NULL)) left join public.time_statistics as ref_6 inner join public.lineitem as ref_7 on (ref_7.l_returnflag = ref_7.l_linestatus) right join public.lineitem as sample_4 left join public.nation as sample_5 on (cast(null as tsquery) = cast(null as tsquery)) on (cast(null as \"time\") <= cast(null as \"time\")) right join public.customer as ref_8 on (sample_4.l_comment = ref_8.c_name ) on (ref_5.l_quantity = ref_7.l_quantity ) where (ref_7.l_suppkey is not NULL) or ((case when cast(null as lseg) >= cast(null as lseg) then cast(null as inet) else cast(null as inet) end && cast(null as inet)) or (pg_catalog.getdatabaseencoding() !~~ case when (cast(null as int2) <= cast(null as int8)) or (EXISTS ( select ref_9.ps_comment as c0, 5 as c1, ref_8.c_address as c2, 58 as c3, ref_8.c_acctbal as c4, ref_7.l_orderkey as c5, ref_7.l_shipmode as c6, ref_5.l_commitdate as c7, ref_8.c_custkey as c8, sample_3.n_nationkey as c9 from public.partsupp as ref_9 where cast(null as tsquery) @> cast(null as tsquery) order by c0, c1, c2, c3, c4, c5, c6, c7, c8, c9 limit 38)) then cast(null as text) else cast(null as text) end )) order by c0, c3, c6, c7, c8 limit 137;plan on 16devel: QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit InitPlan 1 (returns $0) -> Result One-Time Filter: false -> Sort Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity, n_nationkey, sample_3.n_name -> Nested Loop Left Join -> Seq Scan on nation sample_3 -> Materialize -> Nested Loop Left Join Join Filter: (ref_5.l_quantity = l_quantity) Filter: ((l_suppkey IS 
NOT NULL) OR (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text ELSE NULL::text END)) -> Seq Scan on lineitem ref_5 Filter: (l_discount IS NULL) -> Result One-Time Filter: false(16 rows)plan on 15.2: QUERY PLAN ---------------------------------------------------------------------------------------------------------------------------------------------------- Limit InitPlan 1 (returns $0) -> Result One-Time Filter: false -> Sort Sort Key: sample_3.n_regionkey, l_linenumber, l_quantity, n_nationkey, sample_3.n_name -> Nested Loop Left Join Filter: ((l_suppkey IS NOT NULL) OR (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text ELSE NULL::text END)) -> Seq Scan on nation sample_3 -> Materialize -> Nested Loop Left Join Join Filter: (ref_5.l_quantity = l_quantity) -> Seq Scan on lineitem ref_5 Filter: (l_discount IS NULL) -> Result One-Time Filter: false(16 rows)It looks wrong that the qual (e.g ((l_suppkey IS NOT NULL) OR (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text ELSE NULL::text END))) is pushdown.Is that because $0 comes from a peer plan?An example of the difference in the results would help.-- Best Wishes,Ashutosh Bapat",
"msg_date": "Mon, 6 Mar 2023 22:50:51 +0800",
"msg_from": "tender wang <tndrwang@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: wrong results due to qual pushdown"
},
{
"msg_contents": "tender wang <tndrwang@gmail.com> writes:\n> It looks wrong that the qual (e.g ((l_suppkey IS NOT NULL) OR\n> (getdatabaseencoding() !~~ CASE WHEN ($0 OR NULL::boolean) THEN NULL::text\n> ELSE NULL::text END))) is pushdown.\n\nI think this is the same issue reported at [1].\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\n\n",
"msg_date": "Mon, 06 Mar 2023 10:12:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: wrong results due to qual pushdown"
}
] |
[
{
"msg_contents": "Hi, we got some problem with building PostgreSQL (version 15.1) on linux \r\nldd --version returns\r\nldd (Debian GLIBC 2.31-13+deb11u5.tmw1) 2.31\r\n\r\nwe can build it all right, however we want to use binaries on different glibc version\r\n\r\nso we’re detecting usage of the glibc version > 2.17 and we need to prevent usage\r\n\r\nof symbols (like explicit_bzero), that wasn’t exist in glibc 2.17.\r\n\r\nwhat we see that even if I commented line \r\n\r\n$as_echo \"#define HAVE_EXPLICIT_BZERO 1\" >>confdefs.h\r\n\r\nfrom configure we still have a problem - symbol explicit_bzero was leaked in \r\n\r\nlib/libpq.so.5.15, bin/postgres, bin/pg_verifybackup\r\n\r\nI was able to verify that HAVE_EXPLICIT_BZERO wasn’t defined \r\n\r\nin all c files that use explicit_bzero: \r\n\r\n./src/interfaces/libpq/fe-connect.c\r\n./src/backend/libpq/be-secure-common.c\r\n./src/common/hmac_openssl.c\r\n./src/common/cryptohash.c\r\n./src/common/cryptohash_openssl.c\r\n./src/common/hmac.c\r\n\r\nhow we can guaranty that if HAVE_EXPLICIT_BZERO is not defined then\r\n\r\nexplicit_bzero function implemented in port/explicit_bzero.c will be used (just like in Darwin or windows)\r\n\r\nthanks in advance\r\n\r\ndm\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Tue, 7 Mar 2023 03:05:44 +0000",
"msg_from": "Dimitry Markman <dmarkman@mathworks.com>",
"msg_from_op": true,
"msg_subject": "some problem explicit_bzero with building PostgreSQL on linux"
},
{
"msg_contents": "Dimitry Markman <dmarkman@mathworks.com> writes:\n> how we can guaranty that if HAVE_EXPLICIT_BZERO is not defined then\n> explicit_bzero function implemented in port/explicit_bzero.c will be used (just like in Darwin or windows)\n\nDid you remember to add explicit_bzero.o to LIBOBJS in\nthe configured Makefile.global?\n\nIf it still doesn't work, then evidently your toolchain is selecting\nthe system's built-in definition of explicit_bzero over the one in\nsrc/port/. This is not terribly surprising given that there has to be\nsome amount of compiler magic involved in that function. You may have\nto resort to actually building Postgres on a platform without\nexplicit_bzero.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Mar 2023 09:02:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: some problem explicit_bzero with building PostgreSQL on linux"
},
{
"msg_contents": "Hi Tom, thanks a lot\nAdding explicit_bzero.o did the job\nThanks a lot\n\ndm\n\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tuesday, March 7, 2023 at 9:14 AM\nTo: Dimitry Markman <dmarkman@mathworks.com>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>, Bhavya Dabas <bdabas@mathworks.com>\nSubject: Re: some problem explicit_bzero with building PostgreSQL on linux\nDimitry Markman <dmarkman@mathworks.com> writes:\n> how we can guaranty that if HAVE_EXPLICIT_BZERO is not defined then\n> explicit_bzero function implemented in port/explicit_bzero.c will be used (just like in Darwin or windows)\n\nDid you remember to add explicit_bzero.o to LIBOBJS in\nthe configured Makefile.global?\n\nIf it still doesn't work, then evidently your toolchain is selecting\nthe system's built-in definition of explicit_bzero over the one in\nsrc/port/. This is not terribly surprising given that there has to be\nsome amount of compiler magic involved in that function. You may have\nto resort to actually building Postgres on a platform without\nexplicit_bzero.\n\n regards, tom lane",
"msg_date": "Tue, 7 Mar 2023 16:06:00 +0000",
"msg_from": "Dimitry Markman <dmarkman@mathworks.com>",
"msg_from_op": true,
"msg_subject": "Re: some problem explicit_bzero with building PostgreSQL on linux"
}
] |
[
{
"msg_contents": "Hi,\n\n- How can I determine which format will be used for a numeric type?\n- What the precision and scale values should be for pgsql to use the long\nformat? Is there a threshold?",
"msg_date": "Mon, 6 Mar 2023 19:46:24 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "NumericShort vs NumericLong format"
},
{
"msg_contents": "I'll give this a go as a learning exercise for myself...\n\nOn Mon, Mar 6, 2023 at 8:47 PM Amin <amin.fallahi@gmail.com> wrote:\n\n>\n> - How can I determine which format will be used for a numeric type?\n>\n\nhttps://github.com/postgres/postgres/blob/cf96907aadca454c4094819c2ecddee07eafe203/src/backend/utils/adt/numeric.c#L491\n\n(the three constants are decimal 63, 63, and -64 respectively)\n\n> - What the precision and scale values should be for pgsql to use the long\n> format? Is there a threshold?\n>\n>\nOnes that cause the linked-to test to return false I suppose.\n\nDavid J.\n\nAs an aside, for anyone more fluent than I who reads this, is the use of\nthe word \"dynamic scale\" in this code comment supposed to be \"display\nscale\"?\n\nhttps://github.com/postgres/postgres/blob/cf96907aadca454c4094819c2ecddee07eafe203/src/backend/utils/adt/numeric.c#L121",
"msg_date": "Mon, 6 Mar 2023 21:46:42 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: NumericShort vs NumericLong format"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> As an aside, for anyone more fluent than I who reads this, is the use of\n> the word \"dynamic scale\" in this code comment supposed to be \"display\n> scale\"?\n> https://github.com/postgres/postgres/blob/cf96907aadca454c4094819c2ecddee07eafe203/src/backend/utils/adt/numeric.c#L121\n\nYeah, I think you're right.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Mar 2023 00:15:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NumericShort vs NumericLong format"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 12:15:27AM -0500, Tom Lane wrote:\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > As an aside, for anyone more fluent than I who reads this, is the use of\n> > the word \"dynamic scale\" in this code comment supposed to be \"display\n> > scale\"?\n> > https://github.com/postgres/postgres/blob/cf96907aadca454c4094819c2ecddee07eafe203/src/backend/utils/adt/numeric.c#L121\n> \n> Yeah, I think you're right.\n\nFamiliarizing myself with numeric.c today, I too was confused by this.\nAFAICT, it's meant to say display scale as used elsewhere in the file;\nfor instance, the comment for NumericShort's n_header reads \"Sign +\ndisplay scale + weight\". Would it be appropriate if I submitted a patch\nfor this? It's admittedly trivial, but I figured I should say hi before\nsubmitting one.\n\nAll the best,\nOle\n\n-- \nOle Peder Brandtzæg | En KLST/ITK-hybrid\nPlease don't look at me with those eyes\nPlease don't hint that you're capable of lies\n\n\n",
"msg_date": "Tue, 9 Apr 2024 13:41:15 +0200",
"msg_from": "Ole Peder Brandtzæg <olebra@samfundet.no>",
"msg_from_op": false,
"msg_subject": "Re: NumericShort vs NumericLong format"
}
] |
[
{
"msg_contents": "When skimming through pg_rewind during a small review I noticed the use of\npipe_read_line for reading arbitrary data from a pipe, the mechanics of which\nseemed odd.\n\nCommit 5b2f4afffe6 refactored find_other_exec() and broke out pipe_read_line()\nas a static convenience routine for reading a single line of output to catch a\nversion number. Many years later, commit a7e8ece41 exposed it externally in\norder to read a GUC from postgresql.conf using \"postgres -C ..\". f06b1c598\nalso make use of it for reading a version string much like find_other_exec().\nFunnily enough, while now used for arbitrary string reading the variable is\nstill \"pgver\".\n\nSince the function requires passing a buffer/size, and at most size - 1 bytes\nwill be read via fgets(), there is a truncation risk when using this for\nreading GUCs (like how pg_rewind does, though the risk there is slim to none).\n\nIf we are going to continue using this for reading $stuff from pipes, maybe we\nshould think about presenting a nicer API which removes that risk? Returning\nan allocated buffer which contains all the output along the lines of the recent\npg_get_line work seems a lot nicer and safer IMO.\n\nThe attached POC diff replace fgets() with pg_get_line(), which may not be an\nOk way to cross the streams (it's clearly not a great fit), but as a POC it\nprovided a neater interface for reading one-off lines from a pipe IMO. Does\nanyone else think this is worth fixing before too many callsites use it, or is\nthis another case of my fear of silent subtle truncation bugs? =)\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 7 Mar 2023 23:05:12 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 7 Mar 2023, at 23:05, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> When skimming through pg_rewind during a small review I noticed the use of\n> pipe_read_line for reading arbitrary data from a pipe, the mechanics of which\n> seemed odd.\n\nA rebase of this for the CFBot since I realized I had forgotten to add this to\nthe July CF.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 17 May 2023 13:49:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On 08/03/2023 00:05, Daniel Gustafsson wrote:\n> When skimming through pg_rewind during a small review I noticed the use of\n> pipe_read_line for reading arbitrary data from a pipe, the mechanics of which\n> seemed odd.\n> \n> Commit 5b2f4afffe6 refactored find_other_exec() and broke out pipe_read_line()\n> as a static convenience routine for reading a single line of output to catch a\n> version number. Many years later, commit a7e8ece41 exposed it externally in\n> order to read a GUC from postgresql.conf using \"postgres -C ..\". f06b1c598\n> also make use of it for reading a version string much like find_other_exec().\n> Funnily enough, while now used for arbitrary string reading the variable is\n> still \"pgver\".\n> \n> Since the function requires passing a buffer/size, and at most size - 1 bytes\n> will be read via fgets(), there is a truncation risk when using this for\n> reading GUCs (like how pg_rewind does, though the risk there is slim to none).\n\nGood point.\n\n> If we are going to continue using this for reading $stuff from pipes, maybe we\n> should think about presenting a nicer API which removes that risk? Returning\n> an allocated buffer which contains all the output along the lines of the recent\n> pg_get_line work seems a lot nicer and safer IMO.\n\n+1\n\n> /*\n> * Execute a command in a pipe and read the first line from it. The returned\n> * string is allocated, the caller is responsible for freeing.\n> */\n> char *\n> pipe_read_line(char *cmd)\n\nI think it's worth being explicit here that it's palloc'd, or malloc'd \nin frontend programs, rather than just \"allocated\". Like in pg_get_line.\n\nOther than that, LGTM.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 14:59:40 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 4 Jul 2023, at 13:59, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 08/03/2023 00:05, Daniel Gustafsson wrote:\n\n>> If we are going to continue using this for reading $stuff from pipes, maybe we\n>> should think about presenting a nicer API which removes that risk? Returning\n>> an allocated buffer which contains all the output along the lines of the recent\n>> pg_get_line work seems a lot nicer and safer IMO.\n> \n> +1\n\nThanks for review!\n\n>> /*\n>> * Execute a command in a pipe and read the first line from it. The returned\n>> * string is allocated, the caller is responsible for freeing.\n>> */\n>> char *\n>> pipe_read_line(char *cmd)\n> \n> I think it's worth being explicit here that it's palloc'd, or malloc'd in frontend programs, rather than just \"allocated\". Like in pg_get_line.\n\nGood point, I'll make that happen before committing this.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Tue, 4 Jul 2023 14:50:00 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 4 Jul 2023, at 14:50, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 4 Jul 2023, at 13:59, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> On 08/03/2023 00:05, Daniel Gustafsson wrote:\n> \n>>> If we are going to continue using this for reading $stuff from pipes, maybe we\n>>> should think about presenting a nicer API which removes that risk? Returning\n>>> an allocated buffer which contains all the output along the lines of the recent\n>>> pg_get_line work seems a lot nicer and safer IMO.\n>> \n>> +1\n> \n> Thanks for review!\n> \n>>> /*\n>>> * Execute a command in a pipe and read the first line from it. The returned\n>>> * string is allocated, the caller is responsible for freeing.\n>>> */\n>>> char *\n>>> pipe_read_line(char *cmd)\n>> \n>> I think it's worth being explicit here that it's palloc'd, or malloc'd in frontend programs, rather than just \"allocated\". Like in pg_get_line.\n> \n> Good point, I'll make that happen before committing this.\n\nFixed, along with commit message wordsmithing in the attached. Unless objected\nto I'll go ahead with this version.\n\n--\nDaniel Gustafsson",
"msg_date": "Mon, 25 Sep 2023 09:55:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On Mon, Sep 25, 2023 at 2:55 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> Fixed, along with commit message wordsmithing in the attached. Unless objected\n> to I'll go ahead with this version.\n\n+1\n\n\n",
"msg_date": "Wed, 22 Nov 2023 15:46:31 +0700",
"msg_from": "John Naylor <johncnaylorls@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On 2023-Mar-07, Daniel Gustafsson wrote:\n\n> The attached POC diff replace fgets() with pg_get_line(), which may not be an\n> Ok way to cross the streams (it's clearly not a great fit), but as a POC it\n> provided a neater interface for reading one-off lines from a pipe IMO. Does\n> anyone else think this is worth fixing before too many callsites use it, or is\n> this another case of my fear of silent subtle truncation bugs? =)\n\nI think this is generally a good change.\n\nI think pipe_read_line should have a \"%m\" in the \"no data returned\"\nerror message. pg_read_line is careful to retain errno (and it was\nalready zero at start), so this should be okay ... or should we set\nerrno again to zero after popen(), even if it works?\n\n(I'm not sure I buy pg_read_line's use of perror in the backend case.\nMaybe this is only okay because the backend doesn't use this code?)\n\npg_get_line says caller can distinguish error from no-input-before-EOF\nwith ferror(), but pipe_read_line does no such thing. I wonder what\nhappens if an NFS mount containing a file being read disappears in the\nmiddle of reading it, for example ... will we think that we completed\nreading the file, rather than erroring out?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\" (Paul Thomas)\n\n\n",
"msg_date": "Wed, 22 Nov 2023 13:47:21 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 22 Nov 2023, at 13:47, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2023-Mar-07, Daniel Gustafsson wrote:\n> \n>> The attached POC diff replace fgets() with pg_get_line(), which may not be an\n>> Ok way to cross the streams (it's clearly not a great fit), but as a POC it\n>> provided a neater interface for reading one-off lines from a pipe IMO. Does\n>> anyone else think this is worth fixing before too many callsites use it, or is\n>> this another case of my fear of silent subtle truncation bugs? =)\n> \n> I think this is generally a good change.\n\nThanks for review!\n\n> I think pipe_read_line should have a \"%m\" in the \"no data returned\"\n> error message. \n\nGood point.\n\n> pg_read_line is careful to retain errno (and it was\n> already zero at start), so this should be okay ... or should we set\n> errno again to zero after popen(), even if it works?\n\nWhile it shouldn't be needed, reading manpages from a variety of systems\nindicates that popen() isn't entirely reliable when it comes to errno so I've\nadded an explicit errno=0 just to be certain.\n\n> (I'm not sure I buy pg_read_line's use of perror in the backend case.\n> Maybe this is only okay because the backend doesn't use this code?)\n\nIn EXEC_BACKEND builds the postmaster will use find_other_exec which in turn\ncalls pipe_read_line, so there is a possibility. I agree it's proabably not a\ngood idea, I'll have a look at it separately and will raise on a new thread.\n\n> pg_get_line says caller can distinguish error from no-input-before-EOF\n> with ferror(), but pipe_read_line does no such thing. I wonder what\n> happens if an NFS mount containing a file being read disappears in the\n> middle of reading it, for example ... will we think that we completed\n> reading the file, rather than erroring out?\n\nInteresting, that's an omission which should be fixed. 
I notice there are a\nnumber of callsites using pg_get_line which skip validating with ferror(), I'll\nhave a look at those next (posting findings to a new thread).\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 24 Nov 2023 11:08:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "The attached v5 is a rebase with no new changes just to get a fresh run in the\nCFBot before pushing. All review comments have been addressed and the patch\nhas been Ready for Committer for some time, just didn't have time to get to it\nin the last CF.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 9 Feb 2024 11:40:41 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On 22.11.23 13:47, Alvaro Herrera wrote:\n> On 2023-Mar-07, Daniel Gustafsson wrote:\n> \n>> The attached POC diff replace fgets() with pg_get_line(), which may not be an\n>> Ok way to cross the streams (it's clearly not a great fit), but as a POC it\n>> provided a neater interface for reading one-off lines from a pipe IMO. Does\n>> anyone else think this is worth fixing before too many callsites use it, or is\n>> this another case of my fear of silent subtle truncation bugs? =)\n> \n> I think this is generally a good change.\n> \n> I think pipe_read_line should have a \"%m\" in the \"no data returned\"\n> error message. pg_read_line is careful to retain errno (and it was\n> already zero at start), so this should be okay ... or should we set\n> errno again to zero after popen(), even if it works?\n\nIs this correct? The code now looks like this:\n\n line = pg_get_line(pipe_cmd, NULL);\n\n if (line == NULL)\n {\n if (ferror(pipe_cmd))\n log_error(errcode_for_file_access(),\n _(\"could not read from command \\\"%s\\\": %m\"), cmd);\n else\n log_error(errcode_for_file_access(),\n _(\"no data was returned by command \\\"%s\\\": %m\"), \ncmd);\n }\n\nWe already handle the case where an error happened in the first branch, \nso there cannot be an error set in the second branch (unless something \nnonobvious is going on?).\n\nIt seems to me that if the command being run just happens to print \nnothing but is otherwise successful, this would print a bogus error code \n(or \"Success\")?\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 10:07:27 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 6 Mar 2024, at 10:07, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 22.11.23 13:47, Alvaro Herrera wrote:\n>> On 2023-Mar-07, Daniel Gustafsson wrote:\n>>> The attached POC diff replace fgets() with pg_get_line(), which may not be an\n>>> Ok way to cross the streams (it's clearly not a great fit), but as a POC it\n>>> provided a neater interface for reading one-off lines from a pipe IMO. Does\n>>> anyone else think this is worth fixing before too many callsites use it, or is\n>>> this another case of my fear of silent subtle truncation bugs? =)\n>> I think this is generally a good change.\n>> I think pipe_read_line should have a \"%m\" in the \"no data returned\"\n>> error message. pg_read_line is careful to retain errno (and it was\n>> already zero at start), so this should be okay ... or should we set\n>> errno again to zero after popen(), even if it works?\n> \n> Is this correct? The code now looks like this:\n> \n> line = pg_get_line(pipe_cmd, NULL);\n> \n> if (line == NULL)\n> {\n> if (ferror(pipe_cmd))\n> log_error(errcode_for_file_access(),\n> _(\"could not read from command \\\"%s\\\": %m\"), cmd);\n> else\n> log_error(errcode_for_file_access(),\n> _(\"no data was returned by command \\\"%s\\\": %m\"), cmd);\n> }\n> \n> We already handle the case where an error happened in the first branch, so there cannot be an error set in the second branch (unless something nonobvious is going on?).\n> \n> It seems to me that if the command being run just happens to print nothing but is otherwise successful, this would print a bogus error code (or \"Success\")?\n\nGood catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA. I'm\nnot convinced that a function to read from a pipe should consider not reading\nanything successful by default, output is sort expected here. We could add a\nflag parameter to use for signalling that no data is fine though as per the\nattached (as of yet untested) diff?\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 6 Mar 2024 10:54:28 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On 2024-Mar-06, Daniel Gustafsson wrote:\n\n> Good catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA. I'm\n> not convinced that a function to read from a pipe should consider not reading\n> anything successful by default, output is sort expected here. We could add a\n> flag parameter to use for signalling that no data is fine though as per the\n> attached (as of yet untested) diff?\n\nI think adding dead code is not a great plan, particularly if it's hairy\nenough that we need to very carefully dissect what happens in error\ncases. IMO if and when somebody has a need for an empty return string\nbeing acceptable, they can add it then.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 6 Mar 2024 11:46:29 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 6 Mar 2024, at 11:46, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2024-Mar-06, Daniel Gustafsson wrote:\n> \n>> Good catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA. I'm\n>> not convinced that a function to read from a pipe should consider not reading\n>> anything successful by default, output is sort expected here. We could add a\n>> flag parameter to use for signalling that no data is fine though as per the\n>> attached (as of yet untested) diff?\n> \n> I think adding dead code is not a great plan, particularly if it's hairy\n> enough that we need to very carefully dissect what happens in error\n> cases. IMO if and when somebody has a need for an empty return string\n> being acceptable, they can add it then.\n\nI agree with that, there are no callers today and I can't imagine one off the\ncuff. The change to use the appropriate errcode still applies though.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 6 Mar 2024 11:49:00 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "On 06.03.24 10:54, Daniel Gustafsson wrote:\n>> On 6 Mar 2024, at 10:07, Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> On 22.11.23 13:47, Alvaro Herrera wrote:\n>>> On 2023-Mar-07, Daniel Gustafsson wrote:\n>>>> The attached POC diff replace fgets() with pg_get_line(), which may not be an\n>>>> Ok way to cross the streams (it's clearly not a great fit), but as a POC it\n>>>> provided a neater interface for reading one-off lines from a pipe IMO. Does\n>>>> anyone else think this is worth fixing before too many callsites use it, or is\n>>>> this another case of my fear of silent subtle truncation bugs? =)\n>>> I think this is generally a good change.\n>>> I think pipe_read_line should have a \"%m\" in the \"no data returned\"\n>>> error message. pg_read_line is careful to retain errno (and it was\n>>> already zero at start), so this should be okay ... or should we set\n>>> errno again to zero after popen(), even if it works?\n>>\n>> Is this correct? The code now looks like this:\n>>\n>> line = pg_get_line(pipe_cmd, NULL);\n>>\n>> if (line == NULL)\n>> {\n>> if (ferror(pipe_cmd))\n>> log_error(errcode_for_file_access(),\n>> _(\"could not read from command \\\"%s\\\": %m\"), cmd);\n>> else\n>> log_error(errcode_for_file_access(),\n>> _(\"no data was returned by command \\\"%s\\\": %m\"), cmd);\n>> }\n>>\n>> We already handle the case where an error happened in the first branch, so there cannot be an error set in the second branch (unless something nonobvious is going on?).\n>>\n>> It seems to me that if the command being run just happens to print nothing but is otherwise successful, this would print a bogus error code (or \"Success\")?\n> \n> Good catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA.\n\nAlso it shouldn't print %m, was my point.\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 18:13:47 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 8 Mar 2024, at 18:13, Peter Eisentraut <peter@eisentraut.org> wrote:\n\n>> Good catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA.\n> \n> Also it shouldn't print %m, was my point.\n\n\nAbsolutely, I removed that in the patch upthread, it was clearly wrong.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 8 Mar 2024 19:38:32 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
},
{
"msg_contents": "> On 8 Mar 2024, at 19:38, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 8 Mar 2024, at 18:13, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n>>> Good catch, that's an incorrect copy/paste, it should use ERRCODE_NO_DATA.\n>> \n>> Also it shouldn't print %m, was my point.\n> \n> Absolutely, I removed that in the patch upthread, it was clearly wrong.\n\nPushed the fix for the incorrect logline and errcode.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Sat, 9 Mar 2024 00:06:15 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: pipe_read_line for reading arbitrary strings"
}
] |
[
{
"msg_contents": "PGDOCS - Replica Identity quotes\n\nHi,\n\nHere are some trivial quote changes to a paragraph describing REPLICA IDENTITY.\n\nThese changes were previously made in another ongoing R.I. patch\nv28-0001 [1], but it was decided that since they are not strictly\nrelated to that patch they should done separately.\n\n\n======\nlogical-replication.sgml\n\nSection 31.1 Publication\n\nA published table must have a “replica identity” configured in order\nto be able to replicate UPDATE and DELETE operations, so that\nappropriate rows to update or delete can be identified on the\nsubscriber side. By default, this is the primary key, if there is one.\nAnother unique index (with certain additional requirements) can also\nbe set to be the replica identity. If the table does not have any\nsuitable key, then it can be set to replica identity “full”, which\nmeans the entire row becomes the key. This, however, is very\ninefficient and should only be used as a fallback if no other solution\nis possible. If a replica identity other than “full” is set on the\npublisher side, a replica identity comprising the same or fewer\ncolumns must also be set on the subscriber side. See REPLICA IDENTITY\nfor details on how to set the replica identity. If a table without a\nreplica identity is added to a publication that replicates UPDATE or\nDELETE operations then subsequent UPDATE or DELETE operations will\ncause an error on the publisher. INSERT operations can proceed\nregardless of any replica identity.\n\n~~\n\nSuggested changes:\n\n1.\nThe quoted \"replica identity\" should not be quoted -- This is the\nfirst time this term is used on this page so I think it should be\nusing <firstterm> SGML tag, just the same as how\n<firstterm>publication</firstterm> looks at the top of this section.\n\n2.\nThe quoted \"full\" should also not be quoted. 
Replicate identities are\nnot specified using text string \"full\" - they are specified as FULL\n(see [2]), so IMO these instances should be changed to\n<literal>FULL</full> to eliminate that ambiguity.\n\n~~~\n\nPSA patch v1 which implements the above changes.\n\n------\n[1] https://www.postgresql.org/message-id/CAA4eK1J8R-qS97cu27sF2%3DqzjhuQNkv%2BZvgaTzFv7rs-LA4c2w%40mail.gmail.com\n[2] https://www.postgresql.org/docs/current/sql-altertable.html#SQL-ALTERTABLE-REPLICA-IDENTITY\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 8 Mar 2023 09:26:29 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "A rebase was needed due to the recent REPLICA IDENTITY push [1].\n\nPSA v2.\n\n------\n[1] https://github.com/postgres/postgres/commit/89e46da5e511a6970e26a020f265c9fb4b72b1d2\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 17 Mar 2023 10:46:23 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On 2023-03-16 4:46 p.m., Peter Smith wrote:\n> A rebase was needed due to the recent REPLICA IDENTITY push [1].\n>\n> PSA v2.\n>\n> <para>\n> - A published table must have a <quote>replica identity</quote> configured in\n> + A published table must have a <firstterm>replica identity</firstterm> configured in\n+1\n> order to be able to replicate <command>UPDATE</command>\n> and <command>DELETE</command> operations, so that appropriate rows to\n> update or delete can be identified on the subscriber side. By default,\n> this is the primary key, if there is one. Another unique index (with\n> certain additional requirements) can also be set to be the replica\n> identity. If the table does not have any suitable key, then it can be set\n> - to replica identity <quote>full</quote>, which means the entire row becomes\n> - the key. When replica identity <quote>full</quote> is specified,\n> + to <literal>REPLICA IDENTITY FULL</literal>, which means the entire row becomes\n> + the key. When <literal>REPLICA IDENTITY FULL</literal> is specified,\n> indexes can be used on the subscriber side for searching the rows. Candidate\n> indexes must be btree, non-partial, and have at least one column reference\n> (i.e. cannot consist of only expressions). These restrictions\n> on the non-unique index properties adhere to some of the restrictions that\n> are enforced for primary keys. If there are no such suitable indexes,\n> the search on the subscriber side can be very inefficient, therefore\n> - replica identity <quote>full</quote> should only be used as a\n> + <literal>REPLICA IDENTITY FULL</literal> should only be used as a\n> fallback if no other solution is possible. If a replica identity other\nIMO, it would be better just change \"full\" to \"FULL\". 
On one side, it \ncan emphasize that \"FULL\" is one of the specific values (DEFAULT | USING \nINDEX index_name | FULL | NOTHING); on the other side, it leaves \n\"replica identity\" in lowercase to be more consistent with the \nterminology used in this entire paragraph.\n> - than <quote>full</quote> is set on the publisher side, a replica identity\n> + than <literal>FULL</literal> is set on the publisher side, a replica identity\n+1\n> comprising the same or fewer columns must also be set on the subscriber\n> side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n> how to set the replica identity. If a table without a replica identity is\n\nDavid\n\n\n\n",
"msg_date": "Fri, 5 May 2023 12:28:16 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On Sat, May 6, 2023 at 5:28 AM David Zhang <david.zhang@highgo.ca> wrote:\n>\n> On 2023-03-16 4:46 p.m., Peter Smith wrote:\n> > A rebase was needed due to the recent REPLICA IDENTITY push [1].\n> >\n> > PSA v2.\n> >\n> > <para>\n> > - A published table must have a <quote>replica identity</quote> configured in\n> > + A published table must have a <firstterm>replica identity</firstterm> configured in\n> +1\n> > order to be able to replicate <command>UPDATE</command>\n> > and <command>DELETE</command> operations, so that appropriate rows to\n> > update or delete can be identified on the subscriber side. By default,\n> > this is the primary key, if there is one. Another unique index (with\n> > certain additional requirements) can also be set to be the replica\n> > identity. If the table does not have any suitable key, then it can be set\n> > - to replica identity <quote>full</quote>, which means the entire row becomes\n> > - the key. When replica identity <quote>full</quote> is specified,\n> > + to <literal>REPLICA IDENTITY FULL</literal>, which means the entire row becomes\n> > + the key. When <literal>REPLICA IDENTITY FULL</literal> is specified,\n> > indexes can be used on the subscriber side for searching the rows. Candidate\n> > indexes must be btree, non-partial, and have at least one column reference\n> > (i.e. cannot consist of only expressions). These restrictions\n> > on the non-unique index properties adhere to some of the restrictions that\n> > are enforced for primary keys. If there are no such suitable indexes,\n> > the search on the subscriber side can be very inefficient, therefore\n> > - replica identity <quote>full</quote> should only be used as a\n> > + <literal>REPLICA IDENTITY FULL</literal> should only be used as a\n> > fallback if no other solution is possible. If a replica identity other\n> IMO, it would be better just change \"full\" to \"FULL\". 
On one side, it\n> can emphasize that \"FULL\" is one of the specific values (DEFAULT | USING\n> INDEX index_name | FULL | NOTHING); on the other side, it leaves\n> \"replica identity\" in lowercase to be more consistent with the\n> terminology used in this entire paragraph.\n> > - than <quote>full</quote> is set on the publisher side, a replica identity\n> > + than <literal>FULL</literal> is set on the publisher side, a replica identity\n> +1\n> > comprising the same or fewer columns must also be set on the subscriber\n> > side. See <xref linkend=\"sql-altertable-replica-identity\"/> for details on\n> > how to set the replica identity. If a table without a replica identity is\n>\n\nThanks for giving some feedback on my patch.\n\nPSA v3 which is changed per your suggestion.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 8 May 2023 10:29:50 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On Mon, May 08, 2023 at 10:29:50AM +1000, Peter Smith wrote:\n> Thanks for giving some feedback on my patch.\n\nLooks OK.\n\nWhile on it, looking at logical-replication.sgml, it seems to me that\nthese two are also incorrect, and we should use <literal> markups:\nimplemented by <quote>walsender</quote> and <quote>apply</quote>\n--\nMichael",
"msg_date": "Mon, 8 May 2023 12:09:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On Mon, May 8, 2023 at 1:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 08, 2023 at 10:29:50AM +1000, Peter Smith wrote:\n> > Thanks for giving some feedback on my patch.\n>\n> Looks OK.\n>\n> While on it, looking at logical-replication.sgml, it seems to me that\n> these two are also incorrect, and we should use <literal> markups:\n> implemented by <quote>walsender</quote> and <quote>apply</quote>\n> --\n\nI agree. Do you want me to make a new v4 patch to also do that, or\nwill you handle them in passing?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 8 May 2023 13:57:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On Mon, May 08, 2023 at 01:57:33PM +1000, Peter Smith wrote:\n> I agree. Do you want me to make a new v4 patch to also do that, or\n> will you handle them in passing?\n\nI'll just go handle them on the way, no need to send an updated\npatch.\n--\nMichael",
"msg_date": "Mon, 8 May 2023 13:05:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
},
{
"msg_contents": "On Mon, May 8, 2023 at 2:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 08, 2023 at 01:57:33PM +1000, Peter Smith wrote:\n> > I agree. Do you want me to make a new v4 patch to also do that, or\n> > will you handle them in passing?\n>\n> I'll just go handle them on the way, no need to send an updated\n> patch.\n\nThanks for pushing this yesterday, and for handling the other quotes.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 9 May 2023 08:15:46 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: PGDOCS - Replica Identity quotes"
}
] |
[
{
"msg_contents": "I noticed that several of the List functions do simple linear searches that\ncan be optimized with SIMD intrinsics (as was done for XidInMVCCSnapshot in\n37a6e5d). The following table shows the time spent iterating over a list\nof n elements (via list_member_int) one billion times on my x86 laptop.\n\n n | head (ms) | patched (ms) \n ------+-----------+--------------\n 2 | 3884 | 3001\n 4 | 5506 | 4092\n 8 | 6209 | 3026\n 16 | 8797 | 4458\n 32 | 25051 | 7032\n 64 | 37611 | 12763\n 128 | 61886 | 22770\n 256 | 111170 | 59885\n 512 | 209612 | 103378\n 1024 | 407462 | 189484\n\nI've attached a work-in-progress patch that implements these optimizations\nfor both x86 and arm, and I will register this in the July commitfest. I'm\nposting this a little early in order to gauge interest. Presumably you\nshouldn't be using a List if you have many elements that must be routinely\nsearched, but it might be nice to speed up these functions, anyway. This\nwas mostly a fun weekend project, and I don't presently have any concrete\nexamples of workloads where this might help.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Mar 2023 16:25:02 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On Wed, 8 Mar 2023 at 13:25, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I've attached a work-in-progress patch that implements these optimizations\n> for both x86 and arm, and I will register this in the July commitfest. I'm\n> posting this a little early in order to gauge interest.\n\nInteresting and quite impressive performance numbers.\n\n From having a quick glance at the patch, it looks like you'll need to\ntake some extra time to make it work on 32-bit builds.\n\nDavid\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:54:15 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On Wed, Mar 08, 2023 at 01:54:15PM +1300, David Rowley wrote:\n> Interesting and quite impressive performance numbers.\n\nThanks for taking a look.\n\n> From having a quick glance at the patch, it looks like you'll need to\n> take some extra time to make it work on 32-bit builds.\n\nAt the moment, the support for SIMD intrinsics in Postgres is limited to\n64-bit (simd.h has the details). But yes, if we want to make this work for\n32-bit builds, additional work is required.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 20:56:58 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "cfbot's Windows build wasn't happy with a couple of casts. I applied a\nfix similar to c6a43c2 in v2. The patch is still very much a work in\nprogress.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Mar 2023 10:58:09 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nHello, \r\n\r\nAdding some review comments:\r\n\r\n1. In list_member_ptr, will it be okay to bring `const ListCell *cell` from \r\n#ifdef USE_NO_SIMD\r\n\tconst ListCell *cell;\r\n#endif\r\nto #else like as mentioned below? This will make visual separation between #if cases more cleaner\r\n#else\r\n const ListCell *cell;\r\n\r\n\tforeach(cell, list)\r\n\t{\r\n\t\tif (lfirst(cell) == datum)\r\n\t\t\treturn true;\r\n\t}\r\n\r\n\treturn false;\r\n\r\n#endif\r\n\r\n2. Lots of duplicated\r\nif (list == NIL) checks before calling list_member_inline_internal(list, datum)\r\nCan we not add this check in list_member_inline_internal itself?\r\n3. if (cell)\r\n\t\treturn list_delete_cell(list, cell) in list_delete_int/oid\r\ncan we change if (cell) to if (cell != NULL) as used elsewhere?\r\n4. list_member_inline_interal_idx , there is typo in name.\r\n5. list_member_inline_interal_idx and list_member_inline_internal are practically same with almost 90+ % duplication.\r\ncan we not refactor both into one and allow cell or true/false as return? Although I see usage of const ListCell so this might be problematic.\r\n6. Loop for (i = 0; i < tail_idx; i += nelem_per_iteration) for Vector32 in list.c looks duplicated from pg_lfind32 (in pg_lfind.h), can we not reuse that?\r\n7. Is it possible to add a benchmark which shows improvement against HEAD ?\r\n\r\nRegards,\r\nAnkit\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Sat, 11 Mar 2023 09:41:18 +0000",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "\n > 7. Is it possible to add a benchmark which shows improvement against \nHEAD ?\n\n\nPlease ignore this from my earlier mail, I just saw stats now.\n\nThanks,\n\nAnkit\n\n\n\n\n",
"msg_date": "Sat, 11 Mar 2023 16:28:55 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Sat, Mar 11, 2023 at 09:41:18AM +0000, Ankit Kumar Pandey wrote:\n> 1. In list_member_ptr, will it be okay to bring `const ListCell *cell` from \n> #ifdef USE_NO_SIMD\n> \tconst ListCell *cell;\n> #endif\n> to #else like as mentioned below? This will make visual separation between #if cases more cleaner\n\nI would expect to see -Wdeclaration-after-statement warnings if we did\nthis.\n\n> 2. Lots of duplicated\n> if (list == NIL) checks before calling list_member_inline_internal(list, datum)\n> Can we not add this check in list_member_inline_internal itself?\n\nWe probably could. I only extracted the NIL checks to simplify the code in\nlist_member_inline_internal() a bit.\n\n> 3. if (cell)\n> \t\treturn list_delete_cell(list, cell) in list_delete_int/oid\n> can we change if (cell) to if (cell != NULL) as used elsewhere?\n\nSure.\n\n> 4. list_member_inline_interal_idx , there is typo in name.\n\nWill fix.\n\n> 5. list_member_inline_interal_idx and list_member_inline_internal are practically same with almost 90+ % duplication.\n> can we not refactor both into one and allow cell or true/false as return? Although I see usage of const ListCell so this might be problematic.\n\nThe idea was to skip finding the exact match if all we care about is\nwhether the element exists. This micro-optimization might be negligible,\nin which case we could use list_member_inline_internal_idx() for both\ncases.\n\n> 6. Loop for (i = 0; i < tail_idx; i += nelem_per_iteration) for Vector32 in list.c looks duplicated from pg_lfind32 (in pg_lfind.h), can we not reuse that?\n\nThe list.c version is slightly different because we need to disregard any\nmatches that we find in between the data. 
For example, in an integer List,\nthe integer will take up 4 bytes of the 8-byte ListCell (for 64-bit\nplatforms).\n\n\ttypedef union ListCell\n\t{\n\t\tvoid\t *ptr_value;\n\t\tint\t\t\tint_value;\n\t\tOid\t\t\toid_value;\n\t\tTransactionId xid_value;\n\t} ListCell;\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 14:40:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "Agree with your points Nathan. Just a headup.\n\n> On 14/03/23 03:10, Nathan Bossart wrote:\n> On Sat, Mar 11, 2023 at 09:41:18AM +0000, Ankit Kumar Pandey wrote:\n>> 1. In list_member_ptr, will it be okay to bring `const ListCell *cell` from \n>> #ifdef USE_NO_SIMD\n>> \tconst ListCell *cell;\n>> #endif\n>> to #else like as mentioned below? This will make visual separation between #if cases more cleaner\n> I would expect to see -Wdeclaration-after-statement warnings if we did\n> this.\n\nThis worked fine for me, no warnings on gcc 12.2.0. Not a concern though.\n\nThanks,\n\nAnkit\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 19:31:46 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 07:31:46PM +0530, Ankit Kumar Pandey wrote:\n>> On 14/03/23 03:10, Nathan Bossart wrote:\n>> On Sat, Mar 11, 2023 at 09:41:18AM +0000, Ankit Kumar Pandey wrote:\n>> > 1. In list_member_ptr, will it be okay to bring `const ListCell\n>> > *cell` from #ifdef USE_NO_SIMD\n>> > \tconst ListCell *cell;\n>> > #endif\n>> > to #else like as mentioned below? This will make visual separation between #if cases more cleaner\n>> I would expect to see -Wdeclaration-after-statement warnings if we did\n>> this.\n> \n> This worked fine for me, no warnings on gcc 12.2.0. Not a concern though.\n\nDid you try building without SIMD support? This is what I see:\n\n\tlist.c: In function ‘list_member_ptr’:\n\tlist.c:697:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n\t 697 | const ListCell *cell;\n\t | ^~~~~\n\nIf your build doesn't have USE_NO_SIMD defined, this warning won't appear\nbecause the code in question will be compiled out.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:23:19 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "\nOn 15/03/23 21:53, Nathan Bossart wrote:\n\n> Did you try building without SIMD support? This is what I see:\n\n>\tlist.c: In function ‘list_member_ptr’:\n>\tlist.c:697:2: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n>\t 697 | const ListCell *cell;\n>\t | ^~~~~\n\n> If your build doesn't have USE_NO_SIMD defined, this warning won't appear\n> because the code in question will be compiled out.\n\nMy mistake, I tried with USE_NO_SIMD defined and it showed the warning. Sorry for the noise.\n\nRegards,\nAnkit\n\n\n\n",
"msg_date": "Wed, 15 Mar 2023 22:18:22 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "Here is a new patch set. I've split it into two patches: one for the\n64-bit functions, and one for the 32-bit functions. I've also added tests\nfor pg_lfind64/pg_lfind64_idx and deduplicated the code a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Apr 2023 13:42:50 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 7:25 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> was mostly a fun weekend project, and I don't presently have any concrete\n> examples of workloads where this might help.\n\nIt seems like that should be demonstrated before seriously considering\nthis, like a profile where the relevant list functions show up.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Mar 8, 2023 at 7:25 AM Nathan Bossart <nathandbossart@gmail.com> wrote:>> was mostly a fun weekend project, and I don't presently have any concrete> examples of workloads where this might help.It seems like that should be demonstrated before seriously considering this, like a profile where the relevant list functions show up.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 21 Apr 2023 13:50:34 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On Fri, Apr 21, 2023 at 01:50:34PM +0700, John Naylor wrote:\n> On Wed, Mar 8, 2023 at 7:25 AM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> was mostly a fun weekend project, and I don't presently have any concrete\n>> examples of workloads where this might help.\n> \n> It seems like that should be demonstrated before seriously considering\n> this, like a profile where the relevant list functions show up.\n\nAgreed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 21 Apr 2023 13:33:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
},
{
"msg_contents": "On 21/04/2023 23:33, Nathan Bossart wrote:\n> On Fri, Apr 21, 2023 at 01:50:34PM +0700, John Naylor wrote:\n>> On Wed, Mar 8, 2023 at 7:25 AM Nathan Bossart <nathandbossart@gmail.com>\n>> wrote:\n>>> was mostly a fun weekend project, and I don't presently have any concrete\n>>> examples of workloads where this might help.\n>>\n>> It seems like that should be demonstrated before seriously considering\n>> this, like a profile where the relevant list functions show up.\n> \n> Agreed.\n\nGrepping for \"tlist_member\" and \"list_delete_ptr\", I don't see any \ncallers in hot codepaths where this could make a noticeable difference. \nSo I've marked this as Returned with Feedback in the commitfest.\n\n> I noticed that several of the List functions do simple linear searches that\n> can be optimized with SIMD intrinsics (as was done for XidInMVCCSnapshot in\n> 37a6e5d). The following table shows the time spent iterating over a list\n> of n elements (via list_member_int) one billion times on my x86 laptop.\n> \n> \n> n | head (ms) | patched (ms) \n> ------+-----------+--------------\n> 2 | 3884 | 3001\n> 4 | 5506 | 4092\n> 8 | 6209 | 3026\n> 16 | 8797 | 4458\n> 32 | 25051 | 7032\n> 64 | 37611 | 12763\n> 128 | 61886 | 22770\n> 256 | 111170 | 59885\n> 512 | 209612 | 103378\n> 1024 | 407462 | 189484\n\nI'm surprised to see an improvement with n=2 and n=2. AFAICS, the \nvectorization only kicks in when n >= 8.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 13:50:59 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: optimize several list functions with SIMD intrinsics"
}
] |
[
{
"msg_contents": "Allow tailoring of ICU locales with custom rules\n\nThis exposes the ICU facility to add custom collation rules to a\nstandard collation.\n\nNew options are added to CREATE COLLATION, CREATE DATABASE, createdb,\nand initdb to set the rules.\n\nReviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>\nReviewed-by: Daniel Verite <daniel@manitou-mail.org>\nDiscussion: https://www.postgresql.org/message-id/flat/821c71a4-6ef0-d366-9acf-bb8e367f739f@enterprisedb.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/30a53b792959b36f07200dae246067b3adbcc0b9\n\nModified Files\n--------------\ndoc/src/sgml/catalogs.sgml | 18 +++++\ndoc/src/sgml/ref/create_collation.sgml | 22 ++++++\ndoc/src/sgml/ref/create_database.sgml | 14 ++++\ndoc/src/sgml/ref/createdb.sgml | 10 +++\ndoc/src/sgml/ref/initdb.sgml | 10 +++\nsrc/backend/catalog/pg_collation.c | 5 ++\nsrc/backend/commands/collationcmds.c | 23 +++++-\nsrc/backend/commands/dbcommands.c | 51 ++++++++++++-\nsrc/backend/utils/adt/pg_locale.c | 41 +++++++++-\nsrc/backend/utils/init/postinit.c | 11 ++-\nsrc/bin/initdb/initdb.c | 15 +++-\nsrc/bin/pg_dump/pg_dump.c | 37 +++++++++\nsrc/bin/psql/describe.c | 100 ++++++++++++++++---------\nsrc/bin/scripts/createdb.c | 11 +++\nsrc/include/catalog/catversion.h | 2 +-\nsrc/include/catalog/pg_collation.h | 2 +\nsrc/include/catalog/pg_database.dat | 2 +-\nsrc/include/catalog/pg_database.h | 3 +\nsrc/include/utils/pg_locale.h | 1 +\nsrc/test/regress/expected/collate.icu.utf8.out | 30 ++++++++\nsrc/test/regress/expected/psql.out | 18 ++---\nsrc/test/regress/sql/collate.icu.utf8.sql | 13 ++++\n22 files changed, 380 insertions(+), 59 deletions(-)",
"msg_date": "Wed, 08 Mar 2023 16:03:22 +0000",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Wed, 2023-03-08 at 16:03 +0000, Peter Eisentraut wrote:\n> Allow tailoring of ICU locales with custom rules\n\nLate review:\n\n* Should throw error when provider != icu and rules != NULL\n\n* Explain what the example means. By itself, users might get confused\nwondering why someone would want to do that.\n\n* Also consider a more practical example?\n\n* It appears rules IS NULL behaves differently from rules=''. Is that\ndesired? For instance:\n create collation c1(provider=icu,\n locale='und-u-ka-shifted-ks-level1',\n deterministic=false);\n create collation c2(provider=icu,\n locale='und-u-ka-shifted-ks-level1',\n rules='',\n deterministic=false);\n select 'a b' collate c1 = 'ab' collate c1; -- true\n select 'a b' collate c2 = 'ab' collate c2; -- false\n\n* Can you document the interaction between locale keywords\n(\"@colStrength=primary\") and a rule like '[strength 2]'?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 08 Mar 2023 12:57:29 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 08.03.23 21:57, Jeff Davis wrote:\n> On Wed, 2023-03-08 at 16:03 +0000, Peter Eisentraut wrote:\n>> Allow tailoring of ICU locales with custom rules\n> \n> Late review:\n> \n> * Should throw error when provider != icu and rules != NULL\n\nI have fixed that.\n\n> * Explain what the example means. By itself, users might get confused\n> wondering why someone would want to do that.\n> \n> * Also consider a more practical example?\n\nI have added a more practical example with explanation.\n\n> * It appears rules IS NULL behaves differently from rules=''. Is that\n> desired? For instance:\n> create collation c1(provider=icu,\n> locale='und-u-ka-shifted-ks-level1',\n> deterministic=false);\n> create collation c2(provider=icu,\n> locale='und-u-ka-shifted-ks-level1',\n> rules='',\n> deterministic=false);\n> select 'a b' collate c1 = 'ab' collate c1; -- true\n> select 'a b' collate c2 = 'ab' collate c2; -- false\n\nI'm puzzled by this. The general behavior is, extract the rules of the \noriginal locale, append the custom rules, use that. If the custom rules \nare the empty string, that should match using the original rules \nuntouched. Needs further investigation.\n\n> * Can you document the interaction between locale keywords\n> (\"@colStrength=primary\") and a rule like '[strength 2]'?\n\nI'll look into that.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:54:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 3:24 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 08.03.23 21:57, Jeff Davis wrote:\n>\n> > * It appears rules IS NULL behaves differently from rules=''. Is that\n> > desired? For instance:\n> > create collation c1(provider=icu,\n> > locale='und-u-ka-shifted-ks-level1',\n> > deterministic=false);\n> > create collation c2(provider=icu,\n> > locale='und-u-ka-shifted-ks-level1',\n> > rules='',\n> > deterministic=false);\n> > select 'a b' collate c1 = 'ab' collate c1; -- true\n> > select 'a b' collate c2 = 'ab' collate c2; -- false\n>\n> I'm puzzled by this. The general behavior is, extract the rules of the\n> original locale, append the custom rules, use that. If the custom rules\n> are the empty string, that should match using the original rules\n> untouched. Needs further investigation.\n>\n> > * Can you document the interaction between locale keywords\n> > (\"@colStrength=primary\") and a rule like '[strength 2]'?\n>\n> I'll look into that.\n>\n\nThis thread is listed on PostgreSQL 16 Open Items list. This is a\ngentle reminder to see if there is a plan to move forward with respect\nto open points.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jul 2023 08:16:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 24.07.23 04:46, Amit Kapila wrote:\n> On Fri, Mar 10, 2023 at 3:24 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 08.03.23 21:57, Jeff Davis wrote:\n>>\n>>> * It appears rules IS NULL behaves differently from rules=''. Is that\n>>> desired? For instance:\n>>> create collation c1(provider=icu,\n>>> locale='und-u-ka-shifted-ks-level1',\n>>> deterministic=false);\n>>> create collation c2(provider=icu,\n>>> locale='und-u-ka-shifted-ks-level1',\n>>> rules='',\n>>> deterministic=false);\n>>> select 'a b' collate c1 = 'ab' collate c1; -- true\n>>> select 'a b' collate c2 = 'ab' collate c2; -- false\n>>\n>> I'm puzzled by this. The general behavior is, extract the rules of the\n>> original locale, append the custom rules, use that. If the custom rules\n>> are the empty string, that should match using the original rules\n>> untouched. Needs further investigation.\n>>\n>>> * Can you document the interaction between locale keywords\n>>> (\"@colStrength=primary\") and a rule like '[strength 2]'?\n>>\n>> I'll look into that.\n> \n> This thread is listed on PostgreSQL 16 Open Items list. This is a\n> gentle reminder to see if there is a plan to move forward with respect\n> to open points.\n\nI have investigated this. My assessment is that how PostgreSQL \ninterfaces with ICU is correct. Whether what ICU does is correct might \nbe debatable. I have filed a bug with ICU about this: \nhttps://unicode-org.atlassian.net/browse/ICU-22456 , but there is no \nresponse yet.\n\nYou can work around this by including the desired attributes in the \nrules string, for example\n\n create collation c3 (provider=icu,\n locale='und-u-ka-shifted-ks-level1',\n rules='[alternate shifted][strength 1]',\n deterministic=false);\n\nSo I don't think there is anything we need to do here for PostgreSQL 16.\n\n\n\n",
"msg_date": "Mon, 14 Aug 2023 10:34:42 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Mon, 2023-08-14 at 10:34 +0200, Peter Eisentraut wrote:\n> I have investigated this. My assessment is that how PostgreSQL \n> interfaces with ICU is correct. Whether what ICU does is correct\n> might \n> be debatable. I have filed a bug with ICU about this: \n> https://unicode-org.atlassian.net/browse/ICU-22456 , but there is no \n> response yet.\n\nIs everything other than the language and region simply discarded when\na rules string is present, or are some attributes preserved, or is\nthere some other nuance?\n\n> You can work around this by including the desired attributes in the \n> rules string, for example\n> \n> create collation c3 (provider=icu,\n> locale='und-u-ka-shifted-ks-level1',\n> rules='[alternate shifted][strength 1]',\n> deterministic=false);\n> \n> So I don't think there is anything we need to do here for PostgreSQL\n> 16.\n\nIs there some way we can warn a user that some attributes will be\ndiscarded, or improve the documentation? Letting the user figure this\nout for themselves doesn't seem right.\n\nAre you sure we want to allow rules for the database default collation\nin 16, or should we start with just allowing them in CREATE COLLATION\nand then expand to the database default collation later? I'm still a\nbit concerned about users getting too fancy with daticurules, and\nending up not being able to connect to their database anymore.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 10:25:29 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 10:55 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2023-08-14 at 10:34 +0200, Peter Eisentraut wrote:\n> > I have investigated this. My assessment is that how PostgreSQL\n> > interfaces with ICU is correct. Whether what ICU does is correct\n> > might\n> > be debatable. I have filed a bug with ICU about this:\n> > https://unicode-org.atlassian.net/browse/ICU-22456 , but there is no\n> > response yet.\n>\n> Is everything other than the language and region simply discarded when\n> a rules string is present, or are some attributes preserved, or is\n> there some other nuance?\n>\n> > You can work around this by including the desired attributes in the\n> > rules string, for example\n> >\n> > create collation c3 (provider=icu,\n> > locale='und-u-ka-shifted-ks-level1',\n> > rules='[alternate shifted][strength 1]',\n> > deterministic=false);\n> >\n> > So I don't think there is anything we need to do here for PostgreSQL\n> > 16.\n>\n> Is there some way we can warn a user that some attributes will be\n> discarded, or improve the documentation? Letting the user figure this\n> out for themselves doesn't seem right.\n>\n> Are you sure we want to allow rules for the database default collation\n> in 16, or should we start with just allowing them in CREATE COLLATION\n> and then expand to the database default collation later? I'm still a\n> bit concerned about users getting too fancy with daticurules, and\n> ending up not being able to connect to their database anymore.\n>\n\nThere is still an Open Item corresponding to this. Does anyone else\nwant to weigh in?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Sep 2023 18:35:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Allow tailoring of ICU locales with custom rules"
}
] |
[
{
"msg_contents": "Hi\n\nI try to write a safeguard check that ensures the expected extension\nversion for an extension library.\n\nSome like\n\nconst char *expected_extversion = \"2.5\";\n\n...\n\nextoid = getExtensionOfObject(ProcedureRelationId, fcinfo->flinfo->fn_oid));\nextversion = get_extension_version(extoid);\nif (strcmp(expected_extversion, extversion) != 0)\n elog(ERROR, \"extension \\\"%s\\\" needs \\\"ALTER EXTENSION %s UPDATE\\\",\n get_extension_name(extversion),\n get_extension_name(extversion)))\n\nCurrently the extension version is not simply readable - I need to read\ndirectly from the table.\n\nNotes, comments?\n\nRegards\n\nPavel\n\nHiI try to write a safeguard check that ensures the expected extension version for an extension library.Some likeconst char *expected_extversion = \"2.5\";...extoid = getExtensionOfObject(ProcedureRelationId, fcinfo->flinfo->fn_oid));extversion = get_extension_version(extoid);if (strcmp(expected_extversion, extversion) != 0) elog(ERROR, \"extension \\\"%s\\\" needs \\\"ALTER EXTENSION %s UPDATE\\\", get_extension_name(extversion), get_extension_name(extversion)))Currently the extension version is not simply readable - I need to read directly from the table.Notes, comments?RegardsPavel",
"msg_date": "Wed, 8 Mar 2023 17:58:58 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "proposal - get_extension_version function"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I try to write a safeguard check that ensures the expected extension\n> version for an extension library.\n\nThis is a bad idea. How will you do extension upgrades, if the new .so\nwon't run till you apply the extension upgrade script but the old .so\nmalfunctions as soon as you do? You need to make the C code as forgiving\nas possible, not as unforgiving as possible.\n\nIf you have C-level ABI changes you need to make, the usual fix is to\ninclude some sort of version number in the C name of each individual\nfunction you've changed, so that calls made with the old or the new SQL\ndefinition will be routed to the right place. There are multiple\nexamples of this in contrib/.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Mar 2023 13:49:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is a bad idea. How will you do extension upgrades, if the new .so\n> won't run till you apply the extension upgrade script but the old .so\n> malfunctions as soon as you do?\n\nWhich upgrade paths allow you to have an old .so with a new version\nnumber? I didn't realize that was an issue.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 11:04:28 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "st 8. 3. 2023 v 20:04 odesílatel Jacob Champion <jchampion@timescale.com>\nnapsal:\n\n> On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > This is a bad idea. How will you do extension upgrades, if the new .so\n> > won't run till you apply the extension upgrade script but the old .so\n> > malfunctions as soon as you do?\n>\n> Which upgrade paths allow you to have an old .so with a new version\n> number? I didn't realize that was an issue.\n>\n\ninstallation from rpm or deb packages\n\n\n\n> --Jacob\n>\n\nst 8. 3. 2023 v 20:04 odesílatel Jacob Champion <jchampion@timescale.com> napsal:On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This is a bad idea. How will you do extension upgrades, if the new .so\n> won't run till you apply the extension upgrade script but the old .so\n> malfunctions as soon as you do?\n\nWhich upgrade paths allow you to have an old .so with a new version\nnumber? I didn't realize that was an issue.installation from rpm or deb packages\n\n--Jacob",
"msg_date": "Wed, 8 Mar 2023 20:10:37 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This is a bad idea. How will you do extension upgrades, if the new .so\n>> won't run till you apply the extension upgrade script but the old .so\n>> malfunctions as soon as you do?\n\n> Which upgrade paths allow you to have an old .so with a new version\n> number? I didn't realize that was an issue.\n\nMore usually, it's the other way around: new .so but SQL objects not\nupgraded yet. That's typical in a pg_upgrade to a new major version,\nwhere the new installation may have a newer extension .so than the\nold one did. You can't run ALTER EXTENSION UPGRADE if the new .so\nrefuses to load with the old SQL objects ... which AFAICS is exactly\nwhat Pavel's sketch would do.\n\nIf you have old .so and new SQL objects, it's likely that at least\nsome of those new objects won't work --- but it's good to not break\nany more functionality than you have to. That's why I suggest\nmanaging the compatibility checks on a per-function level rather\nthan trying to have an overall version check.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Mar 2023 14:17:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "st 8. 3. 2023 v 19:49 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I try to write a safeguard check that ensures the expected extension\n> > version for an extension library.\n>\n> This is a bad idea. How will you do extension upgrades, if the new .so\n> won't run till you apply the extension upgrade script but the old .so\n> malfunctions as soon as you do? You need to make the C code as forgiving\n> as possible, not as unforgiving as possible.\n>\n\nThis method doesn't break updates. It allows any registration, just\ndoesn't allow execution with unsynced SQL API.\n\n\n>\n> If you have C-level ABI changes you need to make, the usual fix is to\n> include some sort of version number in the C name of each individual\n> function you've changed, so that calls made with the old or the new SQL\n> definition will be routed to the right place. There are multiple\n> examples of this in contrib/.\n>\n\nIn my extensions like plpgsql_check I don't want to promise compatible ABI.\nI support PostgreSQL 10 .. 16, and I really don't try to multiply code for\nany historic input/output.\n\n\n\n\n\n>\n> regards, tom lane\n>\n\nst 8. 3. 2023 v 19:49 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I try to write a safeguard check that ensures the expected extension\n> version for an extension library.\n\nThis is a bad idea. How will you do extension upgrades, if the new .so\nwon't run till you apply the extension upgrade script but the old .so\nmalfunctions as soon as you do? You need to make the C code as forgiving\nas possible, not as unforgiving as possible.This method doesn't break updates. It allows any registration, just doesn't allow execution with unsynced SQL API. \n\nIf you have C-level ABI changes you need to make, the usual fix is to\ninclude some sort of version number in the C name of each individual\nfunction you've changed, so that calls made with the old or the new SQL\ndefinition will be routed to the right place. There are multiple\nexamples of this in contrib/.In my extensions like plpgsql_check I don't want to promise compatible ABI. I support PostgreSQL 10 .. 16, and I really don't try to multiply code for any historic input/output. \n\n regards, tom lane",
"msg_date": "Wed, 8 Mar 2023 20:19:15 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "st 8. 3. 2023 v 20:17 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Jacob Champion <jchampion@timescale.com> writes:\n> > On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> This is a bad idea. How will you do extension upgrades, if the new .so\n> >> won't run till you apply the extension upgrade script but the old .so\n> >> malfunctions as soon as you do?\n>\n> > Which upgrade paths allow you to have an old .so with a new version\n> > number? I didn't realize that was an issue.\n>\n> More usually, it's the other way around: new .so but SQL objects not\n> upgraded yet. That's typical in a pg_upgrade to a new major version,\n> where the new installation may have a newer extension .so than the\n> old one did. You can't run ALTER EXTENSION UPGRADE if the new .so\n> refuses to load with the old SQL objects ... which AFAICS is exactly\n> what Pavel's sketch would do.\n>\n> If you have old .so and new SQL objects, it's likely that at least\n> some of those new objects won't work --- but it's good to not break\n> any more functionality than you have to. That's why I suggest\n> managing the compatibility checks on a per-function level rather\n> than trying to have an overall version check.\n>\n\nThere is agreement - I call this check from functions.\n\n\nhttps://github.com/okbob/plpgsql_check/commit/b0970ff319256207ffe5ba5f18b2a7476c7136f9\n\nRegards\n\nPavel\n\n\n> regards, tom lane\n>\n\nst 8. 3. 2023 v 20:17 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Jacob Champion <jchampion@timescale.com> writes:\n> On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This is a bad idea. How will you do extension upgrades, if the new .so\n>> won't run till you apply the extension upgrade script but the old .so\n>> malfunctions as soon as you do?\n\n> Which upgrade paths allow you to have an old .so with a new version\n> number? I didn't realize that was an issue.\n\nMore usually, it's the other way around: new .so but SQL objects not\nupgraded yet. That's typical in a pg_upgrade to a new major version,\nwhere the new installation may have a newer extension .so than the\nold one did. You can't run ALTER EXTENSION UPGRADE if the new .so\nrefuses to load with the old SQL objects ... which AFAICS is exactly\nwhat Pavel's sketch would do.\n\nIf you have old .so and new SQL objects, it's likely that at least\nsome of those new objects won't work --- but it's good to not break\nany more functionality than you have to. That's why I suggest\nmanaging the compatibility checks on a per-function level rather\nthan trying to have an overall version check.There is agreement - I call this check from functions. https://github.com/okbob/plpgsql_check/commit/b0970ff319256207ffe5ba5f18b2a7476c7136f9RegardsPavel\n\n regards, tom lane",
"msg_date": "Wed, 8 Mar 2023 20:22:19 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 11:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Jacob Champion <jchampion@timescale.com> writes:\n> > On Wed, Mar 8, 2023 at 10:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> This is a bad idea. How will you do extension upgrades, if the new .so\n> >> won't run till you apply the extension upgrade script but the old .so\n> >> malfunctions as soon as you do?\n>\n> > Which upgrade paths allow you to have an old .so with a new version\n> > number? I didn't realize that was an issue.\n>\n> More usually, it's the other way around: new .so but SQL objects not\n> upgraded yet. That's typical in a pg_upgrade to a new major version,\n> where the new installation may have a newer extension .so than the\n> old one did.\n\nThat's the opposite case though; I think the expectation of backwards\ncompatibility from C to SQL is very different from (infinite?)\nforwards compatibility from C to SQL.\n\n> If you have old .so and new SQL objects, it's likely that at least\n> some of those new objects won't work --- but it's good to not break\n> any more functionality than you have to.\n\nTo me it doesn't seem like a partial break is safer than refusing to\nexecute in the face of old-C-and-new-SQL -- assuming it's safe at all?\nA bailout seems pretty reasonable in that case.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:10:25 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 11:11 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> installation from rpm or deb packages\n\nRight, but I thought the safe order for a downgrade was to issue the\nSQL downgrade first (thus putting the system back into the\npost-upgrade state), and only then replacing the packages with prior\nversions.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:11:55 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 11:22 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> There is agreement - I call this check from functions.\n\nI think pg_auto_failover does this too, or at least used to.\n\nTimescale does strict compatibility checks as well. It's not entirely\ncomparable to your implementation, though.\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:18:00 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Wed, Mar 8, 2023 at 11:11 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>> installation from rpm or deb packages\n\n> Right, but I thought the safe order for a downgrade was to issue the\n> SQL downgrade first (thus putting the system back into the\n> post-upgrade state), and only then replacing the packages with prior\n> versions.\n\nPavel's proposed check would break that too. There's going to be some\ninterval where the SQL definitions are not in sync with the .so version,\nso you really want the .so to support at least two versions' worth of\nSQL objects.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Mar 2023 16:47:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 1:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Pavel's proposed check would break that too. There's going to be some\n> interval where the SQL definitions are not in sync with the .so version,\n> so you really want the .so to support at least two versions' worth of\n> SQL objects.\n\nI think we're in agreement that the extension must be able to load\nwith SQL version X and binary version X+1. (Pavel too, if I'm reading\nthe argument correctly -- the proposal is to gate execution paths, not\ninit time. And Pavel's not the only one implementing that today.)\n\nWhat I'm trying to pin down is the project's position on the reverse\n-- binary version X and SQL version X+1 -- because that seems\ngenerally unmaintainable, and I don't understand why an author would\npay that tax if they could just avoid it by bailing out entirely. (If\nan author wants to allow that, great, but does everyone have to?)\n\n--Jacob\n\n\n",
"msg_date": "Wed, 8 Mar 2023 14:26:44 -0800",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> What I'm trying to pin down is the project's position on the reverse\n> -- binary version X and SQL version X+1 -- because that seems\n> generally unmaintainable, and I don't understand why an author would\n> pay that tax if they could just avoid it by bailing out entirely. (If\n> an author wants to allow that, great, but does everyone have to?)\n\nHard to say. Our experience with the standard contrib modules is that\nit really isn't much additional trouble; but perhaps more-complex modules\nwould have more interdependencies between functions. In any case,\nI fail to see the need for basing things on a catalog lookup rather\nthan embedding API version numbers in relevant C symbols.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Mar 2023 17:43:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "st 8. 3. 2023 v 23:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Jacob Champion <jchampion@timescale.com> writes:\n> > What I'm trying to pin down is the project's position on the reverse\n> > -- binary version X and SQL version X+1 -- because that seems\n> > generally unmaintainable, and I don't understand why an author would\n> > pay that tax if they could just avoid it by bailing out entirely. (If\n> > an author wants to allow that, great, but does everyone have to?)\n>\n> Hard to say. Our experience with the standard contrib modules is that\n> it really isn't much additional trouble; but perhaps more-complex modules\n> would have more interdependencies between functions. In any case,\n> I fail to see the need for basing things on a catalog lookup rather\n> than embedding API version numbers in relevant C symbols.\n>\n\nHow can you check it? There is not any callback now.\n\nRegards\n\nPavel\n\n\n\n\n>\n> regards, tom lane\n>\n\nst 8. 3. 2023 v 23:43 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Jacob Champion <jchampion@timescale.com> writes:\n> What I'm trying to pin down is the project's position on the reverse\n> -- binary version X and SQL version X+1 -- because that seems\n> generally unmaintainable, and I don't understand why an author would\n> pay that tax if they could just avoid it by bailing out entirely. (If\n> an author wants to allow that, great, but does everyone have to?)\n\nHard to say. Our experience with the standard contrib modules is that\nit really isn't much additional trouble; but perhaps more-complex modules\nwould have more interdependencies between functions. In any case,\nI fail to see the need for basing things on a catalog lookup rather\nthan embedding API version numbers in relevant C symbols.How can you check it? There is not any callback now.RegardsPavel \n\n regards, tom lane",
"msg_date": "Thu, 9 Mar 2023 05:35:20 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - get_extension_version function"
},
{
"msg_contents": "Hi\n\n\nst 8. 3. 2023 v 17:58 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I try to write a safeguard check that ensures the expected extension\n> version for an extension library.\n>\n> Some like\n>\n> const char *expected_extversion = \"2.5\";\n>\n> ...\n>\n> extoid = getExtensionOfObject(ProcedureRelationId,\n> fcinfo->flinfo->fn_oid));\n> extversion = get_extension_version(extoid);\n> if (strcmp(expected_extversion, extversion) != 0)\n> elog(ERROR, \"extension \\\"%s\\\" needs \\\"ALTER EXTENSION %s UPDATE\\\",\n> get_extension_name(extversion),\n> get_extension_name(extversion)))\n>\n> Currently the extension version is not simply readable - I need to read\n> directly from the table.\n>\n> Notes, comments?\n>\n\nattached patch\n\nRegards\n\nPavel\n\n\n>\n> Regards\n>\n> Pavel\n>\n>",
"msg_date": "Sat, 11 Mar 2023 05:14:47 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: proposal - get_extension_version function"
}
] |
[
{
"msg_contents": "Commit 7170f2159fb21b62c263acd458d781e2f3c3f8bb, which introduced\nin-place tablespaces, didn't make any adjustments to pg_basebackup.\nThe resulting behavior is pretty bizarre.\n\nIf you take a plain-format backup using pg_basebackup -Fp, then the\nfiles in the in-place tablespace are backed up, but if you take a\ntar-format backup using pg_basebackup -Ft, then they aren't.\n\nI had at first wondered how this could even be possible, since after\nall a plain format backup is little more than a tar-format backup that\npg_basebackup chooses to extract for you. The answer turns out to be\nthat a tar-format backup activates the server's TABLESPACE_MAP option,\nand a plain-format backup doesn't, and so pg_basebackup can handle\nthis case differently depending on the value of that flag, and does.\nEven for a plain-format backup, the server's behavior is not really\ncorrect, because I think what's happening is that the tablespace files\nare getting included in the base.tar file generated by the server,\nrather than in a separate ${TSOID}.tar file as would normally happen\nfor a user-defined tablespace, but because the tar files get extracted\nbefore the user lays eyes on them, the user-visible consequences are\nmasked, at least in the cases I've tested so far.\n\nI'm not sure how messy it's going to be to clean this up. I think each\ntablespace really needs to go into a separate ${TSOID}.tar file,\nbecause we've got tons of code both on the server side and in\npg_basebackup that assumes that's how things work, and having in-place\ntablespaces be a rare exception to that rule seems really unappealing.\nThis of course also implies that files in an in-place tablespace must\nnot go missing from the backup altogether.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:30:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "At Wed, 8 Mar 2023 16:30:05 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> Commit 7170f2159fb21b62c263acd458d781e2f3c3f8bb, which introduced\n> in-place tablespaces, didn't make any adjustments to pg_basebackup.\n> The resulting behavior is pretty bizarre.\n> \n> If you take a plain-format backup using pg_basebackup -Fp, then the\n> files in the in-place tablespace are backed up, but if you take a\n> tar-format backup using pg_basebackup -Ft, then they aren't.\n>\n> I had at first wondered how this could even be possible, since after\n> all a plain format backup is little more than a tar-format backup that\n> pg_basebackup chooses to extract for you. The answer turns out to be\n> that a tar-format backup activates the server's TABLESPACE_MAP option,\n> and a plain-format backup doesn't, and so pg_basebackup can handle\n> this case differently depending on the value of that flag, and does.\n> Even for a plain-format backup, the server's behavior is not really\n> correct, because I think what's happening is that the tablespace files\n> are getting included in the base.tar file generated by the server,\n> rather than in a separate ${TSOID}.tar file as would normally happen\n> for a user-defined tablespace, but because the tar files get extracted\n> before the user lays eyes on them, the user-visible consequences are\n> masked, at least in the cases I've tested so far.\n\nIn my understading, in-place tablespaces are a feature for testing\npurpose only. Therefore, the feature was intentionally designed to\nhave minimal impact on the code and to provide minimum-necessary\nbehavior for that purpose. I believe it is reasonable to make\nbasebackup error-out when it encounters an in-place tablespace\ndirectory when TABLESPACE_MAP is activated.\n\n> I'm not sure how messy it's going to be to clean this up. I think each\n> tablespace really needs to go into a separate ${TSOID}.tar file,\n> because we've got tons of code both on the server side and in\n> pg_basebackup that assumes that's how things work, and having in-place\n> tablespaces be a rare exception to that rule seems really unappealing.\n> This of course also implies that files in an in-place tablespace must\n> not go missing from the backup altogether.\n\nThe doc clearly describes the purpose of in-place tablespaces and the\npotential confusion they can cause for backup tools not excluding\npg_basebackup. Does this not provide sufficient information?\n\nhttps://www.postgresql.org/docs/devel/runtime-config-developer.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Mar 2023 10:58:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 2:58 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Wed, 8 Mar 2023 16:30:05 -0500, Robert Haas <robertmhaas@gmail.com> wrote in\n> > I'm not sure how messy it's going to be to clean this up. I think each\n> > tablespace really needs to go into a separate ${TSOID}.tar file,\n> > because we've got tons of code both on the server side and in\n> > pg_basebackup that assumes that's how things work, and having in-place\n> > tablespaces be a rare exception to that rule seems really unappealing.\n> > This of course also implies that files in an in-place tablespace must\n> > not go missing from the backup altogether.\n>\n> The doc clearly describes the purpose of in-place tablespaces and the\n> potential confusion they can cause for backup tools not excluding\n> pg_basebackup. Does this not provide sufficient information?\n>\n> https://www.postgresql.org/docs/devel/runtime-config-developer.html\n\nYeah. We knew that this didn't work (was discussed in a couple of\nother threads), but you might be right that an error would be better\nfor now. It's absolutely not a user facing mode of operation, it was\nintended just for the replication tests. Clearly it might be useful\nfor testing purposes in the backup area too, which is probably how\nRobert got here. I will think about what changes would be needed as I\nam looking at backup code currently, but I'm not sure when I'll be\nable to post a patch so don't let that stop anyone else who sees how\nto get it working...\n\n\n",
"msg_date": "Thu, 9 Mar 2023 15:52:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "At Thu, 09 Mar 2023 10:58:41 +0900 (JST), I wrote\n> behavior for that purpose. I believe it is reasonable to make\n> basebackup error-out when it encounters an in-place tablespace\n> directory when TABLESPACE_MAP is activated.\n\nIt turned out to be not as simple as I thought, though...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 09 Mar 2023 11:53:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "At Thu, 09 Mar 2023 11:53:26 +0900 (JST), I wrote\n> It turned out to be not as simple as I thought, though...\n\nThe error message and the location where the error condition is\nchecked don't match, but making the messages more generic may not be\nhelpful for users..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Mar 2023 12:15:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 9:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Yeah. We knew that this didn't work (was discussed in a couple of\n> other threads), but you might be right that an error would be better\n> for now. It's absolutely not a user facing mode of operation, it was\n> intended just for the replication tests. Clearly it might be useful\n> for testing purposes in the backup area too, which is probably how\n> Robert got here. I will think about what changes would be needed as I\n> am looking at backup code currently, but I'm not sure when I'll be\n> able to post a patch so don't let that stop anyone else who sees how\n> to get it working...\n\nIf there had been an error message like \"ERROR: pg_basebackup cannot\nback up a database that contains in-place tablespaces,\" it would have\nsaved me a lot of time yesterday. If there had at least been a\ndocumentation mention, I would have found it eventually (but not\nnearly as quickly). As it is, the only reference to this topic in the\nrepository seems to be c6f2f01611d4f2c412e92eb7893f76fa590818e8, \"Fix\npg_basebackup with in-place tablespaces,\" which makes it look like\nthis is supposed to be working.\n\nI also think that if we're going to have in-place tablespaces, it's a\ngood idea for them to work with pg_basebackup. I wasn't really looking\nto test pg_basebackup with this feature (although it's a good idea); I\nwas just trying to make sure I didn't break in-place tablespaces while\nworking on something else. I think it's sometimes OK to add stuff for\ntesting that doesn't work with absolutely everything we have in the\nsystem, but the tablespace code is awfully closely related to the\npg_basebackup code for them to just not work together at all.\n\nNow that I'm done grumbling, here's a patch.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 16:15:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 4:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Now that I'm done grumbling, here's a patch.\n\nAnyone want to comment on this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Mar 2023 07:56:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 07:56:42AM -0400, Robert Haas wrote:\n> Anyone want to comment on this?\n\nI have not checked the patch in details, but perhaps this needs at\nleast one test?\n--\nMichael",
"msg_date": "Wed, 22 Mar 2023 08:59:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Tue, Mar 21, 2023 at 7:59 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Mar 20, 2023 at 07:56:42AM -0400, Robert Haas wrote:\n> > Anyone want to comment on this?\n>\n> I have not checked the patch in details, but perhaps this needs at\n> least one test?\n\nSure. I was sort of hoping to get feedback on the overall plan first,\nbut it's easy enough to add a test so I've done that in the attached\nversion. I've put it in a separate file because 010_pg_basebackup.pl\nis pushing a thousand lines and it's got a ton of unrelated tests in\nthere already. The test included here fails on master, but passes with\nthe patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 22 Mar 2023 13:14:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "The commit message explains prettty well, and it seems to work in\nsimple testing, and yeah commit c6f2f016 was not a work of art.\npg_basebackeup --format=plain \"worked\", but your way is better. I\nguess we should add a test of -Fp too, to keep it working? Here's one\nof those.\n\nI know it's not your patch's fault, but I wonder if this might break\nsomething. We have this strange beast ti->rpath (commit b168c5ef in\n2014):\n\n+ /*\n+ * Relpath holds the relative path of\nthe tablespace directory\n+ * when it's located within PGDATA, or\nNULL if it's located\n+ * elsewhere.\n+ */\n\nThat's pretty confusing, because relative paths have been banned since\nthe birth of tablespaces (commit 2467394ee15, 2004):\n\n+ /*\n+ * Allowing relative paths seems risky\n+ *\n+ * this also helps us ensure that location is not empty or whitespace\n+ */\n+ if (!is_absolute_path(location))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_OBJECT_DEFINITION),\n+ errmsg(\"tablespace location must be\nan absolute path\")));\n\nThe discussion that produced the contradiction:\nhttps://www.postgresql.org/message-id/flat/m2ob3vl3et.fsf%402ndQuadrant.fr\n\nI guess what I'm wondering here is if there is a hazard where we\nconfuse these outlawed tablespaces that supposedly roam the plains\nsomewhere in your code that is assuming that \"relative implies\nin-place\". Or if not now, maybe in future changes. 
I wonder if these\n\"semi-supported-but-don't-tell-anyone\" relative symlinks are worthy of\na defensive test (or is it in there somewhere already?).\n\nOriginally when trying to implement allow_in_place_tablespaces, I\ninstead tried allowing limited relative tablespaces, so you could use\nLOCATION 'pg_tblspc/my_directory', and then it would still create a\nsymlink 1234 -> my_directory, which probably would have All Just\nWorked™ given the existing ti->rpath stuff, and possibly made the\nusers that thread was about happy, but I had to give that up because\nit didn't work on Windows (no relative symlinks). Oh well.",
"msg_date": "Tue, 28 Mar 2023 17:15:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 5:15 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I guess we should add a test of -Fp too, to keep it working?\n\nOops, that was already tested in existing tests, so I take that back.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 07:18:48 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 05:15:25PM +1300, Thomas Munro wrote:\n> The commit message explains pretty well, and it seems to work in\n> simple testing, and yeah commit c6f2f016 was not a work of art.\n> pg_basebackeup --format=plain \"worked\", but your way is better. I\n> guess we should add a test of -Fp too, to keep it working? Here's one\n> of those.\n\nThe patch is larger than it is complicated. Particularly, in xlog.c,\nthis is just adding one block for the PGFILETYPE_DIR case to store a\nrelative path.\n\n> I know it's not your patch's fault, but I wonder if this might break\n> something. We have this strange beast ti->rpath (commit b168c5ef in\n> 2014):\n> \n> + /*\n> + * Relpath holds the relative path of the tablespace directory\n> + * when it's located within PGDATA, or NULL if it's located\n> + * elsewhere.\n> + */\n> \n> That's pretty confusing, because relative paths have been banned since\n> the birth of tablespaces (commit 2467394ee15, 2004):\n\nI think that this comes down to people manipulating pg_tblspc/ by\nthemselves post-creation with CREATE, because we don't track the\nlocation anywhere post-creation and rely on the structure of\npg_tblspc/ for any future decision taken by the backend? I recall\nthis specific case, where users were creating tablespaces in PGDATA\nitself to make \"cleaner\" for them the structure in the data folder,\neven if it makes no sense as the mount point is the same.. 
33cb8ff\nadded a warning about that in tablespace.c, but we could be more\naggressive and outright drop this case entirely when in-place\ntablespaces are not involved.\n\nTablespace maps defined in pg_basebackup -T require both the old and\nnew locations to be absolute, so it seems shaky to me to assume that\nthis should always be fine in the backend..\n\n> I guess what I'm wondering here is if there is a hazard where we\n> confuse these outlawed tablespaces that supposedly roam the plains\n> somewhere in your code that is assuming that \"relative implies\n> in-place\". Or if not now, maybe in future changes. I wonder if these\n> \"semi-supported-but-don't-tell-anyone\" relative symlinks are worthy of\n> a defensive test (or is it in there somewhere already?).\n\nYeah, it makes for surprising and sometimes undocumented behaviors,\nwhich is confusing for users and developers at the end. I'd like to\nthink that we'd live happier long-term if the borders are clearer, aka\nswitch to more aggressive ERRORs instead of WARNINGs in some places\nwhere the boundaries are blurry. A good thing about in-place\ntablespaces taken in isolation is that the borders of what you can do\nare well-defined, which comes down to the absolute vs relative path\nhandling.\n\nLooking at the patch, nothing really stands out..\n--\nMichael",
"msg_date": "Wed, 29 Mar 2023 10:40:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 9:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Looking at the patch, nothing really stands out..\n\nIt doesn't seem like anyone's unhappy about this patch. I don't think\nit's necessary to back-patch it, given that in-place tablespaces are\nintended for developer use, not real-world use, and also given that\nthe patch requires changing both a bit of server-side behavior and\nsome client-side behavior and it seems unfriendly to create behavior\nskew of that sort in minor release. However, I would like to get it\ncommitted to master.\n\nDo people think it's OK to do that now, or should I wait until we've\nbranched? I personally think this is bug-fix-ish enough that now is\nOK, but I'll certainly forebear if others disagree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 14 Apr 2023 16:11:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Fri, Apr 14, 2023 at 04:11:47PM -0400, Robert Haas wrote:\n> Do people think it's OK to do that now, or should I wait until we've\n> branched? I personally think this is bug-fix-ish enough that now is\n> OK, but I'll certainly forebear if others disagree.\n\nFWIW, doing that now rather than the beginning of July is OK for me\nfor this stuff.\n--\nMichael",
"msg_date": "Mon, 17 Apr 2023 14:30:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 1:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n> FWIW, doing that now rather than the beginning of July is OK for me\n> for this stuff.\n\nOK, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 Apr 2023 11:35:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 11:35:41AM -0400, Robert Haas wrote:\n> On Mon, Apr 17, 2023 at 1:30 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> FWIW, doing that now rather than the beginning of July is OK for me\n>> for this stuff.\n> \n> OK, committed.\n\nThanks!\n--\nMichael",
"msg_date": "Wed, 19 Apr 2023 09:32:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: allow_in_place_tablespaces vs. pg_basebackup"
}
] |
[
{
"msg_contents": "Here is a feature idea that emerged from a pgsql-bugs thread[1] that I\nam kicking into the next commitfest. Example:\n\ns1: \\c db1\ns1: CREATE TABLE t (i int);\ns1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\ns1: INSERT INTO t VALUES (42);\n\ns2: \\c db2\ns2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;\ns2: SELECT * FROM x;\n\nI don't know of any reason why s2 should have to wait, or\nalternatively, without DEFERRABLE, why it shouldn't immediately drop\nfrom SSI to SI (that is, opt out of predicate locking and go faster).\nThis change relies on the fact that PostgreSQL doesn't allow any kind\nof cross-database access, except for shared catalogs, and all catalogs\nare already exempt from SSI. I have updated this new version of the\npatch to explain that very clearly at the place where that exemption\nhappens, so that future hackers would see that we rely on that fact\nelsewhere if reconsidering that.\n\n[1] https://www.postgresql.org/message-id/flat/17368-98a4f99e8e4b4402%40postgresql.org",
"msg_date": "Thu, 9 Mar 2023 18:34:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Cross-database SERIALIZABLE safe snapshots"
},
{
"msg_contents": "On 09/03/2023 07:34, Thomas Munro wrote:\n> Here is a feature idea that emerged from a pgsql-bugs thread[1] that I\n> am kicking into the next commitfest. Example:\n> \n> s1: \\c db1\n> s1: CREATE TABLE t (i int);\n> s1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> s1: INSERT INTO t VALUES (42);\n> \n> s2: \\c db2\n> s2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;\n> s2: SELECT * FROM x;\n> \n> I don't know of any reason why s2 should have to wait, or\n> alternatively, without DEFERRABLE, why it shouldn't immediately drop\n> from SSI to SI (that is, opt out of predicate locking and go faster).\n> This change relies on the fact that PostgreSQL doesn't allow any kind\n> of cross-database access, except for shared catalogs, and all catalogs\n> are already exempt from SSI. I have updated this new version of the\n> patch to explain that very clearly at the place where that exemption\n> happens, so that future hackers would see that we rely on that fact\n> elsewhere if reconsidering that.\n\nMakes sense.\n\n> @@ -1814,7 +1823,17 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot,\n> {\n> othersxact = dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);\n> \n> - if (!SxactIsCommitted(othersxact)\n> + /*\n> + * We can't possibly have an unsafe conflict with a transaction in\n> + * another database. The only possible overlap is on shared\n> + * catalogs, but we don't support SSI for shared catalogs. The\n> + * invalid database case covers 2PC, because we don't yet record\n> + * database OIDs in the 2PC information. We also filter out doomed\n> + * transactions as they can't possibly commit.\n> + */\n> + if ((othersxact->database == InvalidOid ||\n> + othersxact->database == MyDatabaseId)\n> + && !SxactIsCommitted(othersxact)\n> && !SxactIsDoomed(othersxact)\n> && !SxactIsReadOnly(othersxact))\n> {\n\nWhy don't we set the database OID in 2PC transactions? 
We actually do \nset it correctly - or rather we never clear it - when a transaction is \nprepared. But you set it to invalid when recovering a prepared \ntransaction on system startup. So the comment is a bit misleading: the \noptimization doesn't apply to 2PC transactions recovered after restart, \nother 2PC transactions are fine.\n\nI'm sure it's not a big deal in practice, but it's also not hard to fix. \nWe do store the database OID in the twophase state. The caller of \npredicatelock_twophase_recover() has it, we just need a little plumbing \nto pass it down.\n\nAttached patches:\n\n1. Rebased version of your patch, just trivial pgindent conflict fixes\n2. Some comment typo fixes and improvements\n3. Set the database ID on recovered 2PC transactions\n\nA test for this would be nice. isolationtester doesn't support \nconnecting to different databases, restarting the server to test the 2PC \nrecovery, but a TAP test could do it.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Mon, 10 Jul 2023 12:17:50 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Cross-database SERIALIZABLE safe snapshots"
},
{
"msg_contents": "On Mon, 10 Jul 2023 at 14:48, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 09/03/2023 07:34, Thomas Munro wrote:\n> > Here is a feature idea that emerged from a pgsql-bugs thread[1] that I\n> > am kicking into the next commitfest. Example:\n> >\n> > s1: \\c db1\n> > s1: CREATE TABLE t (i int);\n> > s1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> > s1: INSERT INTO t VALUES (42);\n> >\n> > s2: \\c db2\n> > s2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;\n> > s2: SELECT * FROM x;\n> >\n> > I don't know of any reason why s2 should have to wait, or\n> > alternatively, without DEFERRABLE, why it shouldn't immediately drop\n> > from SSI to SI (that is, opt out of predicate locking and go faster).\n> > This change relies on the fact that PostgreSQL doesn't allow any kind\n> > of cross-database access, except for shared catalogs, and all catalogs\n> > are already exempt from SSI. I have updated this new version of the\n> > patch to explain that very clearly at the place where that exemption\n> > happens, so that future hackers would see that we rely on that fact\n> > elsewhere if reconsidering that.\n>\n> Makes sense.\n>\n> > @@ -1814,7 +1823,17 @@ GetSerializableTransactionSnapshotInt(Snapshot snapshot,\n> > {\n> > othersxact = dlist_container(SERIALIZABLEXACT, xactLink, iter.cur);\n> >\n> > - if (!SxactIsCommitted(othersxact)\n> > + /*\n> > + * We can't possibly have an unsafe conflict with a transaction in\n> > + * another database. The only possible overlap is on shared\n> > + * catalogs, but we don't support SSI for shared catalogs. The\n> > + * invalid database case covers 2PC, because we don't yet record\n> > + * database OIDs in the 2PC information. 
We also filter out doomed\n> > + * transactions as they can't possibly commit.\n> > + */\n> > + if ((othersxact->database == InvalidOid ||\n> > + othersxact->database == MyDatabaseId)\n> > + && !SxactIsCommitted(othersxact)\n> > && !SxactIsDoomed(othersxact)\n> > && !SxactIsReadOnly(othersxact))\n> > {\n>\n> Why don't we set the database OID in 2PC transactions? We actually do\n> set it correctly - or rather we never clear it - when a transaction is\n> prepared. But you set it to invalid when recovering a prepared\n> transaction on system startup. So the comment is a bit misleading: the\n> optimization doesn't apply to 2PC transactions recovered after restart,\n> other 2PC transactions are fine.\n>\n> I'm sure it's not a big deal in practice, but it's also not hard to fix.\n> We do store the database OID in the twophase state. The caller of\n> predicatelock_twophase_recover() has it, we just need a little plumbing\n> to pass it down.\n>\n> Attached patches:\n>\n> 1. Rebased version of your patch, just trivial pgindent conflict fixes\n> 2. Some comment typo fixes and improvements\n> 3. Set the database ID on recovered 2PC transactions\n>\n> A test for this would be nice. isolationtester doesn't support\n> connecting to different databases, restarting the server to test the 2PC\n> recovery, but a TAP test could do it.\n\n@Thomas Munro As this patch is already marked as \"Ready for\nCommitter\", do you want to take this patch forward based on Heikki's\nsuggestions and get it committed?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 21 Jan 2024 07:35:53 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Cross-database SERIALIZABLE safe snapshots"
}
] |
[
{
"msg_contents": "In [1] I wrote:\n\n> PG Bug reporting form <noreply@postgresql.org> writes:\n>> The following script:\n>> [ leaks a file descriptor per error ]\n> \n> Yeah, at least on platforms where WaitEventSets own kernel file\n> descriptors. I don't think it's postgres_fdw's fault though,\n> but that of ExecAppendAsyncEventWait, which is ignoring the\n> possibility of failing partway through. It looks like it'd be\n> sufficient to add a PG_CATCH or PG_FINALLY block there to make\n> sure the WaitEventSet is disposed of properly --- fortunately,\n> it doesn't need to have any longer lifespan than that one\n> function.\n\nAfter further thought that seems like a pretty ad-hoc solution.\nWe probably can do no better in the back branches, but shouldn't\nwe start treating WaitEventSets as ResourceOwner-managed resources?\nOtherwise, transient WaitEventSets are going to be a permanent\nsource of headaches.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/423731.1678381075%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 09 Mar 2023 13:51:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "WaitEventSet resource leakage"
},
{
"msg_contents": "(Alexander just reminded me of this off-list)\n\nOn 09/03/2023 20:51, Tom Lane wrote:\n> In [1] I wrote:\n> \n>> PG Bug reporting form <noreply@postgresql.org> writes:\n>>> The following script:\n>>> [ leaks a file descriptor per error ]\n>>\n>> Yeah, at least on platforms where WaitEventSets own kernel file\n>> descriptors. I don't think it's postgres_fdw's fault though,\n>> but that of ExecAppendAsyncEventWait, which is ignoring the\n>> possibility of failing partway through. It looks like it'd be\n>> sufficient to add a PG_CATCH or PG_FINALLY block there to make\n>> sure the WaitEventSet is disposed of properly --- fortunately,\n>> it doesn't need to have any longer lifespan than that one\n>> function.\n\nHere's a patch to do that. For back branches.\n\n> After further thought that seems like a pretty ad-hoc solution.\n> We probably can do no better in the back branches, but shouldn't\n> we start treating WaitEventSets as ResourceOwner-managed resources?\n> Otherwise, transient WaitEventSets are going to be a permanent\n> source of headaches.\n\nAgreed. The current signature of CurrentWaitEventSet is:\n\nWaitEventSet *\nCreateWaitEventSet(MemoryContext context, int nevents)\n\nPassing MemoryContext makes little sense when the WaitEventSet also \nholds file descriptors. With anything other than TopMemoryContext, you \nneed to arrange for proper cleanup with PG_TRY-PG_CATCH or by avoiding \nereport() calls. And once you've arrange for cleanup, the memory context \ndoesn't matter much anymore.\n\nLet's change it so that it's always allocated in TopMemoryContext, but \npass a ResourceOwner instead:\n\nWaitEventSet *\nCreateWaitEventSet(ResourceOwner owner, int nevents)\n\nAnd use owner == NULL to mean session lifetime.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Wed, 15 Nov 2023 23:48:52 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: WaitEventSet resource leakage"
},
{
"msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> On 09/03/2023 20:51, Tom Lane wrote:\n>> After further thought that seems like a pretty ad-hoc solution.\n>> We probably can do no better in the back branches, but shouldn't\n>> we start treating WaitEventSets as ResourceOwner-managed resources?\n>> Otherwise, transient WaitEventSets are going to be a permanent\n>> source of headaches.\n\n> Let's change it so that it's always allocated in TopMemoryContext, but \n> pass a ResourceOwner instead:\n> WaitEventSet *\n> CreateWaitEventSet(ResourceOwner owner, int nevents)\n> And use owner == NULL to mean session lifetime.\n\nWFM. (I didn't study your back-branch patch.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Nov 2023 18:08:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: WaitEventSet resource leakage"
},
{
"msg_contents": "On 16/11/2023 01:08, Tom Lane wrote:\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> On 09/03/2023 20:51, Tom Lane wrote:\n>>> After further thought that seems like a pretty ad-hoc solution.\n>>> We probably can do no better in the back branches, but shouldn't\n>>> we start treating WaitEventSets as ResourceOwner-managed resources?\n>>> Otherwise, transient WaitEventSets are going to be a permanent\n>>> source of headaches.\n> \n>> Let's change it so that it's always allocated in TopMemoryContext, but\n>> pass a ResourceOwner instead:\n>> WaitEventSet *\n>> CreateWaitEventSet(ResourceOwner owner, int nevents)\n>> And use owner == NULL to mean session lifetime.\n> \n> WFM. (I didn't study your back-branch patch.)\n\nAnd here is a patch to implement that on master.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Thu, 16 Nov 2023 11:21:49 +0100",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: WaitEventSet resource leakage"
},
{
"msg_contents": "On Fri, Nov 17, 2023 at 12:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> On 16/11/2023 01:08, Tom Lane wrote:\n> > Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> >> On 09/03/2023 20:51, Tom Lane wrote:\n> >>> After further thought that seems like a pretty ad-hoc solution.\n> >>> We probably can do no better in the back branches, but shouldn't\n> >>> we start treating WaitEventSets as ResourceOwner-managed resources?\n> >>> Otherwise, transient WaitEventSets are going to be a permanent\n> >>> source of headaches.\n> >\n> >> Let's change it so that it's always allocated in TopMemoryContext, but\n> >> pass a ResourceOwner instead:\n> >> WaitEventSet *\n> >> CreateWaitEventSet(ResourceOwner owner, int nevents)\n> >> And use owner == NULL to mean session lifetime.\n> >\n> > WFM. (I didn't study your back-branch patch.)\n>\n> And here is a patch to implement that on master.\n\nRationale and code look good to me.\n\ncfbot warns about WAIT_USE_WIN32:\n\n[10:12:54.375] latch.c:889:2: error: ISO C90 forbids mixed\ndeclarations and code [-Werror=declaration-after-statement]\n\nLet's see...\n\n WaitEvent *cur_event;\n\n for (cur_event = set->events;\n\nMaybe:\n\n for (WaitEvent *cur_event = set->events;\n\n\n",
"msg_date": "Mon, 20 Nov 2023 10:09:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WaitEventSet resource leakage"
},
{
"msg_contents": "20.11.2023 00:09, Thomas Munro wrote:\n> On Fri, Nov 17, 2023 at 12:22 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n>> And here is a patch to implement that on master.\n> Rationale and code look good to me.\n>\n>\n\nI can also confirm that the patches proposed (for master and back branches)\neliminate WES leakage as expected.\n\nThanks for the fix!\n\nMaybe you would find appropriate to add the comment\n/* Convenience wrappers over ResourceOwnerRemember/Forget */\nabove ResourceOwnerRememberWaitEventSet\njust as it's added above ResourceOwnerRememberRelationRef,\nResourceOwnerRememberDSM, ResourceOwnerRememberFile, ...\n\n(As a side note, this fix doesn't resolve the issue #17828 completely,\nbecause that large number of handles might be also consumed\nlegally.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 22 Nov 2023 16:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WaitEventSet resource leakage"
},
{
"msg_contents": "On 22/11/2023 15:00, Alexander Lakhin wrote:\n> I can also confirm that the patches proposed (for master and back branches)\n> eliminate WES leakage as expected.\n> \n> Thanks for the fix!\n> \n> Maybe you would find appropriate to add the comment\n> /* Convenience wrappers over ResourceOwnerRemember/Forget */\n> above ResourceOwnerRememberWaitEventSet\n> just as it's added above ResourceOwnerRememberRelationRef,\n> ResourceOwnerRememberDSM, ResourceOwnerRememberFile, ...\n\nAdded that and fixed the Windows warning that Thomas pointed out. Pushed \nthe ResourceOwner version to master, and PG_TRY-CATCH version to 14-16.\n\nThank you!\n\n> (As a side note, this fix doesn't resolve the issue #17828 completely,\n> because that large number of handles might be also consumed\n> legally.)\n\n:-(\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Thu, 23 Nov 2023 13:35:16 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: WaitEventSet resource leakage"
}
] |
[
{
"msg_contents": "Use ICU by default at initdb time.\n\nIf the ICU locale is not specified, initialize the default collator\nand retrieve the locale name from that.\n\nDiscussion: https://postgr.es/m/510d284759f6e943ce15096167760b2edcb2e700.camel@j-davis.com\nReviewed-by: Peter Eisentraut\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/27b62377b47f9e7bf58613608bc718c86ea91e91\n\nModified Files\n--------------\ncontrib/citext/expected/citext_utf8.out | 9 +++-\ncontrib/citext/expected/citext_utf8_1.out | 9 +++-\ncontrib/citext/sql/citext_utf8.sql | 9 +++-\ncontrib/unaccent/expected/unaccent.out | 9 ++++\ncontrib/unaccent/expected/unaccent_1.out | 8 ++++\ncontrib/unaccent/sql/unaccent.sql | 11 +++++\ndoc/src/sgml/ref/initdb.sgml | 53 +++++++++++++--------\nsrc/bin/initdb/Makefile | 4 +-\nsrc/bin/initdb/initdb.c | 54 +++++++++++++++++++++-\nsrc/bin/initdb/t/001_initdb.pl | 7 +--\nsrc/bin/pg_dump/t/002_pg_dump.pl | 2 +-\nsrc/bin/scripts/t/020_createdb.pl | 2 +-\nsrc/interfaces/ecpg/test/Makefile | 3 --\nsrc/interfaces/ecpg/test/connect/test5.pgc | 2 +-\nsrc/interfaces/ecpg/test/expected/connect-test5.c | 2 +-\n.../ecpg/test/expected/connect-test5.stderr | 2 +-\nsrc/interfaces/ecpg/test/meson.build | 1 -\nsrc/test/icu/t/010_database.pl | 2 +-\n18 files changed, 147 insertions(+), 42 deletions(-)",
"msg_date": "Thu, 09 Mar 2023 19:11:42 +0000",
"msg_from": "Jeff Davis <jdavis@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On Thu, 2023-03-09 at 19:11 +0000, Jeff Davis wrote:\n> Use ICU by default at initdb time.\n\nI'm seeing a failure on hoverfly:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hoverfly&dt=2023-03-09%2021%3A51%3A45&stg=initdb-en_US.8859-15\n\nThat's because ICU always uses UTF-8 by default. ICU works just fine\nwith many other encodings; is there a reason it doesn't take it from\nthe environment just like for provider=libc?\n\nOf course, we still need to default to UTF-8 when the encoding from the\nenvironment isn't supported by ICU.\n\nPatch attached. Requires a few test fixups to adapt.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 09 Mar 2023 18:26:34 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On 10.03.23 03:26, Jeff Davis wrote:\n> That's because ICU always uses UTF-8 by default. ICU works just fine\n> with many other encodings; is there a reason it doesn't take it from\n> the environment just like for provider=libc?\n\nI think originally the locale forced the encoding. With ICU, we have a \nchoice. We could either stick to the encoding suggested by the OS, or \npick our own.\n\nArguably, if we are going to nudge toward ICU, maybe we should nudge \ntoward UTF-8 as well.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:59:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On Fri, 2023-03-10 at 10:59 +0100, Peter Eisentraut wrote:\n> I think originally the locale forced the encoding. With ICU, we have\n> a \n> choice. We could either stick to the encoding suggested by the OS,\n> or \n> pick our own.\n\nWe still need LC_COLLATE and LC_CTYPE to match the database encoding\nthough. If we get those from the environment (which are connected to an\nencoding), then I think we need to get the encoding from the\nenvironment, too, right?\n\n> Arguably, if we are going to nudge toward ICU, maybe we should nudge \n> toward UTF-8 as well.\n\nThe OSes are already doing a pretty good job of that. Regardless, we\nneed to remove the dependence on LC_CTYPE and LC_COLLATE when the\nprovider is ICU first (we're close to that point but not quite there).\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 06:38:17 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On 10.03.23 15:38, Jeff Davis wrote:\n> On Fri, 2023-03-10 at 10:59 +0100, Peter Eisentraut wrote:\n>> I think originally the locale forced the encoding. With ICU, we have\n>> a\n>> choice. We could either stick to the encoding suggested by the OS,\n>> or\n>> pick our own.\n> \n> We still need LC_COLLATE and LC_CTYPE to match the database encoding\n> though. If we get those from the environment (which are connected to an\n> encoding), then I think we need to get the encoding from the\n> environment, too, right?\n\nYes, of course. So we can't really do what I was thinking of.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 15:48:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On Fri, 2023-03-10 at 15:48 +0100, Peter Eisentraut wrote:\n> Yes, of course. So we can't really do what I was thinking of.\n\nOK, I plan to commit something like the patch in this thread soon. I\njust need to add an explanatory comment.\n\nIt passes CI, but it's possible that there could be more buildfarm\nfailures that I'll need to look at afterward, so I'll count this as a\n\"trial fix\".\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 07:48:13 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
},
{
"msg_contents": "On Fri, 2023-03-10 at 07:48 -0800, Jeff Davis wrote:\n> On Fri, 2023-03-10 at 15:48 +0100, Peter Eisentraut wrote:\n> > Yes, of course. So we can't really do what I was thinking of.\n> \n> OK, I plan to commit something like the patch in this thread soon. I\n> just need to add an explanatory comment.\n\nCommitted a slightly narrower fix that derives the default encoding the\nsame way for both libc and ICU; except that ICU still uses UTF-8 for\nC/POSIX/--no-locale (because ICU doesn't work with SQL_ASCII).\n\nThat seemed more consistent with the comments around\npg_get_encoding_from_locale() and it was also easier to document the -E\nswitch in initdb.\n\nI'll keep an eye on the buildfarm to see if this fixes the problem or\ncauses other issues. But it seems like the right change.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:58:49 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Use ICU by default at initdb time."
}
] |
[
{
"msg_contents": "Hi,\n\nI think that 4753ef37e0ed undid the work caf626b2c did to support\nsub-millisecond delays for vacuum and autovacuum.\n\nAfter 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a\ndouble which, after being passed to WaitLatch() as timeout, which is a\nlong, ends up being 0, so we don't end up waiting AFAICT.\n\nWhen I set [autovacuum_]vacuum_cost_delay to 0.5, SHOW will report that\nit is 500us, but WaitLatch() is still getting 0 as timeout.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 9 Mar 2023 16:26:02 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I think that 4753ef37e0ed undid the work caf626b2c did to support\n> sub-millisecond delays for vacuum and autovacuum.\n>\n> After 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a\n> double which, after being passed to WaitLatch() as timeout, which is a\n> long, ends up being 0, so we don't end up waiting AFAICT.\n>\n> When I set [autovacuum_]vacuum_delay_point to 0.5, SHOW will report that\n> it is 500us, but WaitLatch() is still getting 0 as timeout.\n\nGiven that some of the clunkier underlying kernel primitives have\nmilliseconds in their interface, I don't think it would be possible to\nmake a usec-based variant of WaitEventSetWait() that works everywhere.\nCould it possibly make sense to do something that accumulates the\nerror, so if you're using 0.5 then every second vacuum_delay_point()\nwaits for 1ms?\n\n\n",
"msg_date": "Fri, 10 Mar 2023 10:40:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n>> I think that 4753ef37e0ed undid the work caf626b2c did to support\n>> sub-millisecond delays for vacuum and autovacuum.\n\n> Given that some of the clunkier underlying kernel primitives have\n> milliseconds in their interface, I don't think it would be possible to\n> make a usec-based variant of WaitEventSetWait() that works everywhere.\n> Could it possibly make sense to do something that accumulates the\n> error, so if you're using 0.5 then every second vacuum_delay_point()\n> waits for 1ms?\n\nYeah ... using float math there was cute, but it'd only get us so far.\nThe caf626b2c code would only work well on platforms that have\nmicrosecond-based sleep primitives, so it was already not too portable.\n\nCan we fix this by making VacuumCostBalance carry the extra fractional\ndelay, or would a separate variable be better?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 17:02:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I think that 4753ef37e0ed undid the work caf626b2c did to support\n> > sub-millisecond delays for vacuum and autovacuum.\n> >\n> > After 4753ef37e0ed, vacuum_delay_point()'s local variable msec is a\n> > double which, after being passed to WaitLatch() as timeout, which is a\n> > long, ends up being 0, so we don't end up waiting AFAICT.\n> >\n> > When I set [autovacuum_]vacuum_delay_point to 0.5, SHOW will report that\n> > it is 500us, but WaitLatch() is still getting 0 as timeout.\n> \n> Given that some of the clunkier underlying kernel primitives have\n> milliseconds in their interface, I don't think it would be possible to\n> make a usec-based variant of WaitEventSetWait() that works everywhere.\n> Could it possibly make sense to do something that accumulates the\n> error, so if you're using 0.5 then every second vacuum_delay_point()\n> waits for 1ms?\n\nHmm. That generally makes sense to me.. though isn't exactly the same.\nStill, I wouldn't want to go back to purely pg_usleep() as that has the\nother downsides mentioned.\n\nPerhaps if the delay is sub-millisecond, explicitly do the WaitLatch()\nwith zero but also do the pg_usleep()? That's doing a fair bit of work\nbeyond just sleeping, but it also means we shouldn't miss out on the\npostmaster going away or similar..\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Mar 2023 17:08:37 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> >> I think that 4753ef37e0ed undid the work caf626b2c did to support\n> >> sub-millisecond delays for vacuum and autovacuum.\n>\n> > Given that some of the clunkier underlying kernel primitives have\n> > milliseconds in their interface, I don't think it would be possible to\n> > make a usec-based variant of WaitEventSetWait() that works everywhere.\n> > Could it possibly make sense to do something that accumulates the\n> > error, so if you're using 0.5 then every second vacuum_delay_point()\n> > waits for 1ms?\n>\n> Yeah ... using float math there was cute, but it'd only get us so far.\n> The caf626b2c code would only work well on platforms that have\n> microsecond-based sleep primitives, so it was already not too portable.\n\nAlso, the previous coding was already b0rked, because pg_usleep()\nrounds up to milliseconds on Windows (with a surprising formula for\nrounding), and also the whole concept seems to assume things about\nschedulers that aren't really universally true. If we actually cared\nabout high res times maybe we should be using nanosleep and tracking\nthe drift? And spreading it out a bit. But I don't know.\n\n> Can we fix this by making VacuumCostBalance carry the extra fractional\n> delay, or would a separate variable be better?\n\nI was wondering the same thing, but not being too familiar with that\ncode, no opinion on that yet.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 11:10:06 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 5:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> > > On Fri, Mar 10, 2023 at 10:26 AM Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > >> I think that 4753ef37e0ed undid the work caf626b2c did to support\n> > >> sub-millisecond delays for vacuum and autovacuum.\n> >\n> > > Given that some of the clunkier underlying kernel primitives have\n> > > milliseconds in their interface, I don't think it would be possible to\n> > > make a usec-based variant of WaitEventSetWait() that works everywhere.\n> > > Could it possibly make sense to do something that accumulates the\n> > > error, so if you're using 0.5 then every second vacuum_delay_point()\n> > > waits for 1ms?\n> >\n> > Yeah ... using float math there was cute, but it'd only get us so far.\n> > The caf626b2c code would only work well on platforms that have\n> > microsecond-based sleep primitives, so it was already not too portable.\n>\n> Also, the previous coding was already b0rked, because pg_usleep()\n> rounds up to milliseconds on Windows (with a surprising formula for\n> rounding), and also the whole concept seems to assume things about\n> schedulers that aren't really universally true. If we actually cared\n> about high res times maybe we should be using nanosleep and tracking\n> the drift? And spreading it out a bit. But I don't know.\n>\n> > Can we fix this by making VacuumCostBalance carry the extra fractional\n> > delay, or would a separate variable be better?\n>\n> I was wondering the same thing, but not being too familiar with that\n> code, no opinion on that yet.\n\nWell, that is reset to zero in vacuum() at the top -- which is called for\neach table for autovacuum, so it would get reset to zero between\nautovacuuming tables. I dunno how you feel about that...\n\n- Melanie\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:15:16 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The caf626b2c code would only work well on platforms that have\n>> microsecond-based sleep primitives, so it was already not too portable.\n\n> Also, the previous coding was already b0rked, because pg_usleep()\n> rounds up to milliseconds on Windows (with a surprising formula for\n> rounding), and also the whole concept seems to assume things about\n> schedulers that aren't really universally true. If we actually cared\n> about high res times maybe we should be using nanosleep and tracking\n> the drift? And spreading it out a bit. But I don't know.\n\nYeah, I was wondering about trying to make it a closed-loop control,\nbut I think that'd be huge overkill considering what the mechanism is\ntrying to accomplish.\n\nA minimalistic fix could be as attached. I'm not sure if it's worth\nmaking the state variable global so that it can be reset to zero in\nthe places where we zero out VacuumCostBalance etc. Also note that\nthis is ignoring the VacuumSharedCostBalance stuff, so you'd possibly\nhave the extra delay accumulating in unexpected places when there are\nmultiple workers. But I really doubt it's worth worrying about that.\n\nIs it reasonable to assume that all modern platforms can time\nmillisecond delays accurately? Ten years ago I'd have suggested\ntruncating the delay to a multiple of 10msec and using this logic\nto track the remainder, but maybe now that's unnecessary.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 09 Mar 2023 17:27:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Thu, Mar 09, 2023 at 05:27:08PM -0500, Tom Lane wrote:\n> Is it reasonable to assume that all modern platforms can time\n> millisecond delays accurately? Ten years ago I'd have suggested\n> truncating the delay to a multiple of 10msec and using this logic\n> to track the remainder, but maybe now that's unnecessary.\n\nIf so, it might also be worth updating or removing this comment in\npgsleep.c:\n\n * NOTE: although the delay is specified in microseconds, the effective\n * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n * the requested delay to be rounded up to the next resolution boundary.\n\nI've had doubts for some time about whether this is still accurate...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Mar 2023 14:37:47 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The caf626b2c code would only work well on platforms that have\n> >> microsecond-based sleep primitives, so it was already not too portable.\n>\n> > Also, the previous coding was already b0rked, because pg_usleep()\n> > rounds up to milliseconds on Windows (with a surprising formula for\n> > rounding), and also the whole concept seems to assume things about\n> > schedulers that aren't really universally true. If we actually cared\n> > about high res times maybe we should be using nanosleep and tracking\n> > the drift? And spreading it out a bit. But I don't know.\n>\n> Yeah, I was wondering about trying to make it a closed-loop control,\n> but I think that'd be huge overkill considering what the mechanism is\n> trying to accomplish.\n\nNot relevant to fixing this, but I wonder if you could eliminate the\nneed to specify the cost delay in most cases for autovacuum if you used\nfeedback from how much vacuuming work was done during the last cycle of\nvacuuming to control the delay value internally - a kind of\nfeedback-adjusted controller.\n\n- Melanie\n\n\n",
"msg_date": "Thu, 9 Mar 2023 17:54:25 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Fri, Mar 10, 2023 at 11:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> The caf626b2c code would only work well on platforms that have\n> >> microsecond-based sleep primitives, so it was already not too portable.\n>\n> > Also, the previous coding was already b0rked, because pg_usleep()\n> > rounds up to milliseconds on Windows (with a surprising formula for\n> > rounding), and also the whole concept seems to assume things about\n> > schedulers that aren't really universally true. If we actually cared\n> > about high res times maybe we should be using nanosleep and tracking\n> > the drift? And spreading it out a bit. But I don't know.\n>\n> Yeah, I was wondering about trying to make it a closed-loop control,\n> but I think that'd be huge overkill considering what the mechanism is\n> trying to accomplish.\n>\n> A minimalistic fix could be as attached. I'm not sure if it's worth\n> making the state variable global so that it can be reset to zero in\n> the places where we zero out VacuumCostBalance etc. Also note that\n> this is ignoring the VacuumSharedCostBalance stuff, so you'd possibly\n> have the extra delay accumulating in unexpected places when there are\n> multiple workers. But I really doubt it's worth worrying about that.\n\nWhat if someone resets the delay guc and there is still a large residual?\n\n\n",
"msg_date": "Thu, 9 Mar 2023 18:02:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Melanie Plageman <melanieplageman@gmail.com> writes:\n> On Thu, Mar 9, 2023 at 5:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A minimalistic fix could be as attached. I'm not sure if it's worth\n>> making the state variable global so that it can be reset to zero in\n>> the places where we zero out VacuumCostBalance etc. Also note that\n>> this is ignoring the VacuumSharedCostBalance stuff, so you'd possibly\n>> have the extra delay accumulating in unexpected places when there are\n>> multiple workers. But I really doubt it's worth worrying about that.\n\n> What if someone resets the delay guc and there is still a large residual?\n\nBy definition, the residual is less than 1msec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 18:17:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 11:37 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Thu, Mar 09, 2023 at 05:27:08PM -0500, Tom Lane wrote:\n> > Is it reasonable to assume that all modern platforms can time\n> > millisecond delays accurately? Ten years ago I'd have suggested\n> > truncating the delay to a multiple of 10msec and using this logic\n> > to track the remainder, but maybe now that's unnecessary.\n>\n> If so, it might also be worth updating or removing this comment in\n> pgsleep.c:\n>\n> * NOTE: although the delay is specified in microseconds, the effective\n> * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n> * the requested delay to be rounded up to the next resolution boundary.\n>\n> I've had doubts for some time about whether this is still accurate...\n\nWhat I see with the old select(), or a more modern clock_nanosleep()\ncall, is that Linux, FreeBSD, macOS are happy sleeping for .1ms, .5ms,\n1ms, 2ms, 3ms, and through inaccuracies and scheduling overheads etc\nit works out to about 5-25% extra sleep time (I expect that can be\naffected by choice of time source/available hardware, and perhaps\nvarious system calls use different tricks). I definitely recall the\nbehaviour described, back in the old days where more stuff was\nscheduler-tick based. I have no clue for Windows; quick googling\ntells me that it might still be pretty chunky, unless you do certain\nother stuff that I didn't follow up; we could probably get more\naccurate sleep times by rummaging through nt.dll. It would be good to\nfind out how well WaitEventSet does on Windows; perhaps we should have\na little timing accuracy test in the tree to collect build farm data?\n\nFWIW epoll has a newer _pwait2() call that has higher res timeout\nargument, and Windows WaitEventSet could also do high res timers if\nyou add timer events rather than using the timeout argument, and I\nguess conceptually even the old poll() thing could do the equivalent\nwith a signal alarm timer, but it sounds a lot like a bad idea to do\nvery short sleeps to me, burning so much CPU on scheduling. I kinda\nwonder if the 10ms + residual thing might even turn out to be a better\nidea... but I dunno.\n\nThe 1ms residual thing looks pretty good to me as a fix to the\nimmediate problem report, but we might also want to adjust the wording\nin config.sgml?\n\n\n",
"msg_date": "Fri, 10 Mar 2023 13:05:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Erm, but maybe I'm just looking at this too myopically. Is there\nreally any point in letting people set it to 0.5, if it behaves as if\nyou'd set it to 1 and doubled the cost limit? Isn't it just more\nconfusing? I haven't read the discussion from when fractional delays\ncame in, where I imagine that must have come up...\n\n\n",
"msg_date": "Fri, 10 Mar 2023 13:25:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Erm, but maybe I'm just looking at this too myopically. Is there\n> really any point in letting people set it to 0.5, if it behaves as if\n> you'd set it to 1 and doubled the cost limit? Isn't it just more\n> confusing? I haven't read the discussion from when fractional delays\n> came in, where I imagine that must have come up...\n\nAt [1] I argued\n\n>> The reason is this: what we want to do is throttle VACUUM's I/O demand,\n>> and by \"throttle\" I mean \"gradually reduce\". There is nothing gradual\n>> about issuing a few million I/Os and then sleeping for many milliseconds;\n>> that'll just produce spikes and valleys in the I/O demand. Ideally,\n>> what we'd have it do is sleep for a very short interval after each I/O.\n>> But that's not too practical, both for code-structure reasons and because\n>> most platforms don't give us a way to so finely control the length of a\n>> sleep. Hence the design of sleeping for awhile after every so many I/Os.\n>> \n>> However, the current settings are predicated on the assumption that\n>> you can't get the kernel to give you a sleep of less than circa 10ms.\n>> That assumption is way outdated, I believe; poking around on systems\n>> I have here, the minimum delay time using pg_usleep(1) seems to be\n>> generally less than 100us, and frequently less than 10us, on anything\n>> released in the last decade.\n>> \n>> I propose therefore that instead of increasing vacuum_cost_limit,\n>> what we ought to be doing is reducing vacuum_cost_delay by a similar\n>> factor. And, to provide some daylight for people to reduce it even\n>> more, we ought to arrange for it to be specifiable in microseconds\n>> not milliseconds. There's no GUC_UNIT_US right now, but it's time.\n\nThat last point was later overruled in favor of keeping it measured in\nmsec to avoid breaking existing configuration files. Nonetheless,\nvacuum_cost_delay *is* an actual time to wait (conceptually at least),\nnot just part of a unitless ratio; and there seem to be good arguments\nin favor of letting people make it small.\n\nI take your point that really short sleeps are inefficient so far as the\nscheduling overhead goes. But on modern machines you probably have to get\ndown to a not-very-large number of microseconds before that's a big deal.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/28720.1552101086%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 09 Mar 2023 19:46:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I propose therefore that instead of increasing vacuum_cost_limit,\n> >> what we ought to be doing is reducing vacuum_cost_delay by a similar\n> >> factor. And, to provide some daylight for people to reduce it even\n> >> more, we ought to arrange for it to be specifiable in microseconds\n> >> not milliseconds. There's no GUC_UNIT_US right now, but it's time.\n>\n> That last point was later overruled in favor of keeping it measured in\n> msec to avoid breaking existing configuration files. Nonetheless,\n> vacuum_cost_delay *is* an actual time to wait (conceptually at least),\n> not just part of a unitless ratio; and there seem to be good arguments\n> in favor of letting people make it small.\n>\n> I take your point that really short sleeps are inefficient so far as the\n> scheduling overhead goes. But on modern machines you probably have to get\n> down to a not-very-large number of microseconds before that's a big deal.\n\nOK. One idea is to provide a WaitLatchUsec(), which is just some\ncross platform donkeywork that I think I know how to type in, and it\nwould have to round up on poll() and Windows builds. Then we could\neither also provide WaitEventSetResolution() that returns 1000 or 1\ndepending on availability of 1us waits so that we could round\nappropriately and then track residual, but beyond that let the user\nworry about inaccuracies and overheads (as mentioned in the\ndocumentation), or we could start consulting the clock and tracking\nour actual sleep time and true residual over time (maybe that's what\n\"closed-loop control\" means?).\n\n\n",
"msg_date": "Fri, 10 Mar 2023 14:13:14 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> OK. One idea is to provide a WaitLatchUsec(), which is just some\n> cross platform donkeywork that I think I know how to type in, and it\n> would have to round up on poll() and Windows builds. Then we could\n> either also provide WaitEventSetResolution() that returns 1000 or 1\n> depending on availability of 1us waits so that we could round\n> appropriately and then track residual, but beyond that let the user\n> worry about inaccuracies and overheads (as mentioned in the\n> documentation),\n\n... so we'd still need to have the residual-sleep-time logic?\n\n> or we could start consulting the clock and tracking\n> our actual sleep time and true residual over time (maybe that's what\n> \"closed-loop control\" means?).\n\nYeah, I was hand-waving about trying to measure our actual sleep times.\nOn reflection I doubt it's a great idea. It'll add overhead and there's\nstill a question of whether measurement noise would accumulate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Mar 2023 20:21:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 2:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > OK. One idea is to provide a WaitLatchUsec(), which is just some\n> > cross platform donkeywork that I think I know how to type in, and it\n> > would have to round up on poll() and Windows builds. Then we could\n> > either also provide WaitEventSetResolution() that returns 1000 or 1\n> > depending on availability of 1us waits so that we could round\n> > appropriately and then track residual, but beyond that let the user\n> > worry about inaccuracies and overheads (as mentioned in the\n> > documentation),\n>\n> ... so we'd still need to have the residual-sleep-time logic?\n\nAh, perhaps not. Considering that the historical behaviour on the\nmain affected platform (Windows) was already to round up to\nmilliseconds before we latchified this code anyway, and now a google\nsearch is telling me that the relevant timer might in fact be *super*\nlumpy, perhaps even to the point of 1/64th of a second[1] (maybe\nthat's a problem for a Windows hacker to look into some time; I really\nshould create a wiki page of known Windows problems in search of a\nhacker)... it now looks like sub-ms residual logic would be a bit\npointless after all.\n\nI'll go and see about usec latch waits. More soon.\n\n[1] https://randomascii.wordpress.com/2020/10/04/windows-timer-resolution-the-great-rule-change/\n\n\n",
"msg_date": "Fri, 10 Mar 2023 14:45:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 2:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'll go and see about usec latch waits. More soon.\n\nHere are some experimental patches along those lines. Seems good\nlocally, but I saw a random failure I don't understand on CI so\napparently I need to find a bug; at least this gives an idea of how\nthis might look. Unfortunately, the new interface on Linux turned out\nto be newer than I first realised: Linux 5.11+ (so RHEL 9, Debian\n12/Bookworm, Ubuntu 21.04/Hirsute Hippo), so unless we're OK with it\ntaking a couple more years to be more widely used, we'll need some\nfallback code. Perhaps something like 0004, which also shows the sort\nof thing that we might consider back-patching to 14 and 15 (next\nrevision I'll move that up the front and put it in back-patchable\nform). It's not exactly beautiful; maybe sharing code with recovery's\nlazy PM-exit detection could help. Of course, the new μs-based wait\nAPI could be used wherever we do timestamp-based waiting, for no\nparticular reason other than that it is the resolution of our\ntimestamps, so there is no need to bother rounding; I doubt anyone\nwould notice or care much about that, but it's a vote in favour of μs\nrather than the other obvious contender ns, which modern underlying\nkernel primitives are using.",
"msg_date": "Fri, 10 Mar 2023 18:58:29 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 6:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... Perhaps something like 0004, which also shows the sort\n> of thing that we might consider back-patching to 14 and 15 (next\n> revision I'll move that up the front and put it in back-patchable\n> form).\n\nI think this is the minimal back-patchable change. I propose to go\nahead and do that, and then to kick the ideas about latch API changes\ninto a new thread for the next commitfest.",
"msg_date": "Sat, 11 Mar 2023 11:39:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think this is the minimal back-patchable change. I propose to go\n> ahead and do that, and then to kick the ideas about latch API changes\n> into a new thread for the next commitfest.\n\nOK by me, but then again 4753ef37 wasn't my patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Mar 2023 17:49:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 1:05 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Mar 10, 2023 at 11:37 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > On Thu, Mar 09, 2023 at 05:27:08PM -0500, Tom Lane wrote:\n> > > Is it reasonable to assume that all modern platforms can time\n> > > millisecond delays accurately? Ten years ago I'd have suggested\n> > > truncating the delay to a multiple of 10msec and using this logic\n> > > to track the remainder, but maybe now that's unnecessary.\n> >\n> > If so, it might also be worth updating or removing this comment in\n> > pgsleep.c:\n> >\n> > * NOTE: although the delay is specified in microseconds, the effective\n> > * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n> > * the requested delay to be rounded up to the next resolution boundary.\n> >\n> > I've had doubts for some time about whether this is still accurate...\n\nUnfortunately I was triggered by this Unix archeology discussion, and\nwasted some time this weekend testing every Unix we target. I found 3\ngroups:\n\n1. OpenBSD, NetBSD: Like the comment says, kernel ticks still control\nsleep resolution. I measure an average time of ~20ms when I ask for\n1ms sleeps in a loop with select() or nanosleep(). I don't actually\nunderstand why it's not ~10ms because HZ is 100 on these systems, but\nI didn't look harder.\n\n2. AIX, Solaris, illumos: select() can sleep for 1ms accurately, but\nnot fractions of 1ms. If you use nanosleep() instead of select(),\nthen AIX joins the third group (huh, maybe it's just that its\nselect(us) calls poll(ms) under the covers?), but Solaris does not\n(maybe it's still tick-based, but HZ == 1000?).\n\n3. 
Linux, FreeBSD, macOS: sub-ms sleeps are quite accurate (via\nvarious system calls).\n\nI didn't test Windows but it sounds a lot like it is in group 1 if you\nuse WaitForMultipleObjects() or SleepEx(), as we do.\n\nYou can probably tune some of the above; for example FreeBSD can go\nback to the old way with kern.eventtimer.periodic=1 to get a thousand\ninterrupts per second (kern.hz) instead of programming a hardware\ntimer to get an interrupt at just the right time, and then 0.5ms sleep\nrequests get rounded to an average of 1ms, just like on Solaris. And\npower usage probably goes up.\n\nAs for what do do about it, I dunno, how about this?\n\n * NOTE: although the delay is specified in microseconds, the effective\n- * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n- * the requested delay to be rounded up to the next resolution boundary.\n+ * resolution is only 1/HZ on systems that use periodic kernel ticks to limit\n+ * sleeping. This may cause sleeps to be rounded up by as much as 1-20\n+ * milliseconds on old Unixen and Windows.\n\nAs for the following paragraph about the dangers of select() and\ninterrupts and restarts, I suspect it is describing the HP-UX\nbehaviour (a dropped platform), which I guess must have led to POSIX's\nreluctance to standardise that properly, but in any case all\nhypothetical concerns would disappear if we just used POSIX\n[clock_]nanosleep(), no? It has defined behaviour on signals, and it\nalso tells you the remaining time (if we cared, which we wouldn't).\n\n\n",
"msg_date": "Sun, 12 Mar 2023 16:52:40 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Sat, Mar 11, 2023 at 11:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > I think this is the minimal back-patchable change. I propose to go\n> > ahead and do that, and then to kick the ideas about latch API changes\n> > into a new thread for the next commitfest.\n>\n> OK by me, but then again 4753ef37 wasn't my patch.\n\nI'll wait another day to see if Stephen or anyone else who hasn't hit\nMonday yet wants to object.\n\nHere also are those other minor tweaks, for master only. I see now\nthat nanosleep() has already been proposed before:\n\nhttps://www.postgresql.org/message-id/flat/CABQrizfxpBLZT5mZeE0js5oCh1tqEWvcGF3vMRCv5P-RwUY5dQ%40mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/4902.1552349020%40sss.pgh.pa.us\n\nThere I see the question of whether it should loop on EINTR to keep\nwaiting the remaining time. Generally it seems like a job for\nsomething higher level to deal with interruption policy, and of course\nall the race condition and portability problems inherent with signals\nare fixed by using latches instead, so I don't think there really is a\ngood answer to that question -- if you loop, you break our programming\nrules by wilfully ignoring eg global barriers, but if you don't loop,\nit implies you're relying on the interrupt to cause you to do\nsomething and yet you might have missed it if it was delivered just\nbefore the syscall. At the time of the earlier thread, maybe it was\nmore acceptable as it could only delay cancel for that backend, but\nnow it might even delay arbitrary other backends, and neither answer\nto that question can fix that in a race-free way. Also, back then\nlatches had a SIGUSR1 handler on common systems, but now they don't,\nso (racy unreliable) latch responsiveness has decreased since then.\nSo I think we should just leave the interface as it is, and build\nbetter things and then eventually retire it. 
This general topic is\nalso currently being discussed at:\n\nhttps://www.postgresql.org/message-id/flat/20230209205929.GA720594%40nathanxps13\n\nI propose to go ahead and make this small improvement anyway because\nit'll surely be a while before we delete the last pg_usleep() call,\nand it's good to spring-clean old confusing commentary about signals\nand portability.",
"msg_date": "Mon, 13 Mar 2023 16:11:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "> * NOTE: although the delay is specified in microseconds, the effective\n> - * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n> - * the requested delay to be rounded up to the next resolution boundary.\n> + * resolution is only 1/HZ on systems that use periodic kernel ticks to wake\n> + * up. This may cause sleeps to be rounded up by 1-20 milliseconds on older\n> + * Unixen and Windows.\n\nnitpick: Could the 1/HZ versus 20 milliseconds discrepancy cause confusion?\nOtherwise, I think this is the right idea.\n\n> + * CAUTION: if interrupted by a signal, this function will return, but its\n> + * interface doesn't report that. It's not a good idea to use this\n> + * for long sleeps in the backend, because backends are expected to respond to\n> + * interrupts promptly. Better practice for long sleeps is to use WaitLatch()\n> + * with a timeout.\n\nI'm not sure this argument follows. If pg_usleep() returns if interrupted,\nthen why are we concerned about delayed responses to interrupts?\n\n> -\t\tdelay.tv_usec = microsec % 1000000L;\n> -\t\t(void) select(0, NULL, NULL, NULL, &delay);\n> +\t\tdelay.tv_nsec = (microsec % 1000000L) * 1000;\n> +\t\t(void) nanosleep(&delay, NULL);\n\nUsing nanosleep() seems reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 13 Mar 2023 16:10:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 12:10 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> > * NOTE: although the delay is specified in microseconds, the effective\n> > - * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n> > - * the requested delay to be rounded up to the next resolution boundary.\n> > + * resolution is only 1/HZ on systems that use periodic kernel ticks to wake\n> > + * up. This may cause sleeps to be rounded up by 1-20 milliseconds on older\n> > + * Unixen and Windows.\n>\n> nitpick: Could the 1/HZ versus 20 milliseconds discrepancy cause confusion?\n> Otherwise, I think this is the right idea.\n\nBetter words welcome; 1-20ms summarises the range I actually measured,\nand if reports are correct about Windows' HZ=64 (1/HZ = 15.625ms) then\nit neatly covers that too, so I don't feel too bad about not chasing\ndown the reason for that 10ms/20ms discrepancy; maybe I looked at the\nwrong HZ number (which you can change, anyway), I'm not too used to\nNetBSD... BTW they have a project plan to fix that\nhttps://wiki.netbsd.org/projects/project/tickless/\n\n> > + * CAUTION: if interrupted by a signal, this function will return, but its\n> > + * interface doesn't report that. It's not a good idea to use this\n> > + * for long sleeps in the backend, because backends are expected to respond to\n> > + * interrupts promptly. Better practice for long sleeps is to use WaitLatch()\n> > + * with a timeout.\n>\n> I'm not sure this argument follows. If pg_usleep() returns if interrupted,\n> then why are we concerned about delayed responses to interrupts?\n\nBecause you can't rely on it:\n\n1. Maybe the signal is delivered just before pg_usleep() begins, and\na handler sets some flag we would like to react to. Now pg_usleep()\nwill not be interrupted. That problem is solved by using latches\ninstead.\n2. 
Maybe the signal is one that is no longer handled by a handler at\nall; these days, latches use SIGURG, which pops out when you read a\nsignalfd or kqueue, so pg_usleep() will not wake up. That problem is\nsolved by using latches instead.\n\n(The word \"interrupt\" is a bit overloaded, which doesn't help with\nthis discussion.)\n\n> > - delay.tv_usec = microsec % 1000000L;\n> > - (void) select(0, NULL, NULL, NULL, &delay);\n> > + delay.tv_nsec = (microsec % 1000000L) * 1000;\n> > + (void) nanosleep(&delay, NULL);\n>\n> Using nanosleep() seems reasonable to me.\n\nThanks for looking!\n\n\n",
"msg_date": "Tue, 14 Mar 2023 15:38:45 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 03:38:45PM +1300, Thomas Munro wrote:\n> On Tue, Mar 14, 2023 at 12:10 PM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n>> > * NOTE: although the delay is specified in microseconds, the effective\n>> > - * resolution is only 1/HZ, or 10 milliseconds, on most Unixen. Expect\n>> > - * the requested delay to be rounded up to the next resolution boundary.\n>> > + * resolution is only 1/HZ on systems that use periodic kernel ticks to wake\n>> > + * up. This may cause sleeps to be rounded up by 1-20 milliseconds on older\n>> > + * Unixen and Windows.\n>>\n>> nitpick: Could the 1/HZ versus 20 milliseconds discrepancy cause confusion?\n>> Otherwise, I think this is the right idea.\n> \n> Better words welcome; 1-20ms summarises the range I actually measured,\n> and if reports are correct about Windows' HZ=64 (1/HZ = 15.625ms) then\n> it neatly covers that too, so I don't feel too bad about not chasing\n> down the reason for that 10ms/20ms discrepancy; maybe I looked at the\n> wrong HZ number (which you can change, anyway), I'm not too used to\n> NetBSD... BTW they have a project plan to fix that\n> https://wiki.netbsd.org/projects/project/tickless/\n\nHere is roughly what I had in mind:\n\n\tNOTE: Although the delay is specified in microseconds, older Unixen and\n\tWindows use periodic kernel ticks to wake up, which might increase the\n\tdelay time significantly. We've observed delay increases as large as\n\t20 milliseconds on supported platforms.\n\n>> > + * CAUTION: if interrupted by a signal, this function will return, but its\n>> > + * interface doesn't report that. It's not a good idea to use this\n>> > + * for long sleeps in the backend, because backends are expected to respond to\n>> > + * interrupts promptly. Better practice for long sleeps is to use WaitLatch()\n>> > + * with a timeout.\n>>\n>> I'm not sure this argument follows. 
If pg_usleep() returns if interrupted,\n>> then why are we concerned about delayed responses to interrupts?\n> \n> Because you can't rely on it:\n> \n> 1. Maybe the signal is delivered just before pg_usleep() begins, and\n> a handler sets some flag we would like to react to. Now pg_usleep()\n> will not be interrupted. That problem is solved by using latches\n> instead.\n> 2. Maybe the signal is one that is no longer handled by a handler at\n> all; these days, latches use SIGURG, which pops out when you read a\n> signalfd or kqueue, so pg_usleep() will not wake up. That problem is\n> solved by using latches instead.\n> \n> (The word \"interrupt\" is a bit overloaded, which doesn't help with\n> this discussion.)\n\nYeah, I think it would be clearer if \"interrupt\" was disambiguated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 14 Mar 2023 11:54:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:54 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Here is roughly what I had in mind:\n>\n> NOTE: Although the delay is specified in microseconds, older Unixen and\n> Windows use periodic kernel ticks to wake up, which might increase the\n> delay time significantly. We've observed delay increases as large as\n> 20 milliseconds on supported platforms.\n\nSold. And pushed.\n\nI couldn't let that 20ms != 1s/100 problem go, despite my claim that I\nwould, and now I see: NetBSD does have 10ms resolution, so everyone\ncan relax, arithmetic still works. It's just that it always or often\nadds on one extra tick, for some strange reason. So you can measure\n20ms, 30ms, ... but never as low as 10ms. *Shrug*. Your description\ncovered that nicely.\n\nhttps://marc.info/?l=netbsd-current-users&m=144832117108168&w=2\n\n> > (The word \"interrupt\" is a bit overloaded, which doesn't help with\n> > this discussion.)\n>\n> Yeah, I think it would be clearer if \"interrupt\" was disambiguated.\n\nOK, I rewrote it to avoid that terminology.\n\nOn small detail, after reading Tom's 2019 proposal to do this[1]: He\nmentioned SUSv2's ENOSYS error. I see that SUSv3 (POSIX.1-2001)\ndropped that. Systems that don't have the \"timers\" option simply\nshouldn't define the function, but we already require the \"timers\"\noption for clock_gettime(). And more practically, I know that all our\ntarget systems have it and it works.\n\nPushed.\n\n[1] https://www.postgresql.org/message-id/4902.1552349020@sss.pgh.pa.us\n\n\n",
"msg_date": "Wed, 15 Mar 2023 17:59:35 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sub-millisecond [autovacuum_]vacuum_cost_delay broken"
}
] |
[
{
"msg_contents": "Hi,\n\nDuring a recent code review, I was confused multiple times by the\ntoptxn member of ReorderBufferTXN, which is defined only for\nsub-transactions.\n\ne.g. txn->toptxn member == NULL means the txn is a top level txn.\ne.g. txn->toptxn member != NULL means the txn is not a top level txn\n\nIt makes sense if you squint and read it slowly enough, but IMO it's\ntoo easy to accidentally misinterpret the meaning when reading code\nthat uses this member.\n\n~\n\nSuch code can be made easier to read just by introducing some simple macros:\n\n#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n\n~\n\nPSA a small patch that does this.\n\n(Tests OK using make check-world)\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Fri, 10 Mar 2023 10:06:02 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 4:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> During a recent code review, I was confused multiple times by the\n> toptxn member of ReorderBufferTXN, which is defined only for\n> sub-transactions.\n>\n> e.g. txn->toptxn member == NULL means the txn is a top level txn.\n> e.g. txn->toptxn member != NULL means the txn is not a top level txn\n>\n> It makes sense if you squint and read it slowly enough, but IMO it's\n> too easy to accidentally misinterpret the meaning when reading code\n> that uses this member.\n>\n> ~\n>\n> Such code can be made easier to read just by introducing some simple macros:\n>\n> #define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> #define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> #define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n>\n> ~\n>\n> PSA a small patch that does this.\n>\n\nI also find it will make code easier to read. So, +1 to the idea. I'll\ndo the detailed review and test next week unless there are objections\nto the idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Mar 2023 17:00:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Fri, 10 Mar 2023 at 04:36, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> During a recent code review, I was confused multiple times by the\n> toptxn member of ReorderBufferTXN, which is defined only for\n> sub-transactions.\n>\n> e.g. txn->toptxn member == NULL means the txn is a top level txn.\n> e.g. txn->toptxn member != NULL means the txn is not a top level txn\n>\n> It makes sense if you squint and read it slowly enough, but IMO it's\n> too easy to accidentally misinterpret the meaning when reading code\n> that uses this member.\n>\n> ~\n>\n> Such code can be made easier to read just by introducing some simple macros:\n>\n> #define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> #define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> #define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n>\n> ~\n>\n> PSA a small patch that does this.\n>\n> (Tests OK using make check-world)\n>\n> Thoughts?\n\nFew comments:\n1) Can we move the macros along with the other macros present in this\nfile, just above this structure, similar to the macros added for\ntxn_flags:\n /* Toplevel transaction for this subxact (NULL for top-level). */\n+#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n+#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n+#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n\n2) The macro name can be changed to rbtxn_is_toptxn, rbtxn_is_subtxn\nand rbtxn_get_toptxn to keep it consistent with others:\n /* Toplevel transaction for this subxact (NULL for top-level). */\n+#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n+#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n+#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n\n3) We could add separate comments for each of the macros:\n /* Toplevel transaction for this subxact (NULL for top-level). 
*/\n+#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n+#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n+#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n\n4) We check if txn->toptxn is not null twice here both in if condition\nand in the assignment, we could retain the assignment operation as\nearlier to remove the 2nd check:\n- if (txn->toptxn)\n- txn = txn->toptxn;\n+ if (isa_subtxn(txn))\n+ txn = get_toptxn(txn);\n\nWe could avoid one check again by:\n+ if (isa_subtxn(txn))\n+ txn = txn->toptxn;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 13 Mar 2023 12:49:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "Thanks for the review!\n\nOn Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n...\n> Few comments:\n> 1) Can we move the macros along with the other macros present in this\n> file, just above this structure, similar to the macros added for\n> txn_flags:\n> /* Toplevel transaction for this subxact (NULL for top-level). */\n> +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n>\n> 2) The macro name can be changed to rbtxn_is_toptxn, rbtxn_is_subtxn\n> and rbtxn_get_toptxn to keep it consistent with others:\n> /* Toplevel transaction for this subxact (NULL for top-level). */\n> +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n>\n> 3) We could add separate comments for each of the macros:\n> /* Toplevel transaction for this subxact (NULL for top-level). */\n> +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n>\n\nAll the above are fixed as suggested.\n\n> 4) We check if txn->toptxn is not null twice here both in if condition\n> and in the assignment, we could retain the assignment operation as\n> earlier to remove the 2nd check:\n> - if (txn->toptxn)\n> - txn = txn->toptxn;\n> + if (isa_subtxn(txn))\n> + txn = get_toptxn(txn);\n>\n> We could avoid one check again by:\n> + if (isa_subtxn(txn))\n> + txn = txn->toptxn;\n>\n\nYeah, that is true, but I chose not to keep the original assignment in\nthis case mainly because then it is consistent with the other changed\ncode --- e.g. Every other direct member assignment/access of the\n'toptxn' member now hides behind the macros so I did not want this\nsingle place to be the odd one out. 
TBH, I don't think 1 extra check\nis of any significance, but it is not a problem to change like you\nsuggested if other people also want it done that way.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Tue, 14 Mar 2023 18:06:44 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 12:37, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for the review!\n>\n> On Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> ...\n> > Few comments:\n> > 1) Can we move the macros along with the other macros present in this\n> > file, just above this structure, similar to the macros added for\n> > txn_flags:\n> > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n> >\n> > 2) The macro name can be changed to rbtxn_is_toptxn, rbtxn_is_subtxn\n> > and rbtxn_get_toptxn to keep it consistent with others:\n> > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n> >\n> > 3) We could add separate comments for each of the macros:\n> > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n> >\n>\n> All the above are fixed as suggested.\n>\n> > 4) We check if txn->toptxn is not null twice here both in if condition\n> > and in the assignment, we could retain the assignment operation as\n> > earlier to remove the 2nd check:\n> > - if (txn->toptxn)\n> > - txn = txn->toptxn;\n> > + if (isa_subtxn(txn))\n> > + txn = get_toptxn(txn);\n> >\n> > We could avoid one check again by:\n> > + if (isa_subtxn(txn))\n> > + txn = txn->toptxn;\n> >\n>\n> Yeah, that is true, but I chose not to keep the original assignment in\n> this case mainly because then it is consistent with the other changed\n> code --- e.g. 
Every other direct member assignment/access of the\n> 'toptxn' member now hides behind the macros so I did not want this\n> single place to be the odd one out. TBH, I don't think 1 extra check\n> is of any significance, but it is not a problem to change like you\n> suggested if other people also want it done that way.\n\nThe same issue exists here too:\n1)\n- if (toptxn != NULL && !rbtxn_has_catalog_changes(toptxn))\n+ if (rbtxn_is_subtxn(txn))\n {\n- toptxn->txn_flags |= RBTXN_HAS_CATALOG_CHANGES;\n- dclist_push_tail(&rb->catchange_txns, &toptxn->catchange_node);\n+ ReorderBufferTXN *toptxn = rbtxn_get_toptxn(txn);\n\n2)\n - if (change->txn->toptxn)\n- topxid = change->txn->toptxn->xid;\n+ if (rbtxn_is_subtxn(change->txn))\n+ topxid = rbtxn_get_toptxn(change->txn)->xid;\n\nIf you plan to fix, bothe the above also should be handled.\n\n3) The comment on top of rbtxn_get_toptxn could be kept similar in\nboth the below places. I know it is not because of your change, but\nsince it is a very small change probably we could include it along\nwith this patch:\n@@ -717,10 +717,7 @@ ReorderBufferProcessPartialChange(ReorderBuffer\n*rb, ReorderBufferTXN *txn,\n return;\n\n /* Get the top transaction. */\n- if (txn->toptxn != NULL)\n- toptxn = txn->toptxn;\n- else\n- toptxn = txn;\n+ toptxn = rbtxn_get_toptxn(txn);\n\n /*\n * Indicate a partial change for toast inserts. The change will be\n@@ -812,10 +809,7 @@ ReorderBufferQueueChange(ReorderBuffer *rb,\nTransactionId xid, XLogRecPtr lsn,\n ReorderBufferTXN *toptxn;\n\n /* get the top transaction */\n- if (txn->toptxn != NULL)\n- toptxn = txn->toptxn;\n- else\n- toptxn = txn;\n+ toptxn = rbtxn_get_toptxn(txn);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 14 Mar 2023 17:02:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 12:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Thanks for the review!\n>\n> On Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> ...\n>\n> > 4) We check if txn->toptxn is not null twice here both in if condition\n> > and in the assignment, we could retain the assignment operation as\n> > earlier to remove the 2nd check:\n> > - if (txn->toptxn)\n> > - txn = txn->toptxn;\n> > + if (isa_subtxn(txn))\n> > + txn = get_toptxn(txn);\n> >\n> > We could avoid one check again by:\n> > + if (isa_subtxn(txn))\n> > + txn = txn->toptxn;\n> >\n>\n> Yeah, that is true, but I chose not to keep the original assignment in\n> this case mainly because then it is consistent with the other changed\n> code --- e.g. Every other direct member assignment/access of the\n> 'toptxn' member now hides behind the macros so I did not want this\n> single place to be the odd one out. TBH, I don't think 1 extra check\n> is of any significance, but it is not a problem to change like you\n> suggested if other people also want it done that way.\n>\n\nCan't we directly use rbtxn_get_toptxn() for this case? I think that\nway code will look neat. I see that it is not exactly matching the\nexisting check so you might be worried but I feel the new code will\nachieve the same purpose and will be easy to follow.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 17:13:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 5:03 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 14 Mar 2023 at 12:37, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n>\n> The same issue exists here too:\n> 1)\n> - if (toptxn != NULL && !rbtxn_has_catalog_changes(toptxn))\n> + if (rbtxn_is_subtxn(txn))\n> {\n> - toptxn->txn_flags |= RBTXN_HAS_CATALOG_CHANGES;\n> - dclist_push_tail(&rb->catchange_txns, &toptxn->catchange_node);\n> + ReorderBufferTXN *toptxn = rbtxn_get_toptxn(txn);\n>\n> 2)\n> - if (change->txn->toptxn)\n> - topxid = change->txn->toptxn->xid;\n> + if (rbtxn_is_subtxn(change->txn))\n> + topxid = rbtxn_get_toptxn(change->txn)->xid;\n>\n> If you plan to fix, bothe the above also should be handled.\n>\n\nI don't know if it would be any better to change the above two as\ncompared to what the proposed patch has.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 17:16:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 14, 2023 at 12:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Thanks for the review!\n> >\n> > On Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> > ...\n> >\n> > > 4) We check if txn->toptxn is not null twice here both in if condition\n> > > and in the assignment, we could retain the assignment operation as\n> > > earlier to remove the 2nd check:\n> > > - if (txn->toptxn)\n> > > - txn = txn->toptxn;\n> > > + if (isa_subtxn(txn))\n> > > + txn = get_toptxn(txn);\n> > >\n> > > We could avoid one check again by:\n> > > + if (isa_subtxn(txn))\n> > > + txn = txn->toptxn;\n> > >\n> >\n> > Yeah, that is true, but I chose not to keep the original assignment in\n> > this case mainly because then it is consistent with the other changed\n> > code --- e.g. Every other direct member assignment/access of the\n> > 'toptxn' member now hides behind the macros so I did not want this\n> > single place to be the odd one out. TBH, I don't think 1 extra check\n> > is of any significance, but it is not a problem to change like you\n> > suggested if other people also want it done that way.\n> >\n>\n> Can't we directly use rbtxn_get_toptxn() for this case? I think that\n> way code will look neat. I see that it is not exactly matching the\n> existing check so you might be worried but I feel the new code will\n> achieve the same purpose and will be easy to follow.\n>\n\nOK. Done as suggested.\n\nPSA v3.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 15 Mar 2023 10:54:26 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 10:33 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 14 Mar 2023 at 12:37, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Thanks for the review!\n> >\n> > On Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> > ...\n> > > Few comments:\n> > > 1) Can we move the macros along with the other macros present in this\n> > > file, just above this structure, similar to the macros added for\n> > > txn_flags:\n> > > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n> > >\n> > > 2) The macro name can be changed to rbtxn_is_toptxn, rbtxn_is_subtxn\n> > > and rbtxn_get_toptxn to keep it consistent with others:\n> > > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? rbtxn->toptxn : rbtxn)\n> > >\n> > > 3) We could add separate comments for each of the macros:\n> > > /* Toplevel transaction for this subxact (NULL for top-level). */\n> > > +#define isa_toptxn(rbtxn) (rbtxn->toptxn == NULL)\n> > > +#define isa_subtxn(rbtxn) (rbtxn->toptxn != NULL)\n> > > +#define get_toptxn(rbtxn) (isa_subtxn(rbtxn) ? 
rbtxn->toptxn : rbtxn)\n> > >\n> >\n> > All the above are fixed as suggested.\n> >\n> > > 4) We check if txn->toptxn is not null twice here both in if condition\n> > > and in the assignment, we could retain the assignment operation as\n> > > earlier to remove the 2nd check:\n> > > - if (txn->toptxn)\n> > > - txn = txn->toptxn;\n> > > + if (isa_subtxn(txn))\n> > > + txn = get_toptxn(txn);\n> > >\n> > > We could avoid one check again by:\n> > > + if (isa_subtxn(txn))\n> > > + txn = txn->toptxn;\n> > >\n> >\n> > Yeah, that is true, but I chose not to keep the original assignment in\n> > this case mainly because then it is consistent with the other changed\n> > code --- e.g. Every other direct member assignment/access of the\n> > 'toptxn' member now hides behind the macros so I did not want this\n> > single place to be the odd one out. TBH, I don't think 1 extra check\n> > is of any significance, but it is not a problem to change like you\n> > suggested if other people also want it done that way.\n>\n> The same issue exists here too:\n> 1)\n> - if (toptxn != NULL && !rbtxn_has_catalog_changes(toptxn))\n> + if (rbtxn_is_subtxn(txn))\n> {\n> - toptxn->txn_flags |= RBTXN_HAS_CATALOG_CHANGES;\n> - dclist_push_tail(&rb->catchange_txns, &toptxn->catchange_node);\n> + ReorderBufferTXN *toptxn = rbtxn_get_toptxn(txn);\n>\n> 2)\n> - if (change->txn->toptxn)\n> - topxid = change->txn->toptxn->xid;\n> + if (rbtxn_is_subtxn(change->txn))\n> + topxid = rbtxn_get_toptxn(change->txn)->xid;\n>\n> If you plan to fix, bothe the above also should be handled.\n\nOK, noted. Anyway, for now, I preferred the 'toptxn' member to be\nconsistently hidden in the code so I don't plan to remove those\nmacros.\n\nAlso, please see Amit's reply [1] to your suggestion.\n\n>\n> 3) The comment on top of rbtxn_get_toptxn could be kept similar in\n> both the below places. 
I know it is not because of your change, but\n> since it is a very small change probably we could include it along\n> with this patch:\n> @@ -717,10 +717,7 @@ ReorderBufferProcessPartialChange(ReorderBuffer\n> *rb, ReorderBufferTXN *txn,\n> return;\n>\n> /* Get the top transaction. */\n> - if (txn->toptxn != NULL)\n> - toptxn = txn->toptxn;\n> - else\n> - toptxn = txn;\n> + toptxn = rbtxn_get_toptxn(txn);\n>\n> /*\n> * Indicate a partial change for toast inserts. The change will be\n> @@ -812,10 +809,7 @@ ReorderBufferQueueChange(ReorderBuffer *rb,\n> TransactionId xid, XLogRecPtr lsn,\n> ReorderBufferTXN *toptxn;\n>\n> /* get the top transaction */\n> - if (txn->toptxn != NULL)\n> - toptxn = txn->toptxn;\n> - else\n> - toptxn = txn;\n> + toptxn = rbtxn_get_toptxn(txn);\n>\n\nIMO the comment (\"/* get the top transaction */\") was not really\nsaying anything useful that is not already obvious from the macro name\n(\"rbtxn_get_toptxn\"). So I've removed it entirely in your 2nd case.\nThis change is consistent with other parts of the patch where the\ntoptxn is just assigned in the declaration.\n\nPSA v3. [2]\n\n------\n[1] Amit reply to your suggestion -\nhttps://www.postgresql.org/message-id/CAA4eK1%2BoqfUSC3vpu6bJzgfnSmu_yaeoLS%3DRb3416GuS5iRP1Q%40mail.gmail.com\n[2] v3 - https://www.postgresql.org/message-id/CAHut%2BPtrD4xU4OPUB64ZK%2BDPDhfKn3zph%3DnDpEWUFFzUvMKo2w%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 15 Mar 2023 11:01:10 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 15, 2023 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Mar 14, 2023 at 10:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 14, 2023 at 12:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Thanks for the review!\n> > >\n> > > On Mon, Mar 13, 2023 at 6:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > ...\n> > >\n> > > > 4) We check if txn->toptxn is not null twice here both in if condition\n> > > > and in the assignment, we could retain the assignment operation as\n> > > > earlier to remove the 2nd check:\n> > > > - if (txn->toptxn)\n> > > > - txn = txn->toptxn;\n> > > > + if (isa_subtxn(txn))\n> > > > + txn = get_toptxn(txn);\n> > > >\n> > > > We could avoid one check again by:\n> > > > + if (isa_subtxn(txn))\n> > > > + txn = txn->toptxn;\n> > > >\n> > >\n> > > Yeah, that is true, but I chose not to keep the original assignment in\n> > > this case mainly because then it is consistent with the other changed\n> > > code --- e.g. Every other direct member assignment/access of the\n> > > 'toptxn' member now hides behind the macros so I did not want this\n> > > single place to be the odd one out. TBH, I don't think 1 extra check\n> > > is of any significance, but it is not a problem to change like you\n> > > suggested if other people also want it done that way.\n> > >\n> >\n> > Can't we directly use rbtxn_get_toptxn() for this case? I think that\n> > way code will look neat. I see that it is not exactly matching the\n> > existing check so you might be worried but I feel the new code will\n> > achieve the same purpose and will be easy to follow.\n> >\n>\n> OK. Done as suggested.\n>\n\n+1 to the idea. 
Here are some minor comments:\n\n@@ -1667,7 +1658,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn, bool txn_prep\n * about the toplevel xact (we send the XID in all messages), but we never\n * stream XIDs of empty subxacts.\n */\n- if ((!txn_prepared) && ((!txn->toptxn) || (txn->nentries_mem != 0)))\n+ if ((!txn_prepared) && (rbtxn_is_toptxn(txn) || (txn->nentries_mem != 0)))\n txn->txn_flags |= RBTXN_IS_STREAMED;\n\nProbably the following comment of the above lines also needs to be updated?\n\n * The toplevel transaction, identified by (toptxn==NULL), is marked as\n * streamed always,\n\n---\n+/* Is this a top-level transaction? */\n+#define rbtxn_is_toptxn(txn)\\\n+(\\\n+ (txn)->toptxn == NULL\\\n+)\n+\n+/* Is this a subtransaction? */\n+#define rbtxn_is_subtxn(txn)\\\n+(\\\n+ (txn)->toptxn != NULL\\\n+)\n+\n+/* Get the top-level transaction of this (sub)transaction. */\n+#define rbtxn_get_toptxn(txn)\\\n+(\\\n+ rbtxn_is_subtxn(txn) ? (txn)->toptxn : (txn)\\\n+)\n\nWe need a whitespace before backslashes.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 15 Mar 2023 14:54:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 4:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n...\n> +1 to the idea. Here are some minor comments:\n>\n> @@ -1667,7 +1658,7 @@ ReorderBufferTruncateTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn, bool txn_prep\n> * about the toplevel xact (we send the XID in all messages), but we never\n> * stream XIDs of empty subxacts.\n> */\n> - if ((!txn_prepared) && ((!txn->toptxn) || (txn->nentries_mem != 0)))\n> + if ((!txn_prepared) && (rbtxn_is_toptxn(txn) || (txn->nentries_mem != 0)))\n> txn->txn_flags |= RBTXN_IS_STREAMED;\n>\n> Probably the following comment of the above lines also needs to be updated?\n>\n> * The toplevel transaction, identified by (toptxn==NULL), is marked as\n> * streamed always,\n>\n> ---\n> +/* Is this a top-level transaction? */\n> +#define rbtxn_is_toptxn(txn)\\\n> +(\\\n> + (txn)->toptxn == NULL\\\n> +)\n> +\n> +/* Is this a subtransaction? */\n> +#define rbtxn_is_subtxn(txn)\\\n> +(\\\n> + (txn)->toptxn != NULL\\\n> +)\n> +\n> +/* Get the top-level transaction of this (sub)transaction. */\n> +#define rbtxn_get_toptxn(txn)\\\n> +(\\\n> + rbtxn_is_subtxn(txn) ? (txn)->toptxn : (txn)\\\n> +)\n>\n> We need a whitespace before backslashes.\n>\n\nThanks for your interest in my patch.\n\nPSA v4 which addresses both of your review comments.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Thu, 16 Mar 2023 12:49:34 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 7:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA v4 which addresses both of your review comments.\n\nLooks like a reasonable change to me.\n\nA nitpick: how about using rbtxn_get_toptxn instead of an explicit\nvariable toptxn for single use?\n\n1.\nChange\n ReorderBufferTXN *toptxn = rbtxn_get_toptxn(txn);\n TestDecodingTxnData *txndata = toptxn->output_plugin_private;\n\nTo\n TestDecodingTxnData *txndata = rbtxn_get_toptxn(txn)->output_plugin_private;\n\n2.\nChange\n ReorderBufferTXN *toptxn = rbtxn_get_toptxn(txn);\n toptxn->txn_flags |= RBTXN_HAS_STREAMABLE_CHANGE;\n\nTo\n rbtxn_get_toptxn(txn)->txn_flags |= RBTXN_HAS_STREAMABLE_CHANGE;\n\n3.\nChange\n /*\n * Update the total size in top level as well. This is later used to\n * compute the decoding stats.\n */\n toptxn = rbtxn_get_toptxn(txn);\n\n if (addition)\n {\n txn->size += sz;\n rb->size += sz;\n\n /* Update the total size in the top transaction. */\n toptxn->total_size += sz;\n }\n else\n {\n Assert((rb->size >= sz) && (txn->size >= sz));\n txn->size -= sz;\n rb->size -= sz;\n\n /* Update the total size in the top transaction. */\n toptxn->total_size -= sz;\n }\n\nTo\n\n /*\n * Update the total size in top level as well. This is later used to\n * compute the decoding stats.\n */\n if (addition)\n {\n txn->size += sz;\n rb->size += sz;\n rbtxn_get_toptxn(txn)->total_size += sz;\n }\n else\n {\n Assert((rb->size >= sz) && (txn->size >= sz));\n txn->size -= sz;\n rb->size -= sz;\n rbtxn_get_toptxn(txn)->total_size -= sz;\n }\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:40:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 10:40 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Mar 16, 2023 at 7:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA v4 which addresses both of your review comments.\n>\n> Looks like a reasonable change to me.\n>\n> A nitpick: how about using rbtxn_get_toptxn instead of an explicit\n> variable toptxn for single use?\n>\n\nI find all three suggestions are similar. Personally, I think the\ncurrent code looks better. The v4 patch LGTM and I am planning to\ncommit it unless there are more comments or people find your\nsuggestions as an improvement.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 10:50:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 7:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA v4 which addresses both of your review comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 11:37:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "The build-farm was OK for the last 18hrs after this push, except there\nwas one error on mamba [1] in test-decoding-check.\n\nThis patch did change the test_decoding.c file, so it seems an\nunlikely coincidence, but OTOH the change was very small and I don't\nsee yet how it could have caused a problem here but nowhere else.\n\nAlthough, mamba has since passed again since that failure.\n\nAny thoughts?\n\n------\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-17%2005%3A36%3A10\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Sat, 18 Mar 2023 08:47:24 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 8:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> The build-farm was OK for the last 18hrs after this push, except there\n> was one error on mamba [1] in test-decoding-check.\n>\n> This patch did change the test_decoding.c file, so it seems an\n> unlikely coincidence, but OTOH the change was very small and I don't\n> see yet how it could have caused a problem here but nowhere else.\n>\n> Although, mamba has since passed again since that failure.\n>\n> Any thoughts?\n>\n> ------\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mamba&dt=2023-03-17%2005%3A36%3A10\n>\n\nSubsequent testing with this \"toptxn\" patch reverted [1] was able to\nreproduce the same error, so it seems this \"toptxn\" patch is unrelated\nto the reported build-farm failure.\n\n[1] https://www.postgresql.org/message-id/CAHut%2BPvVrjwJm_9ZqnXJk4x9k8dN0dYrV%2BT5_Rd30BSneDhv1A%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 20 Mar 2023 17:26:16 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add macros for ReorderBufferTXN toptxn"
}
]
[
{
"msg_contents": "I can use relation struct to get all attributes' typeoid, so which funcion I can use\r\nto get the real type.\r\n\r\n\r\njacktby@gmail.com\r\n\n\n\nI can use relation struct to get all attributes' typeoid, so which funcion I can useto get the real type.\njacktby@gmail.com",
"msg_date": "Fri, 10 Mar 2023 13:33:25 +0800",
"msg_from": "\"jacktby@gmail.com\" <jacktby@gmail.com>",
"msg_from_op": true,
"msg_subject": "How to get the real type use oid in internal codes?"
},
{
"msg_contents": "On 10.03.23 06:33, jacktby@gmail.com wrote:\n> I can use relation struct to get all attributes' typeoid, so which \n> funcion I can use\n> to get the real type.\n\nDepends on what you mean by \"real\", but perhaps the format_type* family \nof functions would help you.\n\n\n\n",
"msg_date": "Fri, 10 Mar 2023 12:18:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: How to get the real type use oid in internal codes?"
}
]
[
{
"msg_contents": "Hi all,\n\nThis is a follow-up of the point I have made a few weeks ago on this\nthread of pgsql-bugs about $subject:\nhttps://www.postgresql.org/message-id/Y/Q/17rpYS7YGbIt@paquier.xyz\nhttps://www.postgresql.org/message-id/Y/v0c+3W89NBT/if@paquier.xyz\n\nHere is a short summary of what I think is incorrect, and what I'd\nlike to do to improve things moving forward, this pointing to a\nsimple solution..\n\nWhile looking at the so-said thread, I have dug into the recovery code\nto see what looks like an incorrect assumption behind the two boolean\nflags named ArchiveRecoveryRequested and InArchiveRecovery that we\nhave in xlogrecovery.c to control the behavior of archive recovery in\nthe startup process. For information, as of HEAD, these two are\ndescribed as follows:\n/*\n * When ArchiveRecoveryRequested is set, archive recovery was requested,\n * ie. signal files were present. When InArchiveRecovery is set, we are\n * currently recovering using offline XLOG archives. These variables are only\n * valid in the startup process.\n *\n * When ArchiveRecoveryRequested is true, but InArchiveRecovery is false, we're\n * currently performing crash recovery using only XLOG files in pg_wal, but\n * will switch to using offline XLOG archives as soon as we reach the end of\n * WAL in pg_wal.\n */\nbool ArchiveRecoveryRequested = false;\nbool InArchiveRecovery = false;\n\nWhen you read this text alone, its assumptions are simple. When the\nstartup process finds a recovery.signal or a standby.signal, we switch\nArchiveRecoveryRequested to true. If there is a standby.signal,\nInArchiveRecovery would be immediately set to true before beginning\nthe redo loop. 
If we begin redo with a recovery.signal,\nArchiveRecoveryRequested = true and InArchiveRecovery = false, crash\nrecovery happens first, consuming all the WAL in pg_wal/, then we'd\nmove on with archive recovery.\n\nNow comes the problem of the other thread, which is what happens when\nyou use a backup_label *without* a recovery.signal or a\nstandby.signal. In this case, as currently coded, it is possible to\nenforce ArchiveRecoveryRequested = false and later InArchiveRecovery =\ntrue. Not setting ArchiveRecoveryRequested has a couple of undesired\neffect. First, this skips some initialization steps that may be\nneeded at a later point in recovery. The thread quoted above has\nreported one aspect of that: we miss some hot-standby related\nintialization that can reflect if replaying enough WAL that a restart\npoint could happen. Depending on the amount of data copied into\npg_wal/ before starting a node with only a backup_label it may also be\npossible that a consistent point has been reached, where restart\npoints would be legit. A second Kiss Cool effect (understands who\ncan), is that we miss the initialization of the recoveryWakeupLatch.\nA third effect is that some code paths can use GUC values related to\nrecovery without ArchiveRecoveryRequested being around, one example\nseems to be hot_standby whose default is true.\n\nIt is worth noting the end of FinishWalRecovery(), that includes this\npart:\n if (ArchiveRecoveryRequested)\n {\n /*\n * We are no longer in archive recovery state.\n *\n * We are now done reading the old WAL. Turn off archive fetching if\n * it was active.\n */\n Assert(InArchiveRecovery);\n InArchiveRecovery = false;\n\nI have been pondering for a few weeks now about what kind of\ndefinition would suit to a cluster having a backup_label file without\na signal file added, which is documented as required by the docs in\nthe HA section as well as pg_rewind. 
It is true that there could be a\npoint to allow such a configuration so as a node recovers without a\nTLI jump, but I cannot find appealing this case, as well, as a node\ncould just happily overwrite WAL segments in the archives on an\nexisting timeline, potentially corruption other nodes writing on the\nsame TLI. There are a few other recovery scenarios where one copies\ndirectly WAL segments into pg_wal/ that can lead to a lot of weird\ninconsistencies as well, one being the report of the thread of\npgsql-hackers.\n\nAt the end, I'd like to think that we should just require\na recovery.signal or a standby.signal if we have a backup_label file,\nand even enforce this rule at the end of recovery for some sanity\nchecks. I don't think that we can just enforce\nArchiveRecoveryRequested in this path, either, as a backup_label would\nbe renamed to .old once the control file knows up to which LSN it\nneeds to replay to reach consistency and if an end-of-backup record is\nrequired. That's not something that can be reasonably backpatched, as\nit could disturb some recovery workflows, even if these are kind of in\na dangerous spot, IMO, so I would like to propose that only on HEAD\nfor 16~ because the recovery code has never really considered this\ncombination of ArchiveRecoveryRequested and InArchiveRecovery.\n\nWhile digging into that, I have found one TAP test of pg_basebackup\nthat was doing recovery with just a backup_label file, with a\nrestore_command already set. A second case was in pg_rewind, were we\nhave a node without standby.signal, still it uses a primary_conninfo.\n\nAttached is a patch on the lines of what I am thinking about. 
This\nreworks a bit some of the actions at the beginning of the startup\nprocess:\n- Report the set of LOGs showing the state of the node after reading\nthe backup_label.\n- Enforce a rule in ShutdownWalRecovery() and document the\nrestriction.\n- Add a check with the signal files after finding a backup_label\nfile.\n- The validation checks on the recovery parameters are applied (aka\nrestore_command required with recovery.signal, or a primary_conninfo\nrequired on standby for streaming, etc.).\n\nMy apologies for the long message, but this deserves some attention,\nIMHO.\n\nSo, any thoughts?\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 15:59:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Requiring recovery.signal or standby.signal when recovering with a\n backup_label"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 03:59:04PM +0900, Michael Paquier wrote:\n> My apologies for the long message, but this deserves some attention,\n> IMHO.\n\nNote: A CF entry has been added as of [1], and I have added an item in\nthe list of live issues on the open item page for 16.\n\n[1]: https://commitfest.postgresql.org/43/4244/\n[2]: https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Live_issues\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 10:06:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "I believe before users can make a backup using pg_basebackup and then \nstart the backup as an independent Primary server for whatever reasons. \nNow, if this is still allowed, then users need to be aware that the \nbackup_label must be manually deleted, otherwise, the backup won't be \nable to start as a Primary.\n\nThe current message below doesn't provide such a hint.\n\n+\t\tif (!ArchiveRecoveryRequested)\n+\t\t\tereport(FATAL,\n+\t\t\t\t\t(errmsg(\"could not find recovery.signal or standby.signal when recovering with backup_label\"),\n+\t\t\t\t\t errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" and add required recovery options.\",\n+\t\t\t\t\t\t\t DataDir, DataDir)));\n\nOn 2023-03-12 6:06 p.m., Michael Paquier wrote:\n> On Fri, Mar 10, 2023 at 03:59:04PM +0900, Michael Paquier wrote:\n>> My apologies for the long message, but this deserves some attention,\n>> IMHO.\n> Note: A CF entry has been added as of [1], and I have added an item in\n> the list of live issues on the open item page for 16.\n>\n> [1]:https://commitfest.postgresql.org/43/4244/\n> [2]:https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Live_issues\n> --\n> Michael\n\nBest regards,\n\nDavid\n\n\n\n\n\n\nI believe before users can make a backup using pg_basebackup and\n then start the backup as an independent Primary server for\n whatever reasons. Now, if this is still allowed, then users need\n to be aware that the backup_label must be manually deleted,\n otherwise, the backup won't be able to start as a Primary.\nThe current message below doesn't provide such a hint. 
\n\n+\t\tif (!ArchiveRecoveryRequested)\n+\t\t\tereport(FATAL,\n+\t\t\t\t\t(errmsg(\"could not find recovery.signal or standby.signal when recovering with backup_label\"),\n+\t\t\t\t\t errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" and add required recovery options.\",\n+\t\t\t\t\t\t\t DataDir, DataDir)));\nOn 2023-03-12 6:06 p.m., Michael\n Paquier wrote:\n\n\nOn Fri, Mar 10, 2023 at 03:59:04PM +0900, Michael Paquier wrote:\n\n\nMy apologies for the long message, but this deserves some attention,\nIMHO.\n\n\n\nNote: A CF entry has been added as of [1], and I have added an item in\nthe list of live issues on the open item page for 16.\n\n[1]: https://commitfest.postgresql.org/43/4244/\n[2]: https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items#Live_issues\n--\nMichael\n\n\nBest regards,\nDavid",
"msg_date": "Fri, 14 Jul 2023 13:32:49 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 01:32:49PM -0700, David Zhang wrote:\n> I believe before users can make a backup using pg_basebackup and then start\n> the backup as an independent Primary server for whatever reasons. Now, if\n> this is still allowed, then users need to be aware that the backup_label\n> must be manually deleted, otherwise, the backup won't be able to start as a\n> Primary.\n\nDelete a backup_label from a fresh base backup can easily lead to data\ncorruption, as the startup process would pick up as LSN to start\nrecovery from the control file rather than the backup_label file.\nThis would happen if a checkpoint updates the redo LSN in the control\nfile while a backup happens and the control file is copied after the\ncheckpoint, for instance. If one wishes to deploy a new primary from\na base backup, recovery.signal is the way to go, making sure that the\nnew primary is bumped into a new timeline once recovery finishes, on\ntop of making sure that the startup process starts recovery from a\nposition where the cluster would be able to achieve a consistent\nstate.\n\n> The current message below doesn't provide such a hint.\n> \n> +\t\tif (!ArchiveRecoveryRequested)\n> +\t\t\tereport(FATAL,\n> +\t\t\t\t\t(errmsg(\"could not find\n> recovery.signal or standby.signal when recovering with\n> backup_label\"), \n> +\t\t\t\t\t errhint(\"If you are restoring\n> from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\"\n> and add required recovery options.\",\n> +\t\t\t\t\t\t\t DataDir,\n> DataDir)));\n\nHow would you rewrite that? I am not sure how many details we want to\nput here in terms of differences between recovery.signal and\nstandby.signal, still we surely should mention these are the two\npossible choices.\n--\nMichael",
"msg_date": "Mon, 17 Jul 2023 10:27:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On 2023-07-16 6:27 p.m., Michael Paquier wrote:\n>\n> Delete a backup_label from a fresh base backup can easily lead to data\n> corruption, as the startup process would pick up as LSN to start\n> recovery from the control file rather than the backup_label file.\n> This would happen if a checkpoint updates the redo LSN in the control\n> file while a backup happens and the control file is copied after the\n> checkpoint, for instance. If one wishes to deploy a new primary from\n> a base backup, recovery.signal is the way to go, making sure that the\n> new primary is bumped into a new timeline once recovery finishes, on\n> top of making sure that the startup process starts recovery from a\n> position where the cluster would be able to achieve a consistent\n> state.\nThanks a lot for sharing this information.\n>\n> How would you rewrite that? I am not sure how many details we want to\n> put here in terms of differences between recovery.signal and\n> standby.signal, still we surely should mention these are the two\n> possible choices.\n\nHonestly, I can't convince myself to mention the backup_label here too. 
\nBut, I can share some information regarding my testing of the patch and \nthe corresponding results.\n\nTo assess the impact of the patch, I executed the following commands for \nbefore and after,\n\npg_basebackup -h localhost -p 5432 -U david -D pg_backup1\n\npg_ctl -D pg_backup1 -l /tmp/logfile start\n\nBefore the patch, there were no issues encountered when starting an \nindependent Primary server.\n\n\nHowever, after applying the patch, I observed the following behavior \nwhen starting from the base backup:\n\n1) simply start server from a base backup\n\nFATAL: could not find recovery.signal or standby.signal when recovering \nwith backup_label\n\nHINT: If you are restoring from a backup, touch \n\"/media/david/disk1/pg_backup1/recovery.signal\" or \n\"/media/david/disk1/pg_backup1/standby.signal\" and add required recovery \noptions.\n\n2) touch a recovery.signal file and then try to start the server, the \nfollowing error was encountered:\n\nFATAL: must specify restore_command when standby mode is not enabled\n\n3) touch a standby.signal file, then the server successfully started, \nhowever, it operates in standby mode, whereas the intended behavior was \nfor it to function as a primary server.\n\n\nBest regards,\n\nDavid\n\n\n\n\n\n\n",
"msg_date": "Wed, 19 Jul 2023 11:21:17 -0700",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 11:21:17AM -0700, David Zhang wrote:\n> 1) simply start server from a base backup\n> \n> FATAL: could not find recovery.signal or standby.signal when recovering\n> with backup_label\n> \n> HINT: If you are restoring from a backup, touch\n> \"/media/david/disk1/pg_backup1/recovery.signal\" or\n> \"/media/david/disk1/pg_backup1/standby.signal\" and add required recovery\n> options.\n\nNote the difference when --write-recovery-conf is specified, where a\nstandby.conf is created with a primary_conninfo in\npostgresql.auto.conf. So, yes, that's expected by default with the\npatch.\n\n> 2) touch a recovery.signal file and then try to start the server, the\n> following error was encountered:\n> \n> FATAL: must specify restore_command when standby mode is not enabled\n\nYes, that's also something expected in the scope of the v1 posted.\nThe idea behind this restriction is that specifying recovery.signal is\nequivalent to asking for archive recovery, but not specifying\nrestore_command is equivalent to not provide any options to be able to\nrecover. See validateRecoveryParameters() and note that this\nrestriction exists since the beginning of times, introduced in commit\n66ec2db. I tend to agree that there is something to be said about\nself-contained backups taken from pg_basebackup, though, as these\nwould fail if no restore_command is specified, and this restriction is\nin place before Postgres has introduced replication and easier ways to\nhave base backups. 
As a whole, I think that there is a good argument\nin favor of removing this restriction for the case where archive\nrecovery is requested if users have all their WAL in pg_wal/ to be\nable to recover up to a consistent point, keeping these GUC\nrestrictions if requesting a standby (not recovery.signal, only\nstandby.signal).\n\n> 3) touch a standby.signal file, then the server successfully started,\n> however, it operates in standby mode, whereas the intended behavior was for\n> it to function as a primary server.\n\nstandby.signal implies that the server will start in standby mode. If\none wants to deploy a new primary, that would imply a timeline jump at\nthe end of recovery, you would need to specify recovery.signal\ninstead.\n\nWe need more discussions and more opinions, but the discussion has\nstalled for a few months now. In case, I am adding Thomas Munro in CC\nwho has mentioned to me at PGcon that he was interested in this patch\n(this thread's problem is not directly related to the fact that the\ncheckpointer now runs in crash recovery, though).\n\nFor now, I am attaching a rebased v2.\n--\nMichael",
"msg_date": "Thu, 20 Jul 2023 08:19:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "Thanks for the patch.\n\nI rerun the test in\nhttps://www.postgresql.org/message-id/flat/ZQtzcH2lvo8leXEr%40paquier.xyz#cc5ed83e0edc0b9a1c1305f08ff7a335\n. We can discuss all the problems in this thread.\n\nFirst I encountered the problem \" FATAL: could not find\nrecovery.signal or standby.signal when recovering with backup_label \",\nthen I deleted the backup_label file and started the instance\nsuccessfully.\n\n> Delete a backup_label from a fresh base backup can easily lead to data\n> corruption, as the startup process would pick up as LSN to start\n> recovery from the control file rather than the backup_label file.\n> This would happen if a checkpoint updates the redo LSN in the control\n> file while a backup happens and the control file is copied after the\n> checkpoint, for instance. If one wishes to deploy a new primary from\n> a base backup, recovery.signal is the way to go, making sure that the\n> new primary is bumped into a new timeline once recovery finishes, on\n> top of making sure that the startup process starts recovery from a\n> position where the cluster would be able to achieve a consistent\n> state.\n\nereport(FATAL,\n(errmsg(\"could not find redo location referenced by checkpoint record\"),\nerrhint(\"If you are restoring from a backup, touch\n\\\"%s/recovery.signal\\\" and add required recovery options.\\n\"\n\"If you are not restoring from a backup, try removing the file\n\\\"%s/backup_label\\\".\\n\"\n\"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt\ncluster if restoring from a backup.\",\nDataDir, DataDir, DataDir)));\n\nThere are two similar error messages in xlogrecovery.c. 
Maybe we can\nmodify the error messages to be similar.\n\n--\nBowen Shi\n\n\nOn Thu, 21 Sept 2023 at 11:01, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jul 19, 2023 at 11:21:17AM -0700, David Zhang wrote:\n> > 1) simply start server from a base backup\n> >\n> > FATAL: could not find recovery.signal or standby.signal when recovering\n> > with backup_label\n> >\n> > HINT: If you are restoring from a backup, touch\n> > \"/media/david/disk1/pg_backup1/recovery.signal\" or\n> > \"/media/david/disk1/pg_backup1/standby.signal\" and add required recovery\n> > options.\n>\n> Note the difference when --write-recovery-conf is specified, where a\n> standby.conf is created with a primary_conninfo in\n> postgresql.auto.conf. So, yes, that's expected by default with the\n> patch.\n>\n> > 2) touch a recovery.signal file and then try to start the server, the\n> > following error was encountered:\n> >\n> > FATAL: must specify restore_command when standby mode is not enabled\n>\n> Yes, that's also something expected in the scope of the v1 posted.\n> The idea behind this restriction is that specifying recovery.signal is\n> equivalent to asking for archive recovery, but not specifying\n> restore_command is equivalent to not provide any options to be able to\n> recover. See validateRecoveryParameters() and note that this\n> restriction exists since the beginning of times, introduced in commit\n> 66ec2db. I tend to agree that there is something to be said about\n> self-contained backups taken from pg_basebackup, though, as these\n> would fail if no restore_command is specified, and this restriction is\n> in place before Postgres has introduced replication and easier ways to\n> have base backups. 
As a whole, I think that there is a good argument\n> in favor of removing this restriction for the case where archive\n> recovery is requested if users have all their WAL in pg_wal/ to be\n> able to recover up to a consistent point, keeping these GUC\n> restrictions if requesting a standby (not recovery.signal, only\n> standby.signal).\n>\n> > 3) touch a standby.signal file, then the server successfully started,\n> > however, it operates in standby mode, whereas the intended behavior was for\n> > it to function as a primary server.\n>\n> standby.signal implies that the server will start in standby mode. If\n> one wants to deploy a new primary, that would imply a timeline jump at\n> the end of recovery, you would need to specify recovery.signal\n> instead.\n>\n> We need more discussions and more opinions, but the discussion has\n> stalled for a few months now. In case, I am adding Thomas Munro in CC\n> who has mentioned to me at PGcon that he was interested in this patch\n> (this thread's problem is not directly related to the fact that the\n> checkpointer now runs in crash recovery, though).\n>\n> For now, I am attaching a rebased v2.\n> --\n> Michael\n\n\n",
"msg_date": "Thu, 21 Sep 2023 11:45:06 +0800",
"msg_from": "Bowen Shi <zxwsbg12138@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 11:45:06AM +0800, Bowen Shi wrote:\n> First I encountered the problem \" FATAL: could not find\n> recovery.signal or standby.signal when recovering with backup_label \",\n> then I deleted the backup_label file and started the instance\n> successfully.\n\nDoing that is equal to corrupting your instance as recovery would\nbegin from the latest redo LSN stored in the control file, but the\nphysical relation files included in the backup may include blocks that\nrequire records that are needed before this redo LSN and the LSN\nstored in the backup_label.\n\n>> Delete a backup_label from a fresh base backup can easily lead to data\n>> corruption, as the startup process would pick up as LSN to start\n>> recovery from the control file rather than the backup_label file.\n>> This would happen if a checkpoint updates the redo LSN in the control\n>> file while a backup happens and the control file is copied after the\n>> checkpoint, for instance. If one wishes to deploy a new primary from\n>> a base backup, recovery.signal is the way to go, making sure that the\n>> new primary is bumped into a new timeline once recovery finishes, on\n>> top of making sure that the startup process starts recovery from a\n>> position where the cluster would be able to achieve a consistent\n>> state.\n\nAnd that's what I mean here. In more details. So, really, don't do\nthat.\n\n> ereport(FATAL,\n> (errmsg(\"could not find redo location referenced by checkpoint record\"),\n> errhint(\"If you are restoring from a backup, touch\n> \\\"%s/recovery.signal\\\" and add required recovery options.\\n\"\n> \"If you are not restoring from a backup, try removing the file\n> \\\"%s/backup_label\\\".\\n\"\n> \"Be careful: removing \\\"%s/backup_label\\\" will result in a corrupt\n> cluster if restoring from a backup.\",\n> DataDir, DataDir, DataDir)));\n> \n> There are two similar error messages in xlogrecovery.c. 
Maybe we can\n> modify the error messages to be similar.\n\nThe patch adds the following message, which is written this way to be\nconsistent with the two others, already:\n\n+ ereport(FATAL,\n+ (errmsg(\"could not find recovery.signal or standby.signal when recovering with backup_label\"),\n+ errhint(\"If you are restoring from a backup, touch \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" and add required recovery options.\",\n+ DataDir, DataDir)));\n\nBut you have an interesting point here, why isn't standby.signal also\nmentioned in the two existing comments? Depending on what's wanted by\nthe user this can be equally useful to report back.\n\nAttached is a slightly updated patch, where I have also removed the\ncheck on ArchiveRecoveryRequested because the FATAL generated for\n!ArchiveRecoveryRequested makes sure that it is useless after reading\nthe backup_label file.\n\nThis patch has been around for a few months now. Do others have\nopinions about the direction taken here to make the presence of\nrecovery.signal or standby.signal a hard requirement when a\nbackup_label file is found (HEAD only)?\n--\nMichael",
"msg_date": "Wed, 27 Sep 2023 16:25:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "At Fri, 10 Mar 2023 15:59:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> My apologies for the long message, but this deserves some attention,\n> IMHO.\n> \n> So, any thoughts?\n\nSorry for being late. However, I agree with David's concern regarding\nthe unnecessary inconvenience it introduces. I'd like to maintain the\nfunctionality.\n\nWhile I agree that InArchiveRecovery should be activated only if\nArchiveReArchiveRecoveryRequested is true, I oppose to the notion that\nthe mere presence of backup_label should be interpreted as a request\nfor archive recovery (even if it is implied in a comment in\nInitWalRecovery()). Instead, I propose that we separate backup_label\nand archive recovery, in other words, we should not turn on\nInArchiveRecovery if !ArchiveRecoveryRequested, regardless of the\npresence of backup_label. We can know the minimum required recovery\nLSN by the XLOG_BACKUP_END record.\n\nThe attached is a quick mock-up, but providing an approximation of my\nthoughts. (For example, end_of_backup_reached could potentially be\nextended to the ArchiveRecoveryRequested case and we could simplify\nthe condition..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 28 Sep 2023 12:58:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering\n with a backup_label"
},
{
"msg_contents": "Sorry, it seems that I posted at the wrong position..\n\nAt Thu, 28 Sep 2023 12:58:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 10 Mar 2023 15:59:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > My apologies for the long message, but this deserves some attention,\n> > IMHO.\n> > \n> > So, any thoughts?\n> \n> Sorry for being late. However, I agree with David's concern regarding\n> the unnecessary inconvenience it introduces. I'd like to maintain the\n> functionality.\n> \n> While I agree that InArchiveRecovery should be activated only if\n> ArchiveReArchiveRecoveryRequested is true, I oppose to the notion that\n> the mere presence of backup_label should be interpreted as a request\n> for archive recovery (even if it is implied in a comment in\n> InitWalRecovery()). Instead, I propose that we separate backup_label\n> and archive recovery, in other words, we should not turn on\n> InArchiveRecovery if !ArchiveRecoveryRequested, regardless of the\n> presence of backup_label. We can know the minimum required recovery\n> LSN by the XLOG_BACKUP_END record.\n> \n> The attached is a quick mock-up, but providing an approximation of my\n> thoughts. (For example, end_of_backup_reached could potentially be\n> extended to the ArchiveRecoveryRequested case and we could simplify\n> the condition..)\n\nregards\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 28 Sep 2023 13:04:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering\n with a backup_label"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 12:58:51PM +0900, Kyotaro Horiguchi wrote:\n> The attached is a quick mock-up, but providing an approximation of my\n> thoughts. (For example, end_of_backup_reached could potentially be\n> extended to the ArchiveRecoveryRequested case and we could simplify\n> the condition..)\n\nI am not sure why this is related to this thread..\n\n static XLogRecPtr backupStartPoint;\n static XLogRecPtr backupEndPoint;\n static bool backupEndRequired = false;\n+static bool backupEndReached = false;\n\nAnyway, sneaking at your suggestion, this is actually outlining the\nmain issue I have with this code currently. We have so many static\nbooleans to control one behavior over the other that we always try to\nmake this code more complicated, while we should try to make it\nsimpler instead. \n--\nMichael",
"msg_date": "Thu, 28 Sep 2023 13:26:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On 9/27/23 23:58, Kyotaro Horiguchi wrote:\n> At Fri, 10 Mar 2023 15:59:04 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> My apologies for the long message, but this deserves some attention,\n>> IMHO.\n>>\n>> So, any thoughts?\n> \n> Sorry for being late. However, I agree with David's concern regarding\n> the unnecessary inconvenience it introduces. I'd like to maintain the\n> functionality.\n\nAfter some playing around, I find I agree with Michael on this, i.e. \nrequire at least standby.signal when a backup_label is present.\n\nAccording to my testing, you can preserve the \"independent server\" \nfunctionality by setting archive_command = /bin/false. In this case the \ntimeline is not advanced and recovery proceeds from whatever is \navailable in pg_wal.\n\nI think this type of recovery from a backup label without a timeline \nchange should absolutely be the exception, not the default as it seems \nto be now. If the server is truly independent, then the timeline change \nis not important. If the server is not independent, then the timeline \nchange is critical.\n\nSo overall, +1 for Michael's patch, though I have only read through it \nand not tested it yet.\n\nOne comment, though, if we are going to require recovery.signal when \nbackup_label is present, should it just be implied? Why error and force \nthe user to create it?\n\nRegards,\n-David\n\n\n",
"msg_date": "Thu, 28 Sep 2023 16:23:42 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Thu, Sep 28, 2023 at 04:23:42PM -0400, David Steele wrote:\n> After some playing around, I find I agree with Michael on this, i.e. require\n> at least standby.signal when a backup_label is present.\n> \n> According to my testing, you can preserve the \"independent server\"\n> functionality by setting archive_command = /bin/false. In this case the\n> timeline is not advanced and recovery proceeds from whatever is available in\n> pg_wal.\n\nI've seen folks depend on such setups in the past, actually, letting a\nprocess outside Postgres just \"push\" WAL segments to pg_wal instead of\nPostgres pulling it with a restore_command or a primary_conninfo for a\nstandby.\n\n> I think this type of recovery from a backup label without a timeline change\n> should absolutely be the exception, not the default as it seems to be now.\n\nThis can mess up archives pretty easily, additionally, so it's not\nsomething to encourage..\n\n> If the server is truly independent, then the timeline change is not\n> important. If the server is not independent, then the timeline change is\n> critical.\n> \n> So overall, +1 for Michael's patch, though I have only read through it and\n> not tested it yet.\n\nReviews, thoughts and opinions are welcome.\n\n> One comment, though, if we are going to require recovery.signal when\n> backup_label is present, should it just be implied? Why error and force the\n> user to create it?\n\nThat's one thing I was considering, but I also cannot convince myself\nthat this is the best option because the presence of recovery.signal\nor standby.standby (if both, standby.signal takes priority) makes it\nclear what type of recovery is wanted at disk level. I'd be OK if\nfolks think that this is a sensible consensus, as well, even if I\ndon't really agree with it.\n\nAnother idea I had was to force the creation of recovery.signal by\npg_basebackup even if -R is not used. 
All the reports we've seen with\npeople getting confused came from pg_basebackup that enforces no\nconfiguration.\n\nA last thing, that had better be covered in a separate thread and\npatch, is about validateRecoveryParameters(). These days, I'd like to\nthink that it may be OK to lift at least the restriction on\nrestore_command being required if we are doing recovery to ease the\ncase of self-contained backups (aka the case where all the WAL needed\nto reach a consistent point is in pg_wal/ or its tarball)\n--\nMichael",
"msg_date": "Fri, 29 Sep 2023 08:59:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On 9/28/23 19:59, Michael Paquier wrote:\n> On Thu, Sep 28, 2023 at 04:23:42PM -0400, David Steele wrote:\n>>\n>> So overall, +1 for Michael's patch, though I have only read through it and\n>> not tested it yet.\n> \n> Reviews, thoughts and opinions are welcome.\n\nOK, I have now reviewed and tested the patch and it looks good to me. I \nstopped short of marking this RfC since there are other reviewers in the \nmix.\n\nI dislike that we need to repeat:\n\nOwnLatch(&XLogRecoveryCtl->recoveryWakeupLatch);\n\nBut I see the logic behind why you did it and there's no better way to \ndo it as far as I can see.\n\n>> One comment, though, if we are going to require recovery.signal when\n>> backup_label is present, should it just be implied? Why error and force the\n>> user to create it?\n> \n> That's one thing I was considering, but I also cannot convince myself\n> that this is the best option because the presence of recovery.signal\n> or standby.standby (if both, standby.signal takes priority) makes it\n> clear what type of recovery is wanted at disk level. I'd be OK if\n> folks think that this is a sensible consensus, as well, even if I\n> don't really agree with it.\n\nI'm OK with keeping it as required for now.\n\n> Another idea I had was to force the creation of recovery.signal by\n> pg_basebackup even if -R is not used. All the reports we've seen with\n> people getting confused came from pg_basebackup that enforces no\n> configuration.\n\nThis change makes it more obvious if configuration is missing (since \nyou'll get an error), however +1 for adding this to pg_basebackup.\n\n> A last thing, that had better be covered in a separate thread and\n> patch, is about validateRecoveryParameters(). 
These days, I'd like to\n> think that it may be OK to lift at least the restriction on\n> restore_command being required if we are doing recovery to ease the\n> case of self-contained backups (aka the case where all the WAL needed\n> to reach a consistent point is in pg_wal/ or its tarball)\n\nHmmm, I'm not sure about this. I'd prefer users set \nrestore_command=/bin/false explicitly to fetch WAL from pg_wal by \ndefault if that's what they really intend.\n\nRegards,\n-David\n\n\n",
"msg_date": "Sat, 14 Oct 2023 15:45:33 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Sat, Oct 14, 2023 at 03:45:33PM -0400, David Steele wrote:\n> On 9/28/23 19:59, Michael Paquier wrote:\n> OK, I have now reviewed and tested the patch and it looks good to me. I\n> stopped short of marking this RfC since there are other reviewers in the\n> mix.\n\nThanks for the review. Yes, I am wondering if other people would\nchime in here. It doesn't feel like this has gathered enough\nopinions. Now this thread has been around for many months, and we've\ndone quite a few changes in the backup APIs in the last few years with\nfew users complaining back about them..\n\n> I dislike that we need to repeat:\n> \n> OwnLatch(&XLogRecoveryCtl->recoveryWakeupLatch);\n> \n> But I see the logic behind why you did it and there's no better way to do it\n> as far as I can see.\n\nThe main point is that there is no meaning in setting the latch until\nthe backup_label file is read because if ArchiveRecoveryRequested is\n*not* set the startup process would outright fail as of the lack of\n[recovery|standby].signal.\n\n>> Another idea I had was to force the creation of recovery.signal by\n>> pg_basebackup even if -R is not used. All the reports we've seen with\n>> people getting confused came from pg_basebackup that enforces no\n>> configuration.\n> \n> This change makes it more obvious if configuration is missing (since you'll\n> get an error), however +1 for adding this to pg_basebackup.\n\nLooking at the streaming APIs of pg_basebackup, it looks like this\nwould be a matter of using bbstreamer_inject_file() to inject an empty\nfile into the stream. Still something seems to be off once\ncompression methods are involved.. Hmm. I am not sure. Well, this\ncould always be done as a patch independant of this one, under a\nseparate discussion. 
There are extra arguments about whether it would\nbe a good idea to add a recovery.signal even when taking a backup from\na standby, and do that only in 17~.\n\n>> A last thing, that had better be covered in a separate thread and\n>> patch, is about validateRecoveryParameters(). These days, I'd like to\n>> think that it may be OK to lift at least the restriction on\n>> restore_command being required if we are doing recovery to ease the\n>> case of self-contained backups (aka the case where all the WAL needed\n>> to reach a consistent point is in pg_wal/ or its tarball)\n> \n> Hmmm, I'm not sure about this. I'd prefer users set\n> restore_command=/bin/false explicitly to fetch WAL from pg_wal by default if\n> that's what they really intend.\n\nIt wouldn't be the first time we break compatibility in this area, so\nperhaps you are right and keeping this requirement is fine, even if it\nrequires one extra step when recovering a self-contained backup\ngenerated by pg_basebackup. At least this forces users to look at\ntheir setup and check if something is wrong. We'd likely finish with\na few \"bug\" reports, as well :D\n--\nMichael",
"msg_date": "Mon, 16 Oct 2023 14:54:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nIt looks good to me.\r\n\r\nI have reviewed the code and tested the patch with basic check-world test an pgbench test (metioned in https://www.postgresql.org/message-id/flat/ZQtzcH2lvo8leXEr%40paquier.xyz#cc5ed83e0edc0b9a1c1305f08ff7a335). \r\n\r\nAnother reviewer has also approved it, so I change the status to RFC.\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Mon, 16 Oct 2023 06:21:07 +0000",
"msg_from": "Bowen Shi <zxwsbg12138@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a\n backup_label"
},
{
"msg_contents": "On Mon, 2023-10-16 at 14:54 +0900, Michael Paquier wrote:\n> Thanks for the review. Yes, I am wondering if other people would\n> chime in here. It doesn't feel like this has gathered enough\n> opinions.\n\nI don't have strong feelings either way. If you have backup_label\nbut no signal file, starting PostgreSQL may succeed (if the WAL\nwith the checkpoint happens to be in pg_wal) or it may fail with\nan error message. There is no danger of causing damage unless you\nremove backup_label, right?\n\nI cannot think of a use case where you use such a configuration on\npurpose, and the current error message is more crypric than a plain\n\"you must have a signal file to start from a backup\", so perhaps\nyour patch is a good idea.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Mon, 16 Oct 2023 17:48:43 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering\n with a backup_label"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 05:48:43PM +0200, Laurenz Albe wrote:\n> I don't have strong feelings either way. If you have backup_label\n> but no signal file, starting PostgreSQL may succeed (if the WAL\n> with the checkpoint happens to be in pg_wal) or it may fail with\n> an error message. There is no danger of causing damage unless you\n> remove backup_label, right?\n\nA bit more happens currently if you have a backup_label with no signal\nfiles, unfortunately, because this causes some startup states to not\nbe initialized. See around here:\nhttps://www.postgresql.org/message-id/Y/Q/17rpYS7YGbIt@paquier.xyz\nhttps://www.postgresql.org/message-id/Y/v0c+3W89NBT/if@paquier.xyz\n\n> I cannot think of a use case where you use such a configuration on\n> purpose, and the current error message is more crypric than a plain\n> \"you must have a signal file to start from a backup\", so perhaps\n> your patch is a good idea.\n\nI hope so.\n--\nMichael",
"msg_date": "Tue, 17 Oct 2023 08:21:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 16, 2023 at 02:54:35PM +0900, Michael Paquier wrote:\n> On Sat, Oct 14, 2023 at 03:45:33PM -0400, David Steele wrote:\n>> On 9/28/23 19:59, Michael Paquier wrote:\n>>> Another idea I had was to force the creation of recovery.signal by\n>>> pg_basebackup even if -R is not used. All the reports we've seen with\n>>> people getting confused came from pg_basebackup that enforces no\n>>> configuration.\n>> \n>> This change makes it more obvious if configuration is missing (since you'll\n>> get an error), however +1 for adding this to pg_basebackup.\n> \n> Looking at the streaming APIs of pg_basebackup, it looks like this\n> would be a matter of using bbstreamer_inject_file() to inject an empty\n> file into the stream. Still something seems to be off once\n> compression methods are involved.. Hmm. I am not sure. Well, this\n> could always be done as a patch independant of this one, under a\n> separate discussion. There are extra arguments about whether it would\n> be a good idea to add a recovery.signal even when taking a backup from\n> a standby, and do that only in 17~.\n\nHmm. On this specific point, it would actually be much simpler to\nforce recovery.signal to be in the contents streamed to a BASE_BACKUP.\nThis does not step on your proposal at [1], though, because you'd\nstill require a .signal file for recovery as far as I understand :/ \n\n[1]: https://www.postgresql.org/message-id/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b@pgmasters.net\n\nWould folks be OK to move on with the patch of this thread at the end?\nI am attempting a last-call kind of thing.\n--\nMichael",
"msg_date": "Fri, 27 Oct 2023 16:22:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On 10/27/23 03:22, Michael Paquier wrote:\n> On Mon, Oct 16, 2023 at 02:54:35PM +0900, Michael Paquier wrote:\n>> On Sat, Oct 14, 2023 at 03:45:33PM -0400, David Steele wrote:\n>>> On 9/28/23 19:59, Michael Paquier wrote:\n>>>> Another idea I had was to force the creation of recovery.signal by\n>>>> pg_basebackup even if -R is not used. All the reports we've seen with\n>>>> people getting confused came from pg_basebackup that enforces no\n>>>> configuration.\n>>>\n>>> This change makes it more obvious if configuration is missing (since you'll\n>>> get an error), however +1 for adding this to pg_basebackup.\n>>\n>> Looking at the streaming APIs of pg_basebackup, it looks like this\n>> would be a matter of using bbstreamer_inject_file() to inject an empty\n>> file into the stream. Still something seems to be off once\n>> compression methods are involved.. Hmm. I am not sure. Well, this\n>> could always be done as a patch independant of this one, under a\n>> separate discussion. There are extra arguments about whether it would\n>> be a good idea to add a recovery.signal even when taking a backup from\n>> a standby, and do that only in 17~.\n> \n> Hmm. On this specific point, it would actually be much simpler to\n> force recovery.signal to be in the contents streamed to a BASE_BACKUP.\n\nThat sounds like the right plan to me. Nice and simple.\n\n> This does not step on your proposal at [1], though, because you'd\n> still require a .signal file for recovery as far as I understand :/\n> \n> [1]: https://www.postgresql.org/message-id/2daf8adc-8db7-4204-a7f2-a7e94e2bfa4b@pgmasters.net\n\nYes.\n\n> Would folks be OK to move on with the patch of this thread at the end?\n> I am attempting a last-call kind of thing.\n\nI'm still +1 for the patch as it stands.\n\nRegards,\n-David\n\n\n",
"msg_date": "Fri, 27 Oct 2023 09:31:10 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Fri, Oct 27, 2023 at 09:31:10AM -0400, David Steele wrote:\n> That sounds like the right plan to me. Nice and simple.\n\nI'll tackle that in a separate thread with a patch registered for the\nupcoming CF of November.\n\n> I'm still +1 for the patch as it stands.\n\nI have been reviewing the patch, and applied portions of it as of\ndc5bd388 and 1ffdc03c and they're quite independent pieces. After\nthat, the remaining bits of the patch to change the behavior is now\nstraight-forward. I have written a commit message for it, while on\nit, as per the attached.\n--\nMichael",
"msg_date": "Mon, 30 Oct 2023 16:08:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 1:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> I have been reviewing the patch, and applied portions of it as of\n> dc5bd388 and 1ffdc03c and they're quite independent pieces. After\n> that, the remaining bits of the patch to change the behavior is now\n> straight-forward. I have written a commit message for it, while on\n> it, as per the attached.\n>\n\nA suggestion for the hint message in an effort to improve readability:\n\n\"If you are restoring from a backup, ensure \\\"%s/recovery.signal\\\" or\n\\\"%s/standby.signal\\\" is present and add required recovery options.\"\n\nI realize the original use of \"touch\" is a valid shortcut for what I\nsuggest above, however that will be less clear for the not-so-un*x-inclined\nusers of Postgres, while for some it'll be downright confusing, IMHO. It\nalso provides the advantage of being crystal clear on what needs to be done\nto fix the problem.\n\nRoberto\n\nOn Mon, Oct 30, 2023 at 1:09 AM Michael Paquier <michael@paquier.xyz> wrote:\nI have been reviewing the patch, and applied portions of it as of\ndc5bd388 and 1ffdc03c and they're quite independent pieces. After\nthat, the remaining bits of the patch to change the behavior is now\nstraight-forward. I have written a commit message for it, while on\nit, as per the attached.A suggestion for the hint message in an effort to improve readability:\"If you are restoring from a backup, ensure \\\"%s/recovery.signal\\\" or \\\"%s/standby.signal\\\" is present and add required recovery options.\" I realize the original use of \"touch\" is a valid shortcut for what I suggest above, however that will be less clear for the not-so-un*x-inclined users of Postgres, while for some it'll be downright confusing, IMHO. It also provides the advantage of being crystal clear on what needs to be done to fix the problem.Roberto",
"msg_date": "Mon, 30 Oct 2023 10:32:28 -0600",
"msg_from": "Roberto Mello <roberto.mello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 3:09 AM Michael Paquier <michael@paquier.xyz> wrote:\n> I have been reviewing the patch, and applied portions of it as of\n> dc5bd388 and 1ffdc03c and they're quite independent pieces. After\n> that, the remaining bits of the patch to change the behavior is now\n> straight-forward. I have written a commit message for it, while on\n> it, as per the attached.\n\nI would encourage some caution here.\n\nIn a vacuum, I'm in favor of this, and for the same reasons as you,\nnamely, that the huge pile of Booleans that we use to control recovery\nis confusing, and it's difficult to make sure that all the code paths\nare adequately tested, and I think some of the things that actually\nwork here are not documented.\n\nBut in practice, I think there is a possibility of something like this\nbackfiring very hard. Notice that the first two people who commented\non the thread saw the error and immediately removed backup_label even\nthough that's 100% wrong. It shows how utterly willing users are to\nremove backup_label for any reason or no reason at all. If we convert\ncases where things would have worked into cases where people nuke\nbackup_label and then it appears to work, we're going to be worse off\nin the long run, no matter how crazy the idea of removing backup_label\nmay seem to us.\n\nAlso, Andres just recently mentioned to me that he uses this procedure\nof starting a server with a backup_label but no recovery.signal or\nstandby.signal file regularly, and thinks other people do too. I was\nsurprised, since I've never done that, except maybe when I was a noob\nand didn't have a clue. But Andres is far from a noob.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 30 Oct 2023 13:55:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-30 16:08:50 +0900, Michael Paquier wrote:\n> From 26a8432fe3ab8426e7797d85d19b0fe69d3384c9 Mon Sep 17 00:00:00 2001\n> From: Michael Paquier <michael@paquier.xyz>\n> Date: Mon, 30 Oct 2023 16:02:52 +0900\n> Subject: [PATCH v4] Require recovery.signal or standby.signal when reading a\n> backup_file\n>\n> Historically, the startup process uses two static variables to control\n> if archive recovery should happen, when either recovery.signal or\n> standby.signal are defined in the data folder at the beginning of\n> recovery:\n\nI think the problem with these variables is that they're a really messy state\nmachine - something this patch doesn't meaningfully improve IMO.\n\n\n> This configuration was possible when recovering from a base backup taken\n> by pg_basebackup without -R. Note that the documentation requires at\n> least to set recovery.signal to restore from a backup, but the startup\n> process was not making this policy explicit.\n\nMaybe I just didn't check the right place, but from what I saw, this, at most, is\nimplied, rather than explicitly stated.\n\n\n> In most cases, one would have been able to complete recovery, but that's a\n> matter of luck, really, as it depends on the workload of the origin server.\n\nWith -X ... we have all the necessary WAL locally, how does the workload on\nthe primary matter? 
If you pass --no-slot, pg_basebackup might fail to fetch\nthe necessary wal, but then you'd also have gotten an error.\n\n\nI agree with Robert that this would be a good error check on a green field,\nbut that I am less convinced it's going to help more than hurt now.\n\n\nRight now running pg_basebackup with -X stream, without --write-recovery-conf,\ngives you a copy of a cluster that will come up correctly as a distinct\ninstance.\n\nWith this change applied, you need to know that the way to avoid the existing\nFATAL about restore_command at startup (when recovery.signal exists but\nrestore_command isn't set) is to set \"restore_command = false\",\nsomething we don't explain anywhere afaict. We should lessen the need to ever\nuse restore_command, not increase it.\n\nIt also seems risky to have people get used to restore_command = false,\nbecause that effectively disables detection of other timelines etc. But, this\nmethod does force a new timeline - which will be the same on each clone of the\ndatabase...\n\nI also just don't think that it's always desirable to create a new timeline.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Oct 2023 12:47:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 01:55:13PM -0400, Robert Haas wrote:\n> I would encourage some caution here.\n\nThanks for chiming here.\n\n> In a vacuum, I'm in favor of this, and for the same reasons as you,\n> namely, that the huge pile of Booleans that we use to control recovery\n> is confusing, and it's difficult to make sure that all the code paths\n> are adequately tested, and I think some of the things that actually\n> work here are not documented.\n\nYep, same feeling here.\n\n> But in practice, I think there is a possibility of something like this\n> backfiring very hard. Notice that the first two people who commented\n> on the thread saw the error and immediately removed backup_label even\n> though that's 100% wrong. It shows how utterly willing users are to\n> remove backup_label for any reason or no reason at all. If we convert\n> cases where things would have worked into cases where people nuke\n> backup_label and then it appears to work, we're going to be worse off\n> in the long run, no matter how crazy the idea of removing backup_label\n> may seem to us.\n\nAs far as I know, there's one paragraph in the docs that implies this\nmode without giving an actual hint that this may be OK or not, so\nshrug:\nhttps://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-TIPS\n\"As with base backups, the easiest way to produce a standalone hot\nbackup is to use the pg_basebackup tool. If you include the -X\nparameter when calling it, all the write-ahead log required to use the\nbackup will be included in the backup automatically, and no special\naction is required to restore the backup.\"\n\nAnd a few lines down we imply to use restore_command, something that\nwe check is set only if recovery.signal is set. 
See additionally\nvalidateRecoveryParameters(), where the comments imply that\nInArchiveRecovery would be set only when there's a restore command.\n\nAs you're telling me, and I've considered that as an option as well,\nperhaps we should just consider the presence of a backup_label file\nwith no .signal files as a synonym of crash recovery? In the recovery\npath, currently the essence of the problem is when we do\nInArchiveRecovery=true, but ArchiveRecoveryRequested=false, meaning\nthat it should do archive recovery but we don't want it, and that does\nnot really make sense. The rest of the code sort of implies that this\nis not a supported combination. So basically, my suggestion here, is\nto just replay WAL up to the end of what's in your local pg_wal/ and\nhope for the best, without TLI jumps, except that we'd do nothing.\nDoing a pg_basebackup -X stream followed by a restart would work fine\nwith that, because all the WAL is here.\n\nA point of contention is if we'd better be stricter about satisfying\nbackupEndPoint in such a case, but the redo code only wants to do\nsomething here when ArchiveRecoveryRequested is set (aka there's a\n.signal file set), and we would not want a TLI jump at the end of\nrecovery, so I don't see an argument with caring about backupEndPoint\nin this case.\n\nAt the end, I'm OK as long as ArchiveRecoveryRequested=false\nInArchiveRecovery=true does not exist anymore, because it's much\neasier to get what's going on with the redo path, IMHO.\n\n(I have a patch at hand to show the idea, will post it with a reply to\nAndres' message.)\n\n> Also, Andres just recently mentioned to me that he uses this procedure\n> of starting a server with a backup_label but no recovery.signal or\n> standby.signal file regularly, and thinks other people do too. I was\n> surprised, since I've never done that, except maybe when I was a noob\n> and didn't have a clue. 
But Andres is far from a noob.\n\nAt this stage, that's basically at your own risk, as the code thinks\nit's OK to force what's basically archive-recovery-without-being-it.\nSo it basically works, but it can also easily backfire, as well..\n--\nMichael",
"msg_date": "Tue, 31 Oct 2023 09:40:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 10:32:28AM -0600, Roberto Mello wrote:\n> I realize the original use of \"touch\" is a valid shortcut for what I\n> suggest above, however that will be less clear for the not-so-un*x-inclined\n> users of Postgres, while for some it'll be downright confusing, IMHO. It\n> also provides the advantage of being crystal clear on what needs to be done\n> to fix the problem.\n\nIndeed, \"touch\" may be better in this path if we'd throw an ERROR to\nenforce a given policy, and that's more consistent with the rest of\nthe area.\n--\nMichael",
"msg_date": "Tue, 31 Oct 2023 09:42:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 12:47:41PM -0700, Andres Freund wrote:\n> I think the problem with these variables is that they're a really messy state\n> machine - something this patch doesn't meaningfully improve IMO.\n\nOkay. Yes, this is my root issue as well. We're at the stage where\nwe should reduce the possible set of combinations and assumptions\nwe're inventing because people can do undocumented stuff, then perhaps\nrefactor the code on top of that (say, if one combination with two\nbooleans is not possible, switch to a three-state enum rather than 2\nbools, etc).\n\n>> This configuration was possible when recovering from a base backup taken\n>> by pg_basebackup without -R. Note that the documentation requires at\n>> least to set recovery.signal to restore from a backup, but the startup\n>> process was not making this policy explicit.\n> \n> Maybe I just didn't check the right place, but from I saw, this, at most, is\n> implied, rather than explicitly stated.\n\nSee the doc reference here:\nhttps://www.postgresql.org/message-id/ZUBM6BNQnEh7lzIM@paquier.xyz\n\nSo it kind of implies it, still also mentions restore_command. It's\nlike Schrödinger's cat, yes and no at the same time.\n\n> With -X ... we have all the necessary WAL locally, how does the workload on\n> the primary matter? If you pass --no-slot, pg_basebackup might fail to fetch\n> the necessary wal, but then you'd also have gotten an error.\n>\n> [...]\n> \n> Right now running pg_basebackup with -X stream, without --write-recovery-conf,\n> gives you a copy of a cluster that will come up correctly as a distinct\n> instance.\n>\n> [...]\n> \n> I also just don't think that it's always desirable to create a new timeline.\n\nYeah. Another argument I was mentioning to Robert is that we may want\nto just treat the case where you have a backup_label without any\nsignal files just the same as crash recovery, replaying all the local\npg_wal/, and nothing else. 
For example, something like the attached\nshould make sure that InArchiveRecovery=true should never be set if\nArchiveRecoveryRequested is not set.\n\nThe attached would still cause redo to complain on a \"WAL ends before\nend of online backup\" if not all the WAL is here (reason behind the\ntweak of 010_pg_basebackup.pl, but the previous tweak to pg_rewind's\n008_min_recovery_point.pl is not required here).\n\nAttached is the idea I had in mind, in terms of code, FWIW.\n--\nMichael",
"msg_date": "Tue, 31 Oct 2023 10:15:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Oct 30, 2023 at 8:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n> As far as I know, there's one paragraph in the docs that implies this\n> mode without giving an actual hint that this may be OK or not, so\n> shrug:\n> https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-TIPS\n> \"As with base backups, the easiest way to produce a standalone hot\n> backup is to use the pg_basebackup tool. If you include the -X\n> parameter when calling it, all the write-ahead log required to use the\n> backup will be included in the backup automatically, and no special\n> action is required to restore the backup.\"\n\nI see your point, but that's way too subtle. As far as I know, the\nonly actually-documented procedure for restoring is this one:\n\nhttps://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-PITR-RECOVERY\n\nThat procedure actually is badly in need of some updating, IMHO,\nbecause close to half of it is about moving your existing database\ncluster out of the way, which may or may not be needed in the case of\nany particular backup restore. Also, it unconditionally mentions\ncreating recovery.signal, with no mention of standby.signal. And\ncertainly not with neither. It also gives zero motivation for actually\ndoing this and says nothing useful about backup_label.\n\nBoth recovery.signal and standby.signal are documented in\nhttps://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY\nbut you'd have no real reason to look in a list of GUCs for\ninformation about a file on disk. 
recovery.signal but not\nstandby.signal is mentioned in\nhttps://www.postgresql.org/docs/current/warm-standby.html but nowhere\nthat I can find do we explicitly talk about running with at least one\nof them.\n\n> As you're telling me, and I've considered that as an option as well,\n> perhaps we should just consider the presence of a backup_label file\n> with no .signal files as a synonym of crash recovery? In the recovery\n> path, currently the essence of the problem is when we do\n> InArchiveRecovery=true, but ArchiveRecoveryRequested=false, meaning\n> that it should do archive recovery but we don't want it, and that does\n> not really make sense. The rest of the code sort of implies that this\n> is not a suported combination. So basically, my suggestion here, is\n> to just replay WAL up to the end of what's in your local pg_wal/ and\n> hope for the best, without TLI jumps, except that we'd do nothing.\n\nThis sentence seems to be incomplete.\n\nBut I was not saying we should treat the case where we have a\nbackup_label file like crash recovery. The real question here is why\nwe don't treat it fully like archive recovery. I don't know off-hand\nwhat is different if I start the server with both backup_label and\nrecovery.signal vs. if I start it with only backup_label, but I\nquestion whether there should be any difference at all.\n\n> A point of contention is if we'd better be stricter about satisfying\n> backupEndPoint in such a case, but the redo code only wants to do\n> something here when ArchiveRecoveryRequested is set (aka there's a\n> .signal file set), and we would not want a TLI jump at the end of\n> recovery, so I don't see an argument with caring about backupEndPoint\n> in this case.\n\nThis is a bit hard for me to understand, but I disagree strongly with\nthe idea that we should ever ignore a backup end point if we have one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 Oct 2023 08:28:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 08:28:07AM -0400, Robert Haas wrote:\n> On Mon, Oct 30, 2023 at 8:40 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> As far as I know, there's one paragraph in the docs that implies this\n>> mode without giving an actual hint that this may be OK or not, so\n>> shrug:\n>> https://www.postgresql.org/docs/devel/continuous-archiving.html#BACKUP-TIPS\n>> \"As with base backups, the easiest way to produce a standalone hot\n>> backup is to use the pg_basebackup tool. If you include the -X\n>> parameter when calling it, all the write-ahead log required to use the\n>> backup will be included in the backup automatically, and no special\n>> action is required to restore the backup.\"\n> \n> I see your point, but that's way too subtle. As far as I know, the\n> only actually-documented procedure for restoring is this one:\n> https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-PITR-RECOVERY\n> \n> That procedure actually is badly in need of some updating, IMHO,\n> because close to half of it is about moving your existing database\n> cluster out of the way, which may or may not be needed in the case of\n> any particular backup restore. Also, it unconditionally mentions\n> creating recovery.signal, with no mention of standby.signal. And\n> certainly not with neither. It also gives zero motivation for actually\n> doing this and says nothing useful about backup_label.\n> \n> Both recovery.signal and standby.signal are documented in\n> https://www.postgresql.org/docs/current/runtime-config-wal.html#RUNTIME-CONFIG-WAL-ARCHIVE-RECOVERY\n> but you'd have no real reason to look in a list of GUCs for\n> information about a file on disk. recovery.signal but not\n> standby.signal is mentioned in\n> https://www.postgresql.org/docs/current/warm-standby.html but nowhere\n> that I can find do we explicitly talk about running with at least one\n> of them.\n\nPoint 7. of what you quote says to use one? 
True that this needs a\nrefresh, and perhaps a big fat warning about the fact that these are\nrequired if you want to fetch WAL from other sources than the local\npg_wal/. Perhaps there may be a point of revisiting the default\nbehavior of recovery_target_timeline in this case, I don't know.\n\n>> As you're telling me, and I've considered that as an option as well,\n>> perhaps we should just consider the presence of a backup_label file\n>> with no .signal files as a synonym of crash recovery? In the recovery\n>> path, currently the essence of the problem is when we do\n>> InArchiveRecovery=true, but ArchiveRecoveryRequested=false, meaning\n>> that it should do archive recovery but we don't want it, and that does\n>> not really make sense. The rest of the code sort of implies that this\n>> is not a suported combination. So basically, my suggestion here, is\n>> to just replay WAL up to the end of what's in your local pg_wal/ and\n>> hope for the best, without TLI jumps, except that we'd do nothing.\n>\n> This sentence seems to be incomplete.\n\nI've re-read it, and it looks OK to me. What I mean with this\nparagraph are two things:\n- Remove InArchiveRecovery=true and ArchiveRecoveryRequested=false as\na possible combination in the code.\n- Treat backup_label with no .signal file as the same as crash\nrecovery, that:\n-- Does no TLI jump at the end of recovery.\n-- Expects all the WAL to be in pg_wal/.\n\n> But I was not saying we should treat the case where we have a\n> backup_label file like crash recovery. The real question here is why\n> we don't treat it fully like archive recovery.\n\nTimeline jump at the end of recovery? Archive recovery forces a TLI\njump by default at the end of redo if there's a signal file, and some\nusers may not want a TLI jump by default?\n\n> I don't know off-hand\n> what is different if I start the server with both backup_label and\n> recovery.signal vs. 
if I start it with only backup_label, but I\n> question whether there should be any difference at all.\n\nPerhaps we could do that, but note that backup_label is renamed to\nbackup_label.old at the beginning of redo. The code has historically\nalways enforced InArchiveRecovery=true when there's a backup label,\nand InArchiveRecovery=false where there is no backup label, so we\ndon't get the same recovery behavior if a cluster is restarted while\nit was still performing recovery. I don't quite see how it is\npossible to make this code simpler without enforcing a policy to take\ncare of this inconsistency. I've listed two of them on this thread:\n- Force the presence of a .recovery file when there is a\nbackup_label, to force archive recovery.\n- Force crash recovery if there are no signal files but a\nbackup_label, then a restart of a cluster that began a restore while\nit processed a backup would be confused: should it do crash recovery\nor archive recovery?\n\nMy guess, based on what I read from the feedback of this thread, is\nthat it could be more helpful to do the second thing, not the third\none, because this is better with standalone backups: no TLI jumps and\nrestore happens with all the local WAL in pg_wal/, without any GUCs to\ncontrol how recovery should run.\n\nYou are suggesting a third, hybrid, approach. Now note we have always\nchecked for signal files before the backup_label. Recovery GUCs are\nchecked only if there's one of the two signal files. 
It seems to me\nthat what you are suggesting would make the code a bit harder to\nfollow, actually, and more inconsistent with stable branches because\nwe would need to check the control file contents *before* checking for\nthe .signal files or backup_label to be able to see if archive\nrecovery *should* happen, depending on if there's a backupEndPoint.\n\n>> A point of contention is if we'd better be stricter about satisfying\n>> backupEndPoint in such a case, but the redo code only wants to do\n>> something here when ArchiveRecoveryRequested is set (aka there's a\n>> .signal file set), and we would not want a TLI jump at the end of\n>> recovery, so I don't see an argument with caring about backupEndPoint\n>> in this case.\n> \n> This is a bit hard for me to understand, but I disagree strongly with\n> the idea that we should ever ignore a backup end point if we have one.\n\nActually, while experimenting yesterday before sending my reply to\nyou, I have noticed that redo cares about backupEndPoint even if you\nforce crash recovery when there's only a backup_label file. There's a\ncase in pg_basebackup that would fail, but that's accidental, AFAIK:\nhttps://www.postgresql.org/message-id/ZUBVKfL6FR6NOQyt%40paquier.xyz\n\nSee in StartupXLOG(), around the comment \"complain if we did not roll\nforward far enough to reach\". This complains if archive recovery has\nbeen requested *or* if we retrieved a backup end LSN from the\nbackup_label.\n--\nMichael",
"msg_date": "Wed, 1 Nov 2023 08:39:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "At Wed, 1 Nov 2023 08:39:17 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> See in StartupXLOG(), around the comment \"complain if we did not roll\n> forward far enough to reach\". This complains if archive recovery has\n> been requested *or* if we retrieved a backup end LSN from the\n> backup_label.\n\nPlease note that backupStartPoint is not reset even when reaching the\nbackup end point during crash recovery. If backup_label enforces\narchive recovery, I think this point won't be an issue as you\nmentioned. For the record, my earlier proposal aimed to detect\nreaching the end point even during crash recovery.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Nov 2023 11:03:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering\n with a backup_label"
},
{
"msg_contents": "On Thu, Nov 02, 2023 at 11:03:35AM +0900, Kyotaro Horiguchi wrote:\n> At Wed, 1 Nov 2023 08:39:17 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> See in StartupXLOG(), around the comment \"complain if we did not roll\n>> forward far enough to reach\". This complains if archive recovery has\n>> been requested *or* if we retrieved a backup end LSN from the\n>> backup_label.\n> \n> Please note that backupStartPoint is not reset even when reaching the\n> backup end point during crash recovery. If backup_label enforces\n> archive recovery, I think this point won't be an issue as you\n> mentioned. For the record, my earlier proposal aimed to detect\n> reaching the end point even during crash recovery.\n\nGood point. Not doing ReachedEndOfBackup() at the end of crash\nrecovery feels inconsistent, especially since we care about some of\nthese fields in this case.\n\nIf a .signal file is required when we read a backup_label, yes that\nwould not be a problem because we'd always link backupEndPoint's\ndestiny with a requested archive recovery, but there seems to be little\nlove for enforcing that based on the feedback of this thread, so.. \n--\nMichael",
"msg_date": "Mon, 6 Nov 2023 16:05:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Tue, Oct 31, 2023 at 7:39 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Point 7. of what you quote says to use one? True that this needs a\n> refresh, and perhaps a bit fat warning about the fact that these are\n> required if you want to fetch WAL from other sources than the local\n> pg_wal/. Perhaps there may be a point of revisiting the default\n> behavior of recovery_target_timeline in this case, I don't know.\n\nI don't really know what to say to this -- sure, point 7 of\n\"Recovering Using a Continuous Archive Backup\" says to use\nrecovery.signal. But as I said in the preceding paragraph, it doesn't\nsay either \"use recovery.signal or standby.signal\". Nor does it or\nanything else in the documentation explain under what circumstances\nyou're allowed to have neither. So the whole thing is very unclear.\n\n> >> As you're telling me, and I've considered that as an option as well,\n> >> perhaps we should just consider the presence of a backup_label file\n> >> with no .signal files as a synonym of crash recovery? In the recovery\n> >> path, currently the essence of the problem is when we do\n> >> InArchiveRecovery=true, but ArchiveRecoveryRequested=false, meaning\n> >> that it should do archive recovery but we don't want it, and that does\n> >> not really make sense. The rest of the code sort of implies that this\n> >> is not a suported combination. So basically, my suggestion here, is\n> >> to just replay WAL up to the end of what's in your local pg_wal/ and\n> >> hope for the best, without TLI jumps, except that we'd do nothing.\n> >\n> > This sentence seems to be incomplete.\n>\n> I've re-read it, and it looks OK to me.\n\nWell, the sentence ends with \"except that we'd do nothing\" and I don't\nknow what that means. 
It would make sense to me if it said \"except\nthat we'd do nothing about <whatever>\" or \"except that we'd do nothing\ninstead of <something>\" but as you've written it basically seems to\nboil down to \"my suggestion is to replay WAL except do nothing\" which\nmakes no sense. If you replay WAL, you're not doing nothing.\n\n> > But I was not saying we should treat the case where we have a\n> > backup_label file like crash recovery. The real question here is why\n> > we don't treat it fully like archive recovery.\n>\n> Timeline jump at the end of recovery? Archive recovery forces a TLI\n> jump by default at the end of redo if there's a signal file, and some\n> users may not want a TLI jump by default?\n\nUggh. I don't know what to think about that. I bet some people do want\nthat, but that makes it pretty easy to end up with multiple copies of\nthe same cluster running on the same TLI, too, which is not a thing\nthat you really want to have happen.\n\nAt the end of the day, I'm coming around to the view that the biggest\nproblem here is the documentation. Nobody can really know what's\nsupposed to work right now because the documentation doesn't say which\nthings you are and are not allowed to do and what results you should\nexpect in each case. If it did, it would be easier to discuss possible\nbehavior changes. Right now, it's hard to change any code at all,\nbecause there's no list of supported scenarios, so you can't tell\nwhether a potential change affects a scenario that somebody thinks\nshould work, or only cases that nobody can possibly care about. It's\nsort of possible to reason your way through that, to an extent, but\nit's pretty hard. 
The fact that I didn't know that starting from a\nbackup with neither recovery.signal nor standby.signal was a thing\nthat anybody did or cared about is good evidence of that.\n\nI'm coming to the understanding that we have four supported scenarios.\nOne, no backup_label, no recovery.signal, and no standby.signal.\nHence, replay WAL until the end, then start up. Two, backup_label\nexists but neither recovery.signal nor standby.signal does. As before,\nbut if I understand correctly, now we can check that we reached the\nbackup end location. Three, recovery.signal exists, with or without\nbackup_label. Now we create a new TLI at the end of recovery, and\nalso, now can fetch WAL that is not present in pg_wal using\nprimary_conninfo or restore_command. In fact, I think we may prefer to\ndo that over using WAL we have locally, but I'm not quite sure about\nthat. Fourth, standby.signal exists, with or without backup_label. As\nthe previous scenario, but now when we reach the end of WAL we wait\nfor more to appear instead of ending recovery. I have a feeling this\nis not quite an exhaustive list of differences between the various\nmodes, and I'm not even sure that it lists all of the things someone\nmight try to do. Thoughts?\n\nI also feel like the terminology here sometimes obscures more than it\nilluminates. For instance, it seems like ArchiveRecoveryRequested\nreally means \"are any signal files present?\" while InArchiveRecovery\nmeans \"are we fetching WAL from outside pg_wal rather than using\nwhat's in pg_wal?\". But these are not obvious from the names, and\nsometimes we have additional variables with overlapping meanings, like\nreadSource, which indicates whether we're reading from pg_wal, the\narchive, or the walreceiver, and yet is probably not redundant with\nInArchiveRecovery. 
In any event, I think that we need to start with\nthe question of what behavior(s) we want to expose to users, and then\nback into the question of what internal variables and states need to\nexist in order to support that behavior. We cannot start by deciding\nwhat variables we'd like to get rid of and then trying to justify the\nresulting behavior changes on the grounds that they simplify the code.\nUsers aren't going to like that, hackers aren't going to like that,\nand the resulting behavior probably won't be anything great.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Nov 2023 13:16:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Wed, Nov 08, 2023 at 01:16:58PM -0500, Robert Haas wrote:\n> On Tue, Oct 31, 2023 at 7:39 PM Michael Paquier <michael@paquier.xyz> wrote:\n>>>> As you're telling me, and I've considered that as an option as well,\n>>>> perhaps we should just consider the presence of a backup_label file\n>>>> with no .signal files as a synonym of crash recovery? In the recovery\n>>>> path, currently the essence of the problem is when we do\n>>>> InArchiveRecovery=true, but ArchiveRecoveryRequested=false, meaning\n>>>> that it should do archive recovery but we don't want it, and that does\n>>>> not really make sense. The rest of the code sort of implies that this\n>>>> is not a suported combination. So basically, my suggestion here, is\n>>>> to just replay WAL up to the end of what's in your local pg_wal/ and\n>>>> hope for the best, without TLI jumps, except that we'd do nothing.\n>>>\n>>> This sentence seems to be incomplete.\n>>\n>> I've re-read it, and it looks OK to me.\n> \n> Well, the sentence ends with \"except that we'd do nothing\" and I don't\n> know what that means. It would make sense to me if it said \"except\n> that we'd do nothing about <whatever>\" or \"except that we'd do nothing\n> instead of <something>\" but as you've written it basically seems to\n> boil down to \"my suggestion is to replay WAL except do nothing\" which\n> makes no sense. If you replay WAL, you're not doing nothing.\n\nSure, sorry for the confusion. By \"we'd do nothing\", I mean precisely\n\"to take no specific action related to archive recovery and recovery\nparameters at the end of recovery\", meaning that a combination of\nbackup_label with no signal file would be the same as crash recovery,\nreplaying WAL up to the end of what can be found in pg_wal/, and only\nthat.\n\n>>> But I was not saying we should treat the case where we have a\n>>> backup_label file like crash recovery. 
The real question here is why\n>>> we don't treat it fully like archive recovery.\n>>\n>> Timeline jump at the end of recovery? Archive recovery forces a TLI\n>> jump by default at the end of redo if there's a signal file, and some\n>> users may not want a TLI jump by default?\n> \n> Uggh. I don't know what to think about that. I bet some people do want\n> that, but that makes it pretty easy to end up with multiple copies of\n> the same cluster running on the same TLI, too, which is not a thing\n> that you really want to have happen.\n\nAndres has mentioned upthread that this is something he's been using\nto quickly be able to clone a cluster. I would not recommend doing\nthat, personally, but if that's useful in some cases, well, why not.\n\n> At the end of the day, I'm coming around to the view that the biggest\n> problem here is the documentation. Nobody can really know what's\n> supposed to work right now because the documentation doesn't say which\n> things you are and are not allowed to do and what results you should\n> expect in each case. If it did, it would be easier to discuss possible\n> behavior changes. Right now, it's hard to change any code at all,\n> because there's no list of supported scenarios, so you can't tell\n> whether a potential change affects a scenario that somebody thinks\n> should work, or only cases that nobody can possibly care about. It's\n> sort of possible to reason your way through that, to an extent, but\n> it's pretty hard. The fact that I didn't know that starting from a\n> backup with neither recovery.signal nor standby.signal was a thing\n> that anybody did or cared about is good evidence of that.\n\nThat's one problem, not all of it, because the code takes extra\nassumptions around that.\n\n> I also feel like the terminology here sometimes obscures more than it\n> illuminates. 
For instance, it seems like ArchiveRecoveryRequested\n> really means \"are any signal files present?\" while InArchiveRecovery\n> means \"are we fetching WAL from outside pg_wal rather than using\n> what's in pg_wal?\". But these are not obvious from the names, and\n> sometimes we have additional variables with overlapping meanings, like\n> readSource, which indicates whether we're reading from pg_wal, the\n> archive, or the walreceiver, and yet is probably not redundant with\n> InArchiveRecovery. In any event, I think that we need to start with\n> the question of what behavior(s) we want to expose to users, and then\n> back into the question of what internal variables and states need to\n> exist in order to support that behavior. We cannot start by deciding\n> what variables we'd like to get rid of and then trying to justify the\n> resulting behavior changes on the grounds that they simplify the code.\n> Users aren't going to like that, hackers aren't going to like that,\n> and the resulting behavior probably won't be anything great.\n\nNote as well that InArchiveRecovery is set when there's a\nbackup_label, but that the code would check for the existence of a\nrestore_command only if a signal file exists. That's strange, but if\npeople have been relying on this behavior, so be it.\n\nAt this stage, it looks pretty clear to me that there's no consensus\non what to do, and nobody's happy with the proposal of this thread, so\nI am going to mark it as rejected.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 12:04:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Thu, Nov 09, 2023 at 12:04:19PM +0900, Michael Paquier wrote:\n> Sure, sorry for the confusion. By \"we'd do nothing\", I mean precisely\n> \"to take no specific action related to archive recovery and recovery\n> parameters at the end of recovery\", meaning that a combination of\n> backup_label with no signal file would be the same as crash recovery,\n> replaying WAL up to the end of what can be found in pg_wal/, and only\n> that.\n\nBy being slightly more precise. I also mean to fail recovery if it is\nnot possible to replay up to the end-of-backup LSN marked in the label\nfile because we are missing some stuff in pg_wal/, which is something\nthat the code is currently able to handle.\n--\nMichael",
"msg_date": "Thu, 9 Nov 2023 12:16:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-09 12:16:52 +0900, Michael Paquier wrote:\n> On Thu, Nov 09, 2023 at 12:04:19PM +0900, Michael Paquier wrote:\n> > Sure, sorry for the confusion. By \"we'd do nothing\", I mean precisely\n> > \"to take no specific action related to archive recovery and recovery\n> > parameters at the end of recovery\", meaning that a combination of\n> > backup_label with no signal file would be the same as crash recovery,\n> > replaying WAL up to the end of what can be found in pg_wal/, and only\n> > that.\n\nI don't think those are equivalent - in the \"backup_label with no signal file\"\ncase we start recovery at a different location than the \"crash recovery\" case\ndoes.\n\n\n> By being slightly more precise. I also mean to fail recovery if it is\n> not possible to replay up to the end-of-backup LSN marked in the label\n> file because we are missing some stuff in pg_wal/, which is something\n> that the code is currently able to handle.\n\n\"able to handle\" as in detect and error out? Because that's the only possible\nsane thing to do, correct?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 15:41:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "On Mon, Nov 13, 2023 at 03:41:44PM -0800, Andres Freund wrote:\n> On 2023-11-09 12:16:52 +0900, Michael Paquier wrote:\n>> On Thu, Nov 09, 2023 at 12:04:19PM +0900, Michael Paquier wrote:\n>> > Sure, sorry for the confusion. By \"we'd do nothing\", I mean precisely\n>> > \"to take no specific action related to archive recovery and recovery\n>> > parameters at the end of recovery\", meaning that a combination of\n>> > backup_label with no signal file would be the same as crash recovery,\n>> > replaying WAL up to the end of what can be found in pg_wal/, and only\n>> > that.\n> \n> I don't think those are equivalent - in the \"backup_label with no signal file\"\n> case we start recovery at a different location than the \"crash recovery\" case\n> does.\n\nIt depends on how you see things, and based on my read of the thread\nor the code we've never really put a clear definition what a\n\"backup_label with no signal file\" should do. The definition I was\nsuggesting is to make it work the same way as crash recovery\ninternally:\n- use the start LSN from the backup_label.\n- replay up to the end of local WAL.\n- don't rely on any recovery GUCs.\n- if at the end of recovery replay has not reached the end-of-backup\nrecord, then fail.\n\n>> By being slightly more precise. I also mean to fail recovery if it is\n>> not possible to replay up to the end-of-backup LSN marked in the label\n>> file because we are missing some stuff in pg_wal/, which is something\n>> that the code is currently able to handle.\n> \n> \"able to handle\" as in detect and error out? Because that's the only possible\n> sane thing to do, correct?\n\nBy \"able to handle\", I mean to detect that the expected LSN has not\nbeen reached and FATAL, or fail recovery. So yes.\n--\nMichael",
"msg_date": "Tue, 14 Nov 2023 09:13:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
},
{
"msg_contents": "Hi,\n\nOn 2023-11-14 09:13:44 +0900, Michael Paquier wrote:\n> On Mon, Nov 13, 2023 at 03:41:44PM -0800, Andres Freund wrote:\n> > On 2023-11-09 12:16:52 +0900, Michael Paquier wrote:\n> >> On Thu, Nov 09, 2023 at 12:04:19PM +0900, Michael Paquier wrote:\n> >> > Sure, sorry for the confusion. By \"we'd do nothing\", I mean precisely\n> >> > \"to take no specific action related to archive recovery and recovery\n> >> > parameters at the end of recovery\", meaning that a combination of\n> >> > backup_label with no signal file would be the same as crash recovery,\n> >> > replaying WAL up to the end of what can be found in pg_wal/, and only\n> >> > that.\n> > \n> > I don't think those are equivalent - in the \"backup_label with no signal file\"\n> > case we start recovery at a different location than the \"crash recovery\" case\n> > does.\n> \n> It depends on how you see things, and based on my read of the thread\n> or the code we've never really put a clear definition what a\n> \"backup_label with no signal file\" should do. The definition I was\n> suggesting is to make it work the same way as crash recovery\n> internally:\n> - use the start LSN from the backup_label.\n\nThat's fundamentally different from crash recovery!\n\n> - replay up to the end of local WAL.\n> - don't rely on any recovery GUCs.\n> - if at the end of recovery replay has not reached the end-of-backup\n> record, then fail.\n\nAlso different from crash recovery.\n\nIt doesn't make sense to me to say \"work the same way\" when there are such\nfundamental differences.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Nov 2023 16:17:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Requiring recovery.signal or standby.signal when recovering with\n a backup_label"
}
] |
[
{
"msg_contents": "Hi,\n With the redesign of the archive modules:\n35739b87dcfef9fc0186aca659f262746fecd778 - Redesign archive modules\n if we were to compile basic_archive module with USE_PGXS=1, we get\ncompilation error:\n\n[]$ make USE_PGXS=1\ngcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing\n-fwrapv -fexcess-precision=standard -g -g -O0 -fPIC\n-fvisibility=hidden -I. -I./\n-I/home/sravanv/work/workspaces/PGdevel_test/include/postgresql/server\n-I/home/sravanv/work/workspaces/PGdevel_test/include/postgresql/internal\n -D_GNU_SOURCE -I/usr/include/libxml2 -c -o basic_archive.o\nbasic_archive.c -MMD -MP -MF .deps/basic_archive.Po\nbasic_archive.c:33:36: fatal error: archive/archive_module.h: No such\nfile or directory\n #include \"archive/archive_module.h\"\n ^\ncompilation terminated.\nmake: *** [basic_archive.o] Error 1\n\nI have attached a patch that fixes the problem. Can you please review\nif it makes sense to push this patch?\n\n-- \nThanks & Regards,\nSravan Velagandula\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 10 Mar 2023 13:41:07 +0530",
"msg_from": "Sravan Kumar <sravanvcybage@gmail.com>",
"msg_from_op": true,
"msg_subject": "Compilation error after redesign of the archive modules"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 01:41:07PM +0530, Sravan Kumar wrote:\n> I have attached a patch that fixes the problem. Can you please review\n> if it makes sense to push this patch?\n\nIndeed, reproduced here. I'll fix that in a bit..\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 17:16:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compilation error after redesign of the archive modules"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 05:16:53PM +0900, Michael Paquier wrote:\n> On Fri, Mar 10, 2023 at 01:41:07PM +0530, Sravan Kumar wrote:\n>> I have attached a patch that fixes the problem. Can you please review\n>> if it makes sense to push this patch?\n> \n> Indeed, reproduced here. I'll fix that in a bit..\n\n(Sorry for the late reply, I thought that I sent that on Friday but it\nwas stuck in my drafts.)\n\nNote that your patch took only care of the ./configure part of the\ninstallation process, but it was missing meson. Applied a fix for\nboth as of 6ad5793.\n--\nMichael",
"msg_date": "Mon, 13 Mar 2023 14:03:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Compilation error after redesign of the archive modules"
}
] |
[
{
"msg_contents": "While looking into issue [1], I came across $subject on master. Below\nis how to reproduce it.\n\nDROP TABLE IF EXISTS t1,t2,t3,t4 CASCADE;\nCREATE TABLE t1 AS SELECT true AS x FROM generate_series(0,1) x;\nCREATE TABLE t2 AS SELECT true AS x FROM generate_series(0,1) x;\nCREATE TABLE t3 AS SELECT true AS x FROM generate_series(0,1) x;\nCREATE TABLE t4 AS SELECT true AS x FROM generate_series(0,1) x;\nANALYZE;\n\nexplain (costs off)\nselect * from t1 left join (t2 left join t3 on t2.x) on t2.x left join t4\non t3.x and t2.x where t1.x = coalesce(t2.x,true);\n\nI've looked into this a little bit. For the join of t2/t3 to t4, since\nit can commute with the join of t1 to t2/t3 according to identity 3, we\nwould generate multiple versions for its joinquals. In particular, the\nqual 't3.x' would have two versions, one with varnullingrels as {t2/t3,\nt1/t2}, the other one with varnullingrels as {t2/t3}. So far so good.\n\nAssume we've determined to build the join of t2/t3 to t4 after we've\nbuilt t1/t2 and t2/t3, then we'd find that both versions of qual 't3.x'\nwould be accepted by clause_is_computable_at. This is not correct. We\nare supposed to accept only the one marked as {t2/t3, t1/t2}. The other\none is not rejected mainly because we found that the qual 't3.x' does\nnot mention any nullable Vars of outer join t1/t2.\n\nI wonder if we should consider syn_xxxhand rather than min_xxxhand in\nclause_is_computable_at when we check if clause mentions any nullable\nVars. But I'm not sure about that.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/0b819232-4b50-f245-1c7d-c8c61bf41827%40postgrespro.ru\n\nThanks\nRichard",
"msg_date": "Fri, 10 Mar 2023 16:13:59 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 4:13 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I wonder if we should consider syn_xxxhand rather than min_xxxhand in\n> clause_is_computable_at when we check if clause mentions any nullable\n> Vars. But I'm not sure about that.\n>\n\nNo, considering syn_xxxhand is not right. After some join order\ncommutation we may form the join with only its min_lefthand and\nmin_righthand. In this case if we check against syn_xxxhand rather than\nmin_xxxhand in clause_is_computable_at, we may end up with being unable\nto find a proper place for some quals. I can see this problem in below\nquery.\n\nselect * from t1 left join ((select t2.x from t2 left join t3 on t2.x where\nt3.x is null) s left join t4 on s.x) on s.x = t1.x;\n\nSuppose we've formed join t1/t2 and go ahead to form the join of t1/t2\nto t3. If we consider t1/t2 join's syn_xxxhand, then the pushed down\nqual 't3.x is null' would not be computable at this level because it\nmentions nullable Vars from t1/t2 join's syn_righthand and meanwhile is\nnot marked with t1/t2 join. This is not correct and would trigger an\nAssert.\n\nBack to the original issue, if a join has more than one quals, actually\nwe treat them as a whole when we check if identity 3 applies as well as\nwhen we adjust them to be suitable for commutation according to identity\n3. So when we check if a qual is computable at a given level, I think\nwe should also consider the join's quals as a whole. I'm thinking that\nwe use a 'group' notion for RestrictInfos and then use the clause_relids\nof the 'group' in clause_is_computable_at. Does this make sense?\n\nThanks\nRichard",
"msg_date": "Mon, 13 Mar 2023 17:03:11 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 5:03 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Back to the original issue, if a join has more than one quals, actually\n> we treat them as a whole when we check if identity 3 applies as well as\n> when we adjust them to be suitable for commutation according to identity\n> 3. So when we check if a qual is computable at a given level, I think\n> we should also consider the join's quals as a whole. I'm thinking that\n> we use a 'group' notion for RestrictInfos and then use the clause_relids\n> of the 'group' in clause_is_computable_at. Does this make sense?\n>\n\nI'm imagining something like attached (no comments and test cases yet).\n\nThanks\nRichard",
"msg_date": "Mon, 13 Mar 2023 17:44:18 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 5:44 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Mon, Mar 13, 2023 at 5:03 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>> Back to the original issue, if a join has more than one quals, actually\n>> we treat them as a whole when we check if identity 3 applies as well as\n>> when we adjust them to be suitable for commutation according to identity\n>> 3. So when we check if a qual is computable at a given level, I think\n>> we should also consider the join's quals as a whole. I'm thinking that\n>> we use a 'group' notion for RestrictInfos and then use the clause_relids\n>> of the 'group' in clause_is_computable_at. Does this make sense?\n>>\n>\n> I'm imagining something like attached (no comments and test cases yet).\n>\n\nHere is an updated patch with comments and test case. I also change the\ncode to store 'group_clause_relids' directly in RestrictInfo.\n\nThanks\nRichard",
"msg_date": "Fri, 17 Mar 2023 11:05:03 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 11:05 AM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> Here is an updated patch with comments and test case. I also change the\n> code to store 'group_clause_relids' directly in RestrictInfo.\n>\n\nBTW, I've added an open item for this issue.\n\nThanks\nRichard",
"msg_date": "Fri, 12 May 2023 15:02:27 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On 5/12/23 3:02 AM, Richard Guo wrote:\r\n> \r\n> On Fri, Mar 17, 2023 at 11:05 AM Richard Guo <guofenglinux@gmail.com \r\n> <mailto:guofenglinux@gmail.com>> wrote:\r\n> \r\n> Here is an updated patch with comments and test case. I also change the\r\n> code to store 'group_clause_relids' directly in RestrictInfo.\r\n> \r\n> \r\n> BTW, I've added an open item for this issue.\r\n\r\n[RMT hat]\r\n\r\nIs there a specific commit targeted for v16 that introduced this issue? \r\nDoes it only affect v16 or does it affect backbranches?\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 16 May 2023 09:10:53 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> Is there a specific commit targeted for v16 that introduced this issue? \n> Does it only affect v16 or does it affect backbranches?\n\nIt's part of the outer-join-aware-Vars stuff, so it's my fault ...\nand v16 only.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 May 2023 09:49:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On 5/16/23 9:49 AM, Tom Lane wrote:\r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\r\n>> Is there a specific commit targeted for v16 that introduced this issue?\r\n>> Does it only affect v16 or does it affect backbranches?\r\n> \r\n> It's part of the outer-join-aware-Vars stuff, so it's my fault ...\r\n> and v16 only.\r\n\r\n*nods* thanks. I updated the Open Items page accordingly (doing RMT \r\nhousecleaning today in advance of Beta 1).\r\n\r\nJonathan",
"msg_date": "Tue, 16 May 2023 10:27:39 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Here is an updated patch with comments and test case. I also change the\n> code to store 'group_clause_relids' directly in RestrictInfo.\n\nHmm ... I don't like this patch terribly much. It's not obvious why\n(or if) it works, and it's squirreling bits of semantic knowledge\ninto places they don't belong. ISTM the fundamental problem is that\nclause_is_computable_at() is accepting clauses it shouldn't, and we\nshould try to solve it there.\n\nAfter some poking at it I hit on what seems like a really simple\nsolution: we should be checking syn_righthand not min_righthand\nto see whether a Var should be considered nullable by a given OJ.\nMaybe that's still not quite right, but it seems like it might be\nright given that the last fix reaffirmed our conviction that Vars\nshould be marked according to the syntactic structure.\n\nIf we don't want to do it like this, another way is to consider\nthe transitive closure of commutable outer joins, along similar\nlines to your fixes to my earlier patch. But that seems like it\nmight just be adding complication.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 17 May 2023 15:34:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "... BTW, something I'd considered in an earlier attempt at fixing this\nwas to change clause_is_computable_at's API to pass the clause's\nRestrictInfo not just the clause_relids, along the lines of\n\n@@ -541,9 +547,10 @@ extract_actual_join_clauses(List *restrictinfo_list,\n */\n bool\n clause_is_computable_at(PlannerInfo *root,\n- Relids clause_relids,\n+ RestrictInfo *rinfo,\n Relids eval_relids)\n {\n+ Relids clause_relids = rinfo->clause_relids;\n ListCell *lc;\n \n /* Nothing to do if no outer joins have been performed yet. */\n\nwith corresponding simplifications at the call sites. That was with\na view to examining has_clone/is_clone inside this function. My\ncurrent proposal doesn't require that, but I'm somewhat tempted\nto make this API change anyway for future-proofing purposes.\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 May 2023 15:42:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Thu, May 18, 2023 at 3:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> After some poking at it I hit on what seems like a really simple\n> solution: we should be checking syn_righthand not min_righthand\n> to see whether a Var should be considered nullable by a given OJ.\n> Maybe that's still not quite right, but it seems like it might be\n> right given that the last fix reaffirmed our conviction that Vars\n> should be marked according to the syntactic structure.\n\n\nI thought about this solution before but proved it was not right in\nhttps://www.postgresql.org/message-id/CAMbWs48fObJJ%3DYVb4ip8tnwxwixUNKUThfnA1eGfPzJxJRRgZQ%40mail.gmail.com\n\nI checked the query shown there and it still fails with v3 patch.\n\nexplain (costs off)\nselect * from t1\n left join (select t2.x from t2\n left join t3 on t2.x where t3.x is null) s\n left join t4 on s.x\non s.x = t1.x;\nserver closed the connection unexpectedly\n\nThe failure happens when we are forming the join of (t1/t2) to t3.\nConsider qual 't3.x is null'. It's a non-clone filter clause so\nclause_is_computable_at is supposed to think it's applicable here. We\nhave an Assert for that. However, when checking outer join t1/t2, which\nhas been performed but is not listed in the qual's nullingrels,\nclause_is_computable_at would think it'd null vars of the qual if we\ncheck syn_righthand not min_righthand, and get a conclusion that the\nqual is not applicable here. 
This is how the Assert is triggered.\n\nThanks\nRichard",
"msg_date": "Thu, 18 May 2023 14:37:43 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Thu, May 18, 2023 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> ... BTW, something I'd considered in an earlier attempt at fixing this\n> was to change clause_is_computable_at's API to pass the clause's\n> RestrictInfo not just the clause_relids, along the lines of\n>\n> @@ -541,9 +547,10 @@ extract_actual_join_clauses(List *restrictinfo_list,\n> */\n> bool\n> clause_is_computable_at(PlannerInfo *root,\n> - Relids clause_relids,\n> + RestrictInfo *rinfo,\n> Relids eval_relids)\n> {\n> + Relids clause_relids = rinfo->clause_relids;\n> ListCell *lc;\n>\n> /* Nothing to do if no outer joins have been performed yet. */\n>\n> with corresponding simplifications at the call sites. That was with\n> a view to examining has_clone/is_clone inside this function. My\n> current proposal doesn't require that, but I'm somewhat tempted\n> to make this API change anyway for future-proofing purposes.\n> Thoughts?\n\n\nThis change looks good to me.\n\nThanks\nRichard",
"msg_date": "Thu, 18 May 2023 14:47:42 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, May 18, 2023 at 3:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... BTW, something I'd considered in an earlier attempt at fixing this\n>> was to change clause_is_computable_at's API to pass the clause's\n>> RestrictInfo not just the clause_relids, along the lines of\n\n> This change looks good to me.\n\nDid that part.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 May 2023 10:39:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, May 18, 2023 at 3:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After some poking at it I hit on what seems like a really simple\n>> solution: we should be checking syn_righthand not min_righthand\n>> to see whether a Var should be considered nullable by a given OJ.\n\n> I thought about this solution before but proved it was not right in\n> https://www.postgresql.org/message-id/CAMbWs48fObJJ%3DYVb4ip8tnwxwixUNKUThfnA1eGfPzJxJRRgZQ%40mail.gmail.com\n> I checked the query shown there and it still fails with v3 patch.\n\nBleah. The other solution I'd been poking at involved adding an\nextra check for clone clauses, as attached (note this requires\n8a2523ff3). This survives your example, but I wonder if it might\nreject all the clones in some cases. It seems a bit expensive\ntoo, although as I said before, I don't think the clone cases get\ntraversed all that often.\n\nPerhaps another answer could be to compare against syn_righthand\nfor clone clauses and min_righthand for non-clones? That seems\nmighty unprincipled though.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 May 2023 12:32:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Fri, May 19, 2023 at 12:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Bleah. The other solution I'd been poking at involved adding an\n> extra check for clone clauses, as attached (note this requires\n> 8a2523ff3). This survives your example, but I wonder if it might\n> reject all the clones in some cases. It seems a bit expensive\n> too, although as I said before, I don't think the clone cases get\n> traversed all that often.\n\n\nI tried with v4 patch and find that, as you predicted, it might reject\nall the clones in some cases. Check the query below\n\nexplain (costs off)\nselect * from t t1\n left join t t2 on t1.a = t2.a\n left join t t3 on t2.a = t3.a\n left join t t4 on t3.a = t4.a and t2.b = t4.b;\n QUERY PLAN\n------------------------------------------\n Hash Left Join\n Hash Cond: (t2.b = t4.b)\n -> Hash Left Join\n Hash Cond: (t2.a = t3.a)\n -> Hash Left Join\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t t1\n -> Hash\n -> Seq Scan on t t2\n -> Hash\n -> Seq Scan on t t3\n -> Hash\n -> Seq Scan on t t4\n(13 rows)\n\nSo the qual 't3.a = t4.a' is missing in this plan shape.\n\n\n> Perhaps another answer could be to compare against syn_righthand\n> for clone clauses and min_righthand for non-clones? That seems\n> mighty unprincipled though.\n\n\nI also checked this solution with the same query.\n\nexplain (costs off)\nselect * from t t1\n left join t t2 on t1.a = t2.a\n left join t t3 on t2.a = t3.a\n left join t t4 on t3.a = t4.a and t2.b = t4.b;\n QUERY PLAN\n------------------------------------------------------------------\n Hash Left Join\n Hash Cond: ((t3.a = t4.a) AND (t3.a = t4.a) AND (t2.b = t4.b))\n -> Hash Left Join\n Hash Cond: (t2.a = t3.a)\n -> Hash Left Join\n Hash Cond: (t1.a = t2.a)\n -> Seq Scan on t t1\n -> Hash\n -> Seq Scan on t t2\n -> Hash\n -> Seq Scan on t t3\n -> Hash\n -> Seq Scan on t t4\n(13 rows)\n\nThis time the qual 't3.a = t4.a' is back, but twice.\n\nI keep thinking about my proposal in v2 patch. 
It seems more natural to\nme to fix this issue, because an outer join's quals are always treated\nas a whole when we check if identity 3 applies in make_outerjoininfo, as\nwell as when we adjust the outer join's quals for commutation in\ndeconstruct_distribute_oj_quals. So when it comes to check if quals are\ncomputable at a join level, they should be still treated as a whole.\nThis should have the same effect regarding qual placement if the quals\nof an outer join are in form of 'qual1 OR qual2 OR ...' rather than\n'qual1 AND qual2 AND ...'.\n\nThanks\nRichard\n\nOn Fri, May 19, 2023 at 12:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nBleah. The other solution I'd been poking at involved adding an\nextra check for clone clauses, as attached (note this requires\n8a2523ff3). This survives your example, but I wonder if it might\nreject all the clones in some cases. It seems a bit expensive\ntoo, although as I said before, I don't think the clone cases get\ntraversed all that often.I tried with v4 patch and find that, as you predicted, it might rejectall the clones in some cases. Check the query belowexplain (costs off)select * from t t1 left join t t2 on t1.a = t2.a left join t t3 on t2.a = t3.a left join t t4 on t3.a = t4.a and t2.b = t4.b; QUERY PLAN------------------------------------------ Hash Left Join Hash Cond: (t2.b = t4.b) -> Hash Left Join Hash Cond: (t2.a = t3.a) -> Hash Left Join Hash Cond: (t1.a = t2.a) -> Seq Scan on t t1 -> Hash -> Seq Scan on t t2 -> Hash -> Seq Scan on t t3 -> Hash -> Seq Scan on t t4(13 rows)So the qual 't3.a = t4.a' is missing in this plan shape. \nPerhaps another answer could be to compare against syn_righthand\nfor clone clauses and min_righthand for non-clones? 
That seems\nmighty unprincipled though.I also checked this solution with the same query.explain (costs off)select * from t t1 left join t t2 on t1.a = t2.a left join t t3 on t2.a = t3.a left join t t4 on t3.a = t4.a and t2.b = t4.b; QUERY PLAN------------------------------------------------------------------ Hash Left Join Hash Cond: ((t3.a = t4.a) AND (t3.a = t4.a) AND (t2.b = t4.b)) -> Hash Left Join Hash Cond: (t2.a = t3.a) -> Hash Left Join Hash Cond: (t1.a = t2.a) -> Seq Scan on t t1 -> Hash -> Seq Scan on t t2 -> Hash -> Seq Scan on t t3 -> Hash -> Seq Scan on t t4(13 rows)This time the qual 't3.a = t4.a' is back, but twice.I keep thinking about my proposal in v2 patch. It seems more natural tome to fix this issue, because an outer join's quals are always treatedas a whole when we check if identity 3 applies in make_outerjoininfo, aswell as when we adjust the outer join's quals for commutation indeconstruct_distribute_oj_quals. So when it comes to check if quals arecomputable at a join level, they should be still treated as a whole.This should have the same effect regarding qual placement if the qualsof an outer join are in form of 'qual1 OR qual2 OR ...' rather than'qual1 AND qual2 AND ...'.ThanksRichard",
"msg_date": "Fri, 19 May 2023 11:23:33 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I keep thinking about my proposal in v2 patch. It seems more natural to\n> me to fix this issue, because an outer join's quals are always treated\n> as a whole when we check if identity 3 applies in make_outerjoininfo, as\n> well as when we adjust the outer join's quals for commutation in\n> deconstruct_distribute_oj_quals.\n\nNo, I doubt that that patch works properly. If the join condition\ncontains independent quals on different relations, say\n\n\tselect ... from t1 left join t2 on (t1.a = 1 and t2.b = 2)\n\nthen it may be that those quals need to be pushed to different levels.\nI don't believe that considering the union of the rels mentioned in\nany qual is a reasonable thing to do here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 May 2023 15:29:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I tried with v4 patch and find that, as you predicted, it might reject\n> all the clones in some cases. Check the query below\n> ...\n> So the qual 't3.a = t4.a' is missing in this plan shape.\n\nI poked into that more closely and realized that the reason that\nclause_is_computable_at() misbehaves is that both clones of the\n\"t3.a = t4.a\" qual have the same clause_relids: (4 5 6) which is\nt3, the left join to t3, and t4. This is unsurprising because\nthe difference in these clones is whether they are expected to be\nevaluated above or below outer join 3 (the left join to t2), and\nt2 doesn't appear in the qual. (It does appear in \"t2.b = t4.b\",\nwhich is why there's no similar misbehavior for that qual.)\n\nIf they have the same clause_relids, then clearly in its current\nform clause_is_computable_at() cannot distinguish them. So what\nI now think we should do is have clause_is_computable_at() examine\ntheir required_relids instead. Those will be different, by\nconstruction in deconstruct_distribute_oj_quals(), ensuring that\nwe select only one of the group of clones.\n\nBTW, while I've not tried it, I suspect your v2 patch also fails\non this example for the same reason: it cannot distinguish the\nclones of this qual.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 20 May 2023 11:24:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "I wrote:\n> If they have the same clause_relids, then clearly in its current\n> form clause_is_computable_at() cannot distinguish them. So what\n> I now think we should do is have clause_is_computable_at() examine\n> their required_relids instead. Those will be different, by\n> construction in deconstruct_distribute_oj_quals(), ensuring that\n> we select only one of the group of clones.\n\nSince we're hard up against the beta1 wrap deadline, I went ahead\nand pushed the v5 patch. I doubt that it's perfect yet, but it's\na small change and demonstrably fixes the cases we know about.\n\nAs I said in the commit message, the main knock I'd lay on v5\nis \"why not use required_relids all the time?\". I tried to do\nthat and soon found that the answer is that we're not maintaining\nrequired_relids very accurately. I found two causes so far:\n\n1. equivclass.c sometimes generates placeholder constant-true\njoin clauses, and it's being sloppy about that. It copies\nthe required_relids of the original clause, but fails to copy\nis_pushed_down, making the clause look like it's been assigned\nto the wrong side of the join-clause-vs-filter-clause divide.\nI found that we need to copy has_clone/is_clone as well. The\nattached quick-hack patch avoids the bugs, but now I feel like\nit was a mistake to not add has_clone/is_clone as full-fledged\narguments of make_restrictinfo. I'm inclined to change that,\nbut not right before beta1 when we have no evidence of a reachable\nbug. (Mind you, there might *be* a reachable bug, but ...)\n\n2. When distribute_qual_to_rels forces a qual up to a particular\nsyntactic level, it applies a relid set that very possibly refers\nto rels the clause doesn't actually depend on. This is problematic\nbecause if the clause gets postponed to above some outer join that\nnulls those rels, then it looks like it's being evaluated in an\nunsafe location. 
I think that when we detect commutability of two\nouter joins, we need to adjust the relevant min_xxxhand sets more\nthoroughly than we do now. I've not managed to write a patch\nfor that yet. One problem is that if we insist on removing all\nunreferenced rels from required_relids, we might end up with a\nset that mentions *none* of the RHS and therefore fails to keep\nthe clause from dropping into the LHS where it must not go.\nNot sure about a nice way to handle that.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 21 May 2023 15:44:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On 2023-May-21, Tom Lane wrote:\n\n> Since we're hard up against the beta1 wrap deadline, I went ahead\n> and pushed the v5 patch. I doubt that it's perfect yet, but it's\n> a small change and demonstrably fixes the cases we know about.\n> \n> As I said in the commit message, the main knock I'd lay on v5\n> is \"why not use required_relids all the time?\".\n\nSo, is this done? I see that you made other commits fixing related code\nseveral days after this email, but none seems to match the changes you\nposted in this patch; and also it's not clear to me that there's any\ntest case where this patch is expected to change behavior. (So there's\nalso a question of whether this is a bug fix or rather some icying on\ncake.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Jun 2023 22:11:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> So, is this done? I see that you made other commits fixing related code\n> several days after this email, but none seems to match the changes you\n> posted in this patch; and also it's not clear to me that there's any\n> test case where this patch is expected to change behavior. (So there's\n> also a question of whether this is a bug fix or rather some icying on\n> cake.)\n\nWell, the bugs I was aware of ahead of PGCon are all fixed, but there\nare some new reports I still have to deal with. I left the existing\nopen issue open, but maybe it'd be better to close it and start a new\none?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 06 Jun 2023 16:22:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
},
{
"msg_contents": "On Wed, Jun 7, 2023 at 4:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > So, is this done? I see that you made other commits fixing related code\n> > several days after this email, but none seems to match the changes you\n> > posted in this patch; and also it's not clear to me that there's any\n> > test case where this patch is expected to change behavior. (So there's\n> > also a question of whether this is a bug fix or rather some icying on\n> > cake.)\n\n\nThis issue is fixed at 991a3df22.\n\n\n> Well, the bugs I was aware of ahead of PGCon are all fixed, but there\n> are some new reports I still have to deal with. I left the existing\n> open issue open, but maybe it'd be better to close it and start a new\n> one?\n\n\nI went ahead and closed it, and then started two new open items for the\ntwo new issues --- one is about assert failure and wrong query results\ndue to incorrectly removing PHVs, the other is about inconsistent\nnulling bitmap in nestloop parameters.\n\nThanks\nRichard\n\nOn Wed, Jun 7, 2023 at 4:22 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> So, is this done? I see that you made other commits fixing related code\n> several days after this email, but none seems to match the changes you\n> posted in this patch; and also it's not clear to me that there's any\n> test case where this patch is expected to change behavior. (So there's\n> also a question of whether this is a bug fix or rather some icying on\n> cake.)This issue is fixed at 991a3df22. \nWell, the bugs I was aware of ahead of PGCon are all fixed, but there\nare some new reports I still have to deal with. 
I left the existing\nopen issue open, but maybe it'd be better to close it and start a new\none?I went ahead and closed it, and then started two new open items for thetwo new issues --- one is about assert failure and wrong query resultsdue to incorrectly removing PHVs, the other is about inconsistentnulling bitmap in nestloop parameters.ThanksRichard",
"msg_date": "Wed, 7 Jun 2023 10:25:40 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure of the cross-check for nullingrels"
}
]
[
{
"msg_contents": "Hi!\n\nI wonder why does ExecMergeMatched() determine the lock mode using\nExecUpdateLockMode(). Why don't we use lock mode set by\ntable_tuple_update() like ExecUpdate() does? I skim through the\nMERGE-related threads, but didn't find an answer.\n\nI also noticed that we use ExecUpdateLockMode() even for CMD_DELETE.\nThat ends up by usage of LockTupleNoKeyExclusive for CMD_DELETE, which\nseems plain wrong for me.\n\nThe proposed change is attached.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sat, 11 Mar 2023 00:42:35 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": true,
"msg_subject": "Lock mode in ExecMergeMatched()"
},
{
"msg_contents": "On Fri, 10 Mar 2023 at 21:42, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> I wonder why does ExecMergeMatched() determine the lock mode using\n> ExecUpdateLockMode(). Why don't we use lock mode set by\n> table_tuple_update() like ExecUpdate() does? I skim through the\n> MERGE-related threads, but didn't find an answer.\n>\n> I also noticed that we use ExecUpdateLockMode() even for CMD_DELETE.\n> That ends up by usage of LockTupleNoKeyExclusive for CMD_DELETE, which\n> seems plain wrong for me.\n>\n> The proposed change is attached.\n>\n\nThat won't work if it did a cross-partition update, since it won't\nhave done a table_tuple_update() in that case, and updateCxt.lockmode\nwon't have been set. Also, when it loops back and retries, it might\nexecute a different action next time round. So I think it needs to\nunconditionally use LockTupleExclusive, since it doesn't know if it'll\nend up executing an update or a delete.\n\nI'm currently working on a patch for bug #17809 that might change that\ncode though.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 11 Mar 2023 03:07:53 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lock mode in ExecMergeMatched()"
},
{
"msg_contents": "> On Fri, 10 Mar 2023 at 21:42, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > I wonder why does ExecMergeMatched() determine the lock mode using\n> > ExecUpdateLockMode(). Why don't we use lock mode set by\n> > table_tuple_update() like ExecUpdate() does? I skim through the\n> > MERGE-related threads, but didn't find an answer.\n> >\n> > I also noticed that we use ExecUpdateLockMode() even for CMD_DELETE.\n> > That ends up by usage of LockTupleNoKeyExclusive for CMD_DELETE, which\n> > seems plain wrong for me.\n>\n\nI pushed the patch for bug #17809, which in the end didn't directly\ntouch this code. I considered including the change of lockmode in that\npatch, but in the end decided against it, since it wasn't directly\nrelated to the issues being fixed there, and I wanted more time to\nthink about what changing the lockmode here really means.\n\nI'm wondering now if it really matters what lock mode we use here. If\nthe point of calling table_tuple_lock() after a concurrent update is\ndetected is to prevent more concurrent updates, so that the retry is\nguaranteed to succeed, then wouldn't even LockTupleNoKeyExclusive be\nsufficient in all cases? After all, that does block concurrent updates\nand deletes.\n\nPerhaps there is an issue with using LockTupleNoKeyExclusive, and then\nhaving to upgrade it to LockTupleExclusive later? But I wonder if that\ncan already happen -- consider a regular UPDATE (not via MERGE) of\nnon-key columns on a partitioned table, that initially does a simple\nupdate, but upon retrying needs to do a cross-partition update (DELETE\n+ INSERT).\n\nBut perhaps I'm thinking about this in the wrong way. Do you have an\nexample test case where this makes a difference?\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 13 Mar 2023 12:20:59 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lock mode in ExecMergeMatched()"
},
{
"msg_contents": "On 2023-Mar-13, Dean Rasheed wrote:\n\n> I'm wondering now if it really matters what lock mode we use here. If\n> the point of calling table_tuple_lock() after a concurrent update is\n> detected is to prevent more concurrent updates, so that the retry is\n> guaranteed to succeed, then wouldn't even LockTupleNoKeyExclusive be\n> sufficient in all cases? After all, that does block concurrent updates\n> and deletes.\n\nThe difference in lock mode should be visible relative to concurrent\ntransactions that try to SELECT FOR KEY SHARE the affected row. If you\nare updating a row but not changing the key-columns, then a KEY SHARE\nagainst the same tuple should work concurrently without blocking. If\nyou *are* changing the key columns, then such a lock should be made to\nwait.\n\nDELETE should be exactly equivalent to an update that changes any\ncolumns in the \"key\". After all, the point is that the previous key (as\nreferenced via a FK from another table) is now gone, which happens in\nboth these operations, but does not happen when an update only touches\nother columns.\n\nTwo UPDATEs of the same row should always block each other.\n\n\nNote that the code to determine which columns are part of the key is not\nvery careful: IIRC any column part of a unique index is considered part\nof the key. I don't think this has any implications for the discussion\nhere, but I thought I'd point it out just in case.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 13 Mar 2023 18:47:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Lock mode in ExecMergeMatched()"
},
{
"msg_contents": "On 2023-Mar-11, Alexander Korotkov wrote:\n\n> I wonder why does ExecMergeMatched() determine the lock mode using\n> ExecUpdateLockMode(). Why don't we use lock mode set by\n> table_tuple_update() like ExecUpdate() does? I skim through the\n> MERGE-related threads, but didn't find an answer.\n> \n> I also noticed that we use ExecUpdateLockMode() even for CMD_DELETE.\n> That ends up by usage of LockTupleNoKeyExclusive for CMD_DELETE, which\n> seems plain wrong for me.\n\nI agree that in the case of CMD_DELETE it should not run\nExecUpdateLockMode() --- that part seems like a bug.\n\nAs I recall, ExecUpdateLockMode is newer code that should do the same as\ntable_tuple_update does to determine the lock mode ... and looking at\nthe code, I see that both do a bms_overlap operation on \"columns in the\nkey\" vs. \"columns modified\", so I'm not sure why you say they would\nbehave differently.\n\nThinking about Dean's comment downthread, where an UPDATE could be\nturned into a DELETE, I wonder if trying to be selective would lead us\nto deadlock, in case a concurrent SELECT FOR KEY SHARE is able to\nlock the tuple while we're doing UPDATE, and then lock out the MERGE\nwhen the DELETE is retried.\n\nIf this is indeed a problem, then I can think of two ways out:\n\n1. if MERGE contains any DELETE, then always use LockTupleExclusive:\notherwise, use LockTupleNoKeyExclusive. This is best for concurrency\nwhen MERGE does no delete and the key columns are not modified.\n\n2. always use LockTupleExclusive. This is easier, but does not allow\nMERGE to run concurrently with SELECT FOR KEY SHARE on the same tuples.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 13 Mar 2023 19:05:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Lock mode in ExecMergeMatched()"
}
]
[
{
"msg_contents": "Hi all,\n\n(cc'ed Amit as he has the context)\n\nWhile working on [1], I realized that on HEAD there is a problem with the\n$subject. Here is the relevant discussion on the thread [2]. Quoting my\nown notes on that thread below;\n\nI realized that the dropped columns also get into the tuples_equal()\n> function. And,\n> the remote sends NULL to for the dropped columns(i.e., remoteslot), but\n> index_getnext_slot() (or table_scan_getnextslot) indeed fills the dropped\n> columns on the outslot. So, the dropped columns are not NULL in the outslot\n\n\nAmit also suggested checking generated columns, which indeed has the same\nproblem.\n\nHere are the steps to repro the problem with dropped columns:\n\n- pub\nCREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3\ntimestamptz);\nALTER TABLE test REPLICA IDENTITY FULL;\nINSERT INTO test SELECT NULL, i, i, (i)::text, now() FROM\ngenerate_series(0,1)i;\nCREATE PUBLICATION pub FOR ALL TABLES;\n\n-- sub\nCREATE TABLE test (drop_1 jsonb, x int, drop_2 numeric, y text, drop_3\ntimestamptz);\nCREATE SUBSCRIPTION sub CONNECTION 'host=localhost port=5432\ndbname=postgres' PUBLICATION pub;\n\n-- show that before dropping the columns, the data in the source and\n-- target are deleted properly\nDELETE FROM test WHERE x = 0;\n\n-- both on the source and target\nSELECT count(*) FROM test WHERE x = 0;\n┌───────┐\n│ count │\n├───────┤\n│ 0 │\n└───────┘\n(1 row)\n\n-- drop columns on both the the source\nALTER TABLE test DROP COLUMN drop_1;\nALTER TABLE test DROP COLUMN drop_2;\nALTER TABLE test DROP COLUMN drop_3;\n\n-- drop columns on both the the target\nALTER TABLE test DROP COLUMN drop_1;\nALTER TABLE test DROP COLUMN drop_2;\nALTER TABLE test DROP COLUMN drop_3;\n\n-- on the target\nALTER SUBSCRIPTION sub REFRESH PUBLICATION;\n\n-- after dropping the columns\nDELETE FROM test WHERE x = 1;\n\n-- source\nSELECT count(*) FROM test WHERE x = 1;\n┌───────┐\n│ count │\n├───────┤\n│ 0 │\n└───────┘\n(1 
row)\n\n\n**-- target, OOPS wrong result!!!!**SELECT count(*) FROM test WHERE x = 1;\n┌───────┐\n│ count │\n├───────┤\n│ 1 │\n└───────┘\n(1 row)\n\n\n\nAttaching a patch that could possibly solve the problem.\n\nThanks,\nOnder KALACI\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CACawEhUN%3D%2BvjY0%2B4q416-rAYx6pw-nZMHQYsJZCftf9MjoPN3w%40mail.gmail.com#2f7fa76f9e4496e3b52a9be6736e5b43\n[2]\nhttps://www.postgresql.org/message-id/CACawEhUu6S8E4Oo7%2Bs5iaq%3DyLRZJb6uOZeEQSGJj-7NVkDzSaw%40mail.gmail.com",
"msg_date": "Sat, 11 Mar 2023 22:59:37 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Sun, Mar 12, 2023 4:00 AM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Attaching a patch that could possibly solve the problem. \r\n> \r\n\r\nThanks for your patch. I tried it and it worked well.\r\nHere are some minor comments.\r\n\r\n1.\r\n@@ -243,6 +243,17 @@ tuples_equal(TupleTableSlot *slot1, TupleTableSlot *slot2,\r\n \t\tForm_pg_attribute att;\r\n \t\tTypeCacheEntry *typentry;\r\n \r\n+\r\n+\t\tForm_pg_attribute attr = TupleDescAttr(slot1->tts_tupleDescriptor, attrnum);\r\n+\r\n\r\nI think we can use \"att\" instead of a new variable. They have the same value.\r\n\r\n2. \r\n+# The bug was that when when the REPLICA IDENTITY FULL is used with dropped\r\n\r\nThere is an extra \"when\".\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Mon, 13 Mar 2023 10:53:03 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Shi Yu,\n\n\n> 1.\n> @@ -243,6 +243,17 @@ tuples_equal(TupleTableSlot *slot1, TupleTableSlot\n> *slot2,\n> Form_pg_attribute att;\n> TypeCacheEntry *typentry;\n>\n> +\n> + Form_pg_attribute attr =\n> TupleDescAttr(slot1->tts_tupleDescriptor, attrnum);\n> +\n>\n> I think we can use \"att\" instead of a new variable. They have the same\n> value.\n>\n\nah, of course :)\n\n\n>\n> 2.\n> +# The bug was that when when the REPLICA IDENTITY FULL is used with\n> dropped\n>\n> There is an extra \"when\".\n>\n>\nFixed, thanks\n\n\nAttaching v2",
"msg_date": "Mon, 13 Mar 2023 15:56:28 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 6:26 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Attaching v2\n>\n\nCan we change the comment to: \"Ignore dropped and generated columns as\nthe publisher doesn't send those.\"? After your change, att =\nTupleDescAttr(slot1->tts_tupleDescriptor, attrnum); is done twice in\nthe same function.\n\nIn test cases, let's change the comment to: \"The bug was that when the\nREPLICA IDENTITY FULL is used with dropped or generated columns, we\nfail to apply updates and deletes.\". Also, I think we don't need to\nprovide the email link as anyway commit message will have a link to\nthe discussion.\n\nDid you check this in the back branches?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 14:53:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Thu, Mar 16, 2023 5:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Mar 13, 2023 at 6:26 PM Önder Kalacı <onderkalaci@gmail.com>\r\n> wrote:\r\n> >\r\n> > Attaching v2\r\n> >\r\n> \r\n> Can we change the comment to: \"Ignore dropped and generated columns as\r\n> the publisher doesn't send those.\"? After your change, att =\r\n> TupleDescAttr(slot1->tts_tupleDescriptor, attrnum); is done twice in\r\n> the same function.\r\n> \r\n> In test cases, let's change the comment to: \"The bug was that when the\r\n> REPLICA IDENTITY FULL is used with dropped or generated columns, we\r\n> fail to apply updates and deletes.\". Also, I think we don't need to\r\n> provide the email link as anyway commit message will have a link to\r\n> the discussion.\r\n> \r\n> Did you check this in the back branches?\r\n> \r\n\r\nI tried to reproduce this bug in backbranch.\r\n\r\nGenerated column is introduced in PG12, and I reproduced generated column problem\r\nin PG12~PG15.\r\n\r\nFor dropped column problem, I reproduced it in PG10~PG15. (Logical replication\r\nwas introduced in PG10)\r\n\r\nSo I think we should backpatch the fix for generated column to PG12, and\r\nbackpatch the fix for dropped column to PG10.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Thu, 16 Mar 2023 10:38:25 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Amit, Shi Yu,\n\n> Generated column is introduced in PG12, and I reproduced generated column\nproblem\nin PG12~PG15.\n> For dropped column problem, I reproduced it in PG10~PG15. (Logical\nreplication\nwas introduced in PG10)\n\nSo, I'm planning to split the changes into two commits. The first one fixes\nfor dropped columns, and the second one adds generated columns check/test.\n\nIs that the right approach for such a case?\n\n> and backpatch the fix for dropped column to PG10.\n\nStill, even the first commit fails to apply cleanly to PG12 (and below).\n\nWhat is the procedure here? Should I be creating multiple patches per\nversion?\nOr does the committer prefer to handle the conflicts? Depending on your\nreply,\nI can work on the followup.\n\nI'm still attaching the dropped column patch for reference.\n\n\nThanks,\nOnder\n\n\nshiy.fnst@fujitsu.com <shiy.fnst@fujitsu.com>, 16 Mar 2023 Per, 13:38\ntarihinde şunu yazdı:\n\n> On Thu, Mar 16, 2023 5:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 13, 2023 at 6:26 PM Önder Kalacı <onderkalaci@gmail.com>\n> > wrote:\n> > >\n> > > Attaching v2\n> > >\n> >\n> > Can we change the comment to: \"Ignore dropped and generated columns as\n> > the publisher doesn't send those.\"? After your change, att =\n> > TupleDescAttr(slot1->tts_tupleDescriptor, attrnum); is done twice in\n> > the same function.\n> >\n> > In test cases, let's change the comment to: \"The bug was that when the\n> > REPLICA IDENTITY FULL is used with dropped or generated columns, we\n> > fail to apply updates and deletes.\". 
Also, I think we don't need to\n> > provide the email link as anyway commit message will have a link to\n> > the discussion.\n> >\n> > Did you check this in the back branches?\n> >\n>\n> I tried to reproduce this bug in backbranch.\n>\n> Generated column is introduced in PG12, and I reproduced generated column\n> problem\n> in PG12~PG15.\n>\n> For dropped column problem, I reproduced it in PG10~PG15. (Logical\n> replication\n> was introduced in PG10)\n>\n> So I think we should backpatch the fix for generated column to PG12, and\n> backpatch the fix for dropped column to PG10.\n>\n> Regards,\n> Shi Yu\n>",
"msg_date": "Thu, 16 Mar 2023 19:03:25 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 9:33 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n> Hi Amit, Shi Yu,\n>\n> > Generated column is introduced in PG12, and I reproduced generated column problem\n> in PG12~PG15.\n> > For dropped column problem, I reproduced it in PG10~PG15. (Logical replication\n> was introduced in PG10)\n>\n> So, I'm planning to split the changes into two commits. The first one fixes\n> for dropped columns, and the second one adds generated columns check/test.\n>\n> Is that the right approach for such a case?\n>\n\nWorks for me.\n\n> > and backpatch the fix for dropped column to PG10.\n>\n> Still, even the first commit fails to apply cleanly to PG12 (and below).\n>\n> What is the procedure here? Should I be creating multiple patches per version?\n>\n\nYou can first submit the fix for dropped columns with patches till\nv10. Once that is committed, then you can send the patches for\ngenerated columns.\n\n> Or does the committer prefer to handle the conflicts? Depending on your reply,\n> I can work on the followup.\n>\n\nThanks for working on it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 05:38:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Mar 16, 2023 at 9:33 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>>> and backpatch the fix for dropped column to PG10.\n\n> You can first submit the fix for dropped columns with patches till\n> v10. Once that is committed, then you can send the patches for\n> generated columns.\n\nDon't worry about v10 --- that's out of support and shouldn't\nget patched for this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Mar 2023 20:11:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 5:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Thu, Mar 16, 2023 at 9:33 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >>> and backpatch the fix for dropped column to PG10.\n>\n> > You can first submit the fix for dropped columns with patches till\n> > v10. Once that is committed, then you can send the patches for\n> > generated columns.\n>\n> Don't worry about v10 --- that's out of support and shouldn't\n> get patched for this.\n>\n\nOkay, thanks for reminding me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Mar 2023 05:44:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Amit, all\n\n\n> You can first submit the fix for dropped columns with patches till\n> v10. Once that is committed, then you can send the patches for\n> generated columns.\n>\n>\nAlright, attaching 2 patches for dropped columns, the names of the files\nshows which\nversions the patch can be applied to:\nv2-0001-Ignore-dropped-columns-HEAD-REL_15-REL_14-REL_13.patch\nv2-0001-Ignore-dropped-columns-REL_12-REL_11.patch\n\nAnd, then on top of that, you can apply the patch for generated columns on\nall applicable\nversions (HEAD, 15, 14, 13 and 12). It applies cleanly. The name of the\nfile\nis: v2-0001-Ignore-generated-columns.patch\n\n\nBut unfortunately I couldn't test the patch with PG 12 and below. I'm\ngetting some\nunrelated compile errors and Postgrees CI is not available on\nthese versions . I'll try\nto fix that, but I thought it would still be good to share the patches as\nyou might\nalready have the environment to run the tests.\n\n\nDon't worry about v10 --- that's out of support and shouldn't\n> get patched for this.\n\n\nGiven this information, I skipped the v10 patch.\n\nThanks,\nOnder KALACI",
"msg_date": "Fri, 17 Mar 2023 10:38:02 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
    "msg_contents": "On Friday, March 17, 2023 3:38 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Hi Amit, all\r\n> \r\n> You can first submit the fix for dropped columns with patches till\r\n> v10. Once that is committed, then you can send the patches for\r\n> generated columns.\r\n> \r\n> Alright, attaching 2 patches for dropped columns, the names of the files shows which \r\n> versions the patch can be applied to:\r\n> v2-0001-Ignore-dropped-columns-HEAD-REL_15-REL_14-REL_13.patch\r\n> v2-0001-Ignore-dropped-columns-REL_12-REL_11.patch \r\n> \r\n> And, then on top of that, you can apply the patch for generated columns on all applicable\r\n> versions (HEAD, 15, 14, 13 and 12). It applies cleanly. The name of the file\r\n> is: v2-0001-Ignore-generated-columns.patch\r\n> \r\n> \r\n> But unfortunately I couldn't test the patch with PG 12 and below. I'm getting some\r\n> unrelated compile errors and Postgrees CI is not available on these versions . I'll try\r\n> to fix that, but I thought it would still be good to share the patches as you might\r\n> already have the environment to run the tests. \r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nI couldn't apply v2-0001-Ignore-dropped-columns-HEAD-REL_15-REL_14-REL_13.patch\r\ncleanly in v13 and v14. It looks the patch needs some changes in these versions.\r\n\r\n```\r\nChecking patch src/backend/executor/execReplication.c...\r\nHunk #1 succeeded at 243 (offset -46 lines).\r\nHunk #2 succeeded at 263 (offset -46 lines).\r\nChecking patch src/test/subscription/t/100_bugs.pl...\r\nerror: while searching for:\r\n$node_publisher->stop('fast');\r\n$node_subscriber->stop('fast');\r\n\r\ndone_testing();\r\n\r\nerror: patch failed: src/test/subscription/t/100_bugs.pl:373\r\nApplied patch src/backend/executor/execReplication.c cleanly.\r\nApplying patch src/test/subscription/t/100_bugs.pl with 1 reject...\r\nRejected hunk #1.\r\n```\r\n\r\nBesides, I tried v2-0001-Ignore-dropped-columns-REL_12-REL_11.patch in v12. The\r\ntest failed and here's some information.\r\n\r\n```\r\nCan't locate object method \"new\" via package \"PostgreSQL::Test::Cluster\" (perhaps you forgot to load \"PostgreSQL::Test::Cluster\"?) at t/100_bugs.pl line 74.\r\n# Looks like your test exited with 2 just after 1.\r\n```\r\n\r\n+my $node_publisher_d_cols = PostgreSQL::Test::Cluster->new('node_publisher_d_cols');\r\n\r\nIt seems this usage is not supported in v12 and we should use get_new_node()\r\nlike other test cases.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Fri, 17 Mar 2023 08:45:29 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Shi Yu,\n\nThanks for the review, really appreciate it!\n\n\n> I couldn't apply\n> v2-0001-Ignore-dropped-columns-HEAD-REL_15-REL_14-REL_13.patch\n> cleanly in v13 and v14. It looks the patch needs some changes in these\n> versions.\n>\n>\n\n> ```\n> Checking patch src/backend/executor/execReplication.c...\n> Hunk #1 succeeded at 243 (offset -46 lines).\n> Hunk #2 succeeded at 263 (offset -46 lines).\n> Checking patch src/test/subscription/t/100_bugs.pl...\n> error: while searching for:\n> $node_publisher->stop('fast');\n> $node_subscriber->stop('fast');\n>\n> done_testing();\n>\n> error: patch failed: src/test/subscription/t/100_bugs.pl:373\n> Applied patch src/backend/executor/execReplication.c cleanly.\n> Applying patch src/test/subscription/t/100_bugs.pl with 1 reject...\n> Rejected hunk #1.\n> ```\n>\n>\nHmm, interesting, it behaves differently on Macos and linux. Now attaching\nnew patches that should apply. Can you please try?\n\n\nBesides, I tried v2-0001-Ignore-dropped-columns-REL_12-REL_11.patch in v12.\n> The\n> test failed and here's some information.\n>\n> ```\n> Can't locate object method \"new\" via package \"PostgreSQL::Test::Cluster\"\n> (perhaps you forgot to load \"PostgreSQL::Test::Cluster\"?) at t/100_bugs.pl\n> line 74.\n> # Looks like your test exited with 2 just after 1.\n> ```\n>\n> +my $node_publisher_d_cols =\n> PostgreSQL::Test::Cluster->new('node_publisher_d_cols');\n>\n> It seems this usage is not supported in v12 and we should use\n> get_new_node()\n> like other test cases.\n>\n>\nThanks for sharing. Fixed\n\n\nThis time I was able to run all the tests with all the patches applied.\n\nAgain, the generated column fix also has some minor differences\nper version. So, overall we have 6 patches with very minor\ndifferences :)\n\n\nThanks,\nOnder",
"msg_date": "Fri, 17 Mar 2023 18:28:46 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Fri, Mar 17, 2023 11:29 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Thanks for sharing. Fixed\r\n> \r\n> \r\n> This time I was able to run all the tests with all the patches applied.\r\n> \r\n> Again, the generated column fix also has some minor differences\r\n> per version. So, overall we have 6 patches with very minor \r\n> differences :) \r\n\r\nThanks for updating the patches. It seems you forgot to attach the patches of\r\ndropped columns for HEAD and pg15, I think they are the same as v2.\r\n\r\nOn HEAD, we can re-use clusters in other test cases, which can save some time.\r\n(see fccaf259f22f4a)\r\n\r\nIn the patches for pg12 and pg11, I am not sure why not add the test at end of\r\nthe file 100_bugs.pl. I think it would be better to be consistent with other\r\nversions.\r\n\r\nThe attached patches modify these two points. Besides, I made some minor\r\nchanges, ran pgindent and pgperltidy. These are patches for dropped columns,\r\nbecause I think this would be submitted first, and we can discuss the fix for\r\ngenerated columns later.\r\n\r\nRegards,\r\nShi Yu",
"msg_date": "Mon, 20 Mar 2023 07:18:22 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
    "msg_contents": "Hi Shi Yu, all\n\nThanks for updating the patches. It seems you forgot to attach the patches\n> of\n> dropped columns for HEAD and pg15, I think they are the same as v2.\n>\n>\nYes, it seems I forgot. And, yes they were the same as v2.\n\n\n> On HEAD, we can re-use clusters in other test cases, which can save some\n> time.\n> (see fccaf259f22f4a)\n>\n>\n Thanks for noting.\n\n\n> In the patches for pg12 and pg11, I am not sure why not add the test at\n> end of\n> the file 100_bugs.pl. I think it would be better to be consistent with\n> other\n> versions.\n>\n\nI applied the same patch that I created for HEAD, and then the \"patch\"\ncommand\ncreated this version. Given that we are creating new patches per version, I\nthink\nyou are right, we should put them at the end.\n\n\n\n> The attached patches modify these two points. Besides, I made some minor\n> changes, ran pgindent and pgperltidy.\n\n\noh, I didn't know about pgperltidy. And, thanks a lot for making changes and\nmaking it ready for the committer.\n\n\n> These are patches for dropped columns,\n> because I think this would be submitted first, and we can discuss the fix\n> for\n> generated columns later.\n>\n>\nMakes sense, even now we have 5 different patches, lets work on generated\ncolumns\nwhen this is fixed.\n\nI applied all patches for all the versions, and re-run the subscription\ntests,\nall looks good to me.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Mon, 20 Mar 2023 12:28:06 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 2:58 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n>\n>\n> I applied all patches for all the versions, and re-run the subscription tests,\n> all looks good to me.\n>\n\nLGTM. I'll push this tomorrow unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 18:28:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 20, 2023 at 2:58 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> >\n> > I applied all patches for all the versions, and re-run the subscription tests,\n> > all looks good to me.\n> >\n>\n> LGTM. I'll push this tomorrow unless there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Mar 2023 11:37:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Amit, Shi Yu\n\nNow attaching the similar patches for generated columns.\n\nThanks,\nOnder KALACI\n\n\n\nAmit Kapila <amit.kapila16@gmail.com>, 21 Mar 2023 Sal, 09:07 tarihinde\nşunu yazdı:\n\n> On Mon, Mar 20, 2023 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Mon, Mar 20, 2023 at 2:58 PM Önder Kalacı <onderkalaci@gmail.com>\n> wrote:\n> > >\n> > >\n> > > I applied all patches for all the versions, and re-run the\n> subscription tests,\n> > > all looks good to me.\n> > >\n> >\n> > LGTM. I'll push this tomorrow unless there are more comments.\n> >\n>\n> Pushed.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 21 Mar 2023 11:51:20 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Tue, Mar 21, 2023 4:51 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Hi Amit, Shi Yu\r\n> \r\n> Now attaching the similar patches for generated columns.\r\n> \r\n\r\nThanks for your patches. Here are some comments.\r\n\r\n1.\r\n $node_publisher->safe_psql(\r\n \t'postgres', qq(\r\n \t\tALTER TABLE dropped_cols DROP COLUMN b_drop;\r\n+\t\tALTER TABLE generated_cols DROP COLUMN b_gen;\r\n ));\r\n $node_subscriber->safe_psql(\r\n \t'postgres', qq(\r\n \t\tALTER TABLE dropped_cols DROP COLUMN b_drop;\r\n+\t\tALTER TABLE generated_cols DROP COLUMN b_gen;\r\n ));\r\n\r\nI think we want to test generated columns, so we don't need to drop columns.\r\nOtherwise the generated column problem can't be detected.\r\n\r\n2. \r\n# The bug was that when the REPLICA IDENTITY FULL is used with dropped columns,\r\n# we fail to apply updates and deletes\r\n\r\nMaybe we should mention generated columns in comment of the test.\r\n\r\n3.\r\nI ran pgindent and it modified some lines. Maybe we can improve the patch\r\nas the following.\r\n\r\n@@ -292,8 +292,8 @@ tuples_equal(TupleTableSlot *slot1, TupleTableSlot *slot2,\r\n \t\tatt = TupleDescAttr(slot1->tts_tupleDescriptor, attrnum);\r\n \r\n \t\t/*\r\n-\t\t * Ignore dropped and generated columns as the publisher\r\n-\t\t * doesn't send those\r\n+\t\t * Ignore dropped and generated columns as the publisher doesn't send\r\n+\t\t * those\r\n \t\t */\r\n \t\tif (att->attisdropped || att->attgenerated)\r\n \t\t\tcontinue;\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Tue, 21 Mar 2023 10:07:11 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Shi Yu,\n\n>\n>\n> 1.\n> $node_publisher->safe_psql(\n> 'postgres', qq(\n> ALTER TABLE dropped_cols DROP COLUMN b_drop;\n> + ALTER TABLE generated_cols DROP COLUMN b_gen;\n> ));\n> $node_subscriber->safe_psql(\n> 'postgres', qq(\n> ALTER TABLE dropped_cols DROP COLUMN b_drop;\n> + ALTER TABLE generated_cols DROP COLUMN b_gen;\n> ));\n>\n> I think we want to test generated columns, so we don't need to drop\n> columns.\n> Otherwise the generated column problem can't be detected.\n>\n>\nOw, what a mistake. Now changed (and ensured that without the patch\nthe test fails).\n\n\n\n> 2.\n> # The bug was that when the REPLICA IDENTITY FULL is used with dropped\n> columns,\n> # we fail to apply updates and deletes\n>\n> Maybe we should mention generated columns in comment of the test.\n>\n> makes sense\n\n\n> 3.\n> I ran pgindent and it modified some lines. Maybe we can improve the patch\n> as the following.\n>\n> @@ -292,8 +292,8 @@ tuples_equal(TupleTableSlot *slot1, TupleTableSlot\n> *slot2,\n> att = TupleDescAttr(slot1->tts_tupleDescriptor, attrnum);\n>\n> /*\n> - * Ignore dropped and generated columns as the publisher\n> - * doesn't send those\n> + * Ignore dropped and generated columns as the publisher\n> doesn't send\n> + * those\n> */\n> if (att->attisdropped || att->attgenerated)\n> continue;\n>\n> fixed\n\n\nAttached patches again.\n\n\nThanks,\nOnder KALACI",
"msg_date": "Tue, 21 Mar 2023 15:02:58 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Tuesday, March 21, 2023 8:03 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> Attached patches again.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\n@@ -408,15 +412,18 @@ $node_subscriber->wait_for_subscription_sync;\r\n $node_publisher->safe_psql(\r\n \t'postgres', qq(\r\n \t\tALTER TABLE dropped_cols DROP COLUMN b_drop;\r\n+\t\tALTER TABLE generated_cols DROP COLUMN c_drop;\r\n ));\r\n $node_subscriber->safe_psql(\r\n \t'postgres', qq(\r\n \t\tALTER TABLE dropped_cols DROP COLUMN b_drop;\r\n+\t\tALTER TABLE generated_cols DROP COLUMN c_drop;\r\n ));\r\n\r\nIs there any reasons why we drop column here? Dropped column case has been\r\ntested on table dropped_cols. The generated column problem can be detected\r\nwithout dropping columns on my machine.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Wed, 22 Mar 2023 02:13:13 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "Hi Shi Yu,\n\n\n>\n> Is there any reasons why we drop column here? Dropped column case has been\n> tested on table dropped_cols. The generated column problem can be detected\n> without dropping columns on my machine.\n>\n\nWe don't really need to, if you check the first patch, we don't have DROP\nfor generated case. I mostly\nwanted to make the test a little more interesting, but it also seems to be\na little confusing.\n\nNow attaching v2 where we do not drop the columns. I don't have strong\npreference on\nwhich patch to proceed with, mostly wanted to attach this version to\nprogress faster (in case\nyou/Amit considers this one better).\n\nThanks,\nOnder",
"msg_date": "Wed, 22 Mar 2023 09:53:04 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Wed, Mar 22, 2023 2:53 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\r\n> \r\n> We don't really need to, if you check the first patch, we don't have DROP for generated case. I mostly\r\n> wanted to make the test a little more interesting, but it also seems to be a little confusing.\r\n> \r\n> Now attaching v2 where we do not drop the columns. I don't have strong preference on\r\n> which patch to proceed with, mostly wanted to attach this version to progress faster (in case\r\n> you/Amit considers this one better).\r\n> \r\n\r\nThanks for updating the patches.\r\nThe v2 patch LGTM.\r\n\r\nRegards,\r\nShi Yu\r\n\r\n",
"msg_date": "Wed, 22 Mar 2023 08:08:22 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 1:39 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Mar 22, 2023 2:53 PM Önder Kalacı <onderkalaci@gmail.com> wrote:\n> >\n> > We don't really need to, if you check the first patch, we don't have DROP for generated case. I mostly\n> > wanted to make the test a little more interesting, but it also seems to be a little confusing.\n> >\n> > Now attaching v2 where we do not drop the columns. I don't have strong preference on\n> > which patch to proceed with, mostly wanted to attach this version to progress faster (in case\n> > you/Amit considers this one better).\n> >\n>\n> Thanks for updating the patches.\n> The v2 patch LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Mar 2023 19:15:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Dropped and generated columns might cause wrong data on subs when\n REPLICA IDENTITY FULL"
}
]